
Page 1: Sensing / Perception (con't.)

Sensing / Perception (con't.)

February 22, 2007

Class Meeting 13

Page 2: Sensing / Perception (con't.)

Infrared (IR)

• Active proximity sensor
• Emits near-infrared energy and measures the amount of IR light returned
• Range: inches to several feet, depending on light frequency and receiver sensitivity
• Typical IR: constructed from LEDs, which have a range of 3-5 inches
• Issues:
  – Light can be “washed out” by bright ambient lighting
  – Light can be absorbed by dark materials
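
As a concrete illustration, a minimal sketch of using an IR proximity reading for obstacle detection; read_ir_adc() and IR_THRESHOLD are hypothetical placeholders for a real platform API, and the threshold must be tuned per sensor and ambient light (see the “washed out” caveat above):

extern int read_ir_adc(void);   /* hypothetical: 0..1023, amount of IR light returned */

#define IR_THRESHOLD 512        /* hypothetical cutoff; tune per sensor and lighting */

/* More reflected IR generally means a closer (or more reflective) object. */
int obstacle_near(void)
{
    return read_ir_adc() > IR_THRESHOLD;
}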

Page 3: Sensing / Perception (con't.)

Bump and Feeler (Tactile) Sensors

• Tactile (touch) sensors: wired so that when robot touches object, electrical signal is generated using a binary switch

• Sensitivity can be tuned (“light” vs. “heavy” touch), although it is tricky

• Placement is important (height, angular placement)
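
A minimal sketch of polling such a binary contact switch, with simple debouncing; read_bump_pin() and delay_ms() are hypothetical platform functions:

extern int read_bump_pin(void);   /* hypothetical: 1 = switch closed (contact) */
extern void delay_ms(int ms);     /* hypothetical busy-wait delay */

/* A mechanical switch can "chatter" for a few milliseconds on contact,
   so require two identical readings a short interval apart. */
int bumped(void)
{
    int first = read_bump_pin();
    delay_ms(5);
    return first && read_bump_pin();
}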

[Figures: whiskers on Genghis; bump sensors]

Page 4: Sensing / Perception (con't.)

Computer Vision

• Computer vision: processing data from any modality that uses the electromagnetic spectrum and produces an image

• Image:
  – A way of representing data in a picture-like format where there is a direct physical correspondence to the scene being imaged
  – Results in a 2D array or grid of readings
  – Every element in the array maps onto a small region of space
  – Elements in the image array are called pixels

• Modality determines what the image measures:
  – Visible light measures the value of light (e.g., color or gray level)
  – Thermal measures heat in the given region

• Image function: converts signal into a pixel value

[Figures: CMU Cam (for color blob tracking); pan-tilt-zoom camera; stereo vision; omnidirectional camera]
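
To make the image-function bullet concrete, a minimal sketch that quantizes a hypothetical normalized sensor signal into an 8-bit gray-level pixel (the signal range and function name are illustrative, not from the slides):

/* Hypothetical image function: map a sensor signal in [0.0, 1.0] to a pixel. */
unsigned char image_function(double signal)
{
    if (signal < 0.0) signal = 0.0;   /* clamp out-of-range readings */
    if (signal > 1.0) signal = 1.0;
    return (unsigned char)(signal * 255.0 + 0.5);   /* round to 0..255 */
}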

Page 5: Sensing / Perception (con't.)

Types of Computer Vision

• Computer vision includes:
  – Cameras (produce images over the same electromagnetic spectrum that humans see)
  – Thermal sensors
  – X-rays
  – Laser range finders
  – Synthetic aperture radar (SAR)

[Figures: thermal image; SAR image of the U.S. Capitol building; 3D laser scanner image]

Page 6: Sensing / Perception (con't.)

Computer Vision is a Field of Study on its Own

• The computer vision field has developed algorithms for:
  – Noise filtering
  – Compensating for illumination problems
  – Enhancing images
  – Finding lines
  – Matching lines to models
  – Extracting shapes and building 3D representations

• However, behavior-based/reactive robots tend not to use these algorithms, due to high computational complexity

Page 7: Sensing / Perception (con't.)

CCD (Charge Coupled Device) Cameras

• Typically, computer vision on reactive/behavior-based robots comes from a video camera, which uses CCD technology to detect visible light

• Output of most cameras: analog; therefore, must be digitized for computer use

• Framegrabber:
  – Card used by the computer that accepts an analog camera signal and outputs the digitized results
  – Can produce a gray-scale or color digital image
  – Have become fairly cheap; color framegrabbers cost about $200-$500

Page 8: Sensing / Perception (con't.)

Representation of Color

• Color measurements expressed as three color planes – red, green, blue (abbreviated RGB)

• RGB usually represented as axes of 3D cube, with values ranging from 0 to 255 for each axis

[Figure: RGB color cube with vertices Black (0,0,0), Red (255,0,0), Green (0,255,0), Blue (0,0,255), Yellow (255,255,0), Cyan (0,255,255), Magenta (255,0,255), White (255,255,255)]

Page 9: Sensing / Perception (con't.)

Software Representation

1. Interleaved: colors are stored together (most common representation)
   – Order: usually red, then green, then blue

Example code:

#define RED   0
#define GREEN 1
#define BLUE  2

int image[ROW][COLUMN][COLOR_PLANE];
…
red   = image[row][col][RED];
green = image[row][col][GREEN];
blue  = image[row][col][BLUE];
display_color(red, green, blue);

Page 10: Sensing / Perception (con't.)

Software Representation (con’t.)

2. Separate: colors are stored as 3 separate 2D arrays

Example code:

int image_red[ROW][COLUMN];
int image_green[ROW][COLUMN];
int image_blue[ROW][COLUMN];
…
red   = image_red[row][col];
green = image_green[row][col];
blue  = image_blue[row][col];
display_color(red, green, blue);

Page 11: Sensing / Perception (con't.)

Challenges Using RGB for Robotics

• Color is a function of:
  – Wavelength of light source
  – Surface reflectance
  – Sensitivity of sensor

• Color is not absolute:
  – An object may appear to have different color values at different distances, due to the intensity of reflected light

Page 12: Sensing / Perception (con't.)

• Better: a device that is sensitive to absolute wavelength

• Better: hue, saturation, intensity (or value) (HSV) representation of color
  – Hue: dominant wavelength; does not change with the robot's relative position or the object's shape
  – Saturation: lack of whiteness in the color (e.g., red is saturated, pink is less saturated)
  – Intensity/Value: quantity of light received by the sensor

[Figure: transforming RGB to HSV]
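
A minimal sketch of the standard RGB-to-HSV conversion (the textbook formula, not necessarily the exact algorithm behind the figure); hue comes out in degrees [0, 360), saturation and value in [0, 1]:

#include <math.h>

/* Convert 8-bit RGB to HSV. */
void rgb_to_hsv(int r8, int g8, int b8, double *h, double *s, double *v)
{
    double r = r8 / 255.0, g = g8 / 255.0, b = b8 / 255.0;
    double max = fmax(r, fmax(g, b));
    double min = fmin(r, fmin(g, b));
    double delta = max - min;

    *v = max;                              /* value: quantity of light received */
    *s = (max > 0.0) ? delta / max : 0.0;  /* saturation: lack of whiteness */

    if (delta == 0.0)                      /* gray: hue undefined, use 0 */
        *h = 0.0;
    else if (max == r)
        *h = 60.0 * fmod((g - b) / delta, 6.0);
    else if (max == g)
        *h = 60.0 * ((b - r) / delta + 2.0);
    else
        *h = 60.0 * ((r - g) / delta + 4.0);
    if (*h < 0.0)
        *h += 360.0;                       /* hue: dominant wavelength, in degrees */
}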

Page 13: Sensing / Perception (con't.)

Representation of HSV

[Figure: HSV color space. Hue: 0-360 (wavelength), from red at 0 through orange, green, cyan, and magenta; saturation: 0-1 (decreasing whiteness, from white and pastel toward saturated); intensity: 0-1 (increasing signal strength, “bright” at high intensity)]

Page 14: Sensing / Perception (con't.)

HSV Challenges for Robotics

• Requires special cameras and framegrabbers
• Very expensive equipment

• Alternative: use an algorithm to convert RGB: the Spherical Coordinate Transform (SCT)
  – Transforms RGB data to a color space that more closely duplicates the response of the human eye
  – Used in biomedical imaging, but not widely used for robotics
  – Much less sensitive to lighting changes
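
The slides give no SCT formulas, so the following is only a loosely related sketch: treating (R, G, B) as a 3D vector and keeping its two spherical angles while discarding the magnitude (which absorbs most brightness changes). The published SCT may differ in detail:

#include <math.h>

/* Sketch only: spherical-coordinate view of an RGB vector.
   The magnitude tracks overall brightness; the two angles are
   comparatively insensitive to lighting changes. */
void rgb_to_spherical(double r, double g, double b,
                      double *magnitude, double *angle_a, double *angle_b)
{
    *magnitude = sqrt(r * r + g * g + b * b);
    *angle_a = (*magnitude > 0.0) ? acos(b / *magnitude) : 0.0;
    *angle_b = atan2(g, r);
}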

Page 15: Sensing / Perception (con't.)

Region Segmentation

• Region segmentation: most common use of computer vision in robotics, with the goal of identifying a region in the image with a particular color

• Basic concept: identify all pixels in image which are part of the region, then navigate to the region’s centroid

• Steps:
  – Threshold all pixels which share the same color (thresholding)
  – Group those together, throwing out any that don't seem to be in the same area as the majority of the pixels (region growing)

Page 16: Sensing / Perception (con't.)

Example Code for Region Segmentation

for (i = 0; i < numberRows; i++)
  for (j = 0; j < numberColumns; j++)
  {
    if (((ImageIn[i][j][RED] >= redValueLow) && (ImageIn[i][j][RED] <= redValueHigh))
        && ((ImageIn[i][j][GREEN] >= greenValueLow) && (ImageIn[i][j][GREEN] <= greenValueHigh))
        && ((ImageIn[i][j][BLUE] >= blueValueLow) && (ImageIn[i][j][BLUE] <= blueValueHigh)))
      ImageOut[i][j] = 255;
    else
      ImageOut[i][j] = 0;
  }

Note range of readings required due to non-absolute color values
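
The robot then navigates to the region's centroid; a minimal sketch of computing it from the thresholded ImageOut above (same loop bounds assumed; the centroid variable names are illustrative):

/* Centroid of the segmented region: average row/column of all "on" pixels. */
int count = 0, centroidRow = 0, centroidCol = 0;
long sumRow = 0, sumCol = 0;

for (i = 0; i < numberRows; i++)
  for (j = 0; j < numberColumns; j++)
    if (ImageOut[i][j] == 255)
    {
      sumRow += i;
      sumCol += j;
      count++;
    }

if (count > 0)
{
  centroidRow = sumRow / count;   /* steer toward (centroidRow, centroidCol) */
  centroidCol = sumCol / count;
}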

Page 17: Sensing / Perception (con't.)

Example of Region-Based Robotic Tracking using Vision

Page 18: Sensing / Perception (con't.)

Another Example of Vision-Based Robot Detection Using Region Segmentation

Page 19: Sensing / Perception (con't.)

Color Histogramming

• Color histogramming:
  – Used to identify a region with several colors
  – A way of matching the proportion of colors in a region

• Histogram:
  – Bar chart of data
  – User specifies the range of values for each bar (called buckets)
  – Size of a bar is the number of data points whose value falls into the range for that bucket

• Example:

[Figure: example histogram with eight buckets: 0-31, 32-63, 64-95, 96-127, 128-159, 160-191, 192-223, 224-255]
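
A minimal sketch of filling such an eight-bucket histogram from one color plane (bucket width 32, as in the example; array names follow the earlier slides):

#define NUM_BUCKETS 8

int histogram[NUM_BUCKETS] = {0};

for (i = 0; i < numberRows; i++)
  for (j = 0; j < numberColumns; j++)
    histogram[ImageIn[i][j][RED] / 32]++;   /* 0-31 -> bucket 0, ..., 224-255 -> bucket 7 */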

Page 20: Sensing / Perception (con't.)

Color Histograms (con’t.)

• Advantage for behavior-based/reactive robots: histogram intersection
  – Color histograms can be subtracted from each other to determine whether the current image matches a previously constructed histogram
  – Subtract the histograms bucket by bucket; the difference indicates the number of pixels that didn't match
  – The number of mismatched pixels divided by the number of pixels in the image gives the percentage match = histogram intersection (see the sketch below)

• Useful for detecting affordances

• This is an example of a local, behavior-specific representation that can be directly extracted from the environment
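
A minimal sketch of the bucket-by-bucket comparison described above, written in the common "min" form of histogram intersection (counting the pixels the two histograms share per bucket); currentHist, storedHist, and totalPixels are assumed names, not from the slides:

/* Histogram intersection: fraction of pixels shared, bucket by bucket. */
int k, matched = 0;
double percentMatch;

for (k = 0; k < NUM_BUCKETS; k++)
  matched += (currentHist[k] < storedHist[k]) ? currentHist[k] : storedHist[k];

percentMatch = (double)matched / totalPixels;   /* 1.0 = perfect match */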

Page 21: Sensing / Perception (con't.)

Range from Vision

• Perception of depth from stereo image pairs, or from optic flow

• Stereo camera pairs: range from stereo

• Key challenge: how does a robot know it is looking at the same point in two images?
  – This is the correspondence problem

Page 22: Sensing / Perception (con't.)

Simplified Approach for Stereo Vision

• Given a scene and two images
• Find interest points in one image
• Compute the matching between images (correspondence)
• The distance between the points of interest in the image is called the disparity
• The distance of a point from the cameras is inversely proportional to the disparity
• Use triangulation and standard geometry to compute the depth map

• Issue: camera calibration; stereo vision requires known information on the relative alignment between the cameras to work properly
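
A minimal sketch of the triangulation step for a calibrated, rectified stereo pair; the focal length and baseline come from calibration, and the names are illustrative:

/* Depth from disparity: depth is inversely proportional to disparity. */
double depth_from_disparity(double focal_px, double baseline_m, double disparity_px)
{
    if (disparity_px <= 0.0)
        return -1.0;                              /* no valid correspondence */
    return focal_px * baseline_m / disparity_px;  /* depth in meters */
}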

Page 23: Sensing / Perception (con't.)

Example of Computing Depth from Multiple Images

• Robot team: 4 ATRV-mini robots (manufacturer: RWI/iRobot)
  – Named (after Roman Emperors): Augustus, Constantine, Theodosius, Vespasian

• Sensors:
  – 2 robots: PTZ camera
  – 2 robots: SICK laser
  – Compass/inclinometer
  – DGPS
  – Sonar

Page 24: Sensing / Perception (con't.)

Example Results of Depth Maps

[Figures: actual scene, depth map, and depth covariance for Augustus and Theodosius]