

COMPARATIVE ANALYSIS OF DIFFERENT RANGE SENSORS FOR MOBILE AND SOCIAL ROBOTS

Karen C. Aragon1, Michael Constantino2, Simon Parsons1,3 and Elizabeth Sklar1,3
1Department of Computer and Information Science, Brooklyn College, City University of New York, Brooklyn NY 11210
2Department of Computer Science, The College of Staten Island, City University of New York, Staten Island, NY 10314
3Department of Computer Science, The Graduate Center, City University of New York, 365 Fifth Avenue, New York NY 11210

Background Microsoft Kinect: a depth camera that uses infrared light to capture objects positioned in space. The depth sensor has a 640 x 480-pixel resolution and runs at 30 FPS (frames per second). It also includes an RGB camera and four microphones. Data was collected using ROS (Robot Operating System).

Hokuyo Aist URG 04LX-UG01: a light-weight laser sensor with a wide scan angle of 240°. It allows scanning with minimal influence from an object's color and reflectance. Data was collected using Player.

Introduction We present an accuracy analysis of depth data from Microsoft’s 3D Kinect sensor, Hokuyo’s 2D Aist URG laser sensor, and a pair of laser pointers. Both range sensors are mounted on top of an iRobot Create; the laser pointers are mounted on a Surveyor SRV-1 Blackfin.

Motivation We aim to point developers, researchers, and hobbyists to the most attractive features of the Kinect, with an emphasis on its applications in robot vision; the Kinect comes with a built-in infrared laser and is priced considerably lower than the URG laser.

Conclusion Results indicate that the Kinect is a low-cost alternative to the more expensive URG for complex tasks such as mapping and obstacle avoidance, with only a 0.26% difference in error between the two. However, the Kinect's disadvantage is its minimum range reading of 0.6 m, which makes it difficult to use for human-robot interaction or close-range mapping. The laser pointers were the least reliable, with an error of 18.7%; with a maximum distance reading of 140 cm, they also failed to read the corners of the arena.

Experiment To determine the error percentage, depth data from the Kinect, URG, and laser pointers were compared to actual distances. The devices were placed in the center of an enclosed 203.2 cm (80 inch) square arena, with measurements taken at 5° intervals.
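Since the sensors sit at the center of a square arena, the true wall distance changes with scan angle. As an illustrative sketch (not part of the poster), the ground-truth distance at each 5° interval can be computed from the arena geometry; the function name and the convention that 0° faces a wall head-on are our assumptions.

```python
import math

def wall_distance_cm(angle_deg, side_cm=203.2):
    """True distance from the arena center to the wall along a ray at
    angle_deg, for a square arena of the given side length
    (0 degrees assumed to face a wall head-on)."""
    half = side_cm / 2.0
    theta = math.radians(angle_deg)
    # The ray exits through whichever wall it reaches first; by symmetry
    # this is governed by the larger of |cos(theta)| and |sin(theta)|.
    return half / max(abs(math.cos(theta)), abs(math.sin(theta)))

# Ground-truth distances at 5-degree intervals around the arena
actual = {a: wall_distance_cm(a) for a in range(0, 360, 5)}
print(round(actual[0], 1), round(actual[45], 1))   # 101.6 cm (wall), 143.7 cm (corner)
```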

Results % error = (estimate - actual) / actual * 100
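A direct translation of the error formula above into code might look like the following sketch; the variable names and the example reading are ours.

```python
def percent_error(estimate_cm, actual_cm):
    """% error = (estimate - actual) / actual * 100, as defined above."""
    return (estimate_cm - actual_cm) / actual_cm * 100.0

# Example: a 103.5 cm reading against a true wall distance of 101.6 cm
print(round(percent_error(103.5, 101.6), 3))   # 1.87
```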

Funded by NSF CISE REU Site #08-51901, MetroBotics: Undergraduate robotics research at an urban public college.

Microsoft Kinect The depth camera utilizes the principle of structured light: the Kinect’s infra-red laser projects a grid of infrared dots onto the scene, and the Kinect's sensors detect the returning reflections and translate the resulting grid of dots into depth data (i.e., distances in feet or meters).

Hokuyo Aist URG The URG uses “time-of-flight” to measure depth: the laser emits an infra-red beam, and the time it takes for the reflected light to be received is measured and converted to distance.
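To make the time-of-flight idea concrete, here is a toy calculation (not taken from the poster); the constant and function names below are our own. The measured quantity is the round-trip travel time of the beam, so the range is half the round-trip distance at the speed of light.

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0

def tof_range_m(round_trip_s):
    """Range from a time-of-flight measurement: the beam travels out and
    back, so the one-way distance is half the round trip."""
    return SPEED_OF_LIGHT_M_S * round_trip_s / 2.0

# A target 2 m away returns the beam after roughly 13.3 nanoseconds
print(round(tof_range_m(13.34e-9), 3))   # ~2.0
```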

(Diagram: the Kinect's IR projector and IR camera.)

Distance = 0.1236 * tan(d / 2842.5 + 1.1860)

d = raw disparity (the value reported by the IR camera)
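The disparity-to-distance conversion above is easy to apply directly; the sketch below simply evaluates the quoted formula (the function name and the example disparity value are ours).

```python
import math

def kinect_depth_m(raw_disparity):
    """Convert a Kinect raw disparity reading to distance in meters using
    the calibration formula quoted above."""
    return 0.1236 * math.tan(raw_disparity / 2842.5 + 1.1860)

print(round(kinect_depth_m(744), 3))   # ~1.0 m for a mid-range disparity value
```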

Laser Pointers Range is calculated by inputting the number of pixels (d) between the two laser-pointer dots in the camera image into the equation given further below.

Microsoft Kinect - field of view: 58.2°, min range: 0.6 m, max range: 4-5 m
Hokuyo Aist URG - field of view: 240°, min range: 0.02 m, max range: 4 m


Range = -23 (500 / sqrt(d - 8))

Laser pointer max range: 1.4 m

(Figures: scan of a corner taken with the Kinect; scan of a corner taken with the URG. Diagram: the URG laser times the reflection off the object.)

Laser Pointers

Surveyor SRV-1 Blackfin: a 4-motor, tracked, wireless-network-controlled robot equipped with an on-board microprocessor, a WLAN 802.11b/g chip, a camera, and two laser pointers.


Errors: 1.72%, 1.787%, 1.813%, and 18.711% (laser pointers).