
Robotic Bin Picking
Wei Luo, Ed Richter, Dr. Arye Nehorai

Department of Electrical and Systems Engineering

Abstract

Adding a 3-D camera to a robotic system can improve throughput: the coordinates of the next object can be computed while the robot is busy with another task. In this project, we use the Kinect camera as our 3-D camera. By simultaneously sampling the 3-D positions of 64 points in the image, arranged as a cube, and their corresponding points in the robot's coordinate system, we can find the coordinate transformation matrix that defines a one-to-one mapping rule. For example, once the position where the robot should pick is found in the image, the corresponding picking position in the robot's coordinate system can be located.

Overall Algorithm

We use MATLAB to acquire image data from the Kinect camera (a 3-D camera), and then perform image processing and one-to-one mapping in MATLAB. After finding the location where the robot should pick, MATLAB sends this position to LabVIEW; through the communication between LabVIEW and the robot, the robot can carry out the expected pick.
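
As an illustration of this pipeline, here is a minimal MATLAB sketch, assuming the Kinect adaptor of the Image Acquisition Toolbox and a TCP connection to LabVIEW; the host address, port, message format, and pick coordinates are hypothetical example values, not the project's actual configuration.

    % Acquire one RGB and one depth frame from the Kinect.
    colorVid = videoinput('kinect', 1);   % device 1: RGB stream
    depthVid = videoinput('kinect', 2);   % device 2: depth stream
    rgb   = getsnapshot(colorVid);
    depth = getsnapshot(depthVid);

    % ... image processing and one-to-one mapping yield a pick point ...
    pickXYZ = [350 120 40];               % robot coordinates in mm (example values)

    % Hand the pick position to LabVIEW, which drives the robot.
    t = tcpclient('192.168.0.10', 5005);  % hypothetical LabVIEW host and port
    write(t, uint8(sprintf('%.1f %.1f %.1f\n', pickXYZ)));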

One-to-one mapping

Image Alignment:

The Kinect camera generates an RGB image and a depth image simultaneously, so we can align the RGB image to the depth image. As a result, every point in the aligned RGB image carries 3-D information (X, Y, and depth).

[Figure: RGB image, depth image, and aligned RGB image]
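
One common way to perform this registration is sketched below under a pinhole camera model; the depth intrinsics (fx_d, fy_d, cx_d, cy_d), color intrinsics (fx_c, fy_c, cx_c, cy_c), and depth-to-color extrinsics (R, T) are assumed to come from calibration and are placeholders, not values from this project.

    % Back-project every depth pixel to 3-D, move it into the color camera
    % frame, and re-project it into the RGB image.
    [h, w] = size(depth);
    [u, v] = meshgrid(1:w, 1:h);
    Z = double(depth(:))';                      % depth in mm, one column per pixel
    X = (u(:)' - cx_d) .* Z / fx_d;             % back-projection with depth intrinsics
    Y = (v(:)' - cy_d) .* Z / fy_d;
    P  = R * [X; Y; Z] + repmat(T, 1, h*w);     % depth frame -> color frame
    uc = round(fx_c * P(1,:) ./ P(3,:) + cx_c); % column in the RGB image
    vc = round(fy_c * P(2,:) ./ P(3,:) + cy_c); % row in the RGB image
    % Depth pixel (v(k), u(k)) now pairs with RGB pixel (vc(k), uc(k)).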

Sampling:

MATLAB automatically samples each target point in the image together with its corresponding point in the robot's coordinate system. After choosing a template in the RGB image, edge detection is applied. We then compute the 2-D cross-correlation between the target image and the template and take the point with the highest correlation score.
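
A minimal sketch of this step with Image Processing Toolbox calls; here 'template' stands for the sub-image chosen from the RGB image.

    % Edge maps of the scene and the chosen template.
    edges  = edge(rgb2gray(rgb), 'canny');
    edgesT = edge(rgb2gray(template), 'canny');

    % 2-D normalized cross-correlation; the peak marks the best match.
    c = normxcorr2(double(edgesT), double(edges));
    [~, idx] = max(c(:));
    [peakY, peakX] = ind2sub(size(c), idx);

    % normxcorr2 pads by the template size, so shift back to the
    % top-left corner of the match in image coordinates.
    targetY = peakY - size(edgesT, 1) + 1;
    targetX = peakX - size(edgesT, 2) + 1;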

Transformation Matrix:

The MATLAB function "absor" (an absolute-orientation solver available on the MATLAB File Exchange) finds the transformation matrix from the 3-D point pairs we pass to it.
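
A sketch of how absor might be called here, with A and B as 3x64 matrices of the matched camera and robot points; the point variables x, y, z are placeholders.

    % Fit the rigid transformation that maps camera points A onto robot points B.
    reg = absor(A, B);      % returns rotation reg.R, translation reg.t,
    M   = reg.M;            % and a 4x4 homogeneous matrix reg.M

    % Map any picking position from camera to robot coordinates.
    pCam   = [x; y; z; 1];  % point found in the Kinect camera frame
    pRobot = M * pCam;      % the same point in the robot frame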

Error Checking:

By comparing the points transformed from Kinect camera coordinates to robot coordinates against the points sampled directly in robot coordinates, we found some tolerable errors. From an error estimate on our data: the error is zero-mean, and the largest deviations are 5 mm in x, 4.3 mm in y, and 4.3 mm in z.
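
The check itself reduces to a few lines, sketched here with R and t taken from the absor fit above and B the 3x64 points sampled directly in robot coordinates.

    % Residuals between transformed camera points and sampled robot points.
    predicted = R * A + repmat(t, 1, size(A, 2));  % camera points in the robot frame
    err       = predicted - B;                     % per-axis residuals in mm
    meanErr   = mean(err, 2);                      % close to zero per our data
    maxDev    = max(abs(err), [], 2);              % largest |x|, |y|, |z| deviations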

Target Matching

By cross-correlating the template with the targets, we can find the location with the highest correlation score, which is usually the expected location. After identifying the shapes and locations of the targets, we obtain their coordinates in the images and transform them to robot coordinates. Once the robot has the coordinates, it can move to that location and pick up the object.

Problem:

In this project, the targets are marking pens. If we apply 2-D cross-correlation between the template and the edge images, some targets are always missed, because the targets appear at different sizes in different positions: targets placed close to the camera look larger in the image, while targets placed farther away look smaller. If we instead use 3-D cross-correlation between the 3-D region transformed from the image and a 3-D template, it takes several hours to produce a usable result.

Solution:

By transforming the whole region of the picture where the targets sit into robot (real-world) coordinates, the size of the targets no longer changes with their position. We then project the transformed 3-D picture onto the robot's Y-Z plane; as a result, the targets always appear at the same size.

[Figure: 1. RGB image; 2. depth image after choosing the working area; 3. transformed to 3-D robot coordinates; 4. projected onto the robot Y-Z plane]
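
A sketch of the projection step, assuming Xc, Yc, Zc hold the camera coordinates of the working-area pixels and M is the 4x4 matrix from absor; the 1 mm grid spacing is an assumption, not a value from the project.

    % Transform the working area to robot coordinates and keep only Y and Z.
    P  = M * [Xc(:)'; Yc(:)'; Zc(:)'; ones(1, numel(Xc))];
    Yr = P(2, :);
    Zr = P(3, :);

    % Rasterize the Y-Z points onto a fixed 1 mm grid; target size is now
    % independent of distance from the camera.
    yi = round(Yr - min(Yr)) + 1;
    zi = round(Zr - min(Zr)) + 1;
    proj = zeros(max(zi), max(yi));
    proj(sub2ind(size(proj), zi, yi)) = 1;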

Multiple Picking:

Our algorithm can effectively recognize any standing target in the working area. For each pick, we simply locate the target nearest to the robot's home position, send the robot there to pick it up, and place it in a bin.
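
The selection step is a nearest-neighbor query, sketched below with 'targets' as an Nx3 list of detected positions in robot coordinates and a hypothetical home position.

    % Pick the detected target closest to the robot's home position.
    home = [0 0 0];                              % hypothetical home coordinates
    d = sqrt(sum((targets - repmat(home, size(targets, 1), 1)).^2, 2));
    [~, k] = min(d);
    nextPick = targets(k, :);                    % passed on to LabVIEW for picking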