Machine Vision Camera
A machine vision camera is a digital camera that captures an image using a light-sensitive sensor and sends the image to a processor or processing unit for image processing and analysis. The camera is one of the most important pieces of hardware in a machine vision system.
Basic Mechanism:
The sensor used in the camera is made up of a number of pixels. Each pixel captures light and converts photons (quanta of electromagnetic energy) into electrical charges. The amount of electrical charge indicates the light intensity/brightness falling on that pixel. These electrical charges are then converted into an electronic image, either within the camera itself or in the frame grabber, and displayed on the monitor.
Types of sensor:
There are basically two types of light sensors (imagers) used within all generally available digital cameras at present:
• CCD - Charge Coupled Device
• CMOS - Complementary Metal Oxide Semiconductor
Both CCD and CMOS are pixelated metal oxide semiconductors (photo-diodes) made from silicon. They have basically the same sensitivity within the visible and near-infrared spectrum, they both convert the light that falls onto them into electrons by the same process, and they can be considered basically similar in operation. Both CMOS and CCD imagers can only sense the level/amount of light, not its color.
Neither of the names "CCD" nor "CMOS" has anything to do with image sensing: "Charge Coupled Device" describes the technology used to move and store the electron charge, while "Complementary Metal Oxide Semiconductor" is the name of the technology used to make a transistor on a silicon wafer.
The fundamental difference between the two technologies is the architecture of the imager within the chip and camera. The difference is less "what" they are and more "how" they are used. This is because the imager not only measures the light that falls on it but must also undertake a host of other electronic tasks; where and how these tasks are handled is what differentiates the two types of sensor.
 
 Block diagram of a CCD sensor 
Within a CCD imager all image processing is done off-chip, away from the sensor, allowing for a versatile approach to sensor and camera design. Only the "Photon-to-Electron" conversion is done within the pixel, leaving the maximum amount of space within each pixel for capturing image information. The "Electron-to-Voltage" conversion is done elsewhere on the chip, and data therefore still leaves the CCD in an analogue form.
 
With the CMOS imager both the "Photon-to-Electron" conversion and the "Electron-to-Voltage" conversion are done within the pixel, leaving less room for the light-receptive part of the sensor. This means the CMOS chip has less area to actually receive the light, and normally some form of micro-lens is needed to capture the light the pixel would otherwise miss.
The extra sensing area within the CCD imager allows a CCD-based camera to capture more light, which will normally provide higher image quality than a camera based on CMOS. On the other hand, the CMOS chip has the advantage of having everything it needs to work within the chip, so it is able to provide a "camera-on-a-chip", whereas the CCD camera needs many other secondary chips (from 3 to 8) as well as the imager to allow it to work. A CCD produces analogue signals that are moved away from the chip before they are digitized and converted to 1's and 0's by an external dedicated chip. A CMOS sensor, on the other hand, undertakes the digitization within the chip by undertaking the image-capture process within each pixel.
Up until quite recently the CCD-based camera was considered the "de facto" standard within the digital market, and a great deal of development has been invested in producing these sensors to maximize their quality potential. They can offer high resolution (depending on how they are utilized) and high quality, albeit at a fairly high price.
The "$%& based camera is generally much simpler to manufacture as there are far fewer components due to most of the processing technology being included within the
chip. At the time of writing both the "$%& and ""# sensors have their strengths and
wea-nesses and developers have overcome most of the limitations of the early "$%& sensor.
A comparison of CCD and CMOS image sensors:
CCD                                       | CMOS
Long history of high quality performance  | Lower performance in the past, but now providing comparable quality
Low noise                                 | Noisier, but getting better quickly
Well established technology               | Newer technology
High power consumption                    | Relatively low power consumption
Moderately reliable                       | More reliable due to chip integration
Small pixel size                          | Larger pixel size
Needs lots of external circuitry          | All circuitry on chip
High fill factor                          | Lower fill factor
Analogue output, digitized off the chip   | Digital output, digitized on the chip
Camera Parameters:
• Resolution
• Speed
• Interface standard
• Color format
• Trigger
• Optical Format of Sensor
Resolution of a Camera:
In general, the resolution of the camera refers to the number of pixels in the image. A pixel is the smallest picture element of the image and of the imaging sensor, and the image is made up of these pixels arranged in a matrix format. The number of bits each pixel occupies is called the bit-depth; the bit-depth depends on the output format of the camera. The shape of the pixels is important in a machine vision system: it is always better to go with a square pixel. The resolution is given as (No. of columns x No. of rows), e.g. 640 x 480, 2048 x 1536, 1024 x 768, etc. The resolution for a machine vision system is calculated based on the smallest feature that has to be detected by the system, while considering a lot of other parameters like contrast level, lighting variations, pixel errors, etc.
Speed:
The rate at which the camera can send out images is defined as the speed of the camera. The speed of the camera largely depends on the speed of the interface, the sensor speed, the resolution of the camera, and the bit-depth (number of bits per pixel) of the camera.
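As a rough worked example, the bandwidth a camera demands from its interface can be estimated from these same parameters; the numbers below are illustrative, not from any particular camera:

```python
# Rough interface-bandwidth estimate; all values are assumed examples.
width, height = 1024, 768   # resolution in pixels
bit_depth = 8               # bits per pixel
fps = 60                    # frames per second

bytes_per_frame = width * height * bit_depth / 8
bandwidth_mb_s = bytes_per_frame * fps / 1e6
print(f"Required bandwidth: {bandwidth_mb_s:.1f} MB/s")  # ~47.2 MB/s
```

Such a camera would just fit within the roughly 60 Mbytes/s of USB described below, but a higher resolution or frame rate would call for an interface like Camera Link or GigE.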
rame !ra""er:
A frame grabber is an electronic plug-in board installed in a computer that is used to interface the camera and the computer. It takes the image data from the camera, passes it to the PC, and also provides signals to control camera parameters like exposure time, trigger, etc. A frame grabber has on-board memory and does not involve the computer's processor in the data transfer from the camera; hence it uses very little of the RAM present in the computer, allowing the computer to concentrate on processing the images rather than on communicating with the camera. Some computers have ports that allow the camera to be connected directly, without the need for a frame grabber; in such conditions the communication between the camera and the computer takes up a lot of the processing capability and RAM space of the computer, thereby reducing the RAM and processor availability for processing images or any other work.
 
Driver:
A driver is a software program which takes care of the communication between a hardware device and the computer. This software program lies in the inner layers of the operating system and communicates with the software and hardware that we use; in general, it acts as an intermediary between the two. Only because of this software program, also called a device driver, is our software able to access the hardware.
For cameras too, drivers are needed to communicate with the computer. Different drivers are needed for different kinds of interfaces, and they can also change from one vendor to another.
Interface standard:
There are many kinds of interfaces available to connect the camera to the computer, directly or through a frame grabber. Some standard interfaces are:
• Camera Link
• USB
• GigE
• FireWire (IEEE 1394)
• Analog (PAL/NTSC)
Camera Link:
"amera 8in- is a serial communication protocol designed for computer vision applications. t was designed for the purpose of standardi1ing scientific and industrial
video products including cameras cables and frame grabbers. The standard is maintained
and administered by the Automated maging Association or AA the global machine vision industry+s trade group. The camera lin- has three modes B 'ase mode $edium
mode and :ull mode. These modes differ in the speed with which datas are transferred.
The speed of the camera lin- interface ranges from =>6$bytes/sec for the base mode to
;>6 $bytes/sec for the full mode. %ne maCor disadvantage of camera lin- interface is that there is no power transmitted through the camera lin- cable. 0ower has to be given
seperately to the camera.
 
USB:
USB (Universal Serial Bus) is the most widely used interface for computer peripherals, and machine vision has also adopted it for communication between the camera and the computer. USB can support up to 60 Mbytes/s. The advantage of the USB interface is that it is very inexpensive and can also provide power to the camera. Its major disadvantage is that it uses much of the CPU's processing capability for data transfer, and the cables are not of industrial standard, hence are prone to noise interference.
GigE:
The GigE standard for machine vision was developed based on the Gigabit Ethernet standard by a group of machine vision component manufacturers under the supervision of the Automated Imaging Association (AIA). The GigE interface offers the longest cable length (about 100 m) without the use of repeaters or boosters. Data is transferred at a rate of 125 Mbytes/s, and the communication uses the standard TCP or UDP protocols. The GigE interface does not transmit power to the camera.
FireWire (IEEE 1394):
The IEEE 1394 interface is a serial bus interface standard for high-speed communications and isochronous real-time data transfer, frequently used by personal computers as well as in digital audio, digital video, automotive and aeronautics applications. IIDC (Instrumentation & Industrial Digital Camera) is the FireWire data format standard for live video. The system was designed for machine vision systems, but is also used for other computer vision applications and for some webcams. Although they are easily confused, since they both run over FireWire, IIDC is different from and incompatible with the ordinary DV (Digital Video) camcorder protocol. The speed of this interface is around 400 Mbits/s for the IEEE 1394a standard and 800 Mbits/s for the IEEE 1394b standard. FireWire can carry power, and hence can supply the camera.
Analog Interface PAL/NTSC:
Analog interfaces are generally prone to noise, and the standard interfaces cannot support high-resolution images; they generally support VGA resolution for both color and monochrome images. As mentioned earlier, the digitization of the images is done on the frame grabber and not in the camera. The data transfer rate of analog interfaces is 62 Mbits/s. Analog interfaces are becoming outdated and are slowly being replaced by digital interfaces.
Color Format:
The output of the camera can either be a color image or a monochrome image. Generally the sensors used in cameras measure only the intensity of the light falling on them, not its color; hence, to get a color image, either a filter or a beam splitter is used. In general, the majority of applications can be addressed using just a monochrome camera.
Monochrome camera:
 
A monochrome camera gives out an image in shades of gray (intensity information); no color information is given out. The photon-to-voltage conversion in the CCD or CMOS is used as the image signal, which is converted to the digital domain using an ADC (on-chip in the CMOS). Each pixel is then represented using a binary value, usually 8 bits: the value 0 corresponds to no light (the "dark level") and 255 corresponds to saturation (the "white level").
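A minimal sketch of that mapping, assuming an analogue signal between a dark-level voltage and a saturation voltage (both values hypothetical):

```python
import numpy as np

dark_level_v = 0.1   # assumed dark-level voltage
saturation_v = 1.0   # assumed saturation voltage

def adc_8bit(signal_v):
    """Map an analogue signal onto the 8-bit range: 0 = dark, 255 = white."""
    norm = (signal_v - dark_level_v) / (saturation_v - dark_level_v)
    return np.clip(np.round(norm * 255), 0, 255).astype(np.uint8)

print(adc_8bit(np.array([0.05, 0.55, 1.2])))  # -> [  0 128 255]
```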
Bayer Decoded Image:
A Bayer image is a color image captured using a sensor fitted with a Bayer filter. A Bayer filter is a color filter array (CFA) that arranges RGB color filters on a grid of photosensors. Its particular arrangement of color filters is used in most single-chip digital image sensors to create a color image. The filter pattern is 50% green, 25% red and 25% blue, hence it is also called RGBG or another permutation such as GRGB.
 A Bayer Filter Pattern
 
Representation of a Bayer filter on a sensor
The sensor array captures the light as passed by the filter. Each pixel, as shown, is sensitive to only one color component; hence the image carries "encoded" color information. The actual color can be decoded by one of the many Bayer decoding (demosaicing) algorithms.
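With OpenCV, for example, Bayer decoding is a single conversion call. This is a minimal sketch: the file name and the BG pattern below are assumptions, and the pattern must match the sensor's actual filter arrangement:

```python
import cv2

raw = cv2.imread("bayer_frame.png", cv2.IMREAD_GRAYSCALE)  # single-channel raw frame
bgr = cv2.cvtColor(raw, cv2.COLOR_BayerBG2BGR)             # demosaic to 3 channels
cv2.imwrite("decoded_frame.png", bgr)
```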
RGB Color - 3 chip sensor:
Light entering from the lens is split using a trichroic prism assembly. The prism assembly splits the image into its color components: red, green and blue. Each of these is projected onto a separate sensor, and the outputs of the 3 sensors are then combined to produce a color image.
Three!""# cameras are generally regarded to provide superior image quality to
cameras with only one ""#. 'y ta-ing a separate reading of red green and blue values for each pixel three!""# cameras achieve much better precision than single!""#
cameras. Almost all single!""# cameras use a bayer filter which allows them to detect
only one!third of the color information for each pixel. The other two!thirds must be interpolated with a demosaicing algorithm to +fill in the gaps+.
  Three!""# cameras are generally much more expensive than single!""#
 
Trigger:
A trigger is used to tell the camera when to take an image. The trigger can be given in software or by connecting a trigger source directly to the camera. When the trigger source is wired directly to the camera, the trigger's electrical ratings have to be adhered to. For some cameras the trigger can be given only through software.
Optical formats of Sensor:
The imaging sensor (""# @ "$%&) used in machine vision applications has
some standard formats. The formats are based on the 2idicon tubes of the 5D>6Hs. The
diagonal value of the sensor is roughly =/3rd of this format. Typically machine vision cameras will have 5/<F 5/3F 5/=F =/3F and 5F sensor format in an aspect ratio of <3.
The imaging sensor format has to be equal to or less than the lens format to utili1e all the
 pixels in the sensor.
Some image sensor formats:
 
Format         1/3.6"  1/3.2"  1/3"   1/2.7"  1/2.5"  1/2"   1/1.8"  1/1.7"  2/3"   1"     4/3"
Diagonal (mm)  5.00    5.68    6.00   6.72    7.18    8.00   8.93    9.50    11.0   16.0   21.6
Width (mm)     4.00    4.54    4.80   5.37    5.76    6.40   7.18    7.60    8.80   12.8   17.3
Height (mm)    3.00    3.42    3.60   4.04    4.29    4.80   5.32    5.70    6.60   9.6    13.0
Area (mm²)     12.0    15.5    17.3   21.7    24.7    30.7   38.2    43.3    58.1   123    225
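Since the aspect ratio is 4:3, the width, height and area in the table follow from the diagonal alone (a 3-4-5 triangle). A quick check for the 1/2" column:

```python
# 4:3 aspect ratio => width = 4/5 of the diagonal, height = 3/5 of the diagonal.
diagonal_mm = 8.00                 # the 1/2" format
width_mm = diagonal_mm * 4 / 5     # 6.40 mm
height_mm = diagonal_mm * 3 / 5    # 4.80 mm
area_mm2 = width_mm * height_mm    # ~30.7 mm^2
```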
Imaging Terminology:
Sensitivity:
The amount of light that a sensor's photodiodes can collect is termed its sensitivity. The more light that is collected, the stronger the signal, and therefore the better the image quality. The surface area of the pixel is directly proportional to the sensitivity, and better sensitivity implies a better (less noisy) image.
Signal to Noise Ratio (SNR):
The quantity of noise present at a given signal level is expressed as the Signal to Noise Ratio. A high ratio gives better picture quality than a low ratio.
Dynamic Range:
The ratio of the amount of light it takes to saturate the sensor to the least amount of light detectable above the background noise is termed the dynamic range. A good dynamic range allows very bright and very dim areas to be seen simultaneously, and it is used to describe the number of gray levels that can be resolved.
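Both figures are normally quoted in decibels. A short sketch of the usual calculation, using assumed example values rather than any real sensor's datasheet numbers:

```python
import math

signal = 1000.0       # mean signal level (electrons), assumed
noise = 10.0          # RMS noise (electrons), assumed
full_well = 20000.0   # saturation capacity (electrons), assumed
noise_floor = 5.0     # read-noise floor (electrons), assumed

snr_db = 20 * math.log10(signal / noise)                     # 40.0 dB
dynamic_range_db = 20 * math.log10(full_well / noise_floor)  # ~72.0 dB
print(f"SNR: {snr_db:.1f} dB, dynamic range: {dynamic_range_db:.1f} dB")
```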
 
Responsivity:
Responsivity is a measure of the input-output gain of the sensor. It refers to the amount of signal the sensor delivers per unit of input optical energy. CMOS imagers are marginally superior to CCDs in general, because gain elements are easier to place on a CMOS image sensor.
Spectral Response:
The spectral response gives the response of the sensor across different wavelengths of light. The requirements for optical filters are judged based on this factor. For example, if the sensor responds well to IR rays, then to avoid their interference we use IR filters on the camera or lens; this eliminates IR rays from affecting the image.
 
Fill Factor:
The percentage of a pixel devoted to collecting light is called the pixel's fill factor. CCDs have a 100% fill factor, but CMOS sensors have much less; this is because some of the related circuits are built into the CMOS sensor. The lower the fill factor, the less sensitive the sensor is, and relatively longer exposure times must be used.
Blooming:
Blooming happens when more photons arrive than a pixel can receive. The pixel overflows and the excess electrons spill into adjacent pixels. Blooming is similar to overexposure in film photography, except that in digital imaging the result is a number of vertical and/or horizontal streaks appearing from the light source in the picture. Blooming happens when imaging very bright objects or light sources.
Effects of Blooming
Windowing:
One unique capability of CMOS technology is the ability to read out only a portion of the image sensor. This allows elevated frame or line rates for small regions of interest. CCDs generally have no or only limited windowing abilities. Windowing is an adjustable parameter (for CMOS cameras) and can be set by the user.
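As a first approximation the read-out time scales with the number of rows read, so the frame-rate benefit of a window can be estimated as below (numbers are illustrative):

```python
# Approximate frame-rate gain from windowing, assuming read-out time
# is proportional to the number of rows.
full_height, full_fps = 1536, 30
roi_height = 256

roi_fps = full_fps * (full_height / roi_height)
print(f"Approximate ROI frame rate: {roi_fps:.0f} fps")  # ~180 fps
```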
Brightness:
Brightness level is an adjustable parameter used to increase the overall brightness of the image. It is an additive factor, whereby a constant is added to the intensity value of every pixel.
Gain:
Gain is likewise an adjustable parameter, used to increase the contrast. It is a multiplicative factor, whereby the intensity values of the pixels are multiplied by a constant; the gain parameter effectively adjusts the responsivity factor. Increasing the gain also increases the noise level in the image.
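The two adjustments can be sketched in a few lines of NumPy: brightness adds a constant, gain multiplies by one, and both must clip at the 8-bit limits (the sample values are arbitrary):

```python
import numpy as np

image = np.array([[10, 100, 200]], dtype=np.uint8)  # sample 8-bit pixels
brightness = 30   # additive offset
gain = 1.5        # multiplicative factor

brighter  = np.clip(image.astype(np.int32) + brightness, 0, 255).astype(np.uint8)
amplified = np.clip(image.astype(np.float32) * gain, 0, 255).astype(np.uint8)
print(brighter)   # [[ 40 130 230]]
print(amplified)  # [[ 15 150 255]]
```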
White Balancing:
White balancing is used in machine vision only for Bayer-decoded color images. In a Bayer image the number of pixels carrying green values is greater than the number carrying blue or red, so the image can appear with a greenish tint; to remove or reduce this tint we use white balancing. White balancing is done by adjusting the gain individually for each of the three color planes. Please note that not all cameras have a white balancing feature. Also, white balancing is only an aesthetic feature; it does not significantly improve the performance of the system.
Images before white balancing and after white balancing
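A minimal sketch of the per-plane gain adjustment; the gain values here are placeholders, since in practice they would be derived from a neutral (gray or white) reference in the scene:

```python
import numpy as np

def white_balance(rgb, gains=(1.3, 1.0, 1.2)):
    """Scale each color plane of an (H, W, 3) RGB image by its own gain."""
    balanced = rgb.astype(np.float32) * np.array(gains)  # per-channel gains (R, G, B)
    return np.clip(balanced, 0, 255).astype(np.uint8)
```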
Exposure Time / Integration Time:
Exposure time is the period for which the sensor is open/sensitive to the external light. Typically the exposure time in machine vision applications will be in microseconds or milliseconds. The exposure time directly affects the amount of light falling on the sensor; for moving components the exposure time has to be kept short to avoid motion blur. Integration time is the sum of the exposure time and the time taken to read out the image from the sensor.
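A worked example of sizing the exposure for a moving component, keeping the blur under one pixel (all numbers assumed):

```python
fov_mm = 100.0        # field-of-view width covered by the sensor
pixels = 1024         # pixels across that width
speed_mm_s = 500.0    # component speed

mm_per_pixel = fov_mm / pixels              # ~0.098 mm per pixel
max_exposure_s = mm_per_pixel / speed_mm_s  # exposure for <1 pixel of blur
print(f"Max exposure: {max_exposure_s * 1e6:.0f} us")  # ~195 us
```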
Area Scan & Line Scan Cameras:
Cameras are classified based on whether the effective pixels are arranged in a matrix format (2D array) or in a single line on the sensor. If they are arranged in a matrix (similar to consumer digital cameras) it is called an area scan camera; if they are arranged in a 1D fashion it is called a line scan camera.
An Area scan sensor and a Line scan sensor
Area Scan Camera:
Area scan cameras in machine vision are used for inspecting components that are stationary or relatively slow moving. Setting up an area scan camera is relatively easy compared to a line scan camera. These cameras are widely used in industry and can address almost all applications except those where the component is moving at a very high speed or is rotating. They have two kinds of electronic shutters: rolling shutter and global shutter. The rolling shutter is used to take images of stationary or very slow moving objects: integration of the pixels (exposure to light) happens one row at a time, one by one across the entire array, hence there is a possibility of skew if the object moves fast in front of the camera. In general, rolling shutter cameras have a better fill factor. In a global shutter, all the pixels are exposed at the same instant, so fast-moving objects can be captured without skew.
 
 
Line Scan Cameras:
These cameras are almost solely used in industrial settings to capture images of a continuous stream of moving material. Data comes from the line scan camera at a fixed cadence: the camera scans a line, waits, and repeats. The data is commonly processed by a computer, which collects the one-dimensional line data and builds a two-dimensional image; the collected two-dimensional image data is then processed by image-processing methods for industrial purposes. The line rate (frequency) of the line scan camera has to be matched to the motion of the material to get a proper image: if the line rate is not set properly, the component in the image will look either elongated or shrunk. Typically a line scan camera needs more illumination, but over a concentrated area.
How an image is constructed using a line scan camera
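The assembly of a 2-D image from successive line reads can be sketched as below; grab_line() is a hypothetical stand-in for the real acquisition call, which in practice would be driven by one trigger (e.g. an encoder pulse) per line:

```python
import numpy as np

def grab_line(width=2048):
    """Hypothetical stand-in: return one line of pixels from the camera."""
    return np.random.randint(0, 256, width, dtype=np.uint8)

lines = [grab_line() for _ in range(1000)]  # one read per line trigger
image = np.vstack(lines)                    # 2-D image, shape (1000, 2048)
```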
How to choose a Machine Vision Camera:
The most important factor to remember when choosing a machine vision camera is the resolution: the number of pixels needed for that particular application. To calculate the number of pixels, one must know the smallest feature that has to be detected on the component (for a presence/absence kind of application) or the measurement accuracy needed (for a dimensioning kind of application). Two pixels are enough to resolve a feature under ideal conditions, i.e. when the contrast levels are high and there are no lighting variations, positional variations, etc. But real-world conditions are far from ideal; hence we take eight to ten pixels to resolve a feature in an image. Once the number of pixels needed to resolve the smallest feature is known, the resolution can be calculated using the following formulas:
No. of pixels in X plane = [Length of the FOV (in mm) x 8] / smallest feature (in mm)
No. of pixels in Y plane = [Width of the FOV (in mm) x 8] / smallest feature (in mm)
Here the 8 stands for the number of pixels needed to detect the smallest feature.
Therefore the resolution is given by:
Resolution = No. of pixels in X plane x No. of pixels in Y plane
As mentioned earlier, the cameras available in the market come in standard resolutions like 640 x 480, 1024 x 768, 2048 x 1536, etc.; after this calculation, choose the standard resolution that is closest to your calculated value.
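A worked example of the formula, with assumed numbers for the FOV and feature size:

```python
fov_length_mm = 120.0        # length of the FOV (X), assumed
fov_width_mm = 90.0          # width of the FOV (Y), assumed
smallest_feature_mm = 1.5    # smallest feature to detect, assumed

px_x = fov_length_mm * 8 / smallest_feature_mm   # 640
px_y = fov_width_mm * 8 / smallest_feature_mm    # 480
print(f"Needed: {px_x:.0f} x {px_y:.0f} -> choose 640 x 480")
```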
Similarly, for a dimensioning kind of application, each pixel should cover approximately one-tenth of the required accuracy. Using this principle and the above formula, the resolution can be calculated.
The next important factor is the speed of the application. If the component is moving in front of the camera at low speed, even a rolling shutter camera can address the application; if the component is moving at high speed, only a global shutter camera will work. If the component is moving at a very high speed, where even a global shutter camera cannot address the application, then we have to use a line scan camera.
The other things to keep in mind are the camera interface standard, color or monochrome image output, provision for an external trigger, and the optical format of the sensor. The optical format of the camera should be equal to or less than that of the lens, and the lens mount should match that of the camera.
Common errors and Trou"le shooting:
The most common errors people make with machine vision cameras occur during installation. These errors include improper installation of the drivers, non-rigid mounting of the camera, improper selection of the lens, insufficient lighting, improper installation of the frame grabber card, loosely connected cables, an improperly connected power connector (if present), and triggers not configured to the needed specification.