

Measurement 49 (2014) 138–147

Contents lists available at ScienceDirect

Measurement

journal homepage: www.elsevier.com/locate/measurement

Camera based precision measurement in improving measurement accuracy

0263-2241/$ - see front matter © 2013 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.measurement.2013.11.057

* Corresponding author. Tel.: +81 928023269. E-mail address: [email protected] (Md. Hazrat Ali).

Md. Hazrat Ali *, Syuhei Kurokawa, Kensuke Uesugi
Department of Mechanical Engineering, Faculty of Engineering, Kyushu University, 819-0395 Fukuoka, Japan


Article history:
Received 5 September 2013
Received in revised form 14 October 2013
Accepted 28 November 2013
Available online 4 December 2013

Keywords:
Measurement
Stylus
Probe
Camera
Tracking
Machine vision

This paper presents a vision-based software system developed to improve precision measurement in machining technology. Precision measurement, monitoring and control are very important in manufacturing technology. The application of a camera, or vision, is very useful for increasing the accuracy of the measurement system, and automatic control is also vital for improving measurement performance. During measurement of a gear profile, human monitoring can sometimes be dangerous, because this is a stylus-contact scanning system, the stylus is very small and thin, and the probe moves at a maximum speed of 10 mm/s. The existing methods for gear measurement are either time consuming or expensive. This paper presents a successful implementation of a vision system in precision engineering which saves time and increases the safety of the measurement system while improving measurement performance. A color-based stylus tracking algorithm is implemented in order to achieve better performance of the developed system. Stylus-tracking-based measurement is the key issue of the present research.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

The measurement of images is often a principal method for acquiring scientific data and generally requires that features or structure be well defined, either by edges or by unique color, texture, or some combination of these factors. In our previous research, a vision-based measurement system was presented [1]. To investigate mechanical damage of apples, a vision-based detection system was discussed and elaborated; based on an image processing application, checking of the apple's skin damage was accomplished [2].

RGB is often considered the standard in many programming languages, and it is used in many important color image data formats, such as JPEG and TIFF. True color consists of 24 bits of RGB color, 1 byte per color channel. To enhance performance, a 32-bit or 4-byte representation is often used, because various commands are optimized for groups of 4 bytes or more [3].

Another important issue described is the impact of measurement performance in automatic mode on the quality of the numerical image of a scanned surface, from the standpoint of accuracy and the number of data points collected within the shortest time. The discussion includes an analysis of conditions related to the measurement work, such as the process of preparing the model and measurement equipment, as well as the data processing capacity [4].
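The 24-bit-RGB-in-a-32-bit-word representation mentioned above can be sketched directly; this is an illustrative Python snippet (the helper names `pack_rgb` and `unpack_rgb` are invented here, not taken from any cited system), showing why one pixel fits a word-aligned 4-byte group:

```python
def pack_rgb(r, g, b):
    """Pack three 8-bit channels into one 32-bit word (0x00RRGGBB);
    the top byte is unused padding, which keeps pixels word-aligned."""
    return (r << 16) | (g << 8) | b

def unpack_rgb(pixel):
    """Recover the 8-bit R, G, B channels from a 32-bit pixel word."""
    return (pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF, pixel & 0xFF

p = pack_rgb(205, 20, 30)
assert unpack_rgb(p) == (205, 20, 30)   # round-trip is lossless
```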

Analysis of the accuracy of gears at different stages of the technological process requires control of geometrical surface parameters with the use of coordinate measuring methods [5–7]. In another case, a low-quality frame grabber results in poor performance and instability of the entire system; for this reason, a commercial frame grabber called myVision USB is used to perform the desired activities in [8]. In 2009, Fukuda and others presented a cost-effective (a term previously introduced by Lee and Shinozuka [10]) vision-based displacement measurement system applied to large-size civil engineering structures,


Fig. 1. Gear measurement system with vision [20].

Table 1
Threshold results.

Threshold values (TV)   Resulting image
0                       Not visible (full blank)
10                      Edge detected
25                      Under dilation
112                     Visible image
156                     Slightly dark
212                     Very dark
250                     Not visible (full dark)


such as bridges and buildings [9]. They also implemented TCP/IP (transmission control protocol/Internet protocol) for communications and carried out time synchronization of the system. More recently, in 2011, Choi and others introduced a vision-based structural dynamic displacement system using an economical hand-held digital camera. In this case, a recorded video containing dynamic information of a target panel attached directly to the civil structure was processed with an image resizing method and an mm/pixel coefficient updating process, and then the structure displacement was successfully determined by calculating the target position in each frame [11].

A technique of "Machine Vision Based Identification and Dimensional Measurement of Electronic Components", based on a color pattern matching approach, enables identification of an electronic component present in a group, and its gauging gives the dimensional measurement of the electronic component [12]. A method is proposed for the precise and accurate measurement of part features such as edges, which requires collimated light, careful placement of the part, a telecentric lens, calibration and perhaps optical distortion correction, and computation by the vision system's computer [13]. A geometric analysis of a laser beam for an underwater vehicle in measuring distance is discussed [14]. In this case, the moving object's distance is calculated by comparing it to a reference object.

Another evaluation based on the minimum zone circle (MZC) method, complying with the definition of roundness error given by Y14.5M-1994 and ISO 1001, is discussed, highlighting the advantage of applying machine vision in the non-contact nature of the measurement process, which can be implemented in-process while making the cylindrical part [15]. A bore scope measurement system based on machine vision was developed, composed of a bore scope, a CCD camera, an industrial computer, and a precision linear stage mounted with an optical linear encoder. The image processing algorithm was developed based on IMAQ Vision. This system is used to measure the spacing of circular rings, and comparison with results measured by a universal tool microscope shows that the maximum absolute measurement error of this system is below 8 μm [16]. The types of measurements that can be performed on entire scenes or on individual features are important in determining the appropriate processing steps [17]. A computer vision algorithm for measurement and inspection of spur gears using multiple available software packages is proposed in order to increase accuracy, limited to a maximum outer gear diameter of 156 mm [20].

Fig. 2. Robot detecting the position of a particular point [18].

2. Background study

Fig. 1 shows a photograph of a vision-based gear profile measurement system. It consists of two main parts: the hardware and the developed software. The hardware consists of three items. The first item is the backlighting (1), which is a lighting box with a diffusing surface at its front, used to produce back lighting for the gear to be measured (2). The second item is a CCD color video camera (3) and a set of lenses with different focal lengths. The camera is carried by a camera holder (4). The third item is a 24-bit-per-pixel (ELF VGA) frame grabber video card (5), which is installed inside the PC (6) and connected to the CCD camera. Capturing software (7) is provided with the frame grabber to acquire images and save them to files in various file formats [20].

Fig. 2 shows the single-camera version of PhotoCalc technology; it can calculate 3D position with one camera, so it is affordable and nothing special is required except a target for certain cases. SingleView3D can be used for 3D inspection of an object and for tracking an object [18].

Fig. 3 shows a developed system that uses multiple cameras to detect multiple points on a plane. The camera easily detects each point based on its color, as can be seen in the picture.

Fig. 3. Multiple points calculation [18].

Measurement stylus monitoring and tracking is essential in precision engineering in order to know the correct state of the experiment being performed; monitoring records the experimental conditions of past runs. Similarly, measurement probe or stylus tracking is also very important in precision measurement. Stylus tracking gives the exact positional displacement of the stylus in terms of coordinate values. Given its importance for measurement, the probe or stylus can be tracked by applying vision, or a camera, in the system. Tracking can be done in both dynamic and static ways. Dynamic tracking refers to the movement of the camera following the stylus: it simply follows the motion path of the stylus or probe during measurement. Static tracking refers to the condition where the camera is static, not moving during the experiment. Normally, it is fixed on some part of the machine, or any other suitable place from where the stylus can be easily tracked by the camera.

2.1. Laser in measurement

Fig. 4 shows a distance measurement system for an underground unmanned vehicle. It is designed to measure the distance between an obstacle and the vehicle itself by acquiring values in centimeters. As we see in Fig. 4, the data is calibrated to its actual values in order to reduce the error in measurement. Symbols and arrows are used to explain the measurement procedure.

Fig. 4. Geometric analysis of laser beam [14].
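Distance estimation by comparison with a reference object (the idea mentioned for [14]) can be sketched with the pinhole-camera relation that apparent size falls off inversely with distance. This is an illustrative sketch under that assumption, not the authors' exact geometry, and `distance_from_reference` is an invented helper name:

```python
def distance_from_reference(ref_pixels, ref_distance_cm, obj_pixels):
    """For a target of fixed physical size, (apparent width in px) x
    (distance) is roughly constant, so a new distance follows from the
    reference measurement by simple proportion."""
    return ref_pixels * ref_distance_cm / obj_pixels

# reference target seen as 200 px wide at 100 cm;
# the same target seen as 100 px must be about twice as far
d = distance_from_reference(200, 100.0, 100)
assert d == 200.0
```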

2.2. Camera in measurement

A camera-based gear tooth measurement system is shown in Fig. 5. There are two cameras integrated with the projector, and both are controlled through a computer. This system has some limitations as well: the camera needs a reasonable frame rate (frames per second, fps) to process the required data, and the turntable should move accordingly during measurement in order to measure all the gear teeth.

2.3. Measurement without camera

Fig. 6 shows a bevel gear measurement system. This is similar to the system developed in our laboratory; the main difference between this one and the developed one is the implementation of the vision-based camera system. In the developed vision-based measurement system, we use cameras and one stylus attached to the probe in order to measure the gear profile. Due to the camera integration with the system, the measurement accuracy increases, in addition to the improvement of overall performance.

2.4. Dynamic tracking in measurement

Fig. 7 is an example of dynamic tracking of the stylus. The C-Track dual-camera sensor is a complete portable 3D measurement tracker that offers probing inspection and dynamic measurement capabilities. It is fitted with high-quality optics and special lighting, enabling it to measure all reflectors within its operating distance. The probing stylus is very useful for aligning parts with respect to a referential (calculated using a group of reflectors), which allows movement or deformation monitoring directly in the part's referential.

Dynamic tracking is a key component of the TRU accuracy technology, which guarantees the highest precision. The C-Track can be used to measure positions and orientations in space simultaneously and continuously, and with great precision. This makes it possible to control displacements, drive assembly processes or measure deformations [18].

Fig. 5. The measuring system GOM ATOS: schema [4].


Fig. 6. A measurement system without vision [17].

Fig. 7. C-Track dual-camera for measurement probe.

Fig. 8. Camera setting to track the probe and stylus.

Fig. 9. Camera setting to track the probe and stylus at different angle.


The design of a precision tracking controller (PTC) to perform superior tracking, enhancing the measurement accuracy and the probing speed and providing less inspection time at high motion speeds, is carried out, and a dynamic model for the CMM is developed, including drive flexibilities represented by lumped springs at the joints [19].

2.5. Proposed static tracking in measurement

Fig. 8 shows the camera setting position for tracking the measurement probe as well as the stylus. In this setting, the camera is placed at a 30-degree angular position from the targeted stylus tip. The position can be changed based on the desired location and optimum focus point. One of the target points of the camera position is on top of the measurement probe; in this situation, good images and video can be acquired from the camera. Fig. 9 shows another setting position of the camera for suitable measurement.

As discussed above, the existing techniques have the drawback of not having an integrated camera for stylus tracking and control in precision measurement. In addition, human safety is also in danger while monitoring this kind of high-speed (10 mm/s) measurement probe and stylus. The purpose of developing this new technique is to implement vision in precision measurement based on stylus tracking, thus enhancing the accuracy of the whole measurement system as well as increasing human safety by implementing a camera in mechanical measurement.

3. Image processing algorithm in proposed system

Image processing has several steps to perform in order to acquire the most accurate image from the source. One of the most common approaches in image processing is to compare the current frame with the previous one. This is useful in video compression, when it is necessary to estimate image changes and to save only the changes to the hard disk rather than saving the whole frame, but it is not the best approach for motion detection applications. There are many other options for motion detection in image processing.

3.1. Image processing steps

The applied image processing method follows the steps in the sequence mentioned below:


Fig. 11. Perspective projection geometry.


• Image acquisition.
• Reference image.
• Erosion.
• Two-frame difference.
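The steps above can be sketched with NumPy. This is a minimal illustration, not the authors' C# implementation: erosion is written here as a 3×3 minimum filter, and the frames, threshold, and function names are invented for the example.

```python
import numpy as np

def erode(img):
    """3x3 binary erosion: a pixel survives only if its whole
    3x3 neighbourhood is set (a simple minimum filter)."""
    padded = np.pad(img, 1, mode="constant")
    out = np.ones_like(img)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def two_frame_difference(reference, current, threshold=30):
    """Binary mask of pixels that changed between two grayscale frames."""
    diff = np.abs(current.astype(int) - reference.astype(int))
    return (diff > threshold).astype(np.uint8)

# image acquisition: synthetic 8-bit frames standing in for camera input
reference = np.zeros((8, 8), dtype=np.uint8)
current = reference.copy()
current[2:6, 2:6] = 255                 # a moving object appears

mask = two_frame_difference(reference, current)
mask = erode(mask)                      # suppress thin/isolated noise
assert int(mask.sum()) == 4             # only the 2x2 interior of the 4x4 change survives
```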

3.2. Motion threshold

The resulting image from the opening filter is passed to a pixel count, which counts the white pixels in the image. A histogram computation method is used for the pixel count. As the image is binary, the histogram is used here to count the number of white pixels, as in Eq. (1):

h'_f(z) = (1/(2k+1)) · Σ_{j=−k}^{+k} h_f(z+j)    (1)

where k is a constant representing the size of the neighborhood used for smoothing, with value 128 as in Fig. 10; z refers to the count from 0 to 255; j applies to the pixel values; and h_f refers to the total histogram values for the image function, where mean = 0.40 and standard deviation = 10.12. This white pixel count is used as the motion level; it signifies the magnitude of the motion. A greater magnitude of motion produces more white pixels and thus generates a higher motion level.
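Eq. (1) is a moving-average smoothing of the histogram. A direct sketch follows; the paper does not say how bins beyond 0–255 are handled, so this version assumes out-of-range bins are empty (clipped), and `smooth_histogram` is an invented name:

```python
import numpy as np

def smooth_histogram(h, k):
    """Eq. (1): h'_f(z) = (1/(2k+1)) * sum_{j=-k}^{+k} h_f(z+j).
    Bins outside the valid range are treated as empty (an assumption)."""
    out = np.zeros_like(h, dtype=float)
    for z in range(len(h)):
        lo, hi = max(0, z - k), min(len(h) - 1, z + k)
        out[z] = h[lo:hi + 1].sum() / (2 * k + 1)
    return out

h = np.zeros(256)
h[100] = 21.0                      # a single spiky bin
hs = smooth_histogram(h, k=10)     # spread over 2k+1 = 21 bins
assert abs(hs[100] - 1.0) < 1e-9   # 21 / 21
```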

The background and the images are first converted to grayscale for simpler processing. Subsequently, the image is compared to the background to detect any changes in the image. The result of this process is passed to threshold processing, which aims to transform the grayscale image into a black-and-white image. Pixels with lightness below the filter threshold acquire the low color state '0', and those with lightness above the filter threshold acquire the high color state '1', producing a binary image. The perspective projection geometry is illustrated in Fig. 11.

The algorithm for this process in the histogram is given by the threshold value T, which is a number between 0 and 255. It creates a T-th frame by replacing all pixels that have a gray level lower than or equal to T with black (0) and the rest with white (1). Axes g and f indicate the required memory in bytes; often only 256 bytes of memory are needed. The selected threshold value minimizes both effects. From the various experiments carried out, the optimal threshold value found is 112, as shown in Table 1.
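The T-th-frame construction described above can be sketched in a few lines (an illustrative NumPy version; `threshold_frame` is an invented name):

```python
import numpy as np

def threshold_frame(gray, T):
    """Pixels with gray level <= T become black (0), the rest white (1)."""
    return (gray > T).astype(np.uint8)

gray = np.array([[0, 100, 112],
                 [113, 200, 255]], dtype=np.uint8)
binary = threshold_frame(gray, T=112)   # the paper's optimal value
assert binary.tolist() == [[0, 0, 0], [1, 1, 1]]
```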

From the experimental results, it is observed that images with a Threshold Value (TV) bigger or smaller than 112 come out darker or brighter, respectively. An image with TV = 255 is completely dark, due to the TV being raised to its peak value; on the other hand, TV = 0 generates a completely bright image. Camera pixel noise due to illumination changes has been removed by applying an opening filter.

Fig. 10. Resulting image when the threshold value is 128.

4. Developed system's software configuration

4.1. Camera startup

The .NET Framework 4.0 is used in the applied technique. It is a C# desktop application and can capture up to 25 frames per second. The color size can be changed at any time; upon changing the color size, the color of the drawing point also changes.

There are several steps for the system to work:

• Capture video frame from camera.
• Use color filter.
• Apply grayscale.
• Find objects based on the selected size.
• Find the biggest object.
• Draw object position in bitmap.
• Display geometric position in x and y coordinates.
• Save data in .dat and .rtf formats.
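The "find objects / find the biggest object / display position" steps can be sketched end to end. This is an illustrative Python version (the real system is a C# application with AForge-style filters and a blob counter); `biggest_object` and the 4-connected flood fill are invented for the example:

```python
import numpy as np
from collections import deque

def biggest_object(mask):
    """Label 4-connected components in a binary mask and return the
    centroid (x, y) of the largest one, or None if the mask is empty."""
    visited = np.zeros_like(mask, dtype=bool)
    best = []
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not visited[sy, sx]:
                # flood-fill one connected component
                comp, q = [], deque([(sy, sx)])
                visited[sy, sx] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            q.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    if not best:
        return None
    ys = [p[0] for p in best]
    xs = [p[1] for p in best]
    # centroid reported in (x, y) order, as the system displays coordinates
    return (sum(xs) / len(xs), sum(ys) / len(ys))

mask = np.zeros((10, 10), dtype=np.uint8)
mask[1, 1] = 1                 # a small 1-pixel object
mask[5:8, 5:8] = 1             # a bigger 3x3 object wins
assert biggest_object(mask) == (6.0, 6.0)
```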

Besides this, it also has two important features: one for recording videos and another for recording the experimental images. In addition, it can record or save the data in .dat and .rtf formats during measurement.

4.2. Video features

Fig. 12 shows an image capturing state during measurement of a spiral bevel gear with one stylus connected to it. The red lines denote the motion detection state; in other words, this is the video recording or saving state. The developed system is able to save video files based on detected motion: it saves the video only when it detects motion, highlighting the image border with the red lines, and if there is no motion the camera is in an idle state, which also saves memory. Once the measurement starts, the camera recording starts automatically, and it stops when the measurement stops.
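The motion-gated saving described above can be sketched as a filter over incoming frames. This is an illustrative Python sketch, not the C# implementation: `motion_gate`, the 1-D "frames", and both thresholds are invented for the example.

```python
def motion_gate(frames, reference, motion_threshold=50):
    """Yield only frames whose motion level (count of pixels changed
    relative to the reference) exceeds the threshold; idle frames are
    skipped, which is what saves disk space."""
    for frame in frames:
        motion_level = sum(
            1 for a, b in zip(frame, reference) if abs(a - b) > 30
        )
        if motion_level > motion_threshold:
            yield frame

# toy 1-D "frames": one idle, one with 100 changed pixels
idle = [0] * 200
moving = [0] * 100 + [255] * 100
reference = [0] * 200
recorded = list(motion_gate([idle, moving], reference))
assert recorded == [moving]     # only the moving frame is recorded
```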

Video feature has the following options:

• Log in with valid username and password.
• Open video file.
• Open a live camera.


Fig. 12. Recording video during gear measurement.

Fig. 13. Image processing window for frame capture.


• Close the window.
• Start video capture.
• Video capture on motion.
• Apply image processing.

4.3. Image features

The system has another option to save only the image frames during measurement. In both cases the fps is between 25 and 30, and the image quality is quite good. Based on the detected motion, the images are saved to the destination file.

Fig. 14. Image of spur gear measurement.

Image feature has the following options:

• Log in using valid username and password.
• Select live camera.
• Click start button for image capture.
• Click stop button.
• Number of saved images.

Fig. 13 shows the image saving window of the measurement system. Images can be saved by just clicking the start button and stopped by clicking the stop button. We can also count the number of saved images from the window, and the image processing time is also displayed here. Fig. 14 highlights an image of a spur gear during measurement. The images are saved in JPEG format. Depending on the measurement target, the number of styli may vary from system to system; the developed vision-based measurement system consists of one stylus. The images shown above were tested in the developed system and taken from [17].

4.4. X, Y-coordinate data saving

Fig. 15 shows the data processing system during measurement. The data is recorded in .rtf and .dat formats during measurement: in the .dat format the data is recorded along a horizontal axis, whereas in the .rtf format it is recorded along a vertical axis. The file takes almost the same data space in both formats. The first data value is displayed as the pixel value of the original position during setup; the next value shows the changed position in terms of pixel value. This pixel value can be converted into millimeter or centimeter values, depending on the desired unit.
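The two layouts can be sketched as follows; plain text files stand in for the .dat and .rtf outputs (RTF markup omitted for clarity), and the file names, delimiter, and helper names are invented for the example:

```python
def save_horizontal(path, points):
    """.dat-style: all coordinate pairs on one horizontal line."""
    with open(path, "w") as f:
        f.write(" ".join(f"{x},{y}" for x, y in points))

def save_vertical(path, points):
    """.rtf-style: one coordinate pair per line (vertical axis)."""
    with open(path, "w") as f:
        f.writelines(f"{x},{y}\n" for x, y in points)

points = [(120, 80), (121, 80), (123, 81)]   # pixel positions of the stylus
save_horizontal("track.dat", points)
save_vertical("track.txt", points)
```

Both files carry the same pairs, so their sizes stay close, which matches the observation above that the two formats take almost the same space.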

5. Results and discussion

When the program runs, it starts tracking with black as the default color and a color range of 120, but this can be changed in real time. To change the default color, press the color selection option; a color dialog box will appear. Choose the exact color, press OK, and the solid color box shows the selected color. There are four panels to view: three of them are AForge video source controls and the other is a picture box. The first shows the real video from the camera, the second shows all moving objects in rectangular boxes, the third panel shows only the biggest object, and the fourth just draws a '0' sign at the biggest object's location. A rich text box is also added, which shows the pixel position of the biggest rectangle.

Fig. 15. Object tracking data.

5.1. Stylus tracking

Fig. 16(a)–(d) shows tracking of different objects based on their colors. Based on the color selection, the system will track the objects formed by that color; yellow, black, pink, green and other colors can be tracked. If the stylus color is black, then the selection must be black for tracking purposes. If the probe and stylus have the same color, then the detected object area will be bigger than when their colors differ. Only a single, unique stylus color can be tracked at a time; in the present system, multiple probes or styli cannot be tracked. This tracking has a great advantage in monitoring the machine during the experiment.

Fig. 17 shows stylus tracking of the measurement probe. The red point on the lower axis indicates the initial position. The system we developed in our lab is based on static tracking: the camera remains static at all times during measurement and is fixed in a suitable position from which it can track the measurement probe or stylus accurately. In the case of dynamic tracking, extra camera movement motors and controllers are necessary to drive the camera.

To clarify how the filter works in detecting and tracking the stylus, it is convenient to use a color code. The user can choose the desired color and size. In the developed system, the color detection code is applied by implementing a Euclidean filter and then applying a blob counter to extract data. In the top-right corner, the software application options are shown. The first line selects the color value. As we know, each color channel has a value of 0–255. In the function Argb(205, 20, 30), the center color is a reddish color whose red value is 205, green is 20 and blue is 30, and radius = 105 means all color values within 105 of the specified color. Applying the filter examines each pixel: colors inside the RGB sphere with the specified center and radius are kept, and the rest are filled with the specified fill color. Thus, the rectangle appears on the final detected object with the specified color; colors outside the RGB sphere defined by the specified center and radius are not kept.
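The Euclidean color filter just described can be sketched directly with the example values from the text (center (205, 20, 30), radius 105). This is an illustrative Python version, not AForge's implementation, and `euclidean_color_filter` with a black fill color is an invented choice:

```python
def euclidean_color_filter(pixels, center=(205, 20, 30), radius=105,
                           fill=(0, 0, 0)):
    """Keep pixels whose color lies inside the RGB sphere of the given
    center and radius; fill everything else with `fill` (black here)."""
    out = []
    for r, g, b in pixels:
        d2 = (r - center[0])**2 + (g - center[1])**2 + (b - center[2])**2
        out.append((r, g, b) if d2 <= radius * radius else fill)
    return out

pixels = [(205, 20, 30),    # exactly the center color -> kept
          (180, 40, 60),    # reddish, within the sphere -> kept
          (20, 200, 20)]    # green, far outside -> filled
filtered = euclidean_color_filter(pixels)
assert filtered[0] == (205, 20, 30)
assert filtered[2] == (0, 0, 0)
```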

Fig. 18 shows the 3D probe and its application during measurement of various gear profiles, together with the developed image processing system to be implemented in precision measurement. Fig. 19 shows the latest improved state of the camera system to be implemented in precision measurement. At this moment, the camera can track the moving probe and stylus during measurement and save the displacement in the .dat and .rtf formats. Successful development of this automatic system increases the measurement accuracy extensively.

Fig. 19 also shows the 'X' and 'Y' coordinate data storage files and their contents. The resulting output is saved in vertical and horizontal arrangements in the .rtf and .dat file formats, respectively.

5.2. Controls in precision measurement

Fig. 20 shows a block diagram of a simplified feedback control system. The system represents the plant which is to be controlled. Its output y(t) is continuously compared to a reference signal r(t) representing the desired output. The difference between the two, called the error signal e(t), is used by a controller device (c) to generate a control signal u(t) to change the behaviour of the plant. Eqs. (2) and (3) express the displacement of the measurement probe.

ΔX_i = X_{i+1} − X_i    (2)

ΔX_f = X_f − X_{f−1}    (3)

where X_i is the initial position of the measurement probe, X_{i+1} is the first point of the measurement probe, X_f is the final position of the measurement probe, ΔX_i is the first displacement of the probe, and ΔX_f is the final displacement of the probe.

After calculating the displacement for each position, if the result is fed back to the probe in order to keep the measurement probe always at its original position, then the measurement error can easily be eliminated using the output value. In this case, the probe only travels back the distance it was diverted from its original position; that is, the probe always maintains its original axis during measurement to avoid measurement error.
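Eqs. (2) and (3), together with the correction step just described, can be sketched as follows (an illustrative Python sketch; `displacements`, `corrections`, and the sample track are invented for the example):

```python
def displacements(positions):
    """Successive probe displacements: the first entry is
    DX_i = X_{i+1} - X_i (Eq. (2)) and the last entry is
    DX_f = X_f - X_{f-1} (Eq. (3))."""
    return [b - a for a, b in zip(positions, positions[1:])]

def corrections(positions):
    """Feedback values that would return the probe to its original axis:
    the negative of each diversion from the initial position."""
    x0 = positions[0]
    return [x0 - x for x in positions]

track = [10, 12, 9, 10]                  # probe positions (illustrative units)
assert displacements(track) == [2, -3, 1]
assert corrections(track) == [0, -2, 1, 0]   # travel needed to re-center
```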


Fig. 16. Tracking of various moving pens based on colors a–d. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

Fig. 17. Camera setting to track the probe and stylus.

Fig. 18. Gear profile measurement with Renishaw probe.

Fig. 19. Coordinate data format.

Fig. 20. Simplified feedback control model [22].


The front position of the measurement probe is determined by the front camera, and the rear position can be detected by a web camera. Based on the measured data, the control is performed. Disturbances are the noises that occur during measurement from the inner and outer environment. The controller gain can be either positive or negative, based on the actuator's input values. In the first case, if the process gain is positive, an increase in the measurement requires a decrease in the control action through the driver; in the second case, if the process gain is negative, an increase in the measurement requires an increase in the control action.


Table 3
Overall accuracy of the system.

Parameters         Correct acceptance   False/error   Accuracy (%)
Detection          5                    0             100
Tracking           5                    0             100
Video/image        5                    0             100
File save (x, y)   4                    1             80
Total              5

Table 4
Performance evaluation of vision based measurement.

Parameters    Single camera         Multiple camera
              Without     With      Without     With
ME (mm)       −86.84      −14.3     −1.26       −0.2
SD (mm)       90.34       7.19      23.17       1.73
SE (mm)       12.77       1.86      3.49        0.37
Sample no     50          15        44          22
CV (%)        24.34       1.62      5.07        0.38

Table 5
Comparison between developed and existing methods.

Parameters         Developed method                      Existing methods
Image save         Yes                                   No
Video save         Yes                                   No
Precision control  Yes                                   No
Data save (x, y)   Yes                                   No
Human monitor      Not necessary                         Necessary
Time               Saves time                            Time consuming
Cost               Less human monitoring; inexpensive    Human monitoring; thus expensive
Safety             Safe                                  Unsafe


6. Performance analyses

6.1. Accuracy of the proposed method

Overall performance for video recording during measurement is very accurate, and the capturing rate is 100%. On the other hand, the image frame saving function generates some errors during the experiment. The error here is not due to false detection; rather, it is caused by missing the detected frames in the desired location. Parameterized threading and cross threading improve the performance of this system, and the frame rate can be increased by selecting a suitable threading technique. The Euclidean filter is more suitable than HSL filters or plain color filtering for this purpose. Color based object detection has a success rate of 100%, while color based stylus tracking has a success rate of 80% in the developed system, as shown in Table 2.
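Euclidean color filtering keeps only pixels whose RGB distance to a reference color falls inside a given radius, which is why it suits single-color stylus detection. The following is a minimal sketch of the technique, not the paper's implementation; the reference color and radius are illustrative assumptions.

```python
import numpy as np

def euclidean_filter(image, ref_rgb, radius):
    """image: HxWx3 uint8 array. Returns a copy in which every pixel whose
    Euclidean RGB distance to ref_rgb exceeds radius is set to black."""
    diff = image.astype(np.int32) - np.asarray(ref_rgb, dtype=np.int32)
    dist = np.sqrt((diff ** 2).sum(axis=-1))   # per-pixel RGB distance
    out = image.copy()
    out[dist > radius] = 0                     # suppress non-matching pixels
    return out

# Tiny 1x2 test image: one near-yellow pixel and one blue pixel.
img = np.array([[[250, 240, 10], [10, 20, 230]]], dtype=np.uint8)
filtered = euclidean_filter(img, ref_rgb=(255, 255, 0), radius=60)
print(filtered[0, 0].any(), filtered[0, 1].any())  # True False
```

The yellow pixel survives (distance ≈ 19 from the reference) while the blue pixel is suppressed, which is the behaviour a color based stylus detector relies on.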

6.2. Overall system accuracy

Recent rapid technological improvements in video cameras have improved the utility and accuracy of measurements with such systems, as discussed in [21,22]. The developed method uses a single camera system for this purpose, but it is possible to extend it to a multi-camera based measurement system. The camera used in the developed system is an iBuffalo 1.3 megapixel real color CMOS web camera with a resolution of 640 × 480. Table 3 shows the overall accuracy of the developed system.

6.3. Single camera based measurement

For the single camera based measurement system, the error and precision varied when the camera was rotated to an angle of more than 70°. If measurements were made at angles less than 70°, there were substantial improvements in accuracy and precision, as shown in Table 4. At camera angles above 70° the target cannot be viewed in its proper shape; thus errors occurred.
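One plausible geometric reading of the 70° threshold is foreshortening: a feature of true length L viewed at angle θ from the camera's normal projects to roughly L·cos θ, so the apparent shape collapses rapidly at steep angles. This simple model is an illustrative assumption, not the paper's own analysis.

```python
import math

# Fraction of true length that remains visible at each viewing angle
# under a simple cos(theta) foreshortening model (an assumption).
shrink = {deg: round(math.cos(math.radians(deg)), 2) for deg in (0, 30, 70, 80)}
print(shrink)  # {0: 1.0, 30: 0.87, 70: 0.34, 80: 0.17}
```

At 70° only about a third of the feature's length remains in the projection, which is consistent with the observation that the target can no longer be viewed in its proper shape.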

6.4. Multiple camera based measurement

One of the most significant advantages of stereo-video technology is the ability to make repeated measurements of the length of a particular object [25]. Taking the mean of a set of repeated measurements can substantially improve the accuracy and precision of stereo-video measurements.

Table 4 shows the accuracy of the single and multiple camera based measurement systems [23].

Table 2
Color based stylus tracking.

Color based stylus   Detect   Display tracking path   Display and save coordinate data (x, y)
Yellow               Yes      Yes                     Yes
Black                Yes      Yes                     Yes
Pink                 Yes      Yes                     Yes
Green                Yes      Yes                     Yes
Red                  Yes      Yes                     No

Accuracy is defined as the nearness of a measurement to the actual value being measured, while precision refers to the closeness of repeated measurements of the same subject to one another. The accuracy is calculated by subtracting the known length or distance from the observed measurement made with the single and multiple camera systems.

In Table 4, negative values represent underestimates and positive values represent overestimates. The coefficient of variation (CV) is calculated by dividing the standard deviation of the estimate by the mean value (ME) of the estimate, and is expressed as a percentage, as given in Eq. (4). CV is a useful measure of precision widely used in similar fields of research [24]. The standard error (SE) of the measurements is also calculated.

Cv = σ/μ    (4)
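The Table 4 statistics can be reproduced from a set of measurement errors under the definitions in the text: CV = σ/μ expressed as a percentage (Eq. (4)), and SE computed here via the standard σ/√n formula (an assumption, since the paper does not spell it out). The sample values below are made up for illustration.

```python
import statistics

def summarise(errors):
    """Return (ME, SD, SE, CV%) for a list of measurement errors."""
    n = len(errors)
    me = statistics.mean(errors)        # mean error (ME)
    sd = statistics.stdev(errors)       # standard deviation (SD)
    se = sd / n ** 0.5                  # standard error (SE), assumed sigma/sqrt(n)
    cv = abs(sd / me) * 100             # coefficient of variation (%), Eq. (4)
    return me, sd, se, cv

# Illustrative errors in mm (negative = underestimate, as in Table 4).
me, sd, se, cv = summarise([-0.3, -0.1, -0.2, -0.2])
print(round(me, 2), round(se, 3), round(cv, 1))
```

Averaging more repeated measurements shrinks SE by √n, which is why the multi-camera columns of Table 4 with larger, averaged samples show markedly smaller SE and CV.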

6.5. Comparison of results with the existing systems

Table 5 analyzes the merits of the developed method over the existing methods for measurement. Overall, the parameters show that this method has great advantages



over existing methods in practical applications during precision measurement.

7. Conclusions

From the above discussion it can be seen that the vision based measurement system has advantages over non-vision based measurement systems. Two important image processing applications are applied in order to enhance the overall measurement system. Several challenges were found during the implementation of the vision based measurement system, such as:

• Occlusion handling in the practical environment due to light variance and shadow.
• At higher camera angles, results are not satisfactory.
• False detection varies depending on the camera resolution and target distance.

In addition, color based tracking and control of the measurement stylus as well as the measurement probe is very necessary for precision engineering in manufacturing industries.

References

[1] Md. Hazrat Ali, Syuhei Kurokawa, Kensuke Uesugi, Vision based measurement system for gear profile, IEEE Int. Conf. Info. Electron. Vis. (2013), http://dx.doi.org/10.1109/ICIEV.2013.6572652.

[2] A. Beyaz, R. Ozturk, U. Turker, Assessment of mechanical damage on apples with image analysis, J. Food Agric. Environ. (JFAE) 8 (3,4) (2010) 476–480. ISSN: 1459-0263.

[3] R. Lukac, K.N. Plataniotis, Colour Image Processing Methods and Applications (Image Processing), CRC Press, Canada, 2007. ISBN-13: 978-0-8493-9774-5 (Hardcover).

[4] Adam Marciniec, Grzegorz Budzik, Tomasz Dziubek, Automated measurement of bevel gears of the aircraft gearbox using GOM, J. KONES Power Train Transport 18 (2) (2011).

[5] G. Budzik, B. Kozik, J. Pacana, B. Muda, Modeling and prototyping of aeronautical planetary gear demonstrator, J. KONES Power Train Transport 17 (3) (2010) 49–54.

[6] G. Budzik, T. Dziubek, B. Sobolewski, M. Zaborniak, Selection measuring strategy for gear wheels produced by RP methods, devices in coordinate, Metrology (2010) pp. 407–414.

[7] F. Haertig, W. Lotze, 3D Gear Measurement by CMM, Laser Metrology and Machine Performance V, 2001.

[8] Withrobot Lab, 2012. <http://www.withrobot.com>.

[9] Y. Fukuda, M.Q. Feng, M. Shinozuka, Cost-effective vision-based system for monitoring dynamic response of civil engineering structures, Struct. Control Health Monit. 17 (8) (2010) 918–936.

[10] J.J. Lee, M. Shinozuka, A vision-based system for remote sensing of bridge displacement, NDT Int. 39 (5) (2006) 425–431.

[11] H.S. Choi, J.H. Cheung, S.H. Kim, J.H. Ahn, Structural dynamic displacement vision system using image processing, NDT E Int. 44 (7) (2011) 597–608.

[12] S. Jainy, Machine Vision Based Identification and Dimensional Measurement of Electronic Components, Master's Thesis, June 2006, pp. 12.

[13] Dalsa Technology, Precision Measurements with Machine Vision, USA. <https://www.teledynedalsa.com>.

[14] K. Muljowadidodo, A.R. Mohammad, N. SaptoAdi, Agus Budiyono, Vision based distance measurement system, Indian J. Mar. Sci. 38 (3) (2009) 327.

[15] A. Mohamed, A.H. Esa, M.A. Ayub, Roundness measurement of cylindrical parts by machine vision, in: International Conference on Electrical, Control and Computer Engineering, 2011, pp. 486–490.

[16] Tao Zhang, Yi Luo, Xiaodong Wang, Mixin Wang, Machine vision technology for measurement of miniature parts in narrow space using borescope, in: International Conference on Digital Manufacturing and Automation (ICDMA), China, 2010, pp. 904–907.

[17] J.C. Russ, The Image Processing Handbook, fifth ed., CRC Press, United States of America, 2006. ISBN 0-8493-7254-2.

[18] Hexagon Metrology, Gear Measuring with Leitz CMM. Retrieved from <http://www.creaform3d.com/en/ndt-solutions/c-track> (2009, March 20).

[19] Tugrul Özel, Int. J. Adv. Manuf. Technol. 27 (2006) 960–968, http://dx.doi.org/10.1007/s00170-004-2292-3.

[20] E.S. Gadelmawla, Computer vision algorithms for measurement and inspection of spur gears, Measurement 44 (9) (2011) 1669–1678, http://dx.doi.org/10.1016/j.measurement.2011.06.023. ISSN 0263-2241.

[21] E.S. Harvey, M.R. Shortis, A system for stereo-video measurement of subtidal organisms, J. Mar. Technol. Soc. 29 (4) (1996) 10–22.

[22] E.S. Harvey, M.R. Shortis, Calibration stability of an underwater stereo-video system: implications for measurement accuracy and precision, J. Mar. Technol. Soc. 32 (2) (1998) 3–17.

[23] E.S. Harvey, D. Fletcher, M.R. Shortis, A comparison of the precision and accuracy of estimates of reef-fish length made by divers and a stereo-video system, J. Mar. Technol. Soc. 36 (6) (2003) 38–49.

[24] R.E. Thresher, J.S. Gunn, Comparative analysis of visual census techniques for highly mobile, reef associated piscivores (Carangidae), Environ. Biol. Fish. 17 (1986) 93–116.

[25] E.S. Harvey, D. Fletcher, M.R. Shortis, A comparison of the precision and accuracy of estimates of reef-fish length made by divers and a stereo-video system, Fishery Bull. 99 (2001) 63–71.