ME 379M/397 – Prof. R.G. Longoria, Cyber Vehicle Systems
Department of Mechanical Engineering, The University of Texas at Austin
LabVIEW-based Vision
Raul G. Longoria
Spring 2015

• Images as data types
• Acquire images
• Process images
“To see is to think.”– Salvador Dali
(and quoted by E. Dickmanns in his book Dynamic Vision for Perception and Control of Motion, Springer, 2007)
From Dickmanns (2007)
The Persistence of Memory, 1931 S. Dali
[Figure: histogram of pixel intensity]
LabVIEW-based Vision
• LabVIEW Vision enables you to read/create image files and provides means for managing those files.
• There are built-in functions (VIs) for analyzing image files (select areas of interest, measure intensity, etc.).
• You also need the LabVIEW IMAQ software, which enables you to acquire images from cameras.
• In this course, we want to demonstrate how you can use these software tools to develop a simple vision-based measurement system, particularly for measuring object motion.
Overview of LV-Based Vision Tools
• Image data type
• Analyzing images
• Capturing images
Palette groups: Vision Utilities, Image Processing, Machine Vision
Analyzing Images
• Vision Utilities – VIs for creating and manipulating images, etc.
• Image Processing – provides ‘low-level’ VIs for analyzing images.
• Machine Vision – groups many practical VIs for performing image analysis. For example, the “Count and Measure Objects” VI is found under this group.
Vision Utilities
• Image management (create, dispose, etc.)
• File handling
• Image manipulation
• Pixel editing
• etc.
• It is best to learn their use through examples.
To create and manipulate images
Reading an Image File
This VI block diagram opens an existing image file (e.g., a bitmap), reads the file, and then displays it.
On the Front Panel, place an ‘Image Display’ to get this terminal; then wire the image data to it.
Example – looking at an image
This read-out indicates the size of the image (pixels). When moving the cursor around the image, the readout shows the cursor (x, y) coordinates and the ‘intensity’ value at that location.
The simple VI in the previous slide can be used to open an image file.
This is the ‘clamp’ example file provided in LabVIEW.
Another example of using the image display to study image pixel intensities and locations.
[Figure: an N x M, 32-bit RGB image; the readout shows the cursor location and RGB levels]
Machine Vision Concepts*
• Machine (or computer) vision has six principal areas
1. Sensing – which yields a visual image
2. Preprocessing – noise reduction, image enhancement, etc.
3. Segmentation – partitioning an image into objects of interest
4. Description – computation of features for differentiating among types of objects
5. Recognition – identifying objects (e.g., bolt, wrench, etc.)
6. Interpretation – assigning meaning to an ensemble of recognized objects
• Levels of processing are divided into low (1, 2), medium (3, 4, 5), and high (6)
• In this course, we’ll primarily be concerned with low-level vision, and will utilize some functions of medium-level vision.
Machine Vision Concepts*
• Low-level vision involves processes considered ‘primitive’ (or automatic) and requiring no ‘intelligence’ (areas 1, 2). This can be thought of as analogous to how a human eye senses and adapts.
• Medium-level vision extracts, characterizes, and labels components in an image.
• High-level vision refers to processes that attempt to emulate perception and cognition.
*From Fu, Gonzalez, and Lee, Robotics: Control, Sensing, Vision, and Intelligence, McGraw-Hill,
New York, 1987.
LabVIEW Machine Vision VIs
• Select Region of Interest
• Light Meter
• Count and Measure Objects
• Select a few of these to work with in the lab. There are many others you can skim through to get an idea of what is available.
IMAQ Light Meter
IMAQ Select Rectangle – use this VI to specify a rectangular area in the image. The rectangle coordinates are output and can be sent to the next function.
If you want to examine pixel intensity in a certain region, you need this rectangle information.
[Figure: Image and Rectangle inputs wired to the Light Meter; the histogram data can be sent directly to a waveform graph]
IMAQ Select Region of Interest
User input can help identify regions based on: point, line, rectangle, annulus.
The output from this VI can be sent to other VIs that require that bounding information.
Selecting a Region of Interest
If you want to build a program that does NOT prompt the user, there are ways to specify the region of interest without using the IMAQ Select ROI VI.
You can create a control on the front panel for the ‘rectangle data’ and set it by hand. This way it stays the same and no prompting is required.
An example will be shown later.
Count and Measure Objects
Another VI that needs rectangle information, and which is very useful for basic segmentation, is the ‘Count and Measure Objects’ VI.
This VI needs several inputs, as shown below.
NOTE: This VI requires that you convert the image to grayscale.
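The behavior of this kind of VI can be sketched in ordinary Python (a hypothetical stand-in for illustration, not the NI implementation): threshold the grayscale image, then count the connected bright regions whose pixel count falls inside given size bounds.

```python
# Hypothetical sketch (not the NI VI): count bright objects in a small
# grayscale image using a threshold plus flood-fill labeling.
def count_objects(image, threshold=128, min_size=1, max_size=10**6):
    """Return the number of 4-connected bright regions whose pixel count
    falls within [min_size, max_size]."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and not seen[r][c]:
                # Flood-fill this region and measure its size.
                stack, size = [(r, c)], 0
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if min_size <= size <= max_size:
                    count += 1
    return count

# Two bright blobs on a dark background:
img = [
    [0,   0,   0,   0,   0],
    [0, 255, 255,   0,   0],
    [0, 255, 255,   0, 200],
    [0,   0,   0,   0, 200],
]
print(count_objects(img))              # 2
print(count_objects(img, min_size=3))  # 1 (only the 2x2 blob passes)
```

The size bounds play the same role as the object-size limits in the slide’s example, where small objects were counted and larger ones excluded.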
IMAQ Cast Image
Example usage shown below:
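Textually, the effect of casting a color image down to 8-bit grayscale can be sketched in Python (illustrative only, not the NI implementation; the BT.601 luminance weights are an assumption about the conversion used):

```python
# Hypothetical sketch of a cast to 8-bit grayscale: reduce each RGB pixel
# to a single luminance value (ITU-R BT.601 weights assumed).
def rgb_to_gray(pixel):
    r, g, b = pixel
    return round(0.299 * r + 0.587 * g + 0.114 * b)

print(rgb_to_gray((255, 255, 255)))  # 255 (white)
print(rgb_to_gray((0, 0, 0)))        # 0   (black)
print(rgb_to_gray((255, 0, 0)))      # 76  (pure red)
```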
Example – Finding Objects and Intensities
[Figure: 3 objects detected; the VI takes Image and Rectangle inputs and has an Error output]
The limit on object size prevented the 3 larger objects in the ROI from being identified.
Define ‘Threshold’
[Figure: for an 8-bit image, some objects have an intensity close to zero, others close to 255]
Bright objects have ‘high’ intensity values (e.g., 255 for 8-bit).
Dark objects have ‘low’ intensity values (e.g., 0 for 8-bit).
The ‘Threshold’ must often be specified as an input to some machine vision VIs.
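The idea can be sketched in a few lines of Python (illustrative only): pixels at or above the threshold map to bright, the rest to dark.

```python
# Minimal sketch: apply a binary threshold to a row of 8-bit grayscale
# pixels. Values at or above the threshold become 255, the rest 0.
def apply_threshold(pixels, threshold):
    return [255 if p >= threshold else 0 for p in pixels]

row = [12, 40, 130, 200, 255]
print(apply_threshold(row, 128))  # [0, 0, 255, 255, 255]
```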
Generating Intensity Histogram
Light Meter
Within the ROI, a histogram is generated of the intensity values. Note that most of the image is made up of pixels with intensity greater than about 180. White is 255.
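In code form, such an intensity histogram amounts to counting how many ROI pixels fall into each of the 256 intensity bins. A minimal Python sketch (illustrative only, not the IMAQ implementation):

```python
# Sketch of what the Light Meter's histogram reports: for an 8-bit ROI,
# count how many pixels fall in each of the 256 intensity bins.
def intensity_histogram(roi):
    bins = [0] * 256
    for row in roi:
        for p in row:
            bins[p] += 1
    return bins

roi = [[255, 200, 200],
       [180,  90, 255]]
hist = intensity_histogram(roi)
print(hist[255])  # 2
print(sum(hist))  # 6 (one count per pixel)
```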
Capturing Images
• PCI cards (for capturing from streaming source)
• USB cameras
• Firewire cameras
• Ethernet/wireless cameras
IMAQdx refers to VIs that can be used with cameras that interface directly (e.g., via DirectShow).
USB Cameras
• USB cameras are probably the slowest cameras available, especially in the way they are used in this course.
• Our experience has shown that the maximum frame rate we can achieve for image acquisition is about 10 frames/sec (within LabVIEW).
• Some online sources indicate that ‘hacked’ webcams can achieve 30 frames/sec.
• So it is likely the software environment (Windows, LabVIEW, communications, etc.) that we’ve chosen that is placing the restrictions on performance.
• NOTE: While the sbRIO cannot use USB cameras, the myRIO device can acquire directly from a USB camera.
Example: code to acquire a USB camera image
Creating vision subVIs can make your final VI cleaner
More examples of building vision subVIs
More examples of building vision subVIs
Example: Use NI MAX to test the camera and fine-tune settings
1. NI-IMAQ devices
2. Camera name
3. Look at the Acquisition Attributes. Sometimes the default settings are not suitable. For example, this camera is actually a doc cam; the resolution was very high and the frame rate too slow for what is needed in this lab, so it was set to a lower resolution and higher fps.
4. Camera Attributes lets you adjust settings such as focus, contrast, etc. (as long as you have installed the proper camera drivers).
CyVS Lab and DaNI Vision Setup
Insert sketch of network
CyVS Lab Overhead Camera
Overhead camera in lab.
Make sure you put in the password using NI MAX.
Sometimes you may get a ‘network error’ when you test in NI MAX. First try exiting NI MAX and restarting it. If that does not work, resetting the WLAN router usually cleans this up.
Here is an example with an Axis IP camera: (1) enumerate and identify the camera, (2) initialize, (3) create and (4) grab the image, (5) display, then (6) dispose and (7) close.
Disposing of images and closing cameras can help free up resources.
If you want to retain that image on the front panel display, right-click and select ‘Snapshot’.
Also, use Zoom to Fit so the whole image can be viewed.
Say we want to get the position of one or more of those dots and send it to the sbRIO that controls the DaNI vehicle.
Use a Search Rectangle defined by a front panel control.
5 objects were found in this example. Note that Object ‘0’ was drawn as an ellipse. It turns out that object ‘orientation’ information can also be found.
First, here is how you extract the object information.
Now this information, in the form of an array, can be shared with your DaNI code.
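As a language-neutral sketch of what such object data amounts to, here is a small Python example (a hypothetical stand-in, not the NI VI) that recovers a centroid (X, Y) and an orientation psi from a blob’s pixel coordinates using standard image moments:

```python
import math

# Hypothetical sketch: compute (X, Y, psi) for one detected object from
# its pixel coordinates. The centroid comes from first moments; the
# orientation of the principal axis comes from second central moments.
def object_pose(pixels):
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    mu20 = sum((x - cx) ** 2 for x, _ in pixels)
    mu02 = sum((y - cy) ** 2 for _, y in pixels)
    mu11 = sum((x - cx) * (y - cy) for x, y in pixels)
    psi = 0.5 * math.atan2(2 * mu11, mu20 - mu02)
    return cx, cy, psi

# An elongated blob along the 45-degree diagonal:
blob = [(0, 0), (1, 1), (2, 2), (3, 3)]
x, y, psi = object_pose(blob)
print(x, y, math.degrees(psi))  # 1.5 1.5 45.0
```

The resulting (X, Y, psi) triple is exactly the kind of array that the following slides share with the sbRIO.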
Here is an example where all the initialization and ‘closing’ steps are outside of a while loop.
This code will grab images and process them as fast as possible. Remember, this vision code is running on a host machine.
The (X, Y, psi) information on one of the objects found will be sent to an sbRIO that is on the network. Let’s try a couple of methods.
This table is at:
http://zone.ni.com/reference/en-XX/help/370622L-01/lvrtbestpractices/rt_gui_bp/#RT_Networking_Options
Let’s try these first two methods.
Using the LabVIEW Shared Variable
http://www.ni.com/white-paper/4679/en/
Refer to the document at the URL above for more information on shared variables, especially Network-Published Shared Variables.
The following slides illustrate the application.
Other links:
http://zone.ni.com/reference/en-XX/help/370622L-01/lvrtconcepts/rt_projectvariable/
Right-click on the RT target, select New, and then Variable.
Call it XYpsi for (X, Y, psi), as a 1D array of SGL.
Read more from the page cited on the previous slide for explanations of these settings.
Save the project and name the library. For example:
Now you can drag the variable from the project window right onto the RT code. For example, here is a simple while loop that just reads XYpsi (the loop time can be adjusted).
Drag and drop the variable also on the host VI, but change the Access Mode to Write.
Then wire it up. Save the project, connect, and run the host and then the target VI.
The upper front panel is for the host VI, which is acquiring and processing the image.
The lower front panel is for the sbRIO, showing the values of (X, Y, psi).
Using a Network Stream
http://zone.ni.com/reference/en-XX/help/371361G-01/lvconcepts/usingstreams/
With a Network Stream, add a writer endpoint to the VI that is generating the data. For example:
[Figure: writer endpoint; wire in the data to this VI, then flush and close at the end]
Note that you need to give the URL of the target that will be reading the stream in the ‘reader url’ input. Leaving the timeout at the default of -1 means it is unlimited; in some cases you may want to use specific values.
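While Network Streams handle transport and framing for you, the payload here is just the (X, Y, psi) triple. A Python sketch of packing three single-precision (SGL) values shows the data volume per update (the little-endian byte layout is an assumption for illustration, not the stream’s actual wire format):

```python
import struct

# Hypothetical sketch: pack (X, Y, psi) as three single-precision floats,
# the same element type (SGL) used in the shared-variable example.
def pack_pose(x, y, psi):
    return struct.pack('<3f', x, y, psi)

def unpack_pose(data):
    return struct.unpack('<3f', data)

msg = pack_pose(1.5, 2.0, 0.25)
print(len(msg))          # 12 bytes: 3 x 4-byte floats per update
print(unpack_pose(msg))  # (1.5, 2.0, 0.25)
```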
Then there is an associated stream reader created on the target VI.
All the extra code in this version, compared to the shared variable code, was inserted to compensate for what appeared to be ‘empty’ reads. This code will ‘sample and hold’ until good data comes in. This way there are always values being sent to the ‘indicator’ (or the control code).
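The sample-and-hold idea is language-neutral. Here is a minimal Python sketch (a hypothetical helper, not the LabVIEW code), where `None` stands in for an empty read:

```python
# Sketch of the sample-and-hold logic: on an 'empty' read (None), keep
# publishing the last good (X, Y, psi) value instead of dropping out.
def sample_and_hold(reads, initial=(0.0, 0.0, 0.0)):
    held, out = initial, []
    for r in reads:
        if r is not None:  # a good read replaces the held value
            held = r
        out.append(held)
    return out

reads = [(1.0, 2.0, 0.1), None, None, (3.0, 4.0, 0.2)]
print(sample_and_hold(reads))
# [(1.0, 2.0, 0.1), (1.0, 2.0, 0.1), (1.0, 2.0, 0.1), (3.0, 4.0, 0.2)]
```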
Summary
• Vision VIs in LabVIEW provide a way for us to add image acquisition and analysis to our existing set of tools (simulation, DAQ).
• The vision VIs alone allow you to use an image as a data type.
• Images can be loaded from a file or acquired using IMAQ routines.
• Once within LabVIEW, an image can be processed using some very sophisticated built-in programs.
• Machine vision VIs can be used to develop vision-enhanced measurement and control systems.
• The slides also illustrated two ways to ‘share’ acquired vision data with RT targets over a network: network published shared variables and network streaming.
Appendix A: Image Data Type
Menu: NI Measurements -> Vision -> Vision Utilities -> Image Management
This VI is used to create an image. It is called prior to, say, capturing an image using a camera.
Appendix B: Image Type and Bit Depth
• We know digital images are formed by an array of pixels, and each pixel is quantized into a number of levels based on the number of bits available.
• Depending on whether pixels are black and white, grayscale, or color, pixels have different bit depths. Bit depth refers to the amount of information allocated to each pixel.
• When pixels are either black or white, each pixel needs only one bit of information (black or white), and hence the pixel depth is 1.
• For grayscale, the number of levels used can vary, but most systems have 256 shades of gray, 0 being black and 255 being white. When there are 256 shades of gray, each pixel has a bit depth of 8 bits (one byte). A 1024 x 1024 grayscale image would occupy 1 MB of memory.
• In digital color images, the RGB (red, green, blue, for screen projection) or CMYK (printing color) schemes are used. Each color occupies 8 bits (one byte), ranging in value from 0-255. Hence in RGB each pixel occupies 8 x 3 = 24 bits (3 bytes); in CMYK, 8 x 4 = 32 bits (4 bytes).
• Note: LabVIEW uses an ‘alpha’ channel for RGB. The alpha channel stores transparency information; the higher the value, the more opaque that pixel is.
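The memory figures above can be checked with a one-line calculation (sizes in bytes; 1 MB taken as 2^20 bytes):

```python
# Worked check of the bit-depth memory figures: bytes = W * H * bits / 8.
def image_bytes(width, height, bits_per_pixel):
    return width * height * bits_per_pixel // 8

print(image_bytes(1024, 1024, 8) / 2**20)   # 1.0 MB, 8-bit grayscale
print(image_bytes(1024, 1024, 32) / 2**20)  # 4.0 MB, 32-bit RGB + alpha
```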