CHAPTER I MEASURING INSTRUMENTS
Introduction:
Measurement is defined as the process or an art of obtaining a quantitative comparison
between a predefined standard and an unknown magnitude. Measurement is the basis of
all scientific study and experimentation.
Example:
Length is a measure of the distance between two points, which is measured using a ruler
or a measuring tape.
Time is the interval between two events, which is measured using a clock.
Mass of a body is the amount of matter contained in it and is measured with the help of a
balance.
Temperature is a measure of the degree of hotness of a body and is measured using a thermometer.
Metrology:
Metrology is the science of measurement and includes all theoretical and practical aspects
of measurement.
Metrology is defined by the International Bureau of Weights and Measures (BIPM) as
“the science of measurement, embracing both experimental and theoretical determinations
at any level of uncertainty in any field of science and technology’’ Requirements of
Measurement:
The important requirements are,
1. The standard used for comparison must be accurately known and commonly accepted.
Ex: A length cannot simply be stated as "long"; it must be expressed as
comparatively longer or shorter than some accepted standard.
2. The procedure and apparatus used for comparison must be commonly accepted and
must be provable.
Significance of Measurement
Measurement is the fundamental basis for all research, design and development and its
role is prominent in many engineering activities.
➢ Measurement is also a fundamental element of any control process, which
requires the measured difference between actual and desired values.
➢ Many operations require measurement for proper performances.
Ex: Measurement of temperature, pressure in power stations.
➢ Measurement is important to control costs
➢ Measurement helps to achieve product quality and thereby increases
efficiency.
Methods of Measurement
There are two methods of measurement:
1) Direct comparison
2) Indirect comparison
1) Direct Comparison Method
In this method, measurement is made directly by comparing an unknown magnitude with
the primary or secondary standard.
Example: To measure the length of the bar, we will measure it with the help of the
measuring tape or scale that acts as the secondary standard.
2) Indirect Comparison Method
There are a number of quantities that cannot be measured directly with an
instrument. Hence, in the indirect comparison method of measurement,
measurement is made by comparing an unknown magnitude with a standard through
the use of a calibrated system.
The indirect method of measurement comprises a system that senses, converts,
and finally presents an analogous output in the form of a displacement or chart.
Example: To measure the strain in the machine member, a component senses the strain
and another component transforms sensed signal into an electrical quantity which is then
processed suitably before being fed to a meter or recorder.
Difference between direct measurement and indirect measurement
1) In direct comparison, the unknown quantity is measured by comparing it
directly with primary or secondary standards; in indirect comparison, the
unknown magnitude is compared with a standard indirectly through the use of a
calibrated system.
2) Direct comparison depends heavily on human senses; indirect comparison
consists of a chain of devices which form a measuring system.
3) Results obtained from direct comparison are not very dependable; the
indirect system consists of a detector element to detect, a transducer to
convert, and a unit to indicate or record the processed signal.
4) Direct comparison is not always accurate; indirect comparison is fairly
accurate.
NOTE:
The primary standards are the original standards made from certain standard values or
formulas.
The secondary standards are made from the primary standards, but most of the times we
use secondary standards for comparison since it is not always feasible to use the primary
standards from accuracy, reliability and cost point of view. There is no difference in the
measured value of the quantity whether one is using the direct method by comparing with
primary or secondary standard.
Generalized Measuring System:
The generalized measurement system comprises three stages (see the figure).
These are:
I) First stage - The detector-transducer stage.
II) Second stage - Intermediate modifying stage.
III) Final stage - Terminating stage, comprising an indicator, recorder or
controller.
I) First Stage: The Detector-Transducer Stage
The important function of this stage is to detect or sense the input signal
while remaining insensitive to every other possible input signal. For example,
a pressure sensor should be insensitive to acceleration, and in the measurement
of strain, the strain gauges should be insensitive to temperature.
II) Intermediate Modifying Stage
The second stage or the intermediate modifying stage converts the input signal in the form
that can be used easily by the final stage to indicate the value of the input physical
quantity. The modifying stage may change the type of the input signal so that the output
value can be measured easily. Or it may increase the amplitude and/or the power of the
input signal to the level so as to drive the final terminating devices.
The intermediate stage may also have to perform the functions of filtering out
unwanted inputs, as well as integration, differentiation, telemetering etc.,
wherever required.
III) Final Stage: Terminating Stage
The final stage or the terminating stage provides the information about the input physical
quantity in the form that can be easily read by the human beings or the controller in the
form of the pointer movement on the predefined scale, in the digital format, by the graph
etc.
Measuring instruments:
A measuring instrument is a device for measuring a physical quantity.
Terms Applicable to Measuring instruments:
Accuracy:
Accuracy indicates the closeness of the measured value to the actual or true
value.
It is expressed in the form of the maximum error (= measured value - true
value) as a percentage of the full-scale reading.
Accuracy of an instrument depends on factors such as static error, dynamic
error and reproducibility.
For example, if in laboratory you obtain a weight measurement of 3.2 kg for a given
substance, but the actual weight is 10 kg, then your measurement is not accurate. In this
case, your measurement is not close to the actual value.
Precision:
Precision is defined as the repeatability of a measuring process. The precision of an
instrument indicates its ability to reproduce a certain set of readings within a given
accuracy.
Or
Precision refers to the closeness of two or more measurements to each other.
Using the example above, if you weigh a given substance five times, and get 3.2 kg each
time, then your measurement is very precise. Precision is independent of accuracy.
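The weighing example above can be sketched numerically. This is a minimal illustration: the full-scale value of 20 kg is an assumption added here, and the error-as-percentage-of-full-scale convention follows the definition of accuracy given earlier.

```python
import statistics

def accuracy_error_pct(measured_mean, true_value, full_scale):
    """Maximum error expressed as a percentage of the full-scale reading."""
    return abs(measured_mean - true_value) / full_scale * 100.0

readings = [3.2, 3.2, 3.2, 3.2, 3.2]   # five repeated weighings (kg)
true_value = 10.0                       # actual weight (kg)
full_scale = 20.0                       # assumed instrument full-scale (kg)

mean = statistics.mean(readings)        # far from 10 kg: poor accuracy
spread = statistics.pstdev(readings)    # zero spread: high precision

print(round(accuracy_error_pct(mean, true_value, full_scale), 1))  # 34.0
print(spread)                                                      # 0.0
```

The identical readings give zero spread (perfect precision) while the 34% error shows poor accuracy, making the independence of the two concepts concrete.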
The following figure illustrates the difference between accuracy and precision.
Difference between Accuracy and Precision:
1. Accuracy is the closeness to the true value of the quantity being measured;
precision is a measure of the reproducibility of the measurements.
2. Accuracy of measurement means conformity to truth; the term precise means
clearly or sharply defined.
3. Accuracy can be improved; precision cannot be improved.
4. Accuracy depends upon simple techniques of analysis; precision depends upon
many factors and requires sophisticated techniques of analysis.
5. Accuracy is a necessary but not sufficient condition for precision;
precision is a necessary but not a sufficient condition for accuracy.
Sensitivity:
It is the ability of the measuring device to detect small difference in a quantity being
measured.
Repeatability:
It is defined as the ability of a measuring system to reproduce output readings
when the same input is applied to it repeatedly under the same conditions and
in the same direction.
It could be expressed as the maximum difference between the output readings.
Range (or span):
It represents the highest possible value that can be measured by an instrument. It is the
difference between the largest and smallest results of measurement.
Or
It defines the maximum and minimum values of the inputs or the outputs for
which the instrument is recommended for use.
Threshold:
The minimum value of input signal required to produce a detectable change in
the output, starting from zero, is called the threshold.
Or
It is defined as the minimum value of input below which no output can be detected.
Back lash:
It is the maximum distance through which one part of the instrument may be moved
without disturbing the other part.
Hysteresis:
It is the difference between the indications of a measuring instrument when the same
value of the measured quantity is reached by increasing or by decreasing that quantity.
The phenomenon of hysteresis is due to the presence of dry friction as well as to the
properties of elastic elements. It results in the loading and unloading curves of the
instrument being separated by a difference called the hysteresis error. It also results in the
pointer not returning completely to zero when the load is removed. Hysteresis is
particularly noted in instruments having elastic elements.
The phenomenon of hysteresis in materials is due mainly to the presence of internal
stresses. It can be reduced considerably by proper heat treatment.
Calibration:
A known input is given to the measurement system and the system's output is
noted. If the output deviates from the given known input, corrections are made
in the instrument so that the output matches the input. This process is called
calibration.
Calibration is carried out to establish the accuracy and precision of the
instrument.
The calibration procedure compares an "unknown" or test item(s) or instrument with
reference standards (Primary and Secondary standards)
Standard procedure for calibration:
The following procedure is adopted for calibration of measuring instrument.
➢ Cleaning of instruments: Every instrument should be cleaned thoroughly.
➢ Determination of error: Determine the errors in the instruments by various
methods.
➢ Check for tolerable limits: The errors are to be compared with the allowable
tolerances.
➢ Minor changes: To minimise the errors in the readings, make minor
adjustments to the instruments where possible.
➢ Allotment of calibration setup: Each instrument is set up as per its
standards.
➢ Calibration date: Allot the next calibration date to calibrate the instruments.
Errors in measurements:
An error may be defined as the difference between the measured value and the
actual value. To understand the concept of errors in measurement, you should
know the two terms that define the error: true value and measured value. The
true value is impossible to determine exactly by experimental means; it may be
defined as the average value of an infinite number of measured values. The
measured value is the estimate of the true value obtained by taking several
readings during an experiment.
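The idea that the true value is approached by averaging many readings can be illustrated with a small simulation. The true value of 50, the error spread, and the number of readings are all arbitrary assumptions for the sketch.

```python
import random
import statistics

random.seed(42)
true_value = 50.0

# Simulate repeated readings contaminated by random error.
readings = [true_value + random.gauss(0, 0.5) for _ in range(1000)]

# The mean of many readings is the practical estimate of the true value.
estimated_true = statistics.mean(readings)

# The error of any single measurement is its deviation from that estimate.
error = readings[0] - estimated_true

print(round(estimated_true, 1))
print(round(abs(error), 2))
```

With many readings the mean lands very close to 50.0 even though individual readings scatter, which is why averaging a large number of readings reduces random error (as noted later in this chapter).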
Types of Errors in Measurement System
Generally errors are classified into three types:
1. Systematic errors or fixed errors.
❖ Instrumental Errors
❖ Environmental Errors
❖ Observational Errors
❖ Theoretical Errors
❖ Calibration errors
❖ Human errors
❖ Loading errors
2. Random errors or Accidental errors
❖ Errors by fluctuating environmental conditions.
❖ Judgement errors
❖ Insufficient sensitivity of the measuring instrument
3. Illegitimate errors.
❖ Blunders or mistakes
❖ Computational errors
❖ Chaotic errors
1. Systematic Errors
Systematic errors are those that occur due to a fault in the measuring device.
These errors can be removed by correcting the measuring device. They may be
classified into the following categories.
• Instrumental Errors
• Environmental Errors
• Observational Errors
• Theoretical errors
Instrumental Errors
Instrumental errors occur due to faulty construction of the measuring
instruments. These errors may arise from hysteresis or friction, and include
the loading effect and misuse of the instruments. To reduce instrumental
errors, suitable correction factors must be applied, and in extreme cases the
instrument must be carefully recalibrated.
Environmental Errors
The environmental errors occur due to external conditions around the
instrument, mainly pressure, temperature, humidity or magnetic fields. They can
be reduced by controlling these conditions or by applying suitable corrections.
Observational Errors
As the name suggests, these types of errors occur due to wrong observation or
reading of the instruments, particularly in the case of energy meter readings.
The wrong observations may be due to parallax. To reduce parallax error, highly
accurate meters provided with mirrored scales are needed.
Theoretical Errors
Theoretical errors are caused by simplification of the model system. For
example, if a theory assumes that the temperature of the surroundings will not
change the readings when it actually does, this assumption becomes a source of
error in measurement.
2. Random Errors
Random errors are caused by sudden changes in experimental conditions, noise,
and fatigue of the operator. These errors may be either positive or negative.
Examples of random errors are changes in humidity, unexpected changes in
temperature and fluctuations in voltage. These errors may be reduced by taking
the average of a large number of readings.
3. Illegitimate errors.
These are errors which occur through mistakes in reading or computing the
results. Chaotic errors are those introduced into the measurements by high
vibration, shock to the instruments or electrical noise.
Factors in selecting the measuring instruments.
The following factors are considered in selection of measuring instruments
1. Accuracy expected from the instrument.
2. The time required to obtain the final measurement data.
3. Cost of measuring instrument.
4. The type of data displayed, i.e., indicating, recording, photograph etc.
5. Whether the quantity to be measured has a constant value or is time-variant
(Ex: linear or parabolic).
6. Safety in use.
7. Adaptability to different sizes of inputs.
Screw Thread Metrology
Screw thread metrology deals with the measurement of threads. To understand
what measurements can be taken from screw threads, we should first know the
terms, or elements, of a thread. The screw thread terminology below lists these
terms and their definitions.
Screw Thread Terminology:
The following are the terms of the screw threads:
Screw Thread: It is defined as a helical ridge which is formed by a continuous helical
groove of uniform cross section on the external or internal surface of the cylinder or cone.
The threads formed on a cylinder are known as straight threads and the threads
formed on a cone or a frustum of a cone are known as tapered threads.
External Thread: Threads formed on the outside of the workpiece body are known as
external threads. Ex: Bolts and Studs etc.
Internal Thread: The threads formed on inside of the workpiece body are known as
Internal Threads. Ex: Nuts
Right hand or left hand thread: Place the thread so that its longitudinal axis
is normal to the observer and rotate it in the clockwise direction: if it moves
away from the observer, it is a right-hand thread; if it moves towards the
observer, it is a left-hand thread.
Form of thread: It is an edge shape of one complete thread as seen in axial section.
Crest of thread: It is the top most point of the groove forming threads.
Root of thread: It is the bottom point of the groove forming threads.
Flanks of threads: These are the straight edge surfaces which join the crest to
the root.
Angle of thread: It is an angle between two opposite flanks or slopes of a thread
measured in an axial plane.
Pitch: It is the distance between the two successive crest points or root points measured
parallel to the axis of the thread.
Lead: It is the distance moved by the screw for one complete revolution with
respect to its mating part. The lead is sometimes equal to the pitch, but not
always; for a multi-start thread the lead is the pitch multiplied by the number
of starts.
Threads per inch: It is the number of threads per inch of length. It is the
reciprocal of the pitch.
Helix angle: It is an angle made by the helical curve of the thread with the axis of the
thread.
Depth of thread: It is the distance between crest point and root point which is measured
along a plane perpendicular to the axis of thread.
Major diameter: It is defined as the diameter of an imaginary cylinder which passes
through the crest points of the thread.
Minor diameter: It is defined as the diameter of an imaginary cylinder which passes
through the root points of the thread.
Mean or Effective or Pitch diameter: It is defined as the diameter of an imaginary
cylinder which passes through the pitch line of the screw threads.
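Two relations from the terminology above can be sketched numerically: threads per inch is the reciprocal of the pitch, and the lead equals the pitch multiplied by the number of thread starts. The example pitch values and the metric-to-inch conversion are illustrative assumptions.

```python
def threads_per_inch(pitch_mm):
    """TPI is the reciprocal of the pitch; convert a metric pitch to inches."""
    pitch_inch = pitch_mm / 25.4
    return 1.0 / pitch_inch

def lead(pitch_mm, starts=1):
    """Lead = pitch for a single-start thread; pitch x starts otherwise."""
    return pitch_mm * starts

print(round(threads_per_inch(1.27), 2))  # 20.0 TPI for a 1.27 mm pitch
print(lead(1.5, starts=2))               # 3.0 mm lead for a two-start thread
```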
Thread Gauge Micrometer:
A thread gauge micrometer is a device used to measure the pitch diameter of a
screw thread. It is just like an ordinary micrometer, except that it has a
pointed spindle and a double-V anvil, both correctly shaped to contact the
screw thread.
The anvil is not fixed but is free to rotate; thus the V-anvil can accommodate
itself to any angle of thread. When the conical spindle is brought into contact
with the V-anvil, the micrometer reads zero. Different sets of anvils are
provided for different types of threads. The instrument reads directly in terms
of pitch diameter.
The angle of the V-anvil and of the conical point at the end of the spindle
corresponds to the included angle of the thread profile. The micrometer spindle
is moved over the screw whose pitch diameter is to be measured. When the
double-V anvil and the conical spindle contact the thread flanks, the
micrometer reading is taken; this reading is the measured pitch diameter of the
screw thread.
Bench Micrometer
The major diameter of a screw thread can be determined with a normal micrometer
by carefully adjusting the screw between its two anvils. However, it is
difficult to control the pressure applied while adjusting the screw between the
anvils, and this pressure increases the chance of measurement errors. To avoid
this, a special instrument, the bench micrometer, is used.
A bench micrometer consists of two anvils, one connected to a fiducial
indicator and the other to a micrometer head. The fiducial indicator ensures
the application of a uniform, light pressure while adjusting the screw between
the two anvils, and the micrometer gives the desired reading. Generally, for
finding the major diameter of a screw, a setting cylinder is used; this is a
standard piece whose diameter equals the nominal major diameter, and it serves
to cancel any pitch errors in the micrometer screw. First, the micrometer
reading R1 is taken on the setting cylinder. Next, the reading R2 is taken on
the screw under test. The major diameter is then obtained from the following
formula:
Major diameter = D ± (R2 - R1)
where D = setting cylinder diameter, R1 = micrometer reading on the setting
cylinder, and R2 = micrometer reading on the screw thread.
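The bench-micrometer formula can be applied as in the sketch below. The setting-cylinder diameter and the two readings are hypothetical values chosen for illustration.

```python
def major_diameter(setting_dia, r_cylinder, r_screw):
    """Major diameter = D + (R2 - R1): the difference between the reading on
    the screw and the reading on the standard setting cylinder, applied to
    the known cylinder diameter."""
    return setting_dia + (r_screw - r_cylinder)

# Hypothetical readings, in mm
D = 10.000    # setting-cylinder (standard) diameter
R1 = 10.002   # micrometer reading on the setting cylinder
R2 = 9.982    # micrometer reading on the screw thread

print(round(major_diameter(D, R1, R2), 3))  # 9.98
```

Because only the difference (R2 - R1) enters the result, any constant pitch error in the micrometer screw cancels, which is the point of using the setting cylinder.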
Introduction to Angular Measurements:
Definition of Angle:
• Angle is defined as the opening between two lines which meet at a point.
• If a circle is divided into 360 parts, then each part is called a degree (°).
• Each degree is subdivided into 60 parts called minutes ('), and each minute is
further subdivided into 60 parts called seconds (").
The unit 'radian' is defined as the angle subtended by an arc of a circle of
length equal to the radius. If arc AB = radius OA, then the angle θ = 1 radian.
For measuring angles, no absolute standard is required. The measurement is done
in degrees, minutes and seconds.
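Converting between degrees-minutes-seconds and decimal degrees or radians follows directly from the definitions above (1° = 60', 1' = 60"). The example angle below is arbitrary.

```python
import math

def dms_to_degrees(deg, minutes=0, seconds=0):
    """Convert a degrees-minutes-seconds angle to decimal degrees."""
    return deg + minutes / 60.0 + seconds / 3600.0

angle = dms_to_degrees(30, 15, 36)    # 30 deg 15' 36"
print(round(angle, 2))                # 30.26
print(round(math.radians(angle), 4))  # the same angle in radians
```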
There are several methods of measuring angles and tapers. The various instruments used
are angle gauges, clinometers, bevel protractor, sine bar, sine centres, taper plug and ring
gauges.
Vernier Bevel Protractor (Universal Bevel Protractor):
It is the simplest instrument for measuring the angle between two faces of a
component. It consists of a base plate attached to a main body, and an
adjustable blade attached to a circular plate containing a vernier scale.
The adjustable blade is capable of sliding freely along the groove provided on it and can
be clamped at any convenient length. The adjustable blade along with the circular plate
containing the vernier can rotate freely about the centre of the main scale engraved on the
body of the instrument and can be locked in any position with the help of a clamping
knob.
The main scale is graduated in degrees. The vernier scale has 12 divisions on either side
of the centre zero. They are marked 0-60 minutes of arc, so that each division is 1/12th of
60 minutes, i.e. 5 minutes. These 12 divisions occupy the same arc space as 23
degrees on the main scale, so that each vernier division = 23/12 = 1 11/12
degrees.
If the zero graduation on the vernier scale coincides with a graduation on main scale, the
reading is in exact degrees. If some other graduation on the vernier scale coincides with a
main scale graduation, the number of vernier graduations multiplied by 5 minutes must be
added to the main scale reading.
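The least-count derivation and the reading rule above can be sketched as follows; the helper function and the example reading (main scale at 41°, fourth vernier line coinciding) are illustrative assumptions.

```python
from fractions import Fraction

# Each vernier division spans 23/12 degrees, so the least count is
# 2 deg - 23/12 deg = 1/12 deg = 5 minutes of arc.
least_count_min = (Fraction(2) - Fraction(23, 12)) * 60
print(least_count_min)  # 5

def protractor_reading(main_scale_deg, coinciding_division):
    """Total reading = main scale (deg) + vernier divisions x 5 minutes."""
    return main_scale_deg, coinciding_division * 5

deg, mins = protractor_reading(41, 4)  # 4th vernier line coincides
print(f"{deg} deg {mins} min")         # 41 deg 20 min
```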
ACUTE ANGLE MEASUREMENT OBTUSE ANGLE MEASUREMENT
Sine Bar:
Sine bar is a tool used to measure angles in metal working.
Sine bars are made from high-carbon, high-chromium, corrosion-resistant steel
which can be hardened, ground and stabilized. Two cylinders of equal diameter
are attached at the ends, as shown in the figure. The distance between their
axes may be 100, 200 or 300 mm. The sine bar is designed basically for the
precise setting out of angles and is generally used in conjunction with slip
gauges and a surface plate. The principle of operation relies upon the
application of trigonometry.
In the figure, the standard length AB (L) is used and, by varying the slip
gauge stack height (H), any desired angle θ can be obtained as θ = sin⁻¹(H/L).
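The sine-bar relation can be used both ways: to find the slip-gauge stack height for a desired angle (H = L·sin θ), or to find the angle from a measured stack (θ = sin⁻¹(H/L)). The sketch below assumes a 200 mm centre distance.

```python
import math

def slip_gauge_height(angle_deg, centre_distance=200.0):
    """H = L * sin(theta): stack height to set the sine bar to angle_deg."""
    return centre_distance * math.sin(math.radians(angle_deg))

def sine_bar_angle(height, centre_distance=200.0):
    """theta = asin(H / L): the inverse relation, used when measuring."""
    return math.degrees(math.asin(height / centre_distance))

H = slip_gauge_height(30.0)
print(round(H, 3))                  # 100.0 mm for a 200 mm sine bar
print(round(sine_bar_angle(H), 3))  # 30.0
```

Note how the sine function flattens near 90°: beyond about 45° a small error in H produces a large error in θ, which is why sine bars become impracticable at large angles (disadvantage 3 below).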
Use of Sine Bar:
For checking unknown angles of a component, a dial indicator is moved along the surface
of work and any deviation is noted. The slip gauges are then adjusted such that the dial
reads zero as it moves from one end to the other.
Advantages of sine bar:
1. It is used for accurate and precise angular measurement.
2. It is available easily.
3. It is cheap.
Disadvantages:
1. The application is limited for a fixed center distance between two plugs or rollers.
2. It is difficult to handle and position the slip gauges.
3. If the angle exceeds 45°, sine bars are impracticable and inaccurate.
4. A large angular error may result from even a slight error in the sine bar.
GAUGES
Gauges are inspection tools of rigid design, without a scale, which serve to check the
dimensions of manufactured parts. Gauges do not indicate the actual value of the
inspected dimension on the work. They can only be used for determining as to whether
the inspected parts are made within the specified limits or not.
Plain gauges are used for checking plain (Unthreaded) holes and shafts.
Plain gauges may be classified as follows
1. According to their type:
(a) Standard gauges
These are made to the nominal size of the part to be tested and have the
measuring member equal in size to the mean permissible dimension of the part to
be checked. A standard gauge should mate with the part with some snugness.
(b) Limit Gauges
These are also called 'go' and 'no go' gauges. They are made to the limit
sizes of the work to be measured. One side or end of the gauge is made to
correspond to the maximum permissible size and the other end to the minimum.
The function of limit gauges is to determine whether the actual dimensions of
the work are within or outside the specified limits.
2. According to their purpose:
(a) Workshop gauges: Working gauges are those used at the bench or machine in
gauging the work as it is being made.
(b) Inspection gauges: These gauges are used by the inspection personnel to inspect
manufactured parts when finished.
(c) Reference or Master Gauges: These are used only for checking the size or
condition of other gauges.
3. According to the form of tested surface:
Plug gauges: They check the dimensions of a hole
Snap & Ring gauges: They check the dimensions of a shaft.
4. According to their design:
(a) Single limit and double limit gauges
(b) Single ended and double ended gauges
(c) Fixed and adjustable gauges
Plain Plug gauges:
Plug gauges are limit gauges used for checking holes and consist of two
cylindrical wear-resistant plugs. The plug made to the lower limit of the hole
is known as the 'GO' end and the plug made to the upper limit of the hole is
known as the 'NO GO' end.
If the GO end does not enter the hole, the hole is undersize and the component
is rejected.
If the hole size is within the limits, the GO end should enter the hole and the
NO GO end should not.
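The GO / NO GO decision logic for a hole can be sketched as a simple check. The hole limits and test diameters below are hypothetical; a real gauge makes this decision mechanically, without ever reporting the actual dimension.

```python
def inspect_hole(hole_dia, low_limit, high_limit):
    """Simulate a double-ended plug gauge: GO is made to the lower limit of
    the hole, NO GO to the upper limit."""
    if hole_dia < low_limit:     # GO plug will not enter
        return "reject: undersize"
    if hole_dia >= high_limit:   # NO GO plug enters
        return "reject: oversize"
    return "accept"

print(inspect_hole(25.02, 25.00, 25.05))  # accept
print(inspect_hole(24.98, 25.00, 25.05))  # reject: undersize
print(inspect_hole(25.07, 25.00, 25.05))  # reject: oversize
```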
The plugs are arranged on either ends of a common handle. Plug gauges are normally
double ended for sizes up to 63 mm and for sizes above 63 mm they are single ended type.
The handles of heavy plug gauges are made of light metal alloys while the handles of
small plug gauges can be made of some non metallic materials.
For smaller through holes, both GO & NO GO gauges are on the same side separated by a
small distance. After the full length of GO portion enters the hole, further entry is
obstructed by the NO GO portion if the hole is within the tolerance limits.
Ring gauges:
Ring gauges are used for checking the diameter of shafts. A ring gauge has a
central hole, which is the gauging surface and is heat treated, ground and
lapped. The other surfaces are finished smooth and the periphery is knurled.
Ring gauges are available separately as GO and NO GO rings, as shown in the
figure. The GO ring is made to the upper limit size of the shaft and the NO GO
ring to the lower limit size.
SNAP (or) GAP GAUGES:
A snap gauge usually consists of a plate or frame with a parallel faced gap of the required
dimension. Snap gauges can be used for both cylindrical as well as non cylindrical work
as compared to ring gauges which are conveniently used only for cylindrical work.
Double ended snap gauges can be used for sizes ranging from 3 to 100 mm. For sizes
above 100 mm up to 250 mm a single ended progressive gauge may be used.
Progressive gap gauge Double Ended gap gauge
Surface finish:
Surface finish, by definition, is the allowable deviation from a perfectly flat
surface that results from a manufacturing process.
Surface finish, also known as surface texture or surface topography, is the characteristics
of a surface. It has three components: lay, surface roughness, and waviness.
Terminology:
Lay: It is the measure of the direction of the predominant machining pattern and it
reflects the machining operation used to produce it.
Surface roughness: Surface roughness commonly shortened to roughness is a measure of
the finely spaced surface irregularities. In engineering, this is what is usually meant by
"surface finish".
Waviness: Waviness is the measurement of the more widely spaced component of
texture. These usually occur due to warping, vibrations, or deflection during machining.
Flaws: Irregularities that occur occasionally on the surface. It includes cracks, scratches,
inclusions, and similar defects in the surface.
Roughness height: It is the height of the irregularities with respect to a
reference line. It is measured in mm or microns.
Roughness width: It is the distance, parallel to the nominal surface, between
successive peaks or ridges which constitute the predominant pattern of the
roughness. It is measured in mm.
Waviness height: Waviness height is the peak to valley distance of the surface profile,
measured in millimetres.
Real surface: It is the surface limiting the body and separating it from the
surrounding medium.
Geometrical surface: It is the surface prescribed by the design or drawing,
neglecting the errors of form and surface roughness.
Effective surface: It is the close representation of real surface obtained by instrumental
means.
Surface texture: Repetitive or random deviations from the nominal surface which form
the pattern of the surface. It includes roughness, waviness, lay and flaws.
Traversing length: It is the length of the profile necessary for the evaluation
of the surface roughness parameters. It may include one or more sampling
lengths.
Sampling length (l): It is the length of profile necessary for the evaluation
of the irregularities to be taken into account. It is also known as the cut-off
length of the measuring instrument.
Mean line of the Profile: It is the line that divides the effective profile such that, within
sampling length the sum of squares of distances (y1, y2, ….yn) between effective points
and mean line is minimum.
Center line of the Profile: It is the line for which the area embraced by the profile above
or below the line is equal.
Spacing of irregularities: It is the mean distance between the more prominent
irregularities of the effective profile, within the sampling length.
Maximum height of irregularities: It is defined as the average difference between the
five highest peaks and the five deepest valleys within the sampling length measured from
a line parallel to the mean line.
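The "maximum height of irregularities" definition above (average of the five highest peaks minus the average of the five deepest valleys within the sampling length) can be sketched directly. The profile ordinates below are invented for illustration.

```python
def max_height_irregularities(profile):
    """Average of the five highest peaks minus the average of the five
    deepest valleys, with heights measured from the mean line."""
    ordered = sorted(profile)
    five_valleys = ordered[:5]    # five most negative ordinates
    five_peaks = ordered[-5:]     # five most positive ordinates
    return sum(five_peaks) / 5 - sum(five_valleys) / 5

# Hypothetical profile ordinates (microns) within one sampling length,
# measured from the mean line
profile = [2.1, -1.8, 3.0, -2.5, 1.2, -0.9, 2.7, -2.2, 1.9, -1.5, 2.4, -2.0]

print(round(max_height_irregularities(profile), 2))  # 4.42
```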
Talysurf Surface Roughness Tester:
This is an electronic instrument that works on the principle of carrier
modulation. The measuring head of this instrument consists of a diamond stylus
of about 0.002 mm tip radius and a skid or shoe, which is drawn across the
surface by a motorised driving unit (gearbox). The gearbox provides three
motorised speeds, giving respectively x20 and x100 horizontal magnification and
a speed suitable for average reading. A neutral position in which the pick-up
can be traversed manually is also provided. The arm carrying the stylus forms
an armature which pivots about the centre piece of an E-shaped stamping, as
shown in the figure. On the two outer legs (pole pieces) of the E-shaped
stamping there are coils carrying an a.c. current. These two
coils with other two resistances form an oscillator. As the armature is pivoted about the central
leg, any movement of the stylus causes the air gap to vary and thus the amplitude of the original
a.c. current flowing in the coils is modulated. The output of the bridge thus consists of modulation
only as shown in Fig. This is further demodulated so that the current now is directly proportional
to the vertical displacement of the stylus only.
The demodulated output operates a pen recorder to produce a permanent record,
and a meter to give a numerical assessment directly. In the recorder of this
instrument, the marking medium is an electric discharge through a specially
treated paper which blackens at the point of the stylus; the record therefore
has no distortion due to drag and is a strictly rectilinear one. Nowadays,
microprocessors have made available complete statistical multi-trace systems
that measure several places over a given area, provide standard deviations and
area-averaged readings, and give complete surface characterisation.
Coordinate Measuring Machine (CMM):
A coordinate measuring machine (CMM) is a device for measuring the physical
geometrical characteristics of an object. The machine may be manually
controlled by an operator or computer controlled. Measurements are defined by a
probe attached to the third moving axis of the machine. Probes may be
mechanical, optical, laser, or white light, among others.
A machine which takes readings in six degrees of freedom and displays these readings in
mathematical form is known as a CMM.
The typical 3D "bridge" CMM is composed of three axes, X, Y and Z. These axes are
orthogonal to each other in a typical three-dimensional coordinate system. Each axis has a
scale system that indicates the location of that axis. The machine reads the input from the
touch probe, as directed by the operator or programmer. The machine then uses the X,
Y, Z coordinates of each of these points to determine size and position, typically with
micrometre precision.
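The size-from-coordinates idea above can be sketched in a few lines of Python. The probed coordinates are invented purely for illustration:

```python
import math

def distance(p, q):
    """Euclidean distance between two probed 3D points (same units as input)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Hypothetical probe touches on opposite faces of a block (values in mm)
p1 = (10.0000, 25.0000, 5.0000)
p2 = (60.0025, 25.0000, 5.0000)

width = distance(p1, p2)
print(f"measured width: {width:.4f} mm")  # measured width: 50.0025 mm
```

In a real CMM the scale resolution and probe calibration, not the arithmetic, set the micrometre-level precision.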
A coordinate measuring machine (CMM) is also a device used in manufacturing and
assembly processes to test a part or assembly against the design intent. By precisely
recording the X, Y, and Z coordinates of the target, points are generated which can then
be analyzed via regression algorithms for the construction of features. These points are
collected by using a probe that is positioned manually by an operator or automatically via
Direct Computer Control (DCC). DCC CMMs can be programmed to repeatedly measure
identical parts; thus a CMM is a specialized form of industrial robot.
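As a minimal illustration of constructing a feature from probed points, the sketch below builds a circle from three touch points in a plane. Commercial CMM software fits features to many points by least-squares regression; the three-point construction here is only the simplest special case, and all coordinates are invented:

```python
def circle_from_3_points(p1, p2, p3):
    """Return (centre, radius) of the circle through three XY points,
    e.g. three probe touches on the wall of a bore."""
    ax, ay = p1
    bx, by = p2
    cx, cy = p3
    # Twice the signed area of the triangle; zero means collinear points
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    r = ((ax - ux) ** 2 + (ay - uy) ** 2) ** 0.5
    return (ux, uy), r

centre, r = circle_from_3_points((7.0, 3.0), (2.0, 8.0), (-3.0, 3.0))
print(centre, r)  # (2.0, 3.0) 5.0
```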
Coordinate-measuring machines include three main components:
1. The main structure which includes three axes of motion.
2. Probing system
3. Software.
Role of CMM:
CMMs are particularly suited for the following conditions:
• Short runs-We may be producing hundreds or even thousands of a part, but the
production run is not sufficient to justify the cost of production inspection tooling.
• Multiple features-When we have a number of features both dimensional and
geometric to control, CMM is the instrument that makes control easy and economical.
• Flexibility-Because we can choose the application of the CMM system, we can also
do short runs and measure multiple features.
• High unit cost-Because reworking or scrapping is costly, CMM systems significantly
increase the production of acceptable parts.
• Production interruption-Whenever one type of part must be inspected and passed
before machining can start on the next, a CMM may actually save the manufacturer
more money by reducing machine down time than the inspection itself costs.
Advantages and Disadvantages of CMM:
Advantages:
❖ High precision and accuracy
❖ Requires less labour.
❖ Accurate dimensions can be obtained simply from the coordinates of, and the
distance between, two reference points
❖ Robustness against external force and error accumulation.
❖ Reduction in set up time.
❖ Uniform inspection quality.
❖ Total flexibility.
❖ Simplification of the inspection procedure.
❖ Reduces total cost.
Disadvantages:
❖ The Coordinate measuring machines are very costly.
❖ The CMMs are less portable.
❖ If the operating software crashes, it is difficult to restart the entire system.
❖ The software needs to construct some features on its own, as some parts of the
workpiece are unreachable by the probe.
Applications of CMM:
❖ CMMs find applications in the automobile, machine tool, electronics, space and
many other industries. These machines are ideally suited for the development
of new products and the construction of prototypes because of their high
accuracy, versatility and ease of operation.
❖ Because of the high inspection speed, precision and reproducibility of coordinate
measuring machines, they are used to check the dimensional accuracy
of NC-produced workpieces at various steps of production.
❖ For safety components as for aircraft and space vehicles, 100% inspection is
carried out and documented using CMM.
❖ CMMs are best suited for the test and inspection of test equipment, gauges and
tools.
❖ CMMs can be used for determining the dimensional accuracy of bought-in
components and the variation among them, and thus the quality of the supplier.
❖ CMMs can also be used for sorting tasks to achieve optimum pairing of
components within tolerance limits.
❖ A coordinate measuring machine can replace several single-purpose instruments
with a low degree of utilisation, such as a gear tester, gauge tester, length measuring
machine, measuring microscope, etc.
❖ CMMs also help to ensure the economic viability of NC machines by
reducing their downtime waiting for inspection results. They also help in reducing reject
and rework costs through measurement at the appropriate time with a suitable
CMM.
Uses of CMM:
❖ Profile measurement.
❖ Dimensional measurement.
❖ Depth mapping.
❖ Angularity or orientation measurement.
❖ Shaft measurement.
❖ Digitizing or imaging activities.
CHAPTER 2 TRANSDUCERS AND STRAIN GAUGES

Block diagram of a transducer: Input (physical quantity: mechanical displacement, thermal or optical signal) → Transducer → Output (electrical signal)
Introduction:
Sensing of the input parameter is essential in all measurement. This is effectively done using sensors
and transducers.
Definition:
A transducer is a device used to convert a physical quantity such as a displacement, thermal or optical signal into an electrical
quantity that may be amplified, recorded and otherwise processed in the instrumentation system.
Transducers are also known as prime sensors or pickups or signal generators.
The function of the transducer is to present the input information in an analogous form.
Simple Block diagram of Transducer:
Examples of common transducers:
• Microphone. (Converts sound into electrical impulses. Sound energy into
electrical energy)
• Loud speaker. (Converts electrical impulses into sound. Electrical energy into
sound energy.)
• Electric motor.(Converts electrical energy into mechanical energy or motion)
• Thermocouple. (Converts thermal energy into electrical energy) etc.
Characteristics of Transducers:
The characteristics of transducers are as follows:
• It should be compact, small in size and light in weight
• It should have exceptional reliability
• It should have high sensitivity
• It should maintain stability with environmental changes
• It should have a linear relationship between input and output
• It should be available at the lowest possible cost and be easy to produce and fabricate.
Requirements of Transducers:
1) Nature of measurement to be made
2) Mechanical input characteristics
a. Linearity
b. Mechanical hysteresis
c. Viscous flow or Creep
3) Loading effect of the transducers
4) Environmental factors (Ability to withstand environmental conditions).
5) Capability of the transducer
6) Compatibility of the transducer and measuring system.
7) Economical factors or considerations
8) Smaller in size and weight.
9) High sensitivity.
10) Low cost.
Nature of measurement to be made
The measured parameters are either steady state, transient or dynamic, and each of these groups falls
within certain frequency ranges. Ex: A strain gauge located on a structural member of an aircraft
would indicate steady loading in smooth flight, and dynamic and transient loading during landing.
The sensitivity of the transducer is indicative of the amplification that will be required before the signal is
processed.
Mechanical input characteristics
For static measurements, it is very important that the transducer has a linear relationship
between input and output.
Mechanical hysteresis occurs in transducers such as springs, pressure capsules, etc., and is an indication of the
imperfect response of the microscopic crystal grains integrated over the macroscopic dimensions of the
strained transducer element. Factors like friction, backlash, loose screws, etc. are responsible for
hysteresis in the transducer.
Loading Effect of transducer
If the attachment or installation of the transducer in any way affects or changes the value of the parameter
being measured, errors may be introduced. The loading characteristics are determined by the mass of the
transducer, the exterior size of the transducer and the geometric configuration of the transducer.
Ex: If the mass of the thermocouple junction is too large, it physically affects the process by absorbing
the heat in the system
Environmental considerations
The following environmental factors affect the performance of transducer
a) Temperature
b) Shocks
c) Vibration
d) Electromagnetic interference.
The effect of temperature is to cause error in the zero reference setting and in the sensitivity of the transducer
output. To avoid errors due to vibration and shocks, transducers must be selected with a minimum of
movable mass in the sensing mechanism.
General considerations for the transducer environment include accessibility of the transducer for adjustments,
simplicity of mounting and cable installation, convenient size and shape, resistance to corrosion, etc.
Transducer Capability
Transducer capabilities should be investigated to determine the effects of overheating, over-range
operation and the power dissipation rating. Many transducers will maintain their calibration only when
operated at or below the maximum designated temperature or range; overloading beyond these limits may
shift the calibration. Some units may be damaged by overloading, particularly through insulation breakdown
or distortion of the mechanical linkage. When sensing the input signal, the output power of
the transducer must not exceed the specified maximum power. Other factors involved in the selection
of transducers are cost, basic simplicity, reliability and low maintenance.
Factors To Be Considered While Selecting Transducer:
• It should have high input impedance and low output impedance, to avoid loading effect.
• It should have good resolution over its entire selected range.
• It must be highly sensitive to desired signal and insensitive to unwanted signal.
• Preferably small in size.
• It should be able to work in corrosive environment.
• It should be able to withstand pressure, shocks, vibrations etc..
• It must have high degree of accuracy and repeatability.
• Selected transducer must be free from errors.
Classification Of Transducers:
1. Based on the physical phenomenon,
➢ Primary transducer
➢ Secondary transducer
2. Based on the power type
➢ Active transducer
➢ Passive transducer
3. Based on the type of output the classification of transducers are made,
➢ Analog transducer
➢ Digital transducer
4. Based on the electrical phenomenon
➢ Resistive transducer
➢ Capacitive transducer
➢ Inductive transducer
➢ Photoelectric transducer
➢ Thermoelectric transducer
➢ Piezoelectric transducer
➢ Photovoltaic transducer
5. Based on the non-electrical phenomenon
➢ Linear displacement
➢ Rotary displacement
6. Based on the transduction phenomenon,
➢ Transducer
➢ Inverse transducer
Active Transducers:
These transducers do not need any external source of power for their operation;
therefore they are also called self-generating transducers. The energy required for the production
of an output signal is obtained from the physical phenomenon being measured.
Example: Thermocouples, Piezoelectric transducer, Photovoltaic cell etc.
Passive Transducers:
These transducers need an external source of power for their operation, so they are not self-generating
transducers; they are known as externally powered transducers. These transducers produce the
output signal in the form of variation in resistance, capacitance, inductance or some other electrical
parameter in response to the quantity to be measured. Example: Resistance thermometer, thermistors,
differential transformer, Potentiometric device etc.
Analog transducer:
These transducers convert the input phenomenon into an analogous output which is a continuous
function of time. Example: Strain gauge, Thermocouple, Thermistors, LVDT
Digital transducer:
These transducers convert the input phenomenon into an electrical output which may be in the form
of pulse.
Example of a digital transducer application: a turbine meter used for flow measurement.
OR
Classification:
The transducers can be grouped under the following main groups. Transducers
are also classified as
1) Primary and secondary transducers
2) Passive and active transducers.
3) Mechanical transducers
a. Mechanical Springs
b. Pressure sensitive elements
c. Hydro Pneumatic elements
4) Electrical Transducers
a. Resistive transducers
b. Resistance strain gauges
c. Variable inductance transducers
d. Capacitive transducers
5) Piezoelectric transducers
6) Photo electrical transducers
7) Photo Conductive transducers
8) Ionization transducers
9) Electronic transducers
Actuating Mechanisms:
An actuator is a device that converts energy into motion; therefore it is a specific type of
transducer. Actuating mechanisms used in some devices are:
1. Bellows
2. Diaphragms- Flat, Corrugated and Capsule.
3. Bourdon tubes- Circular and Twisted.
4. Vanes
5. Rectilinear motions.
6. Shaft rotations – Pivot Torque and Unrestrained.
7. Proving rings.
8. Cantilever beams.
9. Beam elongation.
10. Seismic mass and springs and others.
Voltage and current generating analog transducers: Piezoelectric
Transducers:
Certain materials can produce an electrical potential when subjected to mechanical strain, or can change
dimension when subjected to a voltage; this is known as the piezoelectric effect.
This transducer works on the piezoelectric effect. When a force is applied to the plate, a stress is
produced in the crystal and, in certain crystals, a corresponding deformation. This deformation
produces a potential difference at the surface of the crystal, and this effect is known as the piezoelectric
effect.
The induced charge on the crystal is proportional to the impressed force. Fig. shows the arrangement of
piezoelectric transducer. A Piezoelectric crystal is placed between two plate electrodes. When a force is
applied to the crystal, potential difference (voltage) will be developed at the surface of the crystal and it
is given by,
E = g t p
Where
g = voltage sensitivity, t = crystal thickness, p = pressure
The induced charge is given by the relation
Q ∝ F, i.e. Q = kF
Where Q (charge) is in coulombs, F is in newtons and 'k' is the piezoelectric constant.
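The two relations above can be exercised numerically. The sketch below uses illustrative values only: the voltage sensitivity and charge constant are of the order commonly quoted for quartz, but should be treated as assumptions, not data.

```python
def piezo_voltage(g, t, p):
    """E = g * t * p: voltage sensitivity (V·m/N) x thickness (m) x pressure (N/m^2)."""
    return g * t * p

def piezo_charge(k, force):
    """Q = k * F: piezoelectric constant (C/N) x applied force (N)."""
    return k * force

g = 0.055     # V·m/N, assumed voltage sensitivity (order of magnitude for quartz)
t = 2e-3      # m, crystal thickness
p = 1.5e6     # N/m^2, applied pressure
print(f"E = {piezo_voltage(g, t, p):.1f} V")    # E = 165.0 V

k = 2.3e-12   # C/N, assumed charge constant (order of magnitude for quartz)
F = 100.0     # N, applied force
print(f"Q = {piezo_charge(k, F):.2e} C")        # Q = 2.30e-10 C
```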
Natural piezoelectric crystals are quartz and tourmaline, while synthetic crystals include Rochelle salt, barium titanate,
ammonium dihydrogen phosphate (ADP), potassium dihydrogen phosphate, ceramics A and B, lithium
sulphate, etc.
Characteristics of Piezoelectric Materials
Rochelle salt:
Among the available materials, Rochelle salt gives the highest output, but it needs
protection from moisture in the air and is limited to use up to 45 °C only.
Quartz:
Quartz is the most stable material of all, but has a very low output. Because of its high stability, quartz
is commonly used for stabilizing electronic oscillators. Usually quartz is shaped into a thin disk
and silvered on both faces for attaching the electrodes. The disk thickness is chosen so that it
provides a mechanically resonant frequency corresponding to the desired electrical frequency.
Barium titanate: It is a polycrystalline material that exhibits the piezoelectric effect. It may be
formed into a variety of sizes and shapes and can be used over a high temperature range.
Two Coil Transducer
A two-coil inductance transducer is shown in fig. It consists of a single coil with a centre tap,
effectively forming two coils. In this type the movement of the core (or armature) varies the relative
inductance of the two coils, and the variation in the inductance ratio between the two coils gives the
output. It is used as a secondary transducer for pressure measurement.
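The push-pull behaviour of the two coil halves can be modelled very simply. The sketch below assumes an idealised circuit whose output is the normalised difference of the two inductances; real circuits add excitation and demodulation details omitted here.

```python
def bridge_output(L1, L2):
    """Idealised normalised output of a centre-tapped two-coil inductive transducer:
    zero when the core is centred, sign gives the direction of movement."""
    return (L1 - L2) / (L1 + L2)

print(bridge_output(10.0, 10.0))  # 0.0  (core centred)
print(bridge_output(11.0, 9.0))   # 0.1  (core displaced one way)
print(bridge_output(9.0, 11.0))   # -0.1 (core displaced the other way)
```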
STRAIN GAUGES:
Introduction:
It is not currently possible to measure stress directly in a structure. However, it is possible to measure
strain, since strain is based on displacement. There are a number of techniques to measure strain, but the two
most common are extensometers (which monitor the distance between two points) and strain gauges.
Strain gauges are constructed from a single wire that is wound back and forth. The gauge is attached to
the surface of an object with the wires aligned in the direction in which the strain is to be measured.
The electrical resistance of the wires changes when they are elongated. Thus the voltage change in the
wires can be calibrated against the change in strain. Most strain gauge measurement devices automatically
convert the voltage change to strain, so the device output is the actual strain.
Definition
A strain gauge is a device used to measure strain on an object.
Purposes:
Strain gauges are used for one of the following two purposes.
1) To determine the state of strain existing at a point on a loaded member for the
purpose of stress analysis.
2) To act as a strain-sensitive transducer element calibrated in terms of quantities
such as force, pressure, displacement or acceleration, for the purpose of
measuring the magnitude of the input quantity.
Metals Used In Making Strain Gauges:
The strain gauges are made with the following metals.
1) Constantan
2) Nichrome
3) Dynalloy
4) Platinum alloy
5) Copper Nickel
6) Nickel Chrome
7) Nickel Iron
8) Modified Nickel Chrome
9) Platinum Tungsten
Classification
Strain gauges can be classified as follows.
a) Mechanical strain gauges
b) Optical strain gauges
c) Electrical strain gauges
a. Resistance strain gauges
i. Bonded type
ii. Un-bonded type
iii. Bonded wire type
iv. Bonded foil type
v. Semiconductor gauges
b. Capacitive gauges
c. Inductive gauges
d. Piezoelectric gauges
Mechanical Strain Gauges (Berry-type)
This type of strain gauge uses mechanical means for magnification; extensometers employing
compound levers with high magnification were used. Fig. shows a simple mechanical strain gauge. It
consists of two gauge points which will be seated on the specimen whose strain is to be measured. One
gauge point is fixed while the second gauge point is connected to a magnifying lever, which in turn
gives the input to a dial indicator. The lever magnifies the displacement and is indicated directly on the
calibrated dial indicator. This displacement is used to calculate the strain value.
The Berry extensometer as shown in the Fig. is used for structural applications in civil engineering
for long gauge lengths of up to 200 mm.
Fig. Mechanical Strain Gauge ( Berry Extensometer)
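The arithmetic behind the dial reading can be sketched as follows; the lever magnification and gauge length are illustrative values, not figures from the text:

```python
def strain_from_dial(dial_reading_mm, magnification, gauge_length_mm):
    """Strain measured by a mechanical (Berry-type) gauge:
    true elongation = dial reading / lever magnification,
    strain = elongation / gauge length."""
    elongation = dial_reading_mm / magnification
    return elongation / gauge_length_mm

# Hypothetical case: 0.5 mm dial movement, 5:1 lever, 200 mm gauge length
eps = strain_from_dial(0.5, 5.0, 200.0)
print(f"strain = {eps:.6f}")  # strain = 0.000500 (500 microstrain)
```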
Advantages
1. It has a self contained magnification system.
2. No auxiliary equipment is needed as in the case of electrical strain gauges.
Disadvantages
1. Limited only to static tests.
2. The high inertia of the gauge makes it unsuitable for dynamic measurements and
varying strains.
3. The response of the system is slow and also there is no method of recording the
readings automatically.
4. There should be sufficient surface area on the test specimen and clearance above it in
order to accommodate the gauge together with its mountings.
Optical Strain Gauges
The most commonly used optical strain gauge was developed by Tuckerman as shown in the figure. It
combines a mechanical and an optical system, consisting of an extensometer and an autocollimator. The
nominal length of the gauge is the distance from the knife edge to the point of contact of the lozenge. The
lozenge acts like a mirror. When the distance between the fixed knife edge and the lozenge changes due to
loading, the lozenge rotates, and any light beam falling on it is deflected. The function
of the autocollimator is to send parallel rays of light and receive back the reflected light beam from the
lozenge on the optical system. The relative movement of the reflected light as viewed through the eye-
piece of the autocollimator is calibrated to measure the strain directly. This gauge can be used for
dynamic measurements of up to 40 Hz using a photographic recorder, and strains as small as 2µm/m
can be resolved. Gauge lengths may vary from 6 mm to 250 mm.
Advantages:
The position of autocollimator need not be fixed relative to the extensometer, and reading can be taken
by holding the autocollimator in hand.
Disadvantages:
1. Limited only for static measurements.
2. Large gauge lengths are required.
3. Cannot be used where large strain gradients are encountered.
Mounting Of Strain Gauges
1) The surface on which the strain gauge has to be mounted must be properly
cleaned by an emery cloth and bare base material must be exposed.
2) Various traces of grease or oil etc., must be removed by using solvent like
acetone
3) The surface of the strain gauges coming in contact with the test item should also
be free from grease etc.
4) Sufficient quantity of cement is applied to the cleaned surface and the cleaned
gauge is then simply placed on it. Care should be taken to see that there are no
air bubbles between the gauge and the surface. The pressure applied
should not be so heavy that the paper is punctured and the grid shorted.
5) The gauges are then allowed to set for at least 8 or 10 hours before using it. If
possible a slight weight may be placed by keeping a sponge or rubber on the
gauge.
6) After the cement is fully cured the electrical continuity of the grid must be
checked by ohm-meter and the electrical leads may be welded.
Problems Associated With Strain Gauge Installations
The problems associated with strain gauge generally fall in to the following three categories.
1) Temperature effects: Temperature problems arise due to differential thermal
expansion between the resistance element and the material to which it is bonded.
Semiconductor gauges offer the advantage that they have a lower expansion
coefficient than either wire or foil gauges. In addition to the expansion problem,
there is a change in the resistance of the gauge with temperature, which must be
adequately compensated.
2) Moisture absorption: Moisture absorption by the paper and cement can change
the electrical resistance between the gauge and the ground potential and thus
affect the output resistance readings.
3) Wiring problems: These arise because of faulty connections between
the gauge resistance element and the external read-out circuit. Such problems
may develop from poorly soldered connections or from inflexible wiring, which
may pull the gauge loose from the test specimen or break the gauge altogether.
STRAIN GAUGE ROSETTES:
Introduction:
A strain gauge rosette is, by definition, an arrangement of two or more closely positioned gauge grids,
separately oriented to measure the normal strains along different directions in the underlying surface of
the test part. Rosettes are designed to perform a
very practical and important function in experimental stress analysis. It can be shown that for the
not-uncommon case of the general biaxial stress state, with the principal directions unknown, three
independent strain measurements (in different directions) are required to determine the principal strains
and stresses. Even when the principal directions are known in advance, two independent strain
measurements are needed to obtain the principal strains and stresses.
Two Element Rosette Gauges
These are used for the measurement of stresses in biaxial stress fields where the directions of the
principal stresses are known. Two strain gauges are mounted at 90° to each other. Whenever the strain
gradient along the surface is high and it is important to approach a 'point' as nearly as possible, the
grids are stacked one on top of the other, insulated from each other. Where there is a high
strain gradient perpendicular to the surface, the gauges must be as near to the surface as possible, i.e. in
one plane. It is shown in the following figure.
Three Element Rosette Gauges
These are used in general biaxial stress fields. In this type also there exist overlapping and single-plane
gauges; the choice of either type depends upon the nature of the strain gradient at the point
where the gauge is to be mounted. This is also known as a rectangular rosette. The three strain gauges
are oriented as shown in the following fig.
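For the rectangular (0°/45°/90°) rosette just described, the standard small-strain reduction equations give the principal strains from the three gauge readings. The sketch below implements those textbook equations; the readings themselves are invented:

```python
import math

def principal_strains(e1, e2, e3):
    """Principal strains from a rectangular rosette with gauges at
    0 deg (e1), 45 deg (e2) and 90 deg (e3):
    eps_p,q = (e1 + e3)/2 +/- (1/sqrt(2)) * sqrt((e1 - e2)^2 + (e2 - e3)^2)"""
    mean = (e1 + e3) / 2
    radius = math.sqrt((e1 - e2) ** 2 + (e2 - e3) ** 2) / math.sqrt(2)
    return mean + radius, mean - radius

# Hypothetical readings in strain (400, 250 and -100 microstrain)
ep, eq = principal_strains(400e-6, 250e-6, -100e-6)
print(f"eps_max = {ep:.1e}, eps_min = {eq:.1e}")
```

With the principal strains in hand, the principal stresses follow from Hooke's law for the material of the test part.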
Requirements of Ideal Strain Gauges
The following requirements are considered in selection of strain gauges.
a. High gauge factor
b. High resistivity
c. Low temperature sensitivity
d. High yield point
e. High electrical stability
f. High endurance limit
g. Good weldability or solderability
h. Low hysteresis.
i. Low thermal EMF
j. Corrosion resistant
Gauge Factor:
Gauge factor is defined as the ratio of the per-unit change in resistance (the electrical strain, ΔR/R)
to the mechanical strain (ΔL/L). It is denoted by 'F'. It is an important parameter of the strain gauge,
measuring the amount of resistance change for a given strain. It is given by,
Gauge factor, F = (ΔR/R) / (ΔL/L)
Where,
ΔR = Change in resistance, ΔL = Small change in length, R = Initial resistance,
L = Initial length
The higher the gauge factor of a strain gauge, the greater its sensitivity and the greater the electrical
output for indication and recording purposes. Every effort is therefore made to develop strain
gauges having a high gauge factor.
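Rearranging the definition gives ΔR = F·R·ε, so the resistance change for a given strain follows directly. The gauge values below are typical of commercial foil gauges but are assumptions for illustration:

```python
def resistance_change(gauge_factor, resistance_ohm, strain):
    """dR = F * R * eps, rearranged from F = (dR/R) / (dL/L)."""
    return gauge_factor * resistance_ohm * strain

# Assumed values: F = 2.0, R = 120 ohm, strain = 500 microstrain
dR = resistance_change(2.0, 120.0, 500e-6)
print(f"dR = {dR:.3f} ohm")  # dR = 0.120 ohm
```

A change this small (0.1% of R) is why strain gauges are normally read with a Wheatstone bridge rather than an ohmmeter.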