©2016 ASRT. All rights reserved. Essentials of Digital Imaging Module 1 Transcript




Essentials of Digital Imaging Module 1 – Fundamentals

1. ASRT Animation

2. Welcome
Welcome to Essentials of Digital Imaging – Module 1 Fundamentals.

3. License Agreement

4. Objectives
After completing this module, you will be able to:

Identify the characteristics of digital imaging receptors.

List the factors affecting digital receptor response to exposure.

Describe the image capture process for digital image receptors.

Define detective quantum efficiency (DQE) and its potential impact on patient dose.

Describe the dynamic range and latitude of digital image receptors.

List the factors that determine spatial resolution for digital image receptors.

Name the sources of image blur.

Recognize equipment associated with digital fluoroscopic imaging systems.

5. Basic Imaging Concepts
The basic goals of diagnostic imaging are to capture information found in a patient's body and present this information to an observer for viewing and analysis. To understand digital imaging, the technologist must first understand the fundamental concepts of each part of the imaging chain.

6. The X-ray Tube
The first element in the imaging chain is the source of ionizing radiation — the x-ray tube. The operator uses the x-ray tube to produce a beam of ionizing radiation. The primary beam exits the tube and strikes the patient. It consists of photons, or packets of energy, that diverge from their source, move in straight lines and travel at the speed of light. One of the key characteristics of the primary beam is that it consists of photons of varying energies, with different abilities to penetrate the tissues found in the patient. This concept of using a beam of varying energies and different penetrating capabilities is very important.

7. The Patient
The patient is a key part of the imaging chain because the patient contains 100% of the useful, or diagnostic, information. One of our first goals in diagnostic imaging is to accurately transfer that information to some form of image receptor. The tissues of the human body have varying abilities to attenuate, or absorb, x-ray photons. For example, tooth enamel is the most difficult substance to penetrate, and air or air-filled structures are poor absorbers of the x-ray beam. We transfer patient information to an image receptor by creating a pattern of x-ray photons in the beam of radiation exiting the patient that corresponds to the patient's anatomical structures. This beam, which contains the useful or diagnostic information, is known as the remnant beam.

8.
The Remnant Beam
If the energy of an x-ray beam is so low that it doesn't penetrate the patient, no information can be captured by an image receptor, and we can't create a useful or diagnostic image. At the opposite extreme, using a beam with such a high energy that it penetrates all the tissue structures in its path also doesn't produce a useful image. In this case, all structures — small or large, thick or thin — are penetrated, so there would be no difference in the pattern of radiation reaching the image receptor. The image receptor would respond by recording an image in which everything looks the same. It is the varying ability of the radiation beam to penetrate tissues and the tissues' different abilities to absorb x-ray photons that help create a diagnostic image.

So part of the art of producing diagnostic images is to align the characteristics of the x-ray beam with the characteristics of the anatomical structures under examination to produce a pattern of x-ray photons exiting the patient that will transmit useful diagnostic information. The term "signal" refers to the varying patterns of information carried through the imaging chain.

9. The Image Receptor
The diagnostic image is created using energy within the remnant beam that reaches and is absorbed by the image receptor. In the early days of radiography, we relied on a film emulsion to absorb x-ray energy and record 100% of the patient information. This inefficient process resulted in large doses of radiation to the patient. With the introduction of intensifying screens, approximately 95% of a diagnostic image was created by light given off by the screen's phosphor materials when they were excited by the beam. The remainder of the image was created through the direct action of x-rays on the film emulsion. The term "analog" is associated with film-based image recording and viewing systems. The image receptors in modern diagnostic radiography are electronic devices. The term "digital" is associated with electronic image recording and viewing systems.

10. Image Processing
After the image receptor collects the patient information, the captured invisible image must be converted into a visible image. Digital imaging systems use a variety of technologies to remove captured image data from the image receptor and prepare that data for archival storage, retrieval and display. In this module, the discussion of digital systems is organized by receptor type and corresponding data extraction techniques.

11. Image Display
After processing, images must be presented to the observer for interpretation. Because the viewer's eyes are light-sensitive organs, a light source is used to display the images.
In digital viewing systems, the surface of the display monitor emits light in varying intensities that correspond to structures within the patient. Areas that were difficult to penetrate with the primary x-ray beam appear brightly lit on the display; areas that were somewhat penetrated by the primary beam appear moderately bright; highly penetrated structures appear as dim or unlit areas on the display. The ability to display digital diagnostic images depends on delivering varying intensities of light to the observer's eyes for analysis and interpretation.

12. Knowledge Check

13. Knowledge Check

14. Digital Imaging Systems
Digital concepts have been a part of medical imaging since the development of computed tomography in the early 1970s, and digital imaging has been in use for more than 20 years in general radiography. Digital imaging is a rapidly evolving technology that, like consumer electronics, changes often. It's more critical now than ever before for technologists to keep up to date. To use digital imaging equipment safely while producing quality images, you need to continually add to your professional skill set and knowledge base.

15. Advantages of Digital Imaging
Digital images allow us to see structures that previously would have required additional imaging. For example, bone and tissue can be distinguished on a single image. In addition, radiologists can manipulate a digital image to view structures differently after an image is acquired. Once a digital image is captured, making multiple copies of a study is simple. Digital images can be shared among several members of a health care team or sent to various locations. If necessary, a study can be distributed electronically throughout an organization or worldwide to clinicians with expertise in a specialized field for referral and consultation. For this reason, many analog images are converted to a digital format and archived. Digital imaging systems have a large dynamic range, which means that the digital image receptor responds to a wide range of exposure values to create diagnostic images. Whereas an analog imaging system required precise technique to achieve an optimal image, a digital system is more forgiving in terms of the technique required to produce an image of diagnostic quality.

16. Imaging Examination
Regardless of whether the technology is analog or digital, the imaging examination involves an x-ray tube, a patient and an image receptor. The technologist must choose the appropriate technique and properly position the patient. A proper radiograph captures all the patient information within the x-ray beam and transmits this information to the image receptor. To fully understand digital imaging and digital receptors, technologists must consider the analog process. The differences between analog and digital systems occur at the image receptor. Film-screen systems are considered analog because the receptor records a continuous range of gray shades. An analog image receptor has 3 components. The cassette supports the intensifying screens and prevents the film from being exposed to any light other than the light energy emitted by the intensifying screens. The intensifying screen converts the remnant x-ray beam exiting the patient into light. The film is the recording device; it uses the x-rays reaching the film and the light produced by the intensifying screens to create an image. The image created on the film directly after exposure is not yet visible to the human eye and is known as the latent image. The film then must be processed to convert the latent image into a visible, or manifest, image.

17. Analog-to-Digital Conversion
Converting patient images recorded on analog film into digital image files makes a lot of sense.
Modern film digitizers can scan and translate film-based images into digital equivalents with little to no loss of diagnostic image integrity. Film-based archives that previously took up hundreds of square feet of storage space now can be stored electronically. A digital imaging exam can be linked to a patient's medical record and instantly retrieved for viewing in multiple or remote locations. Imaging departments used these devices even before they began the transition from analog to digital equipment. Film digitizers convert film-screen images into digital data as part of a teleradiology system. The files are sent electronically to radiologists for emergency readings, or to other facilities for additional consultations.

18. Film Digitizers
Film digitizers work much like photocopiers to create high-quality electronic reproductions of analog originals. Some information is lost during the digitizing process because a copy is never as good as the original. Converting an analog image to a digital image starts with directing light from a precalibrated light source on one side of the film to a photodetector on the opposite side of the film. The photodetector measures the intensity of light that passes through the film. Diagnostic images represent many subtle gradations of opacity ranging from white, through grays, to black. Film digitizers are very efficient at measuring slight changes in film opacity and producing correspondingly slight differences in photodetector output. Digitization also can add artifacts to digitized images if the digitizer isn't properly maintained or when there is dust or dirt on the film. When an excessively dusty or dirty film is digitized, the particles deposited in the digitizer degrade the image along with all subsequent images fed into the equipment.
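The digitizer's light measurement can be sketched with the standard densitometry relationship between the light entering and exiting the film. The base-10 logarithm formula is standard densitometry rather than something the transcript states explicitly, and the intensity values below are hypothetical:

```python
import math

def optical_density(incident_light: float, transmitted_light: float) -> float:
    """Standard densitometry measure of film opacity:
    OD = log10(incident / transmitted)."""
    return math.log10(incident_light / transmitted_light)

# The digitizer shines a precalibrated light source through the film and the
# photodetector reads the transmitted intensity spot by spot (values made up).
print(optical_density(1000.0, 1000.0))  # clear film area -> 0.0
print(optical_density(1000.0, 10.0))    # dark film area  -> 2.0
```

Each small step in measured opacity becomes a small step in photodetector output, which is what lets the digitizer preserve the film's subtle gray gradations.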
After the image data is converted to a digital format, it can be uploaded to the facility's picture archiving and communication system, or PACS, to be distributed and stored. The digital format allows radiologists to reconstruct and manipulate image data to view anatomy in ways not possible with analog imaging. Electronic access to existing images also allows them to be used for comparison.

19. Binary Language
Digital receptor systems need to create a signal that can be used by the computer. All digital receptors first convert an analog electrical signal into a digital signal. The analog signal must be digitized before it reaches the computer, so an analog-to-digital converter, or ADC, changes the continuous analog signal into a signal of discrete units. The analog signal consists of electrons until it reaches the ADC, where the signal is sampled and converted to digital, or numeric, data. The process of assigning numeric values to the signal is called quantization. These values determine the gray levels that correspond to radiation exposure. When you look at a digital image on a display monitor, the image may appear to be made up of various gradations of gray tones. Digital systems don't produce shades of gray but rather individual discrete numeric values. We convert an analog image into a digital image through a variety of methods that assign these discrete values to the various shades of gray in an image; in the case of computers, the information is converted to binary language. Binary language uses a system of 2 symbols, 0 and 1, to manage data. Each digit is called a bit, short for binary digit. The more bits a system has, the more information it can handle. The analog x-ray image must be converted into bits. You also may hear the term "byte" used when describing digital data; a byte is a string of 8 bits, and groups of bits processed together sometimes are called "words."

20. Image Matrix
Changing the analog signal into numeric values allows the computer to manipulate the image. The computer "sees" a series of numbers in boxes that represent the amount of radiation deposited across the image receptor. These boxes are known as picture elements, or pixels, and the entire set of pixels is called a matrix. The numeric value in each pixel corresponds to the intensity of the x-ray beam absorbed by a particular area of the digital receptor.
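The ADC's sampling-and-quantization step described under Binary Language can be sketched as follows. A real ADC is hardware, and the signal range and values here are hypothetical, chosen only to illustrate how a continuous value becomes a discrete binary number:

```python
def quantize(signal: float, max_signal: float, bit_depth: int) -> int:
    """Assign a continuous signal value (0..max_signal) to one of the
    2**bit_depth discrete numeric levels -- the quantization step."""
    levels = 2 ** bit_depth
    return int(signal / max_signal * (levels - 1))

# An 8-bit ADC can assign one of 2**8 = 256 values (0-255).
print(quantize(0.0, 5.0, 8))   # no signal        -> 0
print(quantize(2.5, 5.0, 8))   # half of maximum  -> 127
print(quantize(5.0, 5.0, 8))   # maximum signal   -> 255
print(format(quantize(2.5, 5.0, 8), "08b"))  # same value as 8 bits: 01111111
```

The last line shows the value as the string of 8 bits (1 byte) the computer actually stores.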
When we see an image on a monitor, the digital values have been converted back into shades of gray by a digital-to-analog converter, and the result resembles an analog image.

21. Bit Depth
Image quality is affected by the number of pixels in a matrix and by how many shades of gray the system can digitize. Remember, the ADC converts the analog signal into digital data, so the more numbers it can assign, the more shades of gray are available and the closer the image resembles an analog image. The number of gray levels depends on bit depth. The values available for 1 bit are 0 and 1; since only 2 values are available, a pixel is either black or white. If 6 bits are available, there are 64 shades of gray; 8 bits yield 256 shades of gray. You can see how the images would look with a bit depth of 1, 6 or 8. The more bits that are available to the ADC, the more accurately the signals from the digital receptors are represented. The bit depth of an ADC used in diagnostic imaging can range from 12 to 32.

22. Spatial Resolution
Regardless of the image receptor, the unit of measure used to describe the ability to distinguish small adjacent objects is line pairs per millimeter, or cycles per millimeter. We want to see the structures, or lines, as separate and distinct. In higher-resolution images, the line pairs appear clear, separate and distinct; in lower-resolution images, they appear less clear or even blurry. Therefore, the image on the right has lower resolution for the given number of line pairs.

23. Pixels
Pixel size is the first characteristic we look at when considering digital image spatial resolution. The size of the pixels determines the line pairs, or cycles, per millimeter that we can see. We can measure pixels in 2 ways. Pixel size is measured from side to side within an individual pixel. Pixel pitch measures from the center of 1 pixel to the center of an adjacent pixel and includes any gaps between them. The 1-mm squares on this slide illustrate the concept of pixel density.
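The bit-depth and pixel-pitch relationships can be expressed numerically. The gray-level counts follow directly from the module's examples (2 raised to the bit depth); the line-pair limit uses the standard sampling rule that one line pair requires at least 2 pixels, which the transcript implies but doesn't state as a formula:

```python
def gray_levels(bit_depth: int) -> int:
    """Shades of gray a system of the given bit depth can represent."""
    return 2 ** bit_depth

def max_line_pairs_per_mm(pixel_pitch_mm: float) -> float:
    """Best-case spatial resolution: one line pair spans at least 2 pixels."""
    return 1.0 / (2.0 * pixel_pitch_mm)

for bits in (1, 6, 8, 12):
    print(bits, "bits ->", gray_levels(bits), "gray levels")
# prints 2, 64, 256 and 4096 gray levels, matching the module's examples

print(max_line_pairs_per_mm(0.1))  # 0.1-mm pitch (10 pixels/mm) -> 5.0 lp/mm
```

Note how the two measurements connect: a smaller pixel pitch means a higher pixel density and therefore more resolvable line pairs per millimeter.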
Pixel density measures the number of pixels within a unit area. The more pixels there are in a given measured area, the greater the pixel density; the greater the pixel density, the higher the image resolution. This also can be referred to as matrix size; a large matrix has many pixels, or a higher pixel density. The square on the right contains 10 pixels per mm and therefore has a higher resolution than the square on the left, with a pixel density of 5 pixels per mm.

24. Matrix Size
The flower images show the effect of rendering an image using different pixel sizes. Image A on the left was created using the smallest pixel size. You can see that images B and C to the right have coarser texture and fewer fine details. Image C was produced using the largest pixel size. Ultimately, as the pixel size used to produce an image decreases, recorded detail increases. The number of pixels in an image, or the matrix size, often is determined by the examination. For instance, a typical matrix size for a digital radiograph may be 3,000 x 3,000 with a bit depth of 12; for a computed tomography scan, the matrix size would be 512 x 512 with a bit depth of 16. Pixel density and the resulting matrix size are fixed characteristics of a digital image receptor.

25. Knowledge Check

26. Knowledge Check

27. Photostimulable Phosphor Receptors
A photostimulable phosphor receptor stores the energy from its interaction with an x-ray beam. The PSP plate is found in 2 types of systems. In a cassette-based system, the cassette holds the PSP plate. Technologists can open the cassette and inspect the plate for cleaning and to remove artifacts. An exposed cassette must be placed in a reader to process and retrieve the recorded image. In a noncassette-based, or cassette-less, system, the PSP plate is located inside a large piece of equipment, such as the movable Bucky of a chest unit. The technologist never handles a plate in a cassette-less device. A PSP plate reader is part of the image receptor assembly, and plates are scanned immediately following exposure.

28.
Flat-panel Detectors
Flat-panel detectors with thin-film transistors can be either scintillator-based or nonscintillator-based. A scintillator-based flat-panel detector converts the energy absorbed from the x-ray beam to light, which then is converted to electrons to create an image. Scintillation refers to the ability of certain materials to respond to excitation by giving off light. In the case of radiography, the scintillation material absorbs x-ray energy. The photo on the left shows a tethered "cassette" flat-panel device. Unlike a PSP cassette-based detector, this device can't be opened by the technologist. The nonscintillator-based detector converts the energy absorbed from the x-ray beam directly into electrons to create the image.

29. Other Types of Digital Image Receptors
Charge-coupled device, or CCD, and complementary metal oxide semiconductor, or CMOS, image receptors use a scintillator. This means the receptor must contain a material that gives off light when the x-ray beam strikes it. Light from the scintillator material must be converted into a pattern of electrons. This conversion is accomplished with CCD or CMOS technology.

30. Characteristics of Image Detectors
There are numerous parameters to consider when assessing the performance of imaging detectors. Absorption efficiency is the percentage of the energy striking a receptor that is actually absorbed by the receptor. Conversion efficiency is the percentage of the energy absorbed by a receptor that is converted to useable output. Detective quantum efficiency, or DQE, is the percentage of the energy striking a receptor that results in a useful output signal; it is the product of absorption efficiency and conversion efficiency. Increasing any of these characteristics decreases the exposure required to create an image and therefore decreases patient exposure.
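The DQE relationship stated above, the product of absorption efficiency and conversion efficiency, can be sketched directly. The two efficiency values below are hypothetical:

```python
def dqe(absorption_efficiency: float, conversion_efficiency: float) -> float:
    """Detective quantum efficiency as the product of absorption efficiency
    and conversion efficiency, both expressed as fractions."""
    return absorption_efficiency * conversion_efficiency

# Hypothetical receptor: absorbs 80% of the incident energy and converts
# 60% of what it absorbs into useful output signal.
print(f"DQE = {dqe(0.80, 0.60):.0%}")  # prints "DQE = 48%"
```

Raising either efficiency raises the DQE, which means less exposure is needed to produce the same signal; this is why DQE relates directly to patient dose.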


31. Photostimulable Phosphor Receptors
PSP plates don't come in as wide a variety of configurations as analog image receptors. Although some PSPs are manufactured for high-resolution examinations, such as mammography, in general diagnostic imaging the processing parameters, rather than the physical construction of the PSP, control the spatial resolution. Plates for high-resolution use are either constructed differently at the phosphor layer, such as varying the phosphor crystal size, or designed with technology that allows the phosphor layer to be read from both sides of the imaging plate.

32. Phosphor Characteristics
Technologists don't have to think about phosphor characteristics on a daily basis, but it helps to know what materials are used in a photostimulable phosphor plate. There are 2 available phosphor types for PSPs. The first is the turbid phosphor, in which the crystals are distributed throughout the phosphor layer. Turbid phosphors have various chemical makeups, including barium fluorobromide with europium, barium fluorobromide iodide with europium, barium fluoroiodide with europium, barium strontium fluorobromide iodide, gadolinium oxysulfide and rubidium chloride. The second phosphor type is the needle, sometimes called columnar, phosphor. A needle phosphor is special in that it's grown in a controlled setting in the form of a column. The chemical makeup of the needle phosphor is cesium iodide. Needle crystals produce less light divergence when stimulated than turbid phosphor material does, and less light divergence helps improve spatial resolution. You may hear the term "light pipes" associated with needle phosphors.

33. PSP Image Capture
Image data is captured by the PSP receptor when x-ray photon energy strikes the phosphor crystals, causing electrons within the crystals to move from their normal orbital location to a higher energy level. This action forms a latent, or invisible, image in the receptor.
The number of electrons affected is directly proportional to the amount of energy absorbed by the PSP. Some of the electrons raised to a higher energy level spontaneously return to their resting energy state. As these electrons return to a resting state, they give off the excess absorbed energy in the form of light. A red laser light source scans the receptor plate to extract the image data. The laser light exposure causes the remaining electrons at high-energy states (those that form the latent image) to return to their resting state. They release the energy absorbed from the x-ray beam as blue light. The light released from the PSP during scanning is collected by the light guide assembly and used to record the patient image. Not all the electrons raised to a higher energy state respond to red light laser scanning, so an exposed PSP plate must go through a final erasure step before being used again. Plate erasure allows any residual electrons at higher energy levels to return to their normal resting states following a scan. The latent image doesn't remain on the PSP detector indefinitely. The image can last for several hours; however, the exposed PSP plate should be read in a reasonable amount of time because the latent image can lose 25% of its energy in 8 hours.

34. Image Capture and Erasure
The crystal structure illustrates the classic arrangement of a barium fluorobromide type of photostimulable phosphor receptor. The blue "F" structure within the crystal represents a component known as an "F-trap." The F-trap holds electrons elevated to a higher energy level than their resting state. Crystals absorb the x-ray photons as they enter, and electrons move from a resting energy state, at the "valence band," to a higher energy level, the "conduction band." The electrons are then captured in the F-, or electron-, trap. Electrons that aren't captured release light as they migrate back to the valence band. The electrons held in the F-trap create the latent image. The laser light scans the crystals to release the electrons in the F-trap. The electrons emit light as they migrate back to the lower-energy valence band. Laser scanning doesn't release all the electrons in the F-trap. The plate reader releases any electrons remaining in the F-trap so the plate can be used again.

35. PSP Plate Reader
In a point-scan reader, the laser light is directed at the plate as it moves through the plate reader. A light guide focuses the light from the trapped electrons onto the photomultiplier tube, or photodetector, connected to the light guide assembly. The difference between the laser light color, which is generally toward the red end of the spectrum, and the blue light emitted from the PSP allows the photodetector to distinguish the light used for plate scanning from the light used for image formation.

36. PSP Reader Technology
Newer readers are line-scan devices, which have several lasers in a row with a corresponding CCD photodetector array. A line-scan reader processes an entire line of the image plate at one time and is faster than a point-scan reader. The newest type of PSP reader is dual-sided, so the imaging plate is read from both sides; this helps obtain more signal from the plate. The photodetector captures light from the PSP plate as it is being scanned and converts the various light intensities into an electric signal. This signal is sent to the ADC, where it undergoes sampling and quantization as previously demonstrated.

37. Knowledge Check

38. Knowledge Check

39. Flat-panel Receptor Characteristics
There are 2 types of flat-panel image receptors that might be available in an imaging department: scintillator-based, or indirect, receptors and nonscintillator-based, or direct, receptors.
At this time, the scintillator materials in indirect flat-panel receptors are cesium iodide (CsI), which comes in a columnar needle configuration, or gadolinium oxysulfide (Gd2O2S), which comes in a turbid formation. The scintillator converts x-ray photons into light, and the light emitted from the scintillator then interacts with a photoconductive material, typically made of amorphous silicon. The amorphous silicon converts the light photons it receives into electrons that migrate to thin-film transistors (TFTs) and produce an electric signal. Because the x-rays are converted to light first, this process is considered an indirect image capture system. The nonscintillator flat-panel receptor doesn't use a material that converts the remnant x-ray beam into light. Instead, nonscintillator image receptors use amorphous selenium to convert the remnant x-ray beam directly to electrons, which are then collected by a thin-film transistor array. Remember that scintillator-based receptors also use TFTs. The image capture process for nonscintillator-based, or direct, receptors consists of 2 stages rather than the 3 stages associated with scintillator-based, or indirect, receptors.

40. Thin-film Transistors
The thin-film transistor, or TFT, is a complex circuit device that collects the electrons emitted from either amorphous selenium or amorphous silicon. Detector elements, or DELs, are contained within the circuit assembly of a TFT. DELs collect the electrons that represent individual components of a digital image. Each individual DEL collects electrons that represent the exposure level, or x-ray intensity. The sensing area is the largest part of the DEL because it receives the x-ray beam data. Other, nonsensitive electronic components of the DEL also take up space. One way to assess the quality of a flat-panel detector is by the fill factor: the ratio of the sensitive area to the entire detector area, usually expressed as a percentage.
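The fill-factor ratio can be sketched as follows; the DEL areas are hypothetical numbers chosen only for illustration:

```python
def fill_factor(sensitive_area_mm2: float, total_area_mm2: float) -> float:
    """Ratio of a DEL's sensing area to its total area (quoted as a percentage)."""
    return sensitive_area_mm2 / total_area_mm2

# Hypothetical DEL: 0.16 mm^2 of sensing area on a 0.20 mm^2 element.
ff = fill_factor(0.16, 0.20)
print(f"{ff:.0%} sensing, {1 - ff:.0%} nonsensitive electronics")
```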
If a detector has a fill factor of 80%, 80% of the detector area is devoted to sensing and 20% is taken up by nonsensitive electronics. The fill factor affects both spatial resolution and the signal-to-noise ratio. Once the signal is sent from the thin-film transistor, it goes through the ADC for sampling and quantization, similar to the PSP receptor.

41. Diagnostic Imaging
The creation of a diagnostic image is very complex. Transferring information from within the human body to an image detector and accurately capturing 100% of the transmitted information is our ultimate goal, but one that's very difficult to achieve. A number of direct and random variations occur during the process of sending an x-ray beam through the patient's body.

42. Image Noise
During image acquisition, direct variations within the beam are represented by the different energy intensities that pass through low- and high-density body structures and reach the image receptor, making up the electrical signal. Information patterns that don't add to the diagnostic information within the image are referred to as noise. As an example of this concept, imagine that you are trying to carry on a conversation with a friend while sitting in a busy lunchroom at work. The conversations around you are a form of unwanted information that you may overhear while you're trying to talk and listen to your friend. These conversations are a form of noise that we normally filter out or disregard.

43. Signal-to-Noise Ratio
Technologists and radiologists typically can tolerate a degree of random information, or noise, in diagnostic images. But like trying to carry on a conversation in a loud lunchroom, at some point noise begins to detract from the conversation between friends. In the same way, noise affects the ability of an observer to perceive structural details within a diagnostic image. When noise is equal to or greater than the signal making up an image, information is lost or unnoticeable.
In an ideal setting, diagnostic images would consist of 100% signal, or useful information, and no noise or random information. This situation is unrealistic because of the variations in the x-ray beam, tissue characteristics, energy absorption capacities and background system noise. Technologists must be able to recognize the potential sources of noise and the presence of noise within images they create, and try to minimize noise whenever possible. 44. Cassette-based Systems Some newer indirect thin-film transistor systems are cassette-based. The cassette has a scintillator that is constructed with cesium iodide or gadolinium oxysulfide along with the TFT. TFT cassettes can’t be opened, and they don’t need to be placed in a reader. Used in mobile radiography or a bucky, the cassettes can be connected to the workstation by a tether or are wireless. 45. CCD Receptors The way a CCD device forms an image is very different from how a PSP or TFT receptor creates images. The CCD system requires a scintillator, which may be either cesium iodide or gadolinium oxysulfide. Because a CCD needs a scintillator to produce light photons before an image is captured, it is considered an indirect form of image capture. The light from the scintillator material strikes the silicon in the CCD chip. The amount of light produced by the scintillator is in direct proportion to the intensity of radiation striking the scintillator. The CCD converts that pattern of light to an electrical charge that also is in direct proportion to the intensity of the radiation absorbed by the scintillator. The electrons then are collected by the CCD chip elements, or pixels. The number of pixels in a given chip varies by manufacturer, but may be in the millions. The electrical signal from the CCD is sent to an analog-to-digital converter, which creates a binary code of digital values that becomes the data set of the digital image.
To simplify the process, x-rays strike the scintillator, which converts the energy to light photons. These light photons reflect off a mirror onto a focusing lens that directs them to the CCD. The CCD then converts the light into an electrical signal that is sent to the ADC. 46. Advantages of CCD Receptors Like all digital receptors, CCDs provide diagnostic quality images over a wide range of exposure settings. CCDs respond to lower light levels from a scintillator than other types of receptors, which makes them suitable for low-dose imaging. CCDs also produce images quickly, and instantly refresh for the next image. This makes them excellent image receptors for fluoroscopic applications. 47. Complementary Metal Oxide Semiconductor Receptor A complementary metal oxide semiconductor receptor, more commonly referred to as CMOS, is a semiconductor that uses p-type, or positive, and n-type, or negative, transistors. A CMOS detector also requires a scintillator that directs light to the CMOS array. The light from the scintillator is directed to the CMOS by either a fiber-optic coupling device or a lens system, similar to that used in a charge-coupled device. The CMOS converts the light into an analog electrical signal that is sent to the ADC. 48. Knowledge Check 49. Knowledge Check 50. Detective Quantum Efficiency Detective quantum efficiency is a measure of image quality and is used to compare dose vs. spatial resolution characteristics of imaging systems. It indicates how efficiently a digital detector system can convert the remnant beam signal exiting the patient into a useful data signal representing the image. The ability of a system to absorb x-rays has significant impact on DQE. All receptors need good x-ray conversion and absorption efficiency to produce diagnostic-quality images at the lowest dose possible. Remember that x-ray absorption depends on numerous factors, including density, thickness and atomic (Z) number of the material. 
A K-shell absorption graph comparing the various receptors shows that digital systems absorb less radiation at low and very high energies. 51. Receptor DQE DQE is a measure of an image receptor’s ability to create an output signal that accurately represents the input signal of the x-ray beam exiting the patient. A higher DQE indicates that a receptor converts the input x-ray signal more efficiently, which means lower exposure is required to create the image. Different receptor materials produce images that exhibit various levels of spatial frequency. Higher spatial frequencies represent a larger number of viewable objects in an image. Comparing the DQE of different image receptors over a range of spatial frequencies demonstrates that screen-film offers the fewest spatial frequencies and therefore requires a higher exposure to create the image. Cesium iodide thin-film transistors demonstrate the highest DQE and therefore require the lowest exposure to create an image of diagnostic quality. 52. DQE Relationships This graphic shows the relationship between absorption efficiency, conversion efficiency, and detective quantum efficiency. Increased absorption efficiency and conversion efficiency mean an increased DQE. A higher DQE means a lower patient dose. Our goal is to pick an image receptor with the highest absorption efficiency, greatest conversion efficiency, and greatest DQE to achieve the lowest patient dose and the highest-quality image. 53. Pixels and Spatial Resolution for CR Remember that spatial resolution is measured in line pairs per millimeter or cycles per millimeter and that the size of the pixels determines the line pairs, or cycles, per millimeter that are visible. Pixels affect the spatial resolution of the PSP-based system and the flat-panel detector system. The number of pixels
sampled per millimeter controls spatial resolution. Additionally, when a higher number of pixels is sampled on a PSP plate, the recorded detail that can be achieved on a digital receptor is greater. The pixel size of a digital image is not related to the exposure required to produce that image. Exposure doesn’t change pixel size. Technologists can’t select pixel size when working with digital receptors because that function is a fixed characteristic based on the construction of the receptor. 54. Spatial Resolution and Pixel Size Recorded detail is determined by how frequently the signal extracted from the PSP is sampled. Changing the sampling frequency affects pixel size, and therefore spatial resolution, for photostimulable phosphors. Though changing the sampling frequency alters spatial resolution, sampling frequency isn’t tied to the exposure level. A lower sampling frequency takes fewer samples from the signal originating from the plate. The lower frequency equates to a larger pixel size and lower spatial resolution, which means that the geometric sharpness, or recorded detail, of the image is not as accurate. When sampling at a higher frequency, the individual pixels are closer together. Additionally, the pixel size is smaller and spatial resolution increases, meaning that recorded detail becomes more accurate and defined. 55. Sampling Frequency Radiology departments have a variety of equipment, and you may work with PSP plates from multiple vendors. Sampling frequencies for PSP readers vary from vendor to vendor and can range from a low of 5 pixels per mm up to 20 pixels per mm. Systems with more pixels per mm provide better spatial resolution. Technologists should select a higher sampling frequency for images that require better recorded detail, such as extremity exams. 
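The reciprocal relationship between sampling frequency and pixel size can be checked with a short calculation, using the 5 to 20 pixels-per-mm vendor range mentioned above (an illustrative sketch, not vendor code):

```python
# Pixel size is the reciprocal of the sampling frequency: more samples per
# millimeter means smaller pixels and therefore higher spatial resolution.
def pixel_size_mm(samples_per_mm):
    """Return the pixel size in millimeters for a given sampling frequency."""
    return 1.0 / samples_per_mm

assert pixel_size_mm(5) == 0.2    # 5 pixels/mm  -> 200-micron pixels
assert pixel_size_mm(10) == 0.1   # 10 pixels/mm -> 100-micron pixels
assert pixel_size_mm(20) == 0.05  # 20 pixels/mm ->  50-micron pixels
```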
This may mean choosing a cassette with a bar code that automatically changes sampling frequency, or using the interface menu on the plate reader to select the option that samples a plate at a higher frequency. Sometimes sampling frequency is determined by cassette size, although typically changing the cassette size alters the size of the image. When you look at these radiographs, you may come away with the impression that one elbow is magnified and that the technologist made some positioning error. In fact, the discrepancy has nothing to do with what the technologist did with the patient. The difference in appearance is due to the size of the image receptor that was selected and the monitor on which the images are viewed. In reality, with many of the newer systems the image is resized based on collimation. The image on the left was produced using an 8 x 10-inch cassette (20.3 x 25.4 cm) and the image on the right was created using a 14 x 17-inch cassette (35.6 x 43.2 cm). Both cassette images are displayed across the full surface of the same monitor, but they appear different because the cassettes are different sizes. 56. Spatial Resolution and Flat-panel Detectors For flat-panel detectors, the size of the detector element, or DEL, determines the spatial resolution of the detector. Whereas sampling frequency can be changed in PSP-based systems, the technologist can’t change this characteristic in a flat-panel detector because the DEL size is fixed. Recall how energies are transformed in a scintillator-based flat-panel detector. Note how the varying beam intensities exiting the line pattern render very accurately, with the center thin-film transistor receiving the most electrons. 57. Spatial Resolution for FPD vs Analog A DEL size of 200 microns provides a spatial resolution of approximately 2.5 line pairs per mm. If we want a higher resolution image, we must use a flat-panel detector with smaller DELs. 
In this case, DELs of 100 microns provide a spatial resolution of 5 line pairs per mm. If we want even more spatial resolution, we must use a 50-micron flat-panel detector to provide 10 line pairs per mm.
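The DEL sizes and resolutions listed here follow the standard Nyquist-limited relationship: one line pair (a line plus a space) spans two detector elements, so the limiting resolution is the reciprocal of twice the DEL size. A minimal sketch:

```python
# Limiting spatial resolution of a flat-panel detector: one line pair
# requires at least two detector elements, so resolution = 1 / (2 x DEL).
def limiting_resolution_lp_mm(del_microns):
    """Return the limiting resolution in line pairs per mm for a DEL size."""
    del_mm = del_microns / 1000.0
    return 1.0 / (2.0 * del_mm)

assert round(limiting_resolution_lp_mm(200), 6) == 2.5   # as stated in the text
assert round(limiting_resolution_lp_mm(100), 6) == 5.0
assert round(limiting_resolution_lp_mm(50), 6) == 10.0
```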
When comparing the spatial resolution of photostimulable phosphor detectors and flat-panel detectors, a PSP detector with 200-micron pixels has the same resolution as a flat-panel detector with a DEL of 200 microns. Interestingly, the PSP with 100-micron pixels has a lower resolution than a 100-micron DEL. The difference is in how PSP systems create images. Remember that when a laser scans across a plate and light is emitted, some resolution is lost in the process. On the other hand, flat-panel detectors are discrete elements that create image information without losing resolution. 58. Nyquist Frequency The Nyquist frequency determines the level of spatial resolution for an image receptor. The Nyquist theorem states that to display a given spatial frequency, or resolution, you must sample the signal at twice that frequency. For example, if your goal is to display 2.5 line pairs per mm, then you must sample the image receptor at 5 pixels per mm. If you need to see 5 line pairs per mm, then the sampling frequency must be 10 pixels per mm, and for a resolution of 10 line pairs per mm, the image receptor is sampled at 20 pixels per mm. An important aspect of spatial resolution is that rendering an image for viewing takes longer when sampling frequency increases because more information must be gathered. 59. Pixels and Receptor Size The relationships among pixel size, receptor size (often referred to as field of view) and the actual matrix size of an image also can affect digital imaging workflow. Denser pixels within a fixed receptor size require a larger image matrix to accommodate more squares within the image receptor. If the receptor size increases with a fixed pixel size, more pixels of the same size must be added for a larger image matrix. Larger matrices affect day-to-day work because an image with a larger matrix takes longer to render on an image display and requires more data storage. 
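The matrix-size relationships above can be made concrete with a short calculation. The dimensions below are those of a 14 x 17-inch receptor (355.6 x 431.8 mm); the calculation is an illustrative sketch, not vendor-specific behavior:

```python
# Matrix dimensions: pixels along each side = receptor length / pixel size.
def matrix_size(width_mm, height_mm, pixel_mm):
    """Return (columns, rows) for a receptor of given size and pixel size."""
    return round(width_mm / pixel_mm), round(height_mm / pixel_mm)

# A 14 x 17-inch receptor (355.6 x 431.8 mm) at two illustrative pixel sizes.
cols200, rows200 = matrix_size(355.6, 431.8, 0.2)  # 200-micron pixels
cols100, rows100 = matrix_size(355.6, 431.8, 0.1)  # 100-micron pixels

# Halving the pixel size doubles both matrix dimensions, so the image
# holds 4 times as much data, matching the 4-times figure in the text.
ratio = (cols100 * rows100) / (cols200 * rows200)
print(cols200, rows200, cols100, rows100, ratio)
```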
Now consider how changing pixel size affects the technologist's workflow. Suppose a technologist grabs a 14 x 17-inch receptor that’s processed using 200-micron pixels. For the next exam, the technologist uses a 14 x 17-inch receptor but processes it using 100-micron pixels. The 14 x 17 receptor that was sampled at 100 microns will contain 4 times the amount of information as the 14 x 17 receptor sampled at 200 microns. More information means it takes much longer to process the image and significantly increases the space needed to store the image data. 60. Exposure Latitude and Dynamic Range A technologist often hears the terms “exposure latitude” and “dynamic range” used interchangeably. In fact, exposure latitude and dynamic range apply to different aspects of digital imaging. Exposure latitude is the range of underexposure or overexposure that can occur in producing an acceptable image. The term “dynamic range” refers to the receptor's ability to respond to different exposure levels. 61. Dynamic Range This graphic for the digital receptor shows a wide range of exposure response values. For the technologist, quantum noise is the only visual cue that indicates when the digital receptor is underexposed. When an image is overexposed, the image contrast is decreased because of the reduced signal difference reaching the image receptor. So, the visual cues of brightness and contrast that are characteristic of an image produced using an analog system are lost. The end result is a tremendous potential to produce images that look satisfactory and diagnostic on a display monitor but are the result of overexposure to the patient. 62. Exposure Latitude The ultimate goal is to create an optimal image at the lowest possible patient exposure. Analog receptors can produce an acceptable image, but not an ideal image, within a range of 30% underexposure to 50% overexposure. Although you could see a difference in the image, it’s not so great that you would have to repeat the exam. 
On the other hand, a digital receptor can produce an image that appears acceptable on a display monitor even though it was produced at anywhere from 50% underexposure to as much as 100% overexposure. It would look the
same as an optimal image in many ways but would not provide the technologist with any visual cues that the patient was underexposed or overexposed. In this respect, more radiation than necessary might be used to create images with a digital receptor, or not enough. Therefore, it’s necessary to develop an ideal exposure level so that exposure latitude can be used to produce an optimal image. 63. Selecting Digital Exposure Factors Ideally, technologists would select the appropriate exposure factors for a digital image receptor. However, because the visual cues of image brightness and contrast are no longer linked to exposure factors, technologists must use the readout of an exposure indicator to determine if an image is properly exposed or within the acceptable range of underexposure or overexposure. 64. Latitude Facts A key point to remember about exposure latitude is that a digital imaging system can produce an image that appears diagnostic on the display monitor but that may range from grossly underexposed to grossly overexposed. Digital imaging systems produce images with such wide exposure variations because of automatic rescaling, which means an image is produced with equal brightness and contrast without regard to the level of exposure. Technologists can look for 2 visual cues to help determine whether the exposure values are incorrect. Noise is a visual cue for an underexposed image. An image appears mottled if the exposure is less than 50% of the desired amount. So, in essence, if the desired exposure is 10 mAs and you use 5 mAs, the image appears mottled, or noisy. Loss of image contrast is a visual cue for an overexposure. The inability to manipulate the image contrast to show anatomical structures with low and high contrast is a visual cue of overexposure. 65. Animated Dynamic Range This graph gives a good comparison of the dynamic range capabilities of analog and digital image receptors. 
You’ll see that the images don’t present the same visual cues that would show technologists they’ve made an exposure error. 66. Sources of Image Blur The radiographic image is made up of brightness and contrast factors, as well as geometric factors. Geometric factors affect how faithfully the image captured by the image receptor and presented to the observer represents the actual anatomical structures within the patient. The ability of the observer to perceive where the edge of one object ends and the edge of a second object begins is how fine details are perceived within an image. Image blur decreases our ability to perceive distinct edges between objects. There are three sources of blur: receptor, geometric and motion blur. 67. Receptor Blur Receptor blur originates in the equipment. At this point it would help to revisit the resolution grid and the ability to differentiate between adjacent structures. Remember that spatial resolution is the ability to differentiate between adjacent structures and is measured in line pairs, or cycles, per millimeter. More line pairs per millimeter means higher resolution. So when considering receptor blur, keep this unit of measurement in mind: line pairs, or cycles, per millimeter. For a PSP receptor, the sampling frequency of a PSP plate controls image blur. More blur occurs at a low sampling frequency than at a higher one. A low sampling frequency is sufficient for imaging large body parts, such as the image of the pelvis on the left. A higher sampling frequency provides greater spatial resolution for examining extremities, such as the image of the foot on the right. Otherwise, blur would obscure fine structures. 68. Geometric Blur
A common misconception concerning geometric blur is that focal spot size, source-to-image receptor distance (SID), and object-to-image receptor distance (OID) don't affect a digital image receptor. However, this is not the case. The focal spot size, regardless of the receptor used, affects the blur in an image. A large focal spot produces greater image blur than a small focal spot. The effect of a large focal spot is a large area of penumbra created around an object, which represents an area of unsharpness, or blur. A smaller focal spot results in a smaller penumbra. Therefore, less blur is created around the object and its edges are much easier to recognize. The relationship between focal spot size and blur is like painting a portrait using small brushes versus painting with large paint rollers. With a small paint brush, or a small focal spot, the artist can include small structures and fine details within a painting. The artist using large paint rollers can cover a canvas in a short time, but is limited to big features. 69. Geometric Blur Technologists also should remember the role of SID in producing an optimal image. An inverse, or indirect, relationship exists between SID and image blur. Increasing SID decreases the blur in an image because the x-ray beam photons diverge less. Another geometric factor to address is the OID. Once again, distance plays a crucial role in creating blur in the image. OID is directly related to both magnification and blur: increasing OID increases blur, and decreasing OID decreases blur. Technologists always should minimize the OID to reduce blur. 70. Motion Blur Finally, the last source of blur is motion blur. Image blur can be caused by any “unintended” motion of a body part, the x-ray tube, or image receptor. In certain cases, technologists use motion and blurring to produce images of body parts that would otherwise be difficult to see or would be obscured by other anatomical structures. 
A classic example of this technique is when a technologist instructs a patient to breathe shallowly during an exposure to visualize the lateral thoracic spine. The motion of the patient’s ribs during shallow breathing blurs and distorts the details of the ribs, while revealing fine details of the stationary vertebrae of the thoracic spine. 71. Fluoroscopy Units The fluoroscopic procedure has changed very little with the advent of digital image receptors. The most significant difference is that flat-panel technology changes the appearance of the fluoroscopy tower. However, the actual fluoroscopic procedure and images are the same, whether the equipment is a traditional image-intensified or a flat-panel fluoroscopic unit. 72. Fluoroscopy Overview When a single x-ray strikes the input phosphor of an image intensifier, energy transfers across a typical image intensifier tube, from x-rays to light photons to electrons, then back to light photons again. The intensifier produces light photons that immediately are converted into electrons by the photocathode. The anode pulls the electrons to the output phosphor and electrostatic lenses focus them. The image then appears on the output screen and is converted to an electrical signal by a video camera or CCD for broadcast. This signal is then sent to an ADC as described for other digital systems. An electronic image intensifier device renders a magnified image on the output phosphor by manipulating the size of the input field used to capture the patient image. With a 9-inch input field, the image of the figure appears smaller and inverted near the output phosphor; this simulates normal mode in fluoroscopy. When the input phosphor size is reduced to 6 inches, the output phosphor size remains the same and the image is magnified.
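The magnification behavior described in slide 72 follows a simple ratio: with the output phosphor fixed, selecting a smaller input field enlarges the displayed image by the ratio of the input diameters. A sketch using the 9-inch and 6-inch fields from the text:

```python
# With a fixed output phosphor, magnification mode enlarges the image by
# the ratio of the full input-field diameter to the selected diameter.
def ii_magnification(full_input_in, selected_input_in):
    """Return the magnification factor for a smaller selected input field."""
    return full_input_in / selected_input_in

assert ii_magnification(9, 6) == 1.5  # 6-inch mode magnifies the image 1.5x
```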
73. Flat-panel Fluoroscopy A flat-panel intensifier is much shorter than the traditional image intensifier. The internal operation of this receptor is identical to the one used in flat-panel radiography. In other words, scintillator-based and nonscintillator-based technologies are used to capture and record patient diagnostic information in real time. 74. Knowledge Check 75. Knowledge Check 76. Conclusion This concludes Essentials of Digital Imaging Module 1 – Fundamentals. You should now be able to:

Identify the characteristics of digital imaging receptors.

List the factors affecting digital receptor response to exposure.

Describe the image capture process for digital image receptors.

Define detective quantum efficiency (DQE) and its potential impact on patient dose.

Describe the dynamic range and latitude of digital image receptors.

List the factors that determine spatial resolution for digital image receptors.

Name the sources of image blur.

Recognize equipment associated with digital fluoroscopic imaging systems. 77. References

American Association of Physicists in Medicine. AAPM Report No. 116: an exposure indicator for digital radiography. http://www.aapm.org/pubs/reports/RPT_116.pdf. Published July 2009. Accessed February 14, 2013.

Bushong SC. Radiologic Science for Technologists: Physics, Biology, and Protection. 10th ed. St Louis, MO: Mosby Elsevier; 2012.

Carlton RR, Adler AM. Principles of Radiographic Imaging: An Art and A Science. 5th ed. Clifton Park, NY: Thomson Delmar Learning; 2012.

Carroll QB. Radiography in the Digital Age: Physics, Exposure, Radiation Biology. Springfield, IL: Charles C Thomas; 2011.

Carter CE, Vealé BL. Digital Radiography and PACS. St Louis, MO: Mosby Elsevier; 2009.

Practice standards for medical imaging and radiation therapy: radiography practice standards. American Society of Radiologic Technologists website. http://www.asrt.org/main/standards-regulations/practice-standards/practice-standards. Published June 19, 2011. Accessed February 14, 2013.

Seeram E. Digital Radiography: An Introduction for Technologists. Clifton Park, NY: Delmar Cengage Learning; 2011.

Strategic document. Version 2012-3. Digital Imaging and Communications in Medicine website. http://medical.nema.org/dicom/geninfo/Strategy.pdf. Published April 11, 2012. Accessed February 14, 2013.