Imaging artifacts due to pixel spatial sampling smear and amplitude quantization in two-dimensional visible imaging arrays
Terrence S. Lomheim,*a Jeffrey D. Kwok,**b Tracy E. Dutton,a Ralph M. Shima,a Jerris F. Johnson,a Richard H. Boucher,a and Christopher Wrigleyc
aThe Aerospace Corporation, Sensor Systems Subdivision, P.O. Box 92957, Mail Stop M4-980, Los Angeles, CA 90009-2957
bUS Air Force SMC/AXE, 160 Skynet Street, El Segundo, CA 90245-4683
cJet Propulsion Laboratory, 4800 Oak Grove Blvd., Pasadena, CA 91109-8099
ABSTRACT
In this paper, we studied the imaging effects of pixel spatial-sampling and scan-velocity mismatch in two-dimensional visible image sensors. These effects were examined experimentally by projecting bar pattern sequences of varying spatial frequency on two different devices and by comparing their outputs with the results of a corresponding imaging simulation. Beat patterns and aliased spatial frequencies were observed by imaging the bar pattern sequences onto an area CMOS "active pixel" sensor. Image phase reversal effects were observed by inducing a systematic mismatch between the scan velocity of a bar pattern "sunburst" areal image and the corresponding velocity of the clocked image charge in a time-delay-and-integration (TDI) CCD image sensor. The visual image effects of an analog-to-digital converter's (ADC) pixel amplitude quantization, specifically integral nonlinearity (INL) and differential nonlinearity (DNL), were studied using two very different input images. The INL and DNL patterns, obtained from measurements on a 14-bit video ADC, were scaled and then embedded in the response characteristics of these two images. Various scalings of these INL and DNL patterns were used. The results obtained show artifacts varying in impact from insignificant to clearly degrading.
Keywords: Image artifacts, aliasing, pixel quantization, image scan velocity mismatch smear, integral nonlinearity, differential nonlinearity, CMOS imager, CCD TDI imager.
1. INTRODUCTION
Solid state visible and infrared imaging arrays are used in a myriad of civil, defense, and commercial applications. The technologies for these arrays continue to advance, motivating new applications for the associated electro-optical cameras and systems. Unlike film-based imaging systems, the spatial sampling associated with the pixelization of modern arrays produces effects such as aliasing and beat patterns that can affect image quality in unusual ways. Advanced imaging systems are largely digital, hence the imaging array pixel amplitudes are quantized. The linearity, consistency, and reciprocity of this quantization process are clearly important. However, it is difficult, in general, to quantify maximum allowable levels of an ADC's integral nonlinearity (INL) and differential nonlinearity (DNL). The importance of these INL- and DNL-based imaging artifacts on image quality is very application dependent. The definitions of INL and DNL will be provided later in this section.
In this paper we provide a systematic, albeit limited, study of these imaging artifacts. The work is divided into two parts. In the first part, as described in Sections 2 and 3, we examine, under controlled and quantifiable conditions, spatial imaging artifacts. Specifically, we look at aliasing, beat patterns, and image "phase reversal" effects induced by a scan velocity mismatch smear effect.
*Email: [email protected]; Telephone: 310-336-8836
**Lt. Jeffrey D. Kwok was part of the Aerospace Air Force Officer Education program when this work was done.
The aliasing and beat patterns were observed by using a visible CMOS1 area imaging array designed by the Advanced Imager and Focal Planes Technology Group at the Jet Propulsion Laboratory. The pertinent technical details associated with this device are described in Section 2.1. To systematically observe aliasing and beat pattern effects requires that we project, onto the CMOS arrays, patterns having known spatial frequencies and having enough extent to avoid "truncation" effects.2 For this purpose we use bar pattern sequences corresponding to several spatial frequencies below and above the CMOS imager's Nyquist spatial frequency of 42 line pairs/mm.
Extracting and interpreting aliasing and beat pattern effects from complex scenes, typical of most imaging applications, is very difficult. Such scenes represent a complex superposition of a myriad of spatial frequencies at varying pixel intensities distributed across some large spatial extent. Specific exceptions to this can be envisioned. For example, in a commercial remote sensing application (aircraft or satellite), where an electro-optical sensor is imaging an orchard with trees planted in a grid pattern having a projected spatial frequency comparable to or greater than the sensor's sampling spatial frequency, clearly visible aliasing or beat patterns will be produced. Future machine vision applications that image manmade scenes (robotics vision for manufacturing, web inspection, etc.) will likely increase the possibility of encountering these types of imaging artifacts. Color aliasing effects associated with single-chip color filter array cameras are well known.3
Phase reversal effects are well known in optical imaging. Goodman provides a dramatic visual example of this based on the optical transfer function (OTF) characteristics of defocus.4 Holst uses a hypothetical OTF to generate this same effect using tri-bar targets.5 We illustrate such a phase reversal effect in this paper using a scanned TDI CCD sensor. The deliberate mismatch of the scanned areal image velocity and the rate of motion of the corresponding clocked TDI electronic image produces a velocity mismatch OTF characteristic6,7 that can generate image phase reversals. The image degradation produced by phase reversal effects will be discussed in Section 3.
The second part of our work deals with artifacts brought on by pixel amplitude quantization inherent to all digital imaging systems. On a digital camera, pixel amplitudes are discrete. This intrinsic amplitude quantization, for example the 256 shades of gray possible in an 8-bit image, is universally understood. In this paper we look at the more subtle effects of pixel INL and DNL. Since the amplitude response of the human visual system is inherently nonlinear, it can be visually difficult to see some of the associated effects. The parameters INL and DNL derive from the behavior of digital video hardware. INL quantifies8 the departure from linear response of the imaging pixels over their entire dynamic range. This source of nonlinearity can have contributions from the on-chip pixel electronics, the video signal chain electronics, and the analog-to-digital converter (ADC). In principle this nonlinearity can be removed by calibration. DNL, on the other hand, is associated with departures from linearity between adjacent quantized amplitude levels. Generally, this parameter is designed to track ADC anomalies that result in misassigned or missing binary codes.9
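The INL and DNL defined above can be computed directly from an ADC's measured code-transition levels. The sketch below is illustrative only: the 3-bit transfer curve and its edge positions are hypothetical, not data from the 14-bit ADC used in this paper.

```python
import numpy as np

def inl_dnl(code_edges):
    """Compute per-code DNL and INL (in LSB) from ADC code-transition levels.

    code_edges[k] is the input level at which the output code changes from
    k to k+1; an ideal ADC has uniformly spaced transition levels.
    """
    edges = np.asarray(code_edges, dtype=float)
    lsb = (edges[-1] - edges[0]) / (len(edges) - 1)  # endpoint-fit LSB size
    dnl = np.diff(edges) / lsb - 1.0                 # step-to-step deviation
    inl = np.cumsum(dnl)                             # accumulated deviation
    return dnl, inl

# Hypothetical 3-bit converter with one transition pulled 0.5 LSB high,
# producing one wide code followed by one narrow code:
edges = [0.0, 1.0, 2.0, 3.5, 4.0, 5.0, 6.0, 7.0]
dnl, inl = inl_dnl(edges)
```

A missing code would appear as a DNL value of -1 (a step of zero width); the cumulative sum makes clear why a run of same-sign DNL errors builds up into a large INL excursion.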
In Section 4 of this paper we synthesize two video images, each with embedded INL and DNL effects. The amplitude of the INL and DNL is varied for visual effect. The magnitude of these nonlinearities is exaggerated to produce visual effects that are observable in the hardcopy images in this section. In Section 5, we provide a summary and conclusions.
2. ALIASING AND BEAT PATTERNS
As discussed in the introduction, a sequence of bar patterns having known spatial frequencies was imaged onto a 2-dimensional CMOS imaging array. The discrete spatial frequencies associated with these bar patterns are filtered by the Nyquist sampling of the CMOS sensor pixels to produce both aliased and beat patterns. We also used Aerospace-developed imaging simulation tools to synthesize the expected CMOS sensor responses to the bar pattern stimuli. Shown in Figure 1 is a block diagram of one of these imaging simulation tools, known by the acronym PHOCAS.10 Clearly PHOCAS is designed to simulate substantial physical details of an imaging sensor (the various key sensor MTF components, radiometric inputs, pixel responses, noise, and image sharpening filters are modeled and embedded into a final simulated image).
To observe the indicated sampling effects many of these simulation features were not needed. In fact the "object scene" in Figure 1 was a synthesis of various bar pattern inputs with appropriate spatial frequencies and spatial extent. The CMOS imager detector aperture or pixel size and associated sampling were defined and a noiseless "output image" was generated. In the next section we provide a brief description of the experimental setup and the CMOS imager's pixel structure.
[Figure 1 block diagram: the object scene passes through the optics and a forward Fourier transform (F), is multiplied by the cascaded MTF components (optics spectral transmission, optical aperture, optical aberrations, detector aperture, detector spectral response, TDI and velocity mismatch, LOS jitter, CCD diffusion, CCD CTE), is spectrally weighted as ΣW(λ)PSF(λ), is combined with detector noise and a sharpening filter, and is inverse transformed (F-1) to form the output image.]

Figure 1. Block diagram of The Aerospace Corporation-developed PHOCAS sensor imaging simulation. F and F-1 are the forward and reverse Fourier transforms.
2.1 CMOS imager bar pattern responses: experimental setup
Figure 2 is a block diagram of the optical setup used to image the bar pattern sequences onto the CMOS imaging array. The all-reflective reimaging optical system has unity magnification as described in a previous publication.11 Appropriate AC drive signals and biases were applied to the CMOS device. Its multiplexed output video stream was then digitized (to 12 bits) and captured by a high speed SRAM board. The bar pattern sequences were manufactured as chrome-on-glass transparencies12 and back-illuminated with a bright, stable incandescent xenon lamp.
The CMOS imaging array had a square format of 256 by 256 pixels with center-to-center spacing of 11.9 µm in both
directions. The pixel structure is illustrated by a 3 by 3 block of pixels in Figure 3. The photo-responsive area of the CMOS imager's pixel occupies approximately one-fourth of the pixel area, forming a 4-to-1 aspect ratio as shown. Clearly, the pixel aperture modulation transfer function (MTF) will be rather different in the orthogonal directions. This was not particularly important to our work here, except that the modulation of the aliased spatial frequencies was suppressed less in the direction of the shorter pixel dimension. In the next section, we present and discuss the results of imaging the bar pattern sequences onto the CMOS array. It should be noted that, while the asymmetric pixel provides higher MTF in one dimension, the lower "fill factor" results in lower sensitivity.
2.2 Discussion of observed and simulated CMOS imager bar pattern responses: aliasing and beat patterns
In Figures 4 and 5, we display the results of imaging a sequence of bar patterns onto the CMOS array. Figure 4 shows the results of doing this using a simplified imaging simulation, whereas Figure 5 shows the CMOS imager experimental data acquired by the setup described in Figure 2. In both cases the input image was a set of ten spatially parallel bar pattern sequences spanning the range of 20 to 95 line pairs/mm. The imaging simulation was configured to capture only the effects of pixel sampling frequency and pixel aperture MTF. Since the high-frequency attenuations associated with optical aperture and
[Figure 2 block diagram: a light source illuminates an integrating sphere; the light passes through a spectral filter and variable f-stop to back-illuminate the bar target reticle. An Offner relay (primary and secondary mirrors sharing a common center of curvature) reimages the target onto the CMOS imager. CMOS imager drive electronics and shutter control produce the analog video, which passes through analog video electronics to a video ADC; control logic routes the digital video over a high-speed parallel port to high-speed DMA memory, image processing memory, digital mass storage, a color video display, and the data acquisition computer.]

Figure 2. Experimental setup used to image variable spatial frequency bar patterns onto a CMOS imaging array.
Figure 3. Detailed view of a three-by-three pixel area of the CMOS imager topology used in the measurements. The photosensitive area covers about 1/4 of each pixel (of pitch p) and has a four-to-one aspect ratio.
[Figure 4 panel, titled "MODEL: BAR PATTERNS ON ARRAY": ten horizontal bands labeled 95, 80, 65, 60, 45, 40, 35, 30, 25, and 20 lp/mm; both axes span 256 pixels.]

Figure 4. Imaging simulation of 10 bar-pattern sequences (spatial frequencies from 20 to 95 line pairs/mm) imaged onto a 256 by 256 pixel CMOS sensor. CMOS imager pixel pitch is 11.9 µm, giving a spatial Nyquist frequency of 42 line pairs/mm.
[Figure 5 panel, titled "Bar Patterns: 700 nm, f/# = 5": ten horizontal bands labeled 95, 80, 65, 60, 45, 40, 35, 30, 25, and 20 lp/mm; both axes span 256 pixels.]

Figure 5. Response of a 256 by 256 pixel CMOS imager to 10 bar-pattern sequences of variable spatial frequency (20 to 95 line pairs/mm). CMOS imager pixel pitch is 11.9 µm, giving a spatial Nyquist frequency of 42 line pairs/mm.
diffusion MTF were not included, the simulation shows greater modulation depth than the experimental data at higher spatial frequencies. Figures 4 and 5 clearly label each of the bar patterns along the vertical dimension of the image. The bar patterns cover the entire 256 pixel length of the CMOS imager in the vertical direction. The spatial frequency at which the Nyquist criterion is met corresponds to fN = 1/(2p), where p is the pixel pitch and fN is the imager Nyquist spatial frequency. Since p = 11.9 µm, we have fN = 42 line pairs/mm. As shown in Figures 4 and 5, five of the bar pattern sequences have spatial frequencies less than fN, four have spatial frequencies between fN and 2fN, and one has a spatial frequency greater than 2fN. Aliasing and "beat" patterns are visible in both images. On the image acquired with the CMOS array, it can be seen that some of the beat patterns seem "tilted" with respect to the vertical and horizontal axes of the array. This additional artifact is due to a small angular misalignment between the axes of the bar patterns and the CMOS array.
As discussed in a previous paper,13 beat or Moiré patterns are expected due to the spatial sampling of the input bar pattern(s) by the imager pixels. The observed beat pattern spatial frequency is related to the difference |fin − fN|, where fin is the input bar pattern spatial frequency and fN is the imager Nyquist spatial frequency defined above. In cases where the bar pattern spatial frequency exceeds the imager Nyquist, the observed beat pattern is related to the difference |fin^a − fN|, where fin^a is the appropriate aliased frequency of the input bar pattern fin. The formation of beats is nicely illustrated in elementary physics texts; for example, by the overlap of two combs with different teeth spacing.14
To explain the origin of the beat patterns produced by imaging bar patterns onto the CMOS imager, consider the output from a one-dimensional array (an image sampling system):

$$O(x) = \sum_m O_m\,\delta(x - mp) \qquad (1)$$

where m, the pixel index, ranges over the pixels in the array, and

$$O_m = \int_{mp-\ell/2}^{mp+\ell/2} \cos(2\pi f_{in} x)\,dx \qquad (2)$$

$$\phantom{O_m} = \ell\,\mathrm{sinc}(\pi f_{in}\ell)\cos(2\pi f_{in} mp) \qquad (3)$$

Here, p is the pixel pitch distance, ℓ is the photosensitive length (ℓ ≤ p) of the pixel, O_m are the discrete pixel responses to an input sinusoid, O(x) represents the image samples along the x-direction, and sinc(x) is defined as sin(x)/x. The cosine function in Eq. (2) approximates the input bar pattern sequence. An analytical treatment of a true bar pattern input is possible; however, the beat pattern description we seek is not hampered by this approximation.
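Equations (1) through (3) can be checked numerically. In the sketch below (an illustration under stated assumptions, not part of the original experiment), the photosensitive length ℓ is taken as p/2 for convenience; the 11.9 µm pitch and 256-pixel format follow Section 2. Removing the Nyquist factor, as done analytically later in the section, leaves a low-frequency envelope whose frequency comes out near |fin − fN| = 2 lp/mm for a 40 lp/mm input.

```python
import numpy as np

# Pixel responses O_m of Eq. (3): O_m = l * sinc(pi*f_in*l) * cos(2*pi*f_in*m*p)
p = 11.9e-3            # pixel pitch in mm (Section 2)
ell = p / 2.0          # assumed photosensitive length, l <= p (illustrative)
f_in = 40.0            # input bar-pattern spatial frequency, lp/mm
m = np.arange(256)     # pixel indices across the array

# np.sinc(x) is sin(pi*x)/(pi*x), so sinc(pi*f_in*l) of Eq. (3) is np.sinc(f_in*ell)
O_m = ell * np.sinc(f_in * ell) * np.cos(2.0 * np.pi * f_in * m * p)

# Strip the (-1)^m Nyquist modulation; what remains is the beat envelope
envelope = ((-1.0) ** m) * O_m

# Locate the dominant frequency of the envelope via the DFT
spectrum = np.abs(np.fft.rfft(envelope))
k = int(np.argmax(spectrum))
beat_lp_mm = k / (256.0 * p)   # convert DFT bin index to lp/mm
```

With these parameters the dominant DFT bin corresponds to a beat of about 2 lp/mm, i.e., a beat period of roughly 42 pixels, matching the analysis that follows.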
We define

$$f_{in} = \alpha f_N \qquad (4)$$

where fin is the input bar pattern spatial frequency and the parameter α defines fin in terms of the imager Nyquist frequency, fN. If we ignore the modulation transfer function effects of the pixel sinc function, we can combine Eqs. (1) and (3) to obtain

$$O(x) = \sum_m \cos(2\pi f_{in} mp)\,\delta(x - mp) \qquad (5)$$
Using fin = fN − (fN − fin), we can expand Eq. (5) to get

$$O(x) = \sum_m \cos\left[2\pi f_N mp - 2\pi(1-\alpha) f_N mp\right]\delta(x - mp)$$
$$\phantom{O(x)} = \sum_m \left\{\cos(2\pi f_N mp)\cos\left[2\pi(1-\alpha) f_N mp\right]\right\}\delta(x - mp) \qquad (6)$$

(the sin(2π fN mp) cross term vanishes because sin(πm) = 0 for integer m). Since

$$\cos(2\pi f_N mp) = (-1)^m$$

we have

$$O(x) = \sum_m (-1)^m \cos\left[2\pi(1-\alpha) f_N mp\right]\delta(x - mp) \qquad (7)$$

In Eq. (7), the factor (−1)^m corresponds to the high frequency Nyquist modulation, whereas the factor

$$\cos\left[2\pi(1-\alpha) f_N mp\right]$$

corresponds to the low frequency beat pattern that we observe.

If we now define the beat frequency, fB, as

$$f_B = |1-\alpha|\, f_N = \frac{|1-\alpha|}{2p} \qquad (8)$$

then Eq. (7) becomes

$$O(x) = \sum_m (-1)^m \cos(2\pi f_B mp)\,\delta(x - mp) \qquad (9)$$
The analysis leading to Eq. (9) is essentially an extraction of the Nyquist modulation from the imager output; this allows us to see the beat pattern explicitly. Equation (9) describes all observed beat pattern spacing found in Figures 4 and 5. Figures 6 through 12 correspond to input bar pattern spatial frequencies of 20, 30, 40, 45, 60, 80, and 95 line pairs/mm. These figures show pixel amplitude slices along with the amplitude of the discrete Fourier transform (DFT) along those slices for the above mentioned bar pattern inputs. These seven figures give a sample of: three bar patterns in the range of 0 < f < fN; three bar patterns in the range of fN < f < 2fN; and one bar pattern in the range of 2fN < f < 3fN. The seven figures are organized in an "a" set, which shows simulation results, and a "b" set, which shows the CMOS imager measured output. The "b" or measured cases show a considerable drop in pattern modulation amplitude with increasing spatial frequency. This is due to the various component modulation transfer function effects present in the measurement (optical, pixel aperture, diffusion). The simulation "turned these effects off" in order to make the sampling effects more visible. Clearly there is excellent agreement between the "a" and "b" cases in each of Figures 6 through 12; all of the spatial sampling effects (beat frequencies, aliasing) are correctly modeled.
The relationship between beat patterns and aliasing effects is interesting. When the bar pattern spatial frequencies fall in the range 0 < fin < fN, no aliasing occurs, yet beat patterns are observable (a pure sampling effect). When the range is fN < fin < 2fN, the observed frequency is an alias, fin^a, given by

$$f_{in}^{a} = 2 f_N - f_{in} \qquad (10)$$

corresponding to "folding" about fN.

When the bar pattern input is in the range of 2fN < fin < 3fN, the observed aliased frequency is

$$f_{in}^{a} = f_{in} - 2 f_N \qquad (11)$$

corresponding to folding about f = 0.
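The folding rules of Eqs. (10) and (11) are two cases of a single reduction modulo the sampling frequency. A minimal sketch (the function name is ours, not from the paper):

```python
def aliased_frequency(f_in, f_nyq):
    """Fold an input spatial frequency into the observable band [0, f_nyq].

    Generalizes Eqs. (10) and (11): frequencies between f_nyq and 2*f_nyq
    fold about f_nyq; those between 2*f_nyq and 3*f_nyq fold about zero.
    """
    f_s = 2.0 * f_nyq              # sampling spatial frequency, 1/p
    f = f_in % f_s                 # reduce modulo the sampling frequency
    return f if f <= f_nyq else f_s - f

f_nyq = 42.0  # lp/mm for the 11.9-um pitch imager
```

For the bar patterns of Figures 9 through 12 this reproduces the aliases quoted in the text: 45 → 39, 60 → 24, 80 → 4, and 95 → 11 lp/mm.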
Figure 6a. Magnitude of discrete Fourier transform of an image line and linear pixel plots for a 20 line pairs/mm input bar pattern. Simulated image; fin = 20 line pairs/mm, fB = 22 line pairs/mm. No aliasing is present since fin < fN.
Figure 6b. Magnitude of discrete Fourier transform of an image line and linear pixel plots for a 20 line pairs/mm input bar pattern. Measured image; fin = 20 line pairs/mm, fB = 22 line pairs/mm. No aliasing is present since fin < fN.
Figure 7a. Magnitude of discrete Fourier transform of an image line and linear pixel plots for a 30 line pairs/mm input bar pattern. Simulated image; fin = 30 line pairs/mm, fB = 12 line pairs/mm. No aliasing is present since fin < fN.
Figure 7b. Magnitude of discrete Fourier transform of an image line and linear pixel plots for a 30 line pairs/mm input bar pattern. Measured image; fin = 30 line pairs/mm, fB = 12 line pairs/mm. No aliasing is present since fin < fN.
Figure 8a. Magnitude of discrete Fourier transform of an image line and linear pixel plots for a 40 line pairs/mm input bar pattern. Simulated image; fin = 40 line pairs/mm, fB = 2 line pairs/mm. No aliasing is present since fin < fN.
Figure 8b. Magnitude of discrete Fourier transform of an image line and linear pixel plots for a 40 line pairs/mm input bar pattern. Measured image; fin = 40 line pairs/mm, fB = 2 line pairs/mm. No aliasing is present since fin < fN.
Figure 9a. Magnitude of discrete Fourier transform of an image line and linear pixel plots for a 45 line pairs/mm input bar pattern. Simulated image; fin^a = 39 line pairs/mm, fB = 3 line pairs/mm. Input frequency is aliased since fN < fin < 2fN.
Figure 9b. Magnitude of discrete Fourier transform of an image line and linear pixel plots for a 45 line pairs/mm input bar pattern. Measured image; fin^a = 39 line pairs/mm, fB = 3 line pairs/mm. Input frequency is aliased since fN < fin < 2fN.
Figure 10a. Magnitude of discrete Fourier transform of an image line and linear pixel plots for a 60 line pairs/mm input bar pattern. Simulated image; fin^a = 24 line pairs/mm, fB = 18 line pairs/mm. Input frequency is aliased since fN < fin < 2fN.
Figure 10b. Magnitude of discrete Fourier transform of an image line and linear pixel plots for a 60 line pairs/mm input bar pattern. Measured image; fin^a = 24 line pairs/mm, fB = 18 line pairs/mm. Input frequency is aliased since fN < fin < 2fN.
Figure 11a. Magnitude of discrete Fourier transform of an image line and linear pixel plots for an 80 line pairs/mm input bar pattern. Simulated image; fin^a = 4 line pairs/mm, fB = 38 line pairs/mm. Input frequency is aliased since fN < fin < 2fN.
Figure 11b. Magnitude of discrete Fourier transform of an image line and linear pixel plots for an 80 line pairs/mm input bar pattern. Measured image; fin^a = 4 line pairs/mm, fB = 38 line pairs/mm. Input frequency is aliased since fN < fin < 2fN.
Figure 12a. Magnitude of discrete Fourier transform of an image line and linear pixel plots for a 95 line pairs/mm input bar pattern. Simulated image; fin^a = 11 line pairs/mm. Input frequency is aliased since 2fN < fin < 3fN.
Figure 12b. Magnitude of discrete Fourier transform of an image line and linear pixel plots for a 95 line pairs/mm input bar pattern. Measured image; fin^a = 11 line pairs/mm. Input frequency is aliased since 2fN < fin < 3fN.
If we look at Figure 8, we see the response to a case where the bar pattern frequency is fin = 40 line pairs/mm. The DFT shows this exact frequency is present. Since the imager Nyquist is fN = 42 line pairs/mm, we expect to see a beat frequency at fB = |fin − fN| = 2 line pairs/mm. One complete beat will cover Nb pixels where, from Eq. (8),

$$N_b = \frac{2}{|1-\alpha|} \qquad (12)$$

Hence for the 40 line pairs/mm case and fB = 2 line pairs/mm we get α = 0.95238 and Nb = 42 pixels. This is the length of one beat as shown in Figure 8. Repeating this analysis for Figure 9, the input bar pattern frequency is 45 line pairs/mm. Since the bar pattern frequency exceeds the pixel Nyquist frequency, we expect an alias as given by Eq. (10). We get fin^a = 39 line pairs/mm; the DFT in Figure 9 shows the same result. The expected beat frequency is the difference between the aliased frequency and the pixel Nyquist, or fB = |fin^a − fN| = 3 line pairs/mm. For this case, α = fin/fN = 1.0714 and hence Eq. (12) gives Nb = 28 pixels, in agreement with one cycle of the beat pattern length observed in Figure 9.
Figures 11 and 12 have rather high input spatial bar pattern frequencies compared to the pixel Nyquist. Figure 11 shows an 80 line pairs/mm bar pattern; the aliased frequency is fin^a = 4 line pairs/mm using Eq. (10). The beat frequency is fB = |fin^a − fN| = 38 line pairs/mm. For this case, α = 1.9048 and hence Eq. (12) gives Nb = 2.2 pixels. This very high spatial frequency is not particularly evident in Figure 11, but the aliased 4 line pairs/mm pattern has the expected length of 21 pixels.
An alternate calculation, which gives results that are equivalent to those above, may be performed that more clearly illustrates the beat pattern for an input of 80 line pairs/mm. Instead of subtracting the Nyquist frequency as in Eq. (6), a multiple of the Nyquist frequency is subtracted. For example, with the subtraction and addition of twice the Nyquist frequency in the argument of Eq. (6), an equation similar to Eq. (9) results, but the (−1)^m factor is replaced by (+1)^m, the beat frequency in Eq. (8) becomes fB = |α − 2| fN, and the number of pixels per beat, Eq. (12), becomes Nb = 2/|α − 2|. Then, for the input of fin = 80 line pairs/mm, α = 1.9048 [from Eq. (4)], fB = 4 line pairs/mm, and the number of pixels per beat is Nb = 21. This result is equivalent to that found above and is clearly seen in Figure 11.
Finally, we look at Figure 12, with an input frequency of 95 line pairs/mm. The input frequency exceeds twice Nyquist and the input is aliased down to 11 line pairs/mm [Eq. (11)]. The beat pattern may be analyzed as was done for the 80 line pairs/mm input above: we add and subtract twice the Nyquist frequency in the argument of Eq. (6). Then α = 95/42 = 2.2619, the beat frequency is fB = |α − 2| fN = 11 line pairs/mm, and the number of pixels per beat is Nb = 2/|α − 2| = 7.63, as seen in Figure 12.
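The per-case arithmetic above can be collected into one helper. The sketch below uses our own notation (fold = the multiple of the Nyquist frequency being subtracted) and reproduces the beat lengths worked out for the 40, 45, 80, and 95 lp/mm inputs.

```python
def pixels_per_beat(f_in, f_nyq, fold=1):
    """Pixels spanned by one beat cycle: Eq. (12) and its generalization.

    fold=1 subtracts the Nyquist frequency (Eq. 8); fold=2 subtracts twice
    the Nyquist frequency, as done for the 80 and 95 lp/mm inputs.
    """
    alpha = f_in / f_nyq           # Eq. (4)
    return 2.0 / abs(alpha - fold)

f_nyq = 42.0  # lp/mm

nb_40 = pixels_per_beat(40.0, f_nyq)           # 42 pixels (Figure 8)
nb_45 = pixels_per_beat(45.0, f_nyq)           # 28 pixels (Figure 9)
nb_80 = pixels_per_beat(80.0, f_nyq, fold=2)   # 21 pixels (Figure 11)
nb_95 = pixels_per_beat(95.0, f_nyq, fold=2)   # ~7.63 pixels (Figure 12)
```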
The analysis of the bar pattern responses above (and displayed in Figures 4 and 5) makes it clear that sampling effects such as beat frequencies are distinct from aliased frequencies. Examples of the interplay between these two effects were illustrated in this section.
3. VELOCITY MISMATCH SMEAR FOR A TDI SENSOR: PHASE REVERSAL EFFECT
As discussed in the introduction, optical defocus creates an optical transfer function that changes from positive to negative at certain spatial frequencies. This gives rise to phase reversal effects. Similar phase reversal effects can be produced when the scanned image and the rate of image charge motion in a TDI sensor are not precisely synchronized. The theory behind this OTF degradation is treated in detail in Reference 6. The principal result is given in Eq. (12) of that paper. For a TDI CCD scanner with NTDI stages of TDI and Nph pixel phase gates, the velocity mismatch OTF in the scan direction y can be approximated by

$$\mathrm{OTF}_{VMM}(f_y) = \frac{\mathrm{sinc}\!\left(\pi N_{TDI} f_y\, \Delta V_y T_{int}\right)}{\mathrm{sinc}\!\left(\pi f_y\, \Delta V_y T_{int} / N_{ph}\right)} \qquad (13)$$

where fy is the spatial frequency in the scan direction, ΔVy is the scan velocity mismatch, and Tint is the integration time associated with a single scan line. Phase reversal occurs whenever the argument of the sinc function is such that OTFVMM changes sign.
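Equation (13) can be evaluated numerically to locate its zero crossings. In the sketch below, the pixel pitch (11.9 µm, so that a synchronous scan advances one pitch per line time) and a four-phase gate structure (Nph = 4) are our assumptions, not values stated for the TDI CCD; with a 10% mismatch and NTDI = 64, the first sign change lands near 13.1 lp/mm, consistent with the zero crossing discussed with Figure 16.

```python
import numpy as np

def otf_vmm(f_y, n_tdi, n_ph, dvy_tint):
    """Velocity-mismatch OTF of Eq. (13).

    f_y      : spatial frequency in the scan direction (lp/mm)
    dvy_tint : per-line image displacement error, dV_y * T_int (mm)
    np.sinc(x) is sin(pi*x)/(pi*x), matching the sinc of Eq. (13).
    """
    return np.sinc(n_tdi * f_y * dvy_tint) / np.sinc(f_y * dvy_tint / n_ph)

# Assumed geometry: 11.9-um pitch, synchronous scan of one pitch per line time
p = 11.9e-3                     # mm (assumption)
dvy_tint = 0.10 * p             # 10% scan velocity mismatch
f = np.linspace(0.1, 30.0, 3000)
otf = otf_vmm(f, n_tdi=64, n_ph=4, dvy_tint=dvy_tint)

# The first sign change occurs where the numerator sinc reaches its zero:
first_zero = 1.0 / (64 * dvy_tint)   # ~13.1 lp/mm for these assumptions
```

The design point is visible in the closed form: the first phase-reversal frequency scales as 1/(NTDI ΔVy Tint), which is why maximizing the TDI stage count makes a given fractional velocity error more damaging.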
This effect was investigated experimentally using a TDI CCD sensor with 64 stages of TDI (NTDI = 64), and simulated using PHOCAS as described in Section 2 of this paper (see Fig. 1). The experimental setup is shown in Figure 13. Technical details of the flat plate image scanner are detailed in Reference 13. A key point about this setup is its ability to precisely control both the image scan velocity (using the image scanner) and the TDI CCD image charge transport velocity by controlling timing signals to the imager. Precise synchronism means that ΔVy = 0 and hence OTFVMM = 1.0. However, we are able to precisely control ΔVy in small increments, thereby deliberately inducing a degradation in OTFVMM. This control allowed us to investigate phase reversal effects by inducing a known image scan velocity mismatch. Figure 14a shows the response of the CCD TDI sensor to a "sunburst" spatial pattern (using the setup in Fig. 13) with precise scan synchronism, or ΔVy = 0. The scan direction is vertical in Figure 14a. The radial pattern has higher spatial frequencies in the center of the pattern (pattern line spacings are smaller), which become systematically lower in spatial frequency as we move outward. Figure 14b shows the same CCD TDI output image except with a deliberate 10% image scan velocity mismatch. Figure 14b also shows considerable blurring of the pattern edges in the scan direction (vertical) and a very distinct pattern near the center of this figure. This pattern looks like a pair of "eyes"; there are two distinct circular loci of points at which the image contrast is obliterated (at these spatial frequencies, OTFVMM = 0). The effects of phase reversal are readily apparent on the two sides of these regions of zero contrast. The reversal consists of bar patterns going from light-to-dark and vice-versa.
Figure 15 shows the simulated version of the experimental image displayed in Figure 14b, using PHOCAS. Excellent agreement is clearly evident. Figure 16 shows a plot of the pixel aperture and velocity mismatch components of the CCD TDI sensor's OTF. The 10% velocity mismatch (with NTDI = 64) case is shown as the lower curve in Figure 16. At a spatial frequency of approximately 13.1 line pairs/mm, the OTF goes from positive to negative and stays negative until about 26 line pairs/mm, when it again goes positive. This range of spatial frequencies is represented in Figure 14b as the region between the two loci of points with zero contrast.
Such phase reversal effects are induced by poor scan velocity control. For wide-field CCD TDI systems typically used in remote sensing applications, this velocity mismatch is unavoidable for large off-nadir pointing angles. The effect is accentuated by maximizing NTDI [see Eq. (13)]. Hence, for low-light level, off-nadir angle scanning applications an OTF zero crossing might be produced, making phase reversal artifacts possible. This impact on real imagers is difficult to predict and will be very scene dependent. However, the "sunburst" radial bar pattern is a virtually ideal test image for demonstrating the velocity mismatch effects at most spatial frequencies of interest. Since the spatial frequencies are continuously varying in all directions, a region of constant spatial frequency is infinitesimally small. Nevertheless, it can be seen in Figures 14b and 15 that there is an indisputable loss of image quality across a substantial area. The significance is that it is not necessary for an image to contain an "extended source" of constant spatial frequency in order for the loss of modulation to be seen. Consider a scene containing, among other things, two small identical features (buildings or boats, for example) that are separated by a distance comparable to their size. For a given velocity mismatch factor, there will be some orientation of these features, with respect to the scan direction, for which the features will essentially "disappear." Analogously, for a given velocity mismatch and some particular feature orientation, there will be a feature size and separation distance for which the features will remain undetected (unmodulated) by the scan. In the next section we cover the impact of pixel amplitude quantization on image quality.
4. PIXEL AMPLITUDE QUANTIZATION: INTEGRAL AND DIFFERENTIAL NONLINEARITY
The impact of pixel amplitude quantization effects is very application-dependent. In this section we discuss and illustrate the impact of integral and differential nonlinearity (INL and DNL). These are pixel amplitude-related effects that will likely occur in a digital camera. We do not discuss the effects associated with pixel outages or row/column blemishes.
If the digital camera in question is used to produce pictures for routine viewing by human beings (as opposed, for instance, to machine vision, imaging spectroscopy, or other advanced applications involving image processing), then the characteristics of the human visual system must be kept in mind. We note that modern digital camera systems are widely available with dynamic ranges exceeding 64,000:1 and image sensors with greater than 4 million pixels. For a single field, human vision can distinguish about 64 shades of gray, but the eye can adjust for overall brightness. In fact, humans can perceive light over
[Figure 13 block diagram components: lamp, filter, bar target, Offner relay, scanner, sensor and sensor board, PI-5800 driver electronics, high-speed video analog-to-digital converter with custom DAQ, acquisition computer and memory with IEEE-488 interface bus, and oscilloscope display.]
Figure 13. Experimental setup for scanning bar pattern image across a TDI CCD image sensor.
Figure 14a. Measured image of a "sunburst" radial bar pattern using a 64-TDI CCD scanner with zero image scan velocity/TDI image charge velocity mismatch.
Figure 14b. Measured image of a "sunburst" radial bar pattern using a 64-TDI CCD scanner with 10% image scan velocity/TDI image charge velocity mismatch. Phase reversal effects are clearly visible in the center of the image.
Figure 15. Simulated image (using PHOCAS) of the experimental case displayed in Figure 14b.
Figure 16. Plot of the pixel aperture and velocity mismatch components of the TDI CCD OTF, versus spatial frequency in lp/mm, for zero and 10% image scan velocity/TDI image charge velocity mismatch. For the 10% velocity mismatch case, several OTF zero crossings are evident. The three regions corresponding to the first three zero crossings are clearly evident in Figures 14b and 15.
an intensity range spanning nine orders of magnitude. The human eye contains roughly 150 million photoreceptor "pixels" (rods and cones), with the cone cells concentrated in the fovea, the retinal region corresponding to the center of the visual field. While determining relative brightness is not a problem for human vision, the "automatic gain control" feature built into human vision precludes determining absolute brightness. Human vision is optimized for detecting edges and, hence, can pick up geometric shapes quite easily.
Digital pixel amplitude-related imperfections and artifacts include: gain and offset nonuniformity, noise, step response, harmonic distortion, INL, and DNL. The terminology here derives from measurable characteristics of analog-to-digital converters (ADCs), although the entire pixel signal chain, from photons-in to bits-out, can be characterized in terms of these parameters.8 The relative importance of these descriptors of pixel amplitude imperfection is very application-dependent. Clearly, fixed pattern noise (gain/offset nonuniformity), effects associated with data compression and decompression, and image sharpening are important in many advanced applications. Our focus in this paper is on the "front-end" behavior of the pixel signal chain; here we look at the impact of varying INL and DNL on two types of images.
4.1 Imbedding INL/DNL effects into images
We use some of the ideas on how to visualize INL/DNL effects developed by C. Sabolis.15 Figure 17 shows a plot of INL measured for a 14-bit ADC that has been remapped to an 8-bit range in order to emphasize the ensuing visual impact. This is clearly an artificial example from a hardware viewpoint, but it serves the useful purpose of making the impact of INL on image quality clearer. In Reference 8, INL is defined as a measure of the image signal chain's deviation from ideal linearity over its entire dynamic range. An INL plot is, therefore, the deviation for each amplitude step (and each input binary code) from a best-fit line determined over the dynamic range of the pixel signal chain. The unusual character of Figure 17 derives from the timing effects and the pipeline architecture of the ADC that produced this characteristic.
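As a concrete illustration of this definition, INL can be computed from a measured transfer curve by fitting a least-squares line over the full range and expressing each code's residual in LSBs. The sketch below is a hedged illustration under that definition; the function name and the synthetic curves are ours, not the measured data of Figure 17:

```python
import numpy as np

def inl_lsbs(levels, lsb):
    """INL per code: deviation of each measured output level from the
    best-fit line over the entire dynamic range, in units of one LSB."""
    codes = np.arange(len(levels), dtype=float)
    slope, intercept = np.polyfit(codes, levels, 1)   # least-squares line
    return (levels - (slope * codes + intercept)) / lsb

# A perfectly linear chain has zero INL at every code:
ideal = 0.5 * np.arange(256)
assert np.allclose(inl_lsbs(ideal, lsb=0.5), 0.0)
```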
A DNL plot is given in Figure 18, based on a remapping to an 8-bit dynamic range of data from a 14-bit ADC. The DNL plot does not have any particular systematic trend across the 8-bit range. Reference 8 defines the procedure for computing DNL:
Figure 17. Integral Nonlinearity versus pixel amplitude over an 8-bit range.
Figure 18. Differential Nonlinearity versus pixel amplitude over an 8-bit dynamic range.
we take the difference in the pixel signal chain response between two successive binary input levels, ratio this difference to the ideal expected step [one least significant bit (LSB)], and then subtract one LSB from each step over the entire amplitude range. DNL is clearly a diagnostic for local nonlinearities (usually caused by missing or misaligned ADC binary codes).
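The procedure above amounts to one line of arithmetic per code pair, sketched here as a hedged illustration (the function and variable names are ours, not from Reference 8):

```python
import numpy as np

def dnl_lsbs(levels, lsb):
    """DNL per code transition: the measured step between successive
    codes, ratioed to the ideal one-LSB step, minus one LSB."""
    steps = np.diff(levels)        # response difference between codes
    return steps / lsb - 1.0

# A missing code (a repeated output level) shows up as DNL = -1:
levels = np.array([0.0, 1.0, 1.0, 3.0, 4.0])
# dnl_lsbs(levels, 1.0) -> [0., -1., 1., 0.]
```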
To illustrate the impact of INL and DNL, the transfer function associated with Figure 17 or 18, respectively, was applied to the pixel brightness of two different types of images. The images were produced by applying the INL or DNL transfer functions to relatively clean source images. Figures 17 and 18 are experimentally acquired ADC INL/DNL data that have been stored in a look-up table (LUT) transfer function. The brightness value for each pixel from the raw image was translated, via the LUT, to produce an altered brightness value that corresponds to the INL or DNL transfer function. Different overall levels of INL or DNL are produced by systematically scaling all of the values in the LUT, starting, in the most benign case, with a peak INL or DNL value of one LSB and ending, in the most severely degraded case, with peak INL or DNL values more than an order of magnitude larger. In the next section, the results of this exercise are presented.
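The embedding step described above can be sketched as a 256-entry LUT whose entries are the ideal code plus a scaled nonlinearity pattern. This is a simplified illustration, not the actual PHOCAS or lab code; the scaling convention (normalizing the pattern to a chosen peak amplitude in LSBs) and the sinusoidal stand-in for the measured Figure 17/18 data are our assumptions:

```python
import numpy as np

def embed_nonlinearity(image_u8, pattern_lsbs, peak_lsbs):
    """Apply a measured INL or DNL pattern, rescaled to a chosen peak
    amplitude in LSBs, to an 8-bit image via a look-up table."""
    codes = np.arange(256, dtype=float)
    scaled = pattern_lsbs * (peak_lsbs / np.max(np.abs(pattern_lsbs)))
    lut = np.clip(codes + scaled, 0, 255)   # altered brightness per code
    return lut[image_u8]                    # translate every pixel via LUT

# A smooth 8-bit ramp, as used for Figures 19 and 20:
ramp = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
pattern = np.sin(np.arange(256) / 8.0)      # stand-in for measured data
degraded = embed_nonlinearity(ramp, pattern, peak_lsbs=32)
```

Rerunning the last line with `peak_lsbs` of 1, 8, 16, and 32 mirrors the scaling sequence used to generate the panels of Figure 19.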
4.2 Visual effects associated with INL/DNL imperfections
We imbed the INL/DNL imperfections given in Figures 17 and 18 into an image consisting of a smooth, linearly increasing ramp in one direction. This ramp spans an 8-bit dynamic range, and the shading from the ramp is clearly visible. Figure 19 shows the effect of increasing the peak-to-peak INL pattern amplitude from one LSB to 32 LSBs. Streaks become barely visible at 8 LSBs and quite pronounced at 32 LSBs. It is difficult to sense the distortion in linearity over the entire dynamic range in Figure 19. The image in Figure 20 shows the same case as Figure 19, except that the peak-to-peak DNL pattern is systematically increased from 1 to 56 LSBs. Again, streaks appear in the gradient direction. Clearly, INL and DNL effects at these magnitudes, when applied to images with smooth gradients, will create artifacts.
The more interesting case of applying these same amounts of INL and DNL to a realistic image is shown in Figures 21 and 22. The young child shown is the granddaughter of one of the co-authors (R. M. Shima). In this image, the pixel amplitudes are obviously not arranged in a systematic, monotonically increasing manner. Hence, the streaking effect seen in Figures 19 and 20 is replaced by a granular background effect that is similar to random noise. The onset of a visual effect seems to occur for a peak-to-peak INL level of 16 LSBs (see Fig. 21) and a peak-to-peak DNL level of 28 LSBs. It is interesting to note that the INL case corresponding to 32 LSBs in Figure 21 is similar to the appearance of a "contrast stretched" picture.
From these simple examples, it is clear that the visual impact of pixel amplitude imperfections such as INL and DNL is very scene dependent. In our work, these levels were greatly exaggerated in order to produce a visual effect. In advanced applications that involve sophisticated processing of images (e.g., to enhance contrast and emphasize certain spatial frequencies), even lower levels of INL and DNL are likely to degrade image quality.
5. SUMMARY AND CONCLUSIONS
Digital camera systems are being developed and used for a myriad of advanced imaging applications. The effects of spatial sampling (beat patterns and aliasing), image phase reversal (velocity mismatch in a TDI CCD sensor), and imperfections in pixel amplitude quantization can produce artifacts that are unique to these types of imagers.
It is difficult to predict how these effects will impact final imagery due to the strong dependence on scene details. In this paper we reviewed several methods for quantitatively characterizing these anomalies, albeit under rather controlled conditions. Correctly constructed imaging simulations (e.g., PHOCAS) represent a powerful tool for studying and assessing the impact of these types of artifacts on general and complex imagery. Such simulations are needed in order to set appropriate hardware specifications for digital camera pixel signal chains.
Figure 19. Image of a one-dimensional, smooth-gradient scene with increasing levels of imbedded INL (1, 8, 16, and 32 LSBs).
Figure 20. Image of a one-dimensional, smooth-gradient scene with increasing levels of imbedded DNL (1, 14, 28, and 56 LSBs).
Figure 21. Image of a young child with increasing levels of imbedded INL (1, 8, 16, and 32 LSBs).
Figure 22. Image of a young child with increasing levels of imbedded DNL (1, 14, 28, and 56 LSBs).
REFERENCES
1. CMOS ≡ Complementary Metal-Oxide-Silicon.
2. Bar-pattern sequences must be sufficiently long so that several cycles of a given pattern are observable.
3. J. E. Greivenkamp, "Color Dependent Optical Prefilter for the Suppression of Aliasing Artifacts," Applied Optics, 29, pp. 676-684, 1990.
4. J. W. Goodman, Introduction to Fourier Optics, p. 126, Figure 6-8, McGraw-Hill, NY, 1968.
5. G. C. Holst, CCD Arrays, Cameras, and Displays, p. 253, JCD Publishing and the SPIE Optical Engineering Press, Winter Park, FL, 1996.
6. J. F. Johnson, "Modeling Imager Deterministic and Statistical Modulation Transfer Functions," Applied Optics, 32, pp. 6503-6513, November 1993.
7. H. V. Kennedy, "Miscellaneous Modulation Transfer Function (MTF) Effects Relating to Sample Summing," Proceedings of the SPIE, 1488, pp. 165-176, 1991.
8. R. M. Shima and T. S. Lomheim, "Performance Characterization of a High-Speed Analog Video Processing Signal Chain for Use in Visible and Infrared Focal Plane Applications," Proceedings of the SPIE, 3061, pp. 860-883, Section 4.6, April 1997.
9. R. M. Shima and T. S. Lomheim, "Performance Characterization of a High-Speed Analog Video Processing Signal Chain for Use in Visible and Infrared Focal Plane Applications," Proceedings of the SPIE, 3061, pp. 860-883, Section 4.7, April 1997.
10. PHOCAS ≡ Physical Optics Code for Analysis and Simulation.
11. R. Chambers et al., "Reimaging System for Evaluating High-Resolution Charge-Coupled Device (CCD) Arrays," Proceedings of the SPIE, 1488, pp. 312-326, April 1991.
12. Manufactured by Photosciences, Inc., Torrance, CA.
13. T. S. Lomheim et al., "Electro-Optical Hardware Considerations in Measuring the Imaging Capability of Scanned Time-Delay-and-Integrated Charge-Coupled Imagers," Optical Engineering, 29, pp. 911-927, August 1990.
14. P. G. Hewitt, Conceptual Physics, p. 353, Figures 19-20, 8th Edition, Addison-Wesley, NY, 1998.
15. C. Sabolis, "Seeing is Believing," Photonics Spectra, pp. 119-126, October 1993.
ACKNOWLEDGMENTS
Karen DeMoss and Patricia Carson are thanked for their expert technical assistance in the preparation of this manuscript. This paper is related to and motivated by the Aerospace Sponsored Research Program. Chris Klein, Stan Kohn, and Ed Casey are thanked for carefully reviewing the manuscript.