Refraction

Geometrical optics uses rays to represent light propagation through an optical system. According to Fermat’s principle, rays will choose a path that is stationary with respect to variations in path length, typically a time minimum. The quickest path may result in a change of direction at the interface between two refractive media separated by a refracting surface.

The change of direction is described by Snell’s law,

n′ sin I′ = n sin I

n and n′ are the refractive indices of the first and second refractive media. The refractive index describes the speed of light in vacuum relative to its speed in the medium. I is the angle of incidence, and I′ is the angle of refraction. Both are measured relative to the surface normal. For rotationally symmetric systems, the optical axis (OA) is the axis of symmetry, taken along the z direction.
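Snell’s law can be applied directly. Below is a minimal sketch with an assumed air-to-glass interface (n′ = 1.5 is an illustrative value); the guard covers total internal reflection, where no refracted ray exists.

```python
import math

def refract_angle(n, n_prime, I_deg):
    """Return the angle of refraction I' (degrees) from Snell's law
    n' sin I' = n sin I. Angles are measured from the surface normal."""
    s = n * math.sin(math.radians(I_deg)) / n_prime
    if abs(s) > 1.0:
        raise ValueError("total internal reflection: no refracted ray")
    return math.degrees(math.asin(s))

# Air (n = 1.0) into glass (n' = 1.5, an assumed value): the ray bends
# toward the surface normal.
print(round(refract_angle(1.0, 1.5, 30.0), 2))   # 19.47
```

Entering the denser medium (n′ > n) always bends the ray toward the normal, as the smaller refraction angle shows.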

Image formation requires a spherical refracting surface so that an incident ray travelling parallel to the OA can be bent towards the OA. However, spherical surfaces give rise to aberrations and associated image defects. A conceptually simple example is spherical aberration, where rays originating from a unique object point do not converge to form a unique image point at the OA.

[Figure: refraction at the interface between media (1) and (2); optical axis along z]

Field Guide to Photographic Science

1 Fundamental Optics


Gaussian Optics

Gaussian optics treats each surface as free of aberrations by performing a linear extension of the paraxial region to arbitrary heights above the OA.

Since tan u = u holds exactly in the paraxial region, each refracting surface is projected onto a flat tangent plane normal to the OA, and the surface power Fs is retained. The angles i and i′ are now equivalent to u and u′, where

u = tan⁻¹(y/s) = y/s;   u′ = tan⁻¹(−y/s′) = −y/s′

u and u′ are interpreted as ray slopes instead of angles.

Substituting u and u′ into the Gaussian conjugate equation for a spherical surface yields a new form of Snell’s law,

n′u′ = nu − yFs

Paraxial imaging is now valid at arbitrary heights h, h′.
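The Gaussian refraction equation can be traced numerically. The sketch below assumes the standard surface-power formula Fs = (n′ − n)/R, which is not stated on this page, and traces a ray parallel to the OA through a single surface to locate its axis crossing.

```python
def refract(y, u, n, n_prime, power):
    """Paraxial refraction: n'u' = n u - y Fs; the ray height y is unchanged."""
    return y, (n * u - y * power) / n_prime

def transfer(y, u, d):
    """Paraxial transfer over axial distance d: height changes, slope does not."""
    return y + u * d, u

# Single surface, R = 50 mm, air to glass (n' = 1.5, assumed values); the
# surface power formula Fs = (n' - n)/R is assumed, not given on this page.
n, n_prime, R = 1.0, 1.5, 50.0
power = (n_prime - n) / R                     # 0.01 mm^-1
y, u = refract(1.0, 0.0, n, n_prime, power)   # ray parallel to the OA at y = 1
f_back = -y / u                               # distance to the axis crossing
y2, u2 = transfer(y, u, f_back)               # propagate: the ray reaches the OA
print(round(f_back, 6))                       # 150.0, i.e. n'/Fs
```

Repeating refract/transfer pairs surface by surface is exactly how a paraxial raytrace through a compound lens proceeds.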

OP is the object plane, and IP is the image plane.

[Figure: Gaussian refraction at a single surface of radius R centred at C, showing object plane OP at height h, image plane IP at height h′, ray height y, slopes u and u′, angles i and i′, distances s and s′, and indices n and n′]


Aberrations

When rays are traced using exact trigonometric equations based on Snell’s law (p. 1), aberrations arise from the higher-order terms in the expansion of the sine function:

sin u = u − u³/3! + u⁵/5! − u⁷/7! + · · ·
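The effect of truncating the series can be seen numerically: for a ray well outside the paraxial region, the third-order term accounts for nearly all of the discrepancy between sin u and the Gaussian approximation u.

```python
import math

u = 0.5   # ray angle in radians, well outside the paraxial region
gaussian = u                     # first-order value used by Gaussian optics
third_order = u - u ** 3 / 6     # adds the term responsible for Seidel aberrations
print(round(math.sin(u), 6), round(gaussian, 6), round(third_order, 6))
```

At u = 0.5 rad, the Gaussian value overestimates sin u by roughly 4%, while the third-order expression is accurate to better than 0.1%; this residual is what the Seidel analysis quantifies.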

Primary (Seidel) aberrations arise from the third-order term:

• Spherical aberration (SA) is caused by variation of focus with ray height in the aperture (p. 26). A converging element has under-corrected SA, so rays come to a focus closer to the lens as the ray height increases.

• Coma is caused by variation of magnification with ray height in the aperture, so that rays from a given object point will be brought to a focus at different heights on the IP. The image of a point is spread into a non-symmetric shape that resembles a comet. Coma is absent at the OA but increases with radial distance outwards.

• Astigmatism arises because rays from an off-axis object point are presented with a tilted rather than symmetric aperture. A point is imaged as two small perpendicular lines. Astigmatism improves as the aperture is reduced.

• Petzval field curvature describes the fact that the IP corresponding to a flat OP is itself not naturally flat. Positive and negative elements introduce inward and outward curvature, respectively. Field curvature increases with radial distance outwards.

• Distortion describes the variation of magnification with radial height on the IP. Positive or pincushion distortion will pull out the corners of a rectangular image crop to a greater extent than the sides, whereas barrel distortion will push them inwards. Distortion does not introduce blur and is unaffected by aperture.

• Chromatic aberration and lateral color appear in polychromatic light due to variation of focus with wavelength λ and off-axis variation of magnification m with λ, respectively.

Aberrations affect image quality and can be measured asdeviations from the ideal Gaussian imaging properties.Modern compound lenses are well corrected for primaryaberrations.


Autofocus

Phase-detect autofocus (PDAF) systems used in digital SLR cameras have evolved from the 1985 Minolta Maxxum design. The reflex mirror has a zone of partial transmission, and the transmitted light is directed by a secondary mirror down to an autofocus (AF) module located at an optically equivalent SP (p. 14). The module contains microlenses that direct the light onto a CCD strip.

Consider light passing through the lens from a small region on the OP indicated by an AF point on the focusing screen. When the OP is in focus at the SP, the light distribution arriving from equivalent portions of two halves of the XP will produce an identical optical image and signal along each half of the CCD strip.

When the OP is not in sharp focus, these images shift either toward or away from each other; the shift indicates the direction and amount by which the AF motor needs to adjust the lens.

A horizontal CCD strip is suited for analyzing signals with scene detail present in the horizontal direction, such as the change of contrast provided by a vertical edge. A cross-type AF point utilizes both a horizontal and a vertical CCD strip so that scene detail in both directions can be utilized to achieve focus.
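The shift measurement can be sketched numerically: the AF module must find the displacement that best aligns the two strip signals. A simple cross-correlation over candidate shifts, shown below, is an illustrative approach only, not any manufacturer’s actual algorithm.

```python
def best_shift(a, b, max_shift):
    """Return the integer shift of b relative to a that best aligns the
    two signals, by maximising their cross-correlation over the overlap."""
    def score(shift):
        pairs = [(a[i], b[i + shift]) for i in range(len(a))
                 if 0 <= i + shift < len(b)]
        return sum(x * y for x, y in pairs)
    return max(range(-max_shift, max_shift + 1), key=score)

edge = [0, 0, 1, 5, 9, 9, 9, 0, 0, 0]   # signal from one half of the XP
shifted = [0, 0] + edge[:-2]            # same detail displaced by 2 photosites
print(best_shift(edge, shifted, 4))     # 2: sign gives the defocus direction
```

The sign of the recovered shift tells the AF motor which way to drive the lens, and its magnitude how far, which is why PDAF needs no focus hunting.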

[Figure: PDAF module layout showing the XP, the equivalent SP, the microlenses, and the CCD strip signals]


f-Number

When focus is set at infinity, s′ = f′. The ray slope is then

u′ = D / (2f′)

u′ can be substituted into the formula for the RA (p. 29). The refractive indices can be removed using n′/n = f′/f.

The illuminance E at the axial position on the SP becomes

E = (π/4) L T (1/N²),   where N = f/D

The f-number N depends on the front effective focal length f and the EP diameter D.

The f-number is the reciprocal of the RA when focus is setat infinity. It is usually marked on lens barrels using thesymbols f/N or 1:N, where N is the f-number.
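The illuminance formula can be exercised directly. A minimal sketch with hypothetical values for luminance and transmittance, confirming that one stop (a factor of √2 in N) halves E:

```python
import math

def sensor_illuminance(L, T, N):
    """E = (pi/4) * L * T / N**2 at the axial position on the SP.
    L: scene luminance (cd/m^2), T: lens transmittance, N: f-number."""
    return math.pi / 4.0 * L * T / N ** 2

# Hypothetical values: L = 4000 cd/m^2, T = 0.9. Stopping down one stop
# (N -> N * sqrt(2)) halves the sensor-plane illuminance:
E_f8 = sensor_illuminance(4000, 0.9, 8)
E_f11 = sensor_illuminance(4000, 0.9, 8 * math.sqrt(2))
print(round(E_f8 / E_f11, 6))   # 2.0
```

Because E varies as 1/N², the familiar f-number sequence 1, 1.4, 2, 2.8, … is simply successive multiplications by √2.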

Beyond Gaussian optics, the f-number can be written

N = n / (2 NA′∞),   where NA′∞ = n′ sin U′

NA′∞ is the image-space numerical aperture when s → ∞,

and U′ is the real image-space marginal ray angle. Provided the lens is aplanatic (free from SA and coma), then sin U′ = u′ when s → ∞, according to Abbe’s sine condition, so the Gaussian expression N = f/D is exact for an aplanatic lens. However, the sine function restricts the lowest achievable value in air (n = n′ = 1) to N = 0.5.

[Figure: f-number geometry: EP and XP with principal planes H and H′, EP diameter D (half-height D/2), focal lengths f and f′, ray slope u′ toward an area element dA on the SP, and indices n and n′]


Color Filter Arrays

The HVS is sensitive to wavelengths between 380 and 780 nm. The eye contains three types of cone cell with photon absorption properties described by a set of eye cone response functions l̄(λ), m̄(λ), and s̄(λ). These different responses lead to the visual sensation of color (p. 53).

A larger number of green filters is used since the HVS is more sensitive to λ in the green region of the visible spectrum.

A camera requires an analogous set of response functions to detect color. In consumer cameras, a color filter array (CFA) is fitted above the imaging sensor.

The Bayer CFA uses a 2 × 2 block pattern of red, green, and blue filters that form three types of mosaic. The filters have different spectral transmission properties described by a set of spectral transmission functions TCFA,i(λ), where i is the mosaic label. The overall camera response is determined largely by the product of TCFA,i(λ) and the charge collection efficiency η(λ) (pp. 37, 40).

The Fuji® X-Trans® CFA uses a 6 × 6 block pattern. It requires greater processing power and is more expensive but can give improved image quality.

• An infrared-blocking filter is combined with the CFA to limit the response outside of the visible spectrum.

• The spectral passband of the camera describes therange of wavelengths over which it responds.

• In order to record color correctly, a linear transformation should in principle exist between the camera response and eye cone response functions.

• After raw data capture, only the digital value of one color component is known at each photosite. Color demosaicing estimates the missing values so that all color components are known at every photosite.
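The demosaicing step in the final bullet can be sketched with the simplest possible estimator, bilinear interpolation; this is assumed here purely for illustration, and real demosaicing algorithms are considerably more sophisticated.

```python
# A tiny Bayer mosaic (invented values), laid out R G R G / G B G B / ...
raw = [
    [10, 60, 12, 62],   # R G R G
    [58,  5, 59,  6],   # G B G B
    [11, 61, 13, 63],   # R G R G
    [57,  4, 60,  7],   # G B G B
]

def green_at(raw, r, c):
    """Bilinear estimate of the missing green value at a red or blue
    photosite (interior sites only): average the four green neighbours."""
    return (raw[r - 1][c] + raw[r + 1][c] + raw[r][c - 1] + raw[r][c + 1]) / 4

print(green_at(raw, 2, 2))   # green estimate at the red photosite (2, 2)
```

Each photosite gets its two missing components estimated this way, which is why fine detail near the sensor’s resolution limit can produce demosaicing artifacts.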

[Figure: Bayer and Fuji® X-Trans® CFA patterns]


Color Theory

Color vision is a perceived physiological sensation in response to electromagnetic waves with wavelengths in the visible region, which ranges from approximately 380 to 780 nm.

Color can be described by its luminance and chromaticity. Chromaticity can be subdivided into hue and saturation components. Pure spectrum colors (rainbow colors) are fully saturated. Their hues can be divided into six main regions. Each region contains many hues, and the transitions between regions are smooth.

Colors that are not pure spectrum colors have been diluted with white light and are not fully saturated. Pink is obtained by mixing a red hue with white. Greyscale or achromatic colors are fully desaturated.

Light is generally polychromatic, i.e., a mixture of various wavelengths described by its spectral power distribution (SPD) P(λ). This can be any function of power at each λ, for example, spectral radiance (p. 39). Metamers are different SPDs that appear to be the same color under the same viewing conditions. The principle of metamerism is fundamental to color theory.

[Figure: visible spectrum with hue regions from 400 to 750 nm]


CIE RGB Color Space

In 1931, before the eye cone response functions were known, the CIE (Commission internationale de l’éclairage) analyzed results from color-matching experiments that used a set of three real monochromatic primaries:

λR = 700 nm;   λG = 546.1 nm;   λB = 435.8 nm

Using Grassmann’s laws, human observers were asked to visually match the color of a monochromatic light source at each wavelength λ by mixing amounts of the primaries. From the results, the CIE defined a set of color-matching functions (CMFs) denoted r̄(λ), ḡ(λ), b̄(λ).

Since the primaries are necessarily real, negative values occur. This physically corresponds to matches that required a primary to be mixed with the source light instead of the other primaries. The CMFs are normalized such that the area under each curve is the same; adding a unit of each primary matches the color of a unit of a reference white that was chosen to be CIE illuminant E. This hypothetical illuminant has constant power at all wavelengths.

Adding equal photometric or radiometric amounts of theprimaries does not yield the reference white; the actualluminance and radiance ratios between a unit of eachprimary are 1 : 4.5907 : 0.0601 and 72.0966 : 1.3791 : 1.

CIE RGB tristimulus values can be defined, and each R, G, B triple defines a color in the CIE RGB color space. This is again a reference color space. The CMFs and l̄(λ), m̄(λ), s̄(λ) are linearly related (p. 53). A different set of primaries would give a different set of CMFs obtainable via a linear transformation.
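Tristimulus values are obtained by integrating an SPD against the CMFs. The sketch below uses a coarse discrete sum with invented sample values (not CIE data) purely to show the mechanics, including the effect of a negative CMF lobe.

```python
# Invented sample values for illustration only (not CIE data):
wavelengths = [450, 550, 650]        # nm, very coarse sampling
P = [0.8, 1.0, 0.9]                  # spectral power distribution P(lambda)
rbar = [-0.05, 0.25, 0.30]           # sampled CMF, including a negative lobe

d_lambda = 100                       # nm per sample
# R tristimulus value as a Riemann sum of P(lambda) * rbar(lambda):
R = sum(p * r for p, r in zip(P, rbar)) * d_lambda
print(round(R, 6))                   # 48.0
```

The G and B values follow identically with ḡ(λ) and b̄(λ); any two metameric SPDs give the same three sums, which is exactly the principle of metamerism stated above.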

[Figure: CIE RGB color-matching functions r̄(λ), ḡ(λ), b̄(λ) from 380 to 780 nm; values range from −0.1 to 0.4]


White Balance: Matrix Algebra

The raw to sRGB transformation (p. 61) for scene illumination with a D65 white point can be rewritten:

(R_L, G_L, B_L)ᵀ_D65 = M_R D_D65 (R, G, B)ᵀ_D65,   where M_R D_D65 = M⁻¹_sRGB C

M_R is a color rotation matrix (each row sums to 1):

M_R = M⁻¹_sRGB C D⁻¹_D65

D_D65 is a diagonal white balance matrix of raw WB multipliers for D65 scene illumination:

D_D65 = diag(1/R_WP, 1/G_WP, 1/B_WP)_D65

1/G_WP = 1 provided C has been appropriately normalized (p. 59). Typically, 1/R_WP > 1 and 1/B_WP > 1.

For scene illumination with a different white point, D_D65 can be replaced by a matrix suitable for the adopted white (AW):

(R_L, G_L, B_L)ᵀ_D65 = M_R D_AW (R, G, B)ᵀ_AW,   where D_AW = diag(1/R_WP, 1/G_WP, 1/B_WP)_AW

Better accuracy can be achieved using M_R derived from a characterization performed with illuminant CCT closely matching that of the AW. Camera manufacturers typically use a set of rotation matrices, each optimized for use with an associated WB preset. Two presets from the Olympus® E-M1 raw metadata are tabulated below (divide by 256).

CCT      Scene      Multipliers      M_R (÷ 256)
3000 K   Tungsten   296, 256, 760    [ 324   −40   −28 ]
                                     [ −68   308    16 ]
                                     [  16  −248   488 ]
6000 K   Cloudy     544, 256, 396    [ 380  −104   −20 ]
                                     [ −40   348   −52 ]
                                     [  10  −128   374 ]
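The pipeline can be checked numerically. The sketch below is a minimal illustration, not the camera’s actual processing chain: it applies the tungsten preset values above to a raw triple lying exactly at the white point, and because every row of M_R sums to 256 (÷ 256 = 1), the result stays neutral.

```python
# Olympus E-M1 tungsten preset from the table above (entries in 1/256 units):
multipliers = [296 / 256, 256 / 256, 760 / 256]   # diagonal of the D matrix
M_R = [[324, -40, -28],
       [-68, 308, 16],
       [16, -248, 488]]

# A raw triple lying exactly at the scene white point:
raw_wp = [256 / 296, 1.0, 256 / 760]

balanced = [m * v for m, v in zip(multipliers, raw_wp)]           # ~[1, 1, 1]
linear = [sum(row[j] * balanced[j] for j in range(3)) / 256 for row in M_R]
print([round(x, 6) for x in linear])   # [1.0, 1.0, 1.0]: white stays neutral
```

The row-sum property is what makes M_R a pure "rotation" of color: it redistributes the balanced channels without shifting the neutral axis.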


Image Display Resolution

The pixel count of a digital image is the total number of pixels. It is often expressed as n = nh × nv, where nh and nv are the horizontal and vertical pixel counts. The image display resolution is the number of displayed image pixels per unit distance, e.g., pixels per inch (ppi). It is a property of the display device/medium:

1) For a monitor/screen, the image display resolution is defined by the screen resolution, which typically has a value such as 72 ppi, 96 ppi, etc.

2) For a hard-copy print, the image display resolution can be chosen by the user; 300 ppi is considered enough for high-quality prints viewed at Dv.

Note that image display resolution is independent of the printer resolution, which is the number of ink dots per unit distance used by a printer, e.g., dots per inch (dpi). For a given printer technology, a larger dpi can yield a better-quality print. The image display size is defined as

image display size = pixel count / image display resolution

The print size is the image display size for a print.
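The defining relation can be applied directly. This minimal sketch uses a hypothetical image 6000 pixels wide:

```python
def display_size_inches(pixel_count, ppi):
    """image display size = pixel count / image display resolution"""
    return pixel_count / ppi

# A hypothetical image 6000 pixels wide:
print(display_size_inches(6000, 300))  # 20.0 inches wide as a 300-ppi print
print(display_size_inches(6000, 72))   # ~83.3 inches wide on a 72-ppi screen
```

The same pixel count thus yields very different physical sizes; only the display resolution changes between the two cases.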

Example: The image display size for a digital image viewed on a 72-ppi monitor will be 300/72 ≈ 4.17 times larger than that of a 300-ppi print of the same image.

Images cannot be “saved” at a specific ppi since image display resolution is a property of the display, but a tag can be added to the image metadata indicating a desired ppi for the printer software. This tag does not affect the image pixel count, and the value can be overridden.

In order to match a desired image display size for a given image display resolution (or vice versa), the pixel count needs to be changed through image resampling. Printer software will automatically perform the required resampling when printing an image. Notably, this does not change the pixel count of the stored image file. Image editing software can also be used to resample an image. In contrast to printing, the resampled image needs to be resaved, which will alter its pixel count.


Exposure Value

The reflected and incident-light metering equations (pp. 75, 82) can be rewritten as the APEX equation (additive system of photographic exposure), designed to simplify manual calculations:

Ev = Av + Tv = Bv + Sv = Iv + Sv

• Ev is the exposure value (not to be confused with the photometric illuminance Ev).
• Av = log₂ N² is the aperture value.
• Tv = −log₂ t is the time value.
• Bv = log₂(⟨L⟩/(0.3K)) is the brightness value.
• Iv = log₂(E/(0.3C)) is the incident-light value.
• Sv = log₂(S/3.125) is the speed value.

These are associated with specific N, t, ⟨L⟩, S. For example,

N    0.5   0.7   1    1.4   2     2.8   4      5.6    etc.
Av   −2    −1    0    1     2     3     4      5

t    4     2     1    1/2   1/4   1/8   1/16   1/32   etc.
Tv   −2    −1    0    1     2     3     4      5

Bv and Iv depend upon the choice of reflected-light and incident-light meter calibration constants K and C. Suitable exposure settings for a typical scene are provided by any combination of N, t, S that yields the recommended Ev. A difference of 1 Ev defines a photographic stop (pp. 34, 45). An f-stop specifically relates to a change of Av.
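The additive bookkeeping can be verified numerically. A minimal sketch of the Av and Tv definitions (symbols as defined above); marked f-numbers such as 5.6 are rounded, so equivalent combinations agree only to within that rounding.

```python
import math

def Av(N):              # aperture value
    return math.log2(N ** 2)

def Tv(t):              # time value
    return -math.log2(t)

def Sv(S):              # speed value
    return math.log2(S / 3.125)

# f/8 at 1/125 s and f/5.6 at 1/250 s trade one stop of Av against one of Tv,
# so they give (almost) the same Ev:
Ev1 = Av(8) + Tv(1 / 125)
Ev2 = Av(5.6) + Tv(1 / 250)
print(round(Ev1, 2), round(Ev2, 2))
```

Working in logarithms is the entire point of APEX: equivalent exposures become sums that can be rebalanced term by term in whole stops.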

Exposure compensation (EC, p. 81) can be included in the APEX equation by modifying the brightness value:

Bv → Bv − EC

Positive EC compensates when Bv is too high by reducing Ev and therefore increasing ⟨H⟩. Negative EC compensates when Bv is too low by raising Ev and therefore decreasing ⟨H⟩.

The APEX equation is valid provided the metering is based on average photometry. It is not valid when using in-camera matrix/pattern metering modes (p. 82).


High Dynamic Range: Example

Due to back lighting, negative EC was required to preserve the highlights in the sun. This rendered the shadows much too dark. The shutter speed was t = 1/2500 s.

A frame taken at +2 Ev using a shutter speed t = 1/640 s reveals more shadow detail but clips the highlights.

A frame taken at +4 Ev using a shutter speed t = 1/160 s completely clips the highlights in order to reveal the full shadow detail.

Merging the three raw files into a linear HDR image and then applying a local TMO yields an output image with both highlight and shadow detail visible.


Polarizing Filters: Practice

The utility of the polarizing filter is that the ratio between unpolarized light and any partially or completely plane polarized light entering the lens can be altered. Dielectric surfaces from which partially or completely plane polarized light emerges include glass, wood, leaves, paint, and water. Rotating the filter plane of transmission to eliminate plane polarized light emerging from glass or water can eliminate the observed reflections. Eliminating reflections from leaves can reveal their true color.

Light arriving from clouds will in general be unpolarized due to repeated diffuse reflections (p. 99) and will be unaffected by a polarizing filter. Light arriving from a blue sky will in general be partially plane polarized due to Rayleigh scattering from air particles (pp. 99, 101). Therefore, rotating the filter to reduce the partially plane polarized light will darken the blue sky relative to any sources of unpolarized light. The darkening effect is most noticeable along the vertical strip of sky that forms a right angle with the sun and the observer, since maximum polarization occurs at a 90° scattering angle from the sun (or other light source).

• A plane or linear polarizing filter should not be used with digital SLR cameras, as it will interfere with the autofocus and metering systems. These utilize a beamsplitter that functions using polarization.

• A circular polarizing filter (CPL) can be used with digital SLR cameras in place of a linear polarizing filter. A CPL functions as an ordinary linear polarizer but modifies the transmitted (plane polarized) light so that E rotates as a function of time and traces out a helical path. This prevents the transmitted light from entering the beamsplitter in a plane polarized state.

A polarizing filter should be removed from the lens in low-light conditions since an ideal polarizing filter only transmits 50% of all unpolarized light, equivalent to use of a 1-stop neutral density filter (p. 88).


Side Lighting: Example

Luoping, Yunnan, China

Side lighting reveals the depth and 3D form.

Llyn Gwynant, Snowdonia, Wales

Side lighting reveals the surface texture.


Sync Speed

The shutter speed t is typically set much slower than the flash duration itself when using flash. Shutter speeds quicker than the sync speed should not be used; otherwise, image artifacts can arise due to conflict between the flash and the shutter method of operation.

Mechanical focal plane shutter: Since very quick t are obtained only when the second shutter curtain starts to close before the first curtain fully opens (p. 35), the sync speed is the quickest t at which the first curtain can fully open before the second curtain needs to start closing, on the order of 1/250 s. This ensures that the flash can be fired when both curtains are fully open. Quicker t would shield part of the scene from the flash and cause a dark band to appear in the image.

Mechanical leaf shutter: Since this is positioned next to the lens aperture, the sync speed is limited only by the quickest possible t, typically on the order of 1/2000 s.

Electronic global shutter and CCD sensor: The sync speed is limited only by the quickest electronic shutter speed available or by the flash duration.

Electronic rolling shutter and CMOS sensor: Although quicker shutter speeds t are available compared to a mechanical shutter, exposing all sensor rows to the flash requires limiting the sync speed to the total frame readout time due to the rolling nature of the shutter (p. 35). This is typically too slow to be useful.

Electronic global shutter and CMOS sensor: Available on scientific cameras; the sync speed is limited only by the quickest electronic t or by the flash duration.

If a t faster than the sync speed is required with a focal plane shutter, e.g., when a low N is needed for shallow DoF, the high-speed sync (HSS) mode fires the flash continuously at low power throughout the entire duration t in order to eliminate shielding artifacts. In this case, the effective flash duration and t are the same. Nevertheless, “high speed” refers to the shutter speed t and not the effective flash duration; high-speed photography requires a single flash of very short duration, such as 1/32000 s.


Camera System MTF

The camera system MTF is the product of all the individual component MTFs (p. 110):

|H(μx, μy)| = |H₁(μx, μy)| |H₂(μx, μy)| · · · |Hn(μx, μy)|

Although a huge variety of component MTFs contribute to the camera system MTF, a basic system can be defined using the diffraction, detector aperture, and optical low-pass filter (OLPF) components.

The camera system MTF depends greatly upon the diffraction MTF since this varies with lens f-number N.

Example (see p. 115): dx = 3.8 μm, px = 4.0 μm; μc,det = 263 cycles/mm, μx,Nyq = 125 cycles/mm.

• N = 22: Here the diffraction MTF dominates. The OLPF is not required at this f-number (also see p. 118).

• N = 1.4: Here, the OLPF MTF dominates. The detector aperture MTF would dominate without the OLPF, in which case aliasing would occur (see pp. 117, 118).
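The product rule can be applied numerically. The diffraction MTF for a circular pupil and the sinc-shaped detector aperture MTF below are standard textbook forms assumed here (they are not given on this page), and λ = 550 nm is an assumed wavelength; the OLPF factor, whose form depends on the filter design, would simply multiply in as a further term.

```python
import math

def mtf_diffraction(mu, wavelength_mm, N):
    """Diffraction-limited MTF for a circular pupil (assumed standard form).
    Cutoff frequency: mu_c = 1 / (wavelength * N)."""
    mu_c = 1.0 / (wavelength_mm * N)
    if mu >= mu_c:
        return 0.0
    x = mu / mu_c
    return (2.0 / math.pi) * (math.acos(x) - x * math.sqrt(1.0 - x * x))

def mtf_detector(mu, d_x_mm):
    """Detector aperture MTF: |sinc(d_x * mu)| for a square aperture (assumed)."""
    arg = math.pi * d_x_mm * mu
    return 1.0 if arg == 0.0 else abs(math.sin(arg) / arg)

# Evaluate the product at the Nyquist frequency of the example above
# (d_x = 3.8 um = 0.0038 mm; lambda = 550 nm = 0.00055 mm assumed):
mu = 125.0   # cycles/mm
for N in (1.4, 22):
    system = mtf_diffraction(mu, 0.00055, N) * mtf_detector(mu, 0.0038)
    print(N, round(system, 3))
```

At N = 22 the product is zero at Nyquist because the diffraction cutoff, 1/(0.00055 × 22) ≈ 83 cycles/mm, lies below it, matching the bullet above: diffraction dominates and no OLPF is needed.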

[Figure: component MTFs (detector aperture, OLPF, diffraction) and the resulting system MTF vs. spatial frequency in cycles/mm, plotted for N = 22 and N = 1.4]
