Project Documentation SPEC-0009
Rev G
DKIST System Error Budgets
Robert Hubbard Systems Engineering
David Harrington
DKIST Science – Polarimetry
January 2017
DKIST System Error Budgets
SPEC-0009 Rev G Page i
Revision Summary:
1. Date: May 2003 Revision: A Changes: Initial release in support of enclosure trade study
2. Date: December 2003 Revision: Revision B Changes:
Removed inappropriate wavelength scaling in wind buffeting error budget
The budget for residual figure errors in the coronal seeing-limited error budget was revised downward based on a spatial-frequency-band analysis. The leftover budget was reassigned, for now, to static optical alignment.
Added Appendix B, and moved some material from body of document to that appendix.
The cell pairs in the error budget spreadsheets are no longer top-down and bottom-up, but are now snapshot and Monte Carlo. The body of the document has been modified to describe these changes.
3. Date: October 2005 Revision: Revision C / Initial Approved Release Changes:
Removed some of the old “boilerplate” material carried over from the Gemini Error Budget Plan document, including the previous sections entitled “Comparison of Bottom Up and Top Down Budgets” and “Resolution of Discrepancies.”
Section 5.2 on Performance Prediction Implementation was added (formerly just a placeholder).
The September 2005 coudé baseline telescope design is used in analysis for the seeing limited and diffraction limited error budgets. The Nasmyth baseline is used for the seeing-limited coronal error budget.
Case I-b was added to the diffraction-limited error budget for “good” seeing.
A good deal of bottom-up material was added based on recent work on various subsystem designs and analyses of performance. Included among these are the latest FEA-based wind-shake estimates and the air-curtain analysis.
Wavefront errors due to the base optical design have been added into the “diffraction” cell.
4. Date: October 2006 Revision: Revision D-1 Changes:
Simplified and clarified long exposure discussion in both 6.2.2 and 6.3.2.
Incorporated the latest M1 bottom-up work by E. Hansen.
Changed “coudé path” to “beam path” since Case 3 is at Nasmyth, not coudé. Then changed the error values to correspond with lab turbulence measurements and the air-curtain experiment.
The “Diffraction” entry has been changed to “Diffraction and Optical Design” so that it can include the wavefront residuals present even if the optics are fabricated perfectly. Appropriate values were entered in all error budgets based on ZEMAX analysis of the optimized design.
Bottom-up wind-buffeting analysis has been included in the two seeing-limited error budgets.
Bottom-up analysis for Quasi-static optical alignment has been included.
Mirror-seeing allocations are made and justified, including some bottom-up for both Case 2 and Case 3.
The Case 3 error budget was rebalanced after including all available bottom-up analysis. This allowed additional margin to be added to both enclosure seeing entries, the wind shake entry, and the instrument optics entry.
Modified the quoted science requirement to reflect median seeing at 7 cm (rather than 10) based on change order ECR-0002_DIQ1.doc.
The Case 1 error budgets were expanded to include a breakdown of the adaptive optics errors based on TN-0073, and all AO related bottom-up values were adjusted based on this new information.
The Instrument sub-allocations (never more than placeholders) were removed.
5. Date: October 2006 Revision: Revision D-2 Changes:
Returned the constant k in the atmospheric residuals calculation back to 0.26 per changes to TN-0073.
Added a cell for Thermal Control Jitter to all spreadsheets.
6. Date: July 2007 Revision: Revision E-1 Changes: (prior to July 2007 System design review of TEOA and Optical Design)
Changed the Case 3 budget for Transfer Optics now that the NIRSP is in the coudé lab. The budget was doubled, though no rebalance was necessary because it did not change the top-level 0.700 EE value.
Increased the Case 3 budget for Dynamic Optical Alignment from 0.020 to 0.050 based on bottom up analysis of the TEOA.
Decreased the Case 3 budget for M2 Static errors from 0.210 to 0.207 based on additional bottom-up analysis.
Decreased the Case 3 budget for Wind Shake from 0.350 to 0.300 and created explicit reserve to balance the budget.
Increased the Case 3 Beam Path Seeing to 0.045 because of the increased path length now that the NIRSP has been moved to coudé.
Modified the Case 3 and Case 2 Enclosure seeing values to be consistent and traceable to RPT-0004, and to beam path seeing assumptions (for interior seeing).
7. Date: November 2007 Revision: Revision E-2 Changes:
Changed the Case 2 budget for M2 Static errors from 0.020 to 0.030, rebalancing from reserve (now at zero).
Changed the Case 2 budget for Transfer Optics Static errors from 0.020 to 0.037, rebalanced by reducing Wind Buffeting from 0.030 to 0.020 and Heat Stop Seeing from 0.030 to 0.020.
8. Date: October 2008 Revision: Revision E-3 Changes:
Changed the Case 3 top-down drive-jitter value and its comments, increasing it from 0.125 to 0.140 arcsec.
9. Date: April 2009 Revision: Revision F-1 Changes:
Rolled four residuals into one “CIBOLA” value for all cases.
11. Date: April 2010 Revision: Revision G-1 Changes:
Document new sub-allocations in M1 Static Error budget for Case II per E. Hansen.
Eliminated “null” (blank) discussions in the bottom-up section of Case II, if only to acknowledge that top-down values were still in play.
Fixed a unit error in the bottom-up discussion of quasi-static alignment residuals. Upton noticed that I said nm when I meant arcsec EE in two places.
Modified error tree to divide the feed optics static errors into telescope feed optics errors and coudé room feed optics errors to give the instrument team their own allocation to apportion as they see fit.
The explanations relating to enclosure interior seeing have been cleaned up, supplemented, and a reference to the Racine paper has been added.
12. Date: January 2011
Revision: Revision G-2
Break Active Optics static errors (2.1.6) into contributions from the wave front correction subsystem (2.1.6.1) and from the M1 subsystem (2.1.6.2). This impacts both Case 1 and Case 2. The detailed differences are spelled out in Appendix A.
13. Date: Jun 2012
Revision: Revision G-3
Add new error tree for instrumentation.
Document the creation of error budgets for each instrument.
14. Date: Dec 2013
Revision: Revision G-4
Updated Polarization Error Budget table per David Elmore.
15. Date: Oct 2014
Revision: Revision G-5
Significant changes to Section 8, the polarimetry error budget, by David Elmore.
It includes both changes to the verbiage and a rebalancing of the error allocations by component.
Changed the instrument error tree to reflect lessons learned during work with the ViSP team. The new tree identifies a static error due to residual design errors in both the telescope and instrument, and a separate entry for instrument tolerance errors, including both misalignment and manufacturing errors.
16. Date: April 2016 Revision: Revision G-6
Significant changes to Section 8 on polarimetry errors, by David Harrington.
17. Date: December 2016
Revision: Revision G-7
Added two new terms to the error tree for the occulter used in the coronal error budget and for the Gregorian field stop.
18. Date: January 2017 Revision: Revision G
Slight changes to polarization sections.
See CR-0699.
Table of Contents
1. Introduction ....................................................................................... 1
2. Top-Down Error Budgeting .............................................................. 1
3. Bottom-Up Error Budgeting ............................................................. 2
4. Requirements Flow Down ................................................................. 3
5. Implementation .................................................................................. 4
5.1 Snapshot Budget Implementation .............................................................. 4
5.2 Performance Prediction Implementation ................................................... 6
6. Delivered Image Quality Error Budgets ........................................... 7
6.1 General Error Budgets ............................................................................... 7
6.2 Diffraction Limited Error Budgets ............................................................ 8
6.3 Instrument-Specific Error Budgets ............................................................ 8
7. Delivered Image Quality Error Budgets ......................................... 10
7.1 Science Case 1, Diffraction Limited Delivered Image Quality ............... 10
7.2 Science Case 2, Seeing-limited Delivered Image Quality ....................... 10
7.3 Science Case 3, Seeing-limited Coronal Delivered Image Quality ......... 10
8. Polarimetric Error List .................................................................... 12
8.1 Static optical polarimetric error terms ..................................................... 13
8.1.1 Temporal stability of coatings and mirror polarization properties ....... 14
8.1.2 Depolarization caused by converging beam on retarders and mirrors .. 14
8.1.3 Non-uniformity sensitivity caused by converging beam footprint variations ... 14
8.2 Calibration optics at GOS ........................................................................ 14
8.2.1 Calibration optic orientation uncertainties ............................................ 14
8.2.2 Calibration polarizer contrast ratio ....................................................... 14
8.2.3 Calibration retarder uniformity ............................................................. 14
8.2.4 Calibration retarder temperature rise & gradients ................................ 15
8.2.5 Calibration retarder stress birefringence ............................................... 15
8.2.6 Calibration retarder and polarizer design: angle of incidence effects ... 15
8.2.7 Calibration retarder interference fringe suppression ............................ 15
8.2.8 Calibration retarder beam deflection and displacement ....................... 15
8.2.9 Calibration Optic Cleanliness ............................................................... 16
8.3 System models & calibration method errors ........................................... 16
8.3.1 Group model for the telescope feed optics ........................................... 16
8.3.2 Decompositions of group Mueller matrices for the optics ................... 16
8.3.3 Calibration of M1 & M2 via techniques under development ............... 16
8.3.4 Dichroic beam splitter multi-layer coatings (FIDO) ............................. 17
8.3.5 Birefringence & depolarization (static or induced) of upstream optics ... 17
8.3.6 Birefringence & depolarization (static or induced) of downstream optics ... 17
8.4 Polarization Modulator performance ....................................................... 17
8.4.1 Modulation retarder uniformity ............................................................ 17
8.4.2 Modulator interference fringe suppression ........................................... 17
8.4.3 Efficiency reduction from instrument feed optics ................................ 17
8.4.4 Cleanliness of modulator optics ........................................................... 18
8.4.5 Wedge & Beam deflection impacts ...................................................... 18
8.4.6 Modulator temperature stability ........................................................... 18
8.5 Instrumentation & sensor polarization error terms .................................. 18
8.5.1 Detector non-uniformity ....................................................................... 18
8.5.2 Detector linearity ................................................................................. 18
8.5.3 Detector substrate fringes .................................................................... 18
8.5.4 Detector electronic noise ...................................................................... 18
8.5.5 Detector cosmetics ............................................................................... 19
8.5.6 "Slow" dynamic optical issues (scanning mirrors) ............................... 19
8.5.7 Triggering jitter and modulator synchronization .................................. 19
8.5.8 Internal opto-mechanical stability ........................................................ 19
8.5.9 Data extraction stability ....................................................................... 19
8.6 b-Field derivation error terms .................................................................. 19
8.6.1 Inversion technique .............................................................................. 19
8.6.2 Intra- and Inter-instrument registration ................................................ 20
8.6.3 Post-processing technique .................................................................... 20
8.7 Dynamic polarization error terms ............................................................ 20
8.7.1 Averaging over dynamic phenomena (impact on sampling/resolution) ... 21
8.7.2 Changed Modulation speed Modifying Seeing / Jitter requirements ... 21
8.7.3 Coating degradation over long timescales ........................................... 21
8.7.4 Optical uniformity for Active Systems ................................................ 21
8.7.5 Interpolation between calibrations (time, wavelength, space, etc.) ..... 21
8.7.6 Coudé angle (slit orientation projected during observations) .............. 21
8.7.7 Coudé flexure from rotation angle ....................................................... 21
8.8 Table of Mueller matrix errors ................................................................. 22
8.9 Polarization Error List Summary ............................................................. 22
9. Definition of terms........................................................................... 23
10. References ....................................................................................... 24
Appendix A – Delivered Image Quality Error Tree Elements ................ 26
1. INTRODUCTION
This document describes the DKIST error-budget plan. It will also be used to document and track the
actual error budgets as they are developed and maintained. Hence, it is a living document that will be
updated as the DKIST project moves through its design-and-development, construction, and integration
phases. Error budgets are an indispensable tool for assuring that project requirements can be and are
being met. They represent a simplified allocation and rough performance estimating system, though not a
replacement for more detailed systems performance modeling.
There is a close analogy between error budgeting and financial budgeting. A financial budget starts from a single total value, the money available to everyone involved in a given project, and divides that total among the various departments. The analogous process applied to error budgeting takes a specific maximum allowed deviation from ideal performance called out in a requirements document, and apportions that “error” among all the components and processes that have the potential to contribute to it.
For polarimetry, detailed error budgeting tools do not yet exist for several major issues. Without these
tools, we can still make complete lists of all known issues. With the list, we can ensure that we minimize
all known error terms and design a system to best practices.
Error budgeting is fundamentally a systems-level issue. A given error budget will typically be distributed
across many disparate subsystems that are being designed by different engineers, and fabricated by
different vendors. It is, however, a useful tool at all levels of design since it represents a means to
negotiate design trades in the broadest possible context. Error budgeting is in many ways central to the
mission of systems engineering.
2. TOP-DOWN ERROR BUDGETING
A top-down financial budget process begins with management’s preliminary division of a fixed amount of
money (the total budget) among the various departments. This is initially performed with little or no
input from the departments, hence the “top-down” designation. Top-down error budgets begin from one fixed value. For DKIST, the single number upon which a given error budget is based comes from the DKIST Science Requirements Document (SRD). One example of this is the seeing-limited requirement in the SRD: the DKIST should not degrade the seeing profile by more than 10 percent when the adaptive-optics system is disabled. This budget is developed in detail below, but is used here to illustrate the concepts. The requirement, as stated, will yield a single number for given
seeing conditions – expressed here in terms of encircled energy – that becomes the basis for a family of
error budgets derived for different wavelengths, surface wind speeds, zenith distances, and other
observing parameters. It is sometimes obvious which member of the family will be the most challenging
case, and thus drive the telescope design. This is not always true, however, since a moderate wind can be
good for mirror and enclosure seeing, but bad for wind buffeting and wind shake. We often investigate
several family members in some detail to identify the “worst case.”
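As a back-of-the-envelope sketch of how such a requirement turns into a budget number (all values below are hypothetical, not actual SRD figures), the telescope's total allowed contribution can be derived from the atmospheric term, assuming the two combine in quadrature:

```python
import math

def telescope_allocation(atmosphere_ee, degradation_fraction=0.10):
    """Given the atmospheric 50% encircled-energy diameter (arcsec) and the
    allowed fractional degradation, return the total error the telescope may
    add, assuming contributions combine in quadrature (RSS)."""
    delivered_max = atmosphere_ee * (1.0 + degradation_fraction)
    return math.sqrt(delivered_max**2 - atmosphere_ee**2)

# Hypothetical seeing value, not an SRD number:
alloc = telescope_allocation(0.70)
```

For a hypothetical atmospheric value of 0.70 arcsec, a 10 percent degradation allowance caps the delivered value at 0.77 arcsec, leaving roughly 0.32 arcsec for the telescope to apportion among its error sources.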
With a specific requirement in hand, the next step in the top-down error budgeting process is to make an
error tree. This begins with a list of all system elements and external conditions that have the potential to
degrade the requirement under study. For the seeing-limited example this would include items like
mirror-polishing residuals, internal and external enclosure seeing, and wind buffeting, to name just a few.
These are then organized into a hierarchical outline with several levels of indent. The levels of indent
represent the branches and sub-branches of the error tree. It may make sense to organize an image-quality
budget into errors resulting from the earth’s atmosphere, from the telescope, and from the focal-plane
instrument. The telescope errors can be further divided into static and dynamic errors, and so forth.
There is a danger inherent in the budgeting process that relates to finding an optimum level of detail for
the error tree. While it may seem desirable to keep branching until we have isolated individual telescope
components, such a budget tree can become extremely large and complex. A thorough top-down
approach taken to this extreme will usually lead to completely unrealistic specifications on individual
components. At some point, when the tree reasonably represents the detailed areas of concern, the top-
down organization is declared finished, and work proceeds from the bottom up. As a goal, the top-down
budget is a single page that can be conveniently presented and easily explained at systems meetings and
project reviews.
The final step in the top-down phase is to apportion the total error among the lowest-level categories
called out in the error tree. This step is performed by systems engineering, and is often based on
information available from similar projects. If no such guidance can be obtained, one might anchor the
initial top-down budget values with data from relevant theory, experiments, or available model data. In
short, the initial top-down allocations are starting values based on the best information available at the
time, and are refined later during the bottom up phase of the budgeting process. Extra margin, if
available early in the top-down allocation process, is distributed among the items with the greatest
uncertainty and largest potential impact on the total error. A good example of this might be enclosure
seeing, since there is very little daytime data available but plenty of anecdotal evidence that its impact
may be significant.
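The apportionment step can be sketched in a few lines; the categories and values below are invented for illustration, and the quadrature (RSS) combination assumed here applies only to statistical error terms:

```python
import math

# Hypothetical top-down allocations (arcsec, 50% encircled energy); the real
# DKIST values live in the SPEC-0009 spreadsheets, not here.
allocations = {
    "atmosphere residuals": 0.40,
    "enclosure seeing": 0.30,
    "mirror seeing": 0.20,
    "wind buffeting": 0.25,
    "optics static errors": 0.30,
}

def rss(values):
    """Combine statistical error terms in quadrature (root sum of squares)."""
    return math.sqrt(sum(v * v for v in values))

total = rss(allocations.values())
margin_ok = total <= 0.70  # balanced if the rollup stays under the top-level value
```

With these invented numbers the rollup is about 0.665 arcsec, leaving a small amount of margin that could be redistributed to the most uncertain entries.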
3. BOTTOM-UP ERROR BUDGETING
The initial top-down error budget establishes one example of a “balanced” budget. As design work
proceeds, better information about the expected performance of system elements will be available, and
this needs to be incorporated in the error budget subject to the constraint that the budget must remain
balanced. Once again there is a strong analogy between bottom-up financial planning, and bottom-up
error budgeting. When a financial budget is under development, department heads are asked to estimate
their costs based on detailed knowledge of the scope of their assigned tasks. These values are then
compared to the preliminary top-level budget allocations estimated by management and adjustments are
made until the sums agree. The departments will often continue to refine their budget requests as they
manage their assigned tasks.
The bottom-up error budgeting process is performed much the same way as its monetary analog. The
process begins by approaching the individual engineer responsible for a given subsystem and asking for
an independent estimate of the likely tolerance of that component. The initial estimates may be specific
to DKIST or may begin with values inferred from previous experience. We continue to refine these
values throughout design and development and even during the construction phase as better information
becomes available.
The bottom-up error budget is fundamentally driven by the need to relate achievable manufacturing
tolerances back to the total allowed error specified in the SRD. While the individual engineers provide
the input, systems engineering has the responsibility of combining the results with those derived for all
other system components. Systems engineering must make judgments as to how these component-level
manufacturing tolerances will affect or relate to other subsystems, and negotiate design requirements that
keep the budget in balance.
The first task encountered in bottom-up error budgeting is making the necessary transformations and
conversions from the parameters specified in the manufacturing tolerances to the error budget values and
units. For example, the engineer specifying the primary mirror tolerance will likely do so in terms of
RMS figure errors. Some analysis must be performed to relate this parameter back to the seeing-limited
error budget, which is expressed in terms of the 50% encircled energy diameter in units of arcsecs. The
situation is further complicated by the presence of an actively controlled primary mirror (active optics or
“aO”), which is able to make closed loop figure corrections during the seeing-limited observations. The
aO system is capable of significantly reducing the low-order surface figure errors, thus compensating
some, but not all, of the errors introduced by the primary mirror. While the elements of the bottom-up
budgets can become complex, particularly the steps to transform and combine the various relevant
contributions, “showing the work” is critical to developing credible and defensible error budgets. This
document performs that function.
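One such transformation can be illustrated with the Maréchal approximation, which relates an RMS wavefront error to the Strehl ratio. This is only a sketch of a single conversion step under stated assumptions; relating figure errors to the 50% encircled-energy diameter used in the budgets requires the fuller spatial-frequency analysis described above:

```python
import math

def strehl_from_rms(rms_wfe_nm, wavelength_nm):
    """Marechal approximation: Strehl ratio from RMS wavefront error.
    Valid for small aberrations; one standard step in converting an
    engineering figure tolerance into an image-quality metric."""
    phi = 2.0 * math.pi * rms_wfe_nm / wavelength_nm  # RMS phase error, radians
    return math.exp(-phi * phi)

s = strehl_from_rms(35.0, 500.0)  # hypothetical 35 nm RMS at 500 nm
```

A hypothetical 35 nm RMS residual at 500 nm corresponds to a Strehl ratio of roughly 0.82 under this approximation.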
4. REQUIREMENTS FLOW DOWN
As the preceding discussion implies, error budgets are a critical step in flowing science requirements
down to design requirements. For example, the requirements placed on the design of the telescope mount
may be stated in terms of structural parameters like stiffness, but these are ultimately derived from the
science requirement for image quality and pointing accuracy. Engineers responsible for a given aspect of
telescope design need to know quantitatively how well their element must perform. These questions are
answered by tracing back to a science requirement, and that path will lead through one or more error
budget values.
The design requirements for the entire telescope system are obviously not specified by a single error
budget. Typically one science use case leads to one or more quantitative science requirements, and each
of these can spawn one error budget. This leads to two challenges. The first is finding the minimum
number of error budgets that include a bounding constraint on each subsystem. The second is to find a
breakdown of errors within each of these budgets that leads as directly as possible to design requirements.
The mapping from a given design requirement to a specific entry in an error budget will generally not be
obvious for various reasons:
1. Several error budgets may have the same error tree (see delivered image quality, for example),
but in some cases the value entered for a specific item may be different from one budget to the
next. This is the result of active control systems that may be in use for some science use cases,
but not others. For example, the budgeted image-jitter error will be dramatically higher for
coronal observations when tip-tilt correction is limited or absent. When fast tip-tilt is available
this budget number relaxes the requirements on the telescope mount, but helps to constrain the
tip-tilt servo requirements. For the larger coronal image-jitter error, the value entered helps
constrain the telescope mount and enclosure ventilation requirements.
2. Several budget values may be relevant to a single design requirement, but only one will be the
most constraining.
3. Even in the simple, most direct cases the tolerance parameter or perhaps just the units in which it
is expressed may be different from that specified in the SRD. As a result, some sort of
transformation or conversion is required.
5. IMPLEMENTATION
The error budgets maintained for DKIST are used in two different modes. The first mode represents
snapshots in time, assuming specific observing conditions. The image-quality science requirements, for
example, specify seeing conditions that are good or excellent. We make assumptions about other free
parameters, like ambient temperature, wind speed, and zenith distance and these values are entered as
constant parameters. We will usually adopt “worst case” values when the science requirement does not
include these details.
The second mode used in the error budgets brings additional information into the error calculation:
distributions of expected parameter values that affect the budget values. For example, wind-speed
statistics are available for the candidate sites. With these distributions in place, Monte Carlo simulations
can be performed to randomly select wind speeds weighted by the probability functions (histograms). By
looking at thousands of system manifestations, it is possible to estimate the fraction of time that the
telescope system will deliver images of a given quality.
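The Monte Carlo mode can be sketched as follows; the wind-speed histogram and the degradation model are toy stand-ins for the site statistics and the full spreadsheet evaluation described above:

```python
import random

# Hypothetical wind-speed histogram (m/s bins and relative frequencies);
# real site-survey statistics would replace these numbers.
bins    = [1.0, 3.0, 5.0, 7.0, 9.0]
weights = [0.10, 0.30, 0.35, 0.20, 0.05]

def delivered_ee(wind_speed):
    """Toy model: delivered image degradation (arcsec EE) grows with wind
    buffeting. A real run would evaluate the whole budget for each draw."""
    return 0.50 + 0.02 * wind_speed**1.5

random.seed(42)
samples = [delivered_ee(random.choices(bins, weights)[0]) for _ in range(10_000)]

# Fraction of time the system delivers images at or better than 0.70 arcsec:
frac_good = sum(ee <= 0.70 for ee in samples) / len(samples)
```

By drawing thousands of realizations weighted by the histogram, the fraction of time a given image quality is achieved falls out directly; here, with the toy model, only the two lowest wind bins meet the 0.70 arcsec threshold.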
5.1 SNAPSHOT BUDGET IMPLEMENTATION
The error-tree format described above in the top-down description lends itself well to a spreadsheet
implementation. DKIST has used Microsoft Excel workbooks for this purpose. The DKIST error-budget
template appears in Figure 5.1. It consists of three areas: Parameters, Notes, and the Error Tree itself. As Figure 5.1 shows, there are two cells available for each entry in the Error Tree. The left cell (highlighted in
green) is used for the snapshot allocation, and the right cell (highlighted in tan) is used by the
performance prediction feature described below.
The Parameters section contains the variables specific to a given family member of an error budget. For
example, the seeing-limited error budget parameterizes the results in terms of a specific wind speed and
telescope elevation. Additional family members derived using different parameters can be retained as
additional worksheet tabs in the workbook, though the spreadsheets are otherwise identical.
Specific assumptions and conventions used for a given error budget are called out in the Notes section of
the spreadsheet, along with other conventions applied consistently throughout the spreadsheet. For
example, the method used to combine the various budget values at the next-highest level of the tree will
appear in the Notes. Statistical quantities like RMS wavefront errors will usually be added in quadrature
(the square root of the sum of the squares or “RSS”). In other cases the values might be simply summed
or multiplied as appropriate. Where additional justification or explanation is needed that cannot fit within
the Notes area, a reference is given there to the Error Tree Detail section of this DKIST System Error
Budget Plan under the particular science case. Useful equations, graphs, and tables that support the
spreadsheet calculations will appear there also.
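The rollup conventions described in the Notes can be sketched as a small tree walk; the structure, combination rules, and values below are illustrative only:

```python
import math

# Minimal sketch of a spreadsheet-style error-tree rollup. Each parent
# declares how its children combine: "rss" for statistical terms, "sum"
# where errors add linearly. Names and numbers are invented.
tree = ("delivered image quality", "rss", [
    ("telescope static", "rss", [("M1 figure", 0.20), ("alignment", 0.15)]),
    ("telescope dynamic", "sum", [("wind shake", 0.10), ("drive jitter", 0.05)]),
])

def rollup(node):
    if isinstance(node[1], (int, float)):  # leaf: (name, value)
        return node[1]
    name, rule, children = node
    vals = [rollup(c) for c in children]
    if rule == "rss":
        return math.sqrt(sum(v * v for v in vals))
    return sum(vals)  # "sum" rule

total = rollup(tree)
```

The per-node rule mirrors the spreadsheet convention that statistical quantities are added in quadrature while other terms are simply summed, with each branch documented in the Notes section.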
In some cases bottom-up values are supported by calculations on separate spreadsheets, ultimately linked
to the top-level sheet. At the lowest spreadsheet level the parameters entered into each cell will be cast in
terms of engineering tolerances in units convenient for the responsible engineer. In cases where a simple
equation or rule-of-thumb is sufficient to convert a value, the spreadsheet (or another linked to it) will
perform the translation. In other cases the single component-level tolerance may be part of a much larger
calculation involving modeling and data reduction not suitable for performance on a simple spreadsheet.
The aO example discussed above is such a case. The final figure of the mirror will depend not only on
the quality of the primary mirror delivered, but also on design features of its mirror cell and its orientation
at the time of the observation. In these cases the detailed process for obtaining the final result is outlined
in the Error Budget Explanations section of each specific error budget in this document.
Figure 5.1. The DKIST error-budget template. The color-highlighted cells must either have a number entered (green, snapshot) or, in some cases, be linked from a supporting calculation (tan, performance).
5.2 PERFORMANCE PREDICTION IMPLEMENTATION
As useful as error budgets are for defining telescope requirements, their use in this limited context can
lead to misunderstandings and inflated expectations. This stems from the specificity of the science
requirement, and the resulting narrow scope of a given error budget. For example, the DKIST Science
Requirements Document places a minimum image quality requirement for seeing-limited observations in
the near infrared given excellent seeing conditions. The telescope must not degrade image quality below
a specified level under the circumstance of r0 ≥ 100 cm at 1.6 µm. We must assure that the telescope can
perform at this level whenever conditions allow, even though such conditions are rare. The error budget
developed to achieve this requirement, however, is just a snapshot of the performance we can expect
given very specific assumptions.
Performance prediction has been implemented using the error budget spreadsheets in conjunction with an
after-market add-in called Crystal Ball® sold by Decisioneering, Inc. of Denver, Colorado. This tool was
created specifically to apply Monte Carlo simulations to spreadsheets. Once installed, Crystal Ball works
from within Excel, adding its own menu elements and toolbars. Crystal Ball is well suited to our
performance modeling task because the snapshot error budgets for DKIST are already cast as Excel
workbooks.
The procedure for adding a Monte Carlo element to the snapshot budgets proceeds as follows:
1. Identify cells on the spreadsheet that should be treated as a range of possible values weighted by a
known probability distribution. Crystal Ball calls these “assumption cells.” For the delivered image
quality error budgets these cells can include (at a minimum) the r0 value, external wind speed, and air
temperature prediction errors.
2. Assign a probability distribution to the cell. The software offers a “gallery” of possible choices, one
of which is a custom distribution. The custom data can be used in its raw form, or can be
automatically fit to one of the gallery distributions to smooth out noise and to speed up calculations.
For the menu choices, the usual parameters are available to manually match the functions to
observations or expectations.
3. Define other cells that represent the dependent variables (called “forecast cells” in Crystal Ball
parlance). For our delivered image quality error budgets this can include, at a minimum, the total
combined error representing the delivered image quality, expressed as 50% EE in arcsec. Crystal Ball
allows a virtually unlimited number of forecast cells to be defined, however, so it is useful to select
intermediate values, like mirror seeing or enclosure seeing, just to gain insight into their relative
contribution to the final result.
4. Run the Monte Carlo simulation. Crystal Ball can run thousands of manifestations of the
performance model in a few seconds, randomly selecting values for the assumption cells weighted by
the assigned probability distributions. The user selects the number of “trials” to perform. Crystal
Ball displays the histograms of all forecast cells in real time during the computation to give the user a
sense of how quickly the results are smoothing out, and how many trials need to be computed to
obtain statistically significant results.
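The four-step procedure above can be sketched in plain Python as a stand-in for the Crystal Ball workflow. The distributions, their parameters, and the toy conversions of each assumption cell into a blur term are illustrative assumptions, not the actual budget model:

```python
import math
import random

random.seed(42)

def run_trials(n_trials):
    """Monte Carlo over 'assumption cells': draw r0 and external wind speed
    from assumed distributions, then roll them up into a 'forecast cell'."""
    forecasts = []
    for _ in range(n_trials):
        r0_cm = random.lognormvariate(math.log(7.0), 0.3)   # assumed r0 distribution
        wind_ms = random.weibullvariate(5.0, 2.0)           # assumed wind distribution
        # Toy conversions of each assumption into an image-blur term (arcsec).
        seeing_term = 10.0 / r0_cm
        wind_term = 0.01 * wind_ms
        # Combine the terms in quadrature, as in the snapshot budget.
        forecasts.append(math.hypot(seeing_term, wind_term))
    return forecasts

results = run_trials(5000)
median_diq = sorted(results)[len(results) // 2]   # median forecast value
```

Histogramming `results` corresponds to the real-time forecast-cell displays described in step 4.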
Additional information about the details of the Monte Carlo modeling as applied to DKIST is included in
the "performance modeling" sections of the relevant error budgets, and also in the SPIE paper, "Monte
Carlo telescope performance modeling."
6. DELIVERED IMAGE QUALITY ERROR BUDGETS
As noted previously, all error budgets contained in this document have been derived based on the science
cases and constraints found in the DKIST Science Requirements Document. Three science cases have
been given the highest priority initially because they span the most critical science use cases, and because
they are the most stringent, driving the telescope design the hardest. These include, in order of priority:
Diffraction-limited delivered image quality using adaptive optics at visible wavelengths.
Seeing-limited delivered image quality without adaptive optics, but with closed-loop active optics
at near-infrared wavelengths.
Seeing-limited coronal delivered image quality without adaptive optics, and with only open-loop
active optics at near-infrared wavelengths.
They were the first to be developed in detail during the design and development phase of the project, and
will be tracked most closely throughout design, construction, and integration. Note that all three primary
error budgets are specified in terms of delivered image quality, reflecting the high scientific priority given
to image quality in DKIST.
6.1 GENERAL ERROR BUDGETS
All three of these delivered image quality error budgets share a common error tree. Errors are initially
classified according to whether they are introduced by the earth's atmosphere, the telescope assembly, or
the science instrument. The top-level error tree is as follows:
1 Atmosphere (Seeing)
2 Telescope
2.1. Static
2.1.1. Diffraction and Optical Design
2.1.2. M1
2.1.3. M2
2.1.4. Telescope Transfer Optics
2.1.5. Coudé Lab Optics
2.1.6. Active Optics System
2.1.6.1. Wavefront Correction aO errors
2.1.6.2. M1 aO errors
2.1.7. Quasi-static Optical Alignment
2.2. Dynamic
2.2.1. Wind Buffeting
2.2.2. Seeing
2.2.2.1. Enclosure Seeing
2.2.2.2. Telescope Seeing
2.2.3. Dynamic Optical Alignment
2.2.4. Image Jitter
2.2.5. Adaptive Optics Errors
3 Instrument
The fundamental difference between the seeing-limited and diffraction-limited budgets is that in the latter
the adaptive optics system is permitted to correct some of the errors. Running the AO system causes
many of the errors, especially atmospheric seeing and dynamic telescope motions, to be reduced
significantly. For the seeing-limited coronal case, some of the errors normally corrected by active control
of M1 will degrade somewhat because the aO must run open loop, depending on the best available
estimates from look-up tables.
See appendix A for a detailed description of the error tree elements.
6.2 DIFFRACTION LIMITED ERROR BUDGETS
The adaptive optics error tree is expanded when AO is switched on. These are the adaptive optics errors
called out under item 2.2.5, and represent errors that degrade the image in a small way relative to an ideal
wavefront correction system with the specified number of actuators. These are broken down as follows:
2.2.5 Adaptive Optics
2.2.5.1 WFS Measurement Error
2.2.5.2 DM residuals
2.2.5.3 WFS-DM misalignment
2.2.5.4 Non-common path error
2.2.5.5 Noise on reference slopes
2.2.5.6 Noise on interaction matrix
2.2.5.7 Generalized Anisoplanatism
See Appendix A for a detailed description of the error tree elements.
6.3 INSTRUMENT-SPECIFIC ERROR BUDGETS
DKIST has five first-generation instruments:
1. The Visible Broadband Imager (VBI), which has a red and a blue channel, is designed to obtain
diffraction-limited images on the disk of the sun.
2. The Visible Spectro-Polarimeter (ViSP), which can observe multiple spectrum lines
simultaneously, will produce IQUV images when scanned spatially perpendicular to the slit
direction on the disk of the sun.
3. A Diffraction-Limited Near-Infrared Spectro-Polarimeter (DL-NIRSP), also capable of observing
multiple spectrum lines simultaneously, providing similar polarimetric capability out to 2.5 µm
on the disk of the sun.
4. A Visible Tunable Filter (VTF) that can obtain diffraction-limited IQUV images by scanning
through spectral lines on the disk of the sun.
5. A Cryostatic Near-Infrared Spectro-Polarimeter (Cryo-NIRSP) that does spectro-polarimetry of
the sun’s corona.
These five instruments span all three science use cases at a variety of wavelengths, and generate families
of delivered image quality error budgets. The Fried parameter is scaled by wavelength relative to 500 nm
under the assumption that r0 scales as wavelength to the six-fifths power:

r0(λ) = r0(500 nm) × (λ / 500 nm)^(6/5).
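The six-fifths scaling can be written directly as a small helper for spot-checking the r0 column of the table that follows. The 7 cm base value at 500 nm is taken from the document; the function name is illustrative:

```python
def r0_scaled(r0_500_cm, wavelength_nm):
    """Scale the Fried parameter from its 500 nm value, assuming
    r0 scales as wavelength to the six-fifths power."""
    return r0_500_cm * (wavelength_nm / 500.0) ** (6.0 / 5.0)

# Example: r0 = 7 cm at 500 nm scaled to 430 nm and 630 nm.
r0_430 = r0_scaled(7.0, 430.0)   # about 5.8 cm
r0_630 = r0_scaled(7.0, 630.0)   # about 9.2 cm
```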
Instrument-specific error budgets that are maintained by systems engineering are summarized in the
following table:
EB no. Instrument Science Case Wavelength r0 Strehl / EE
1 VBI Case 1. DL on disk 430 nm 6 cm Strehl = 0.2
2 VBI Case 1. DL on disk 630 nm 9 cm Strehl = 0.2
3 ViSP Case 1. DL on disk 500 nm 7 cm Strehl = 0.3
4 ViSP Case 1. DL on disk 630 nm 20 cm Strehl = 0.6
5 VTF Case 1. DL on disk 500 nm 7 cm Strehl = 0.3
6 VTF Case 1. DL on disk 630 nm 20 cm Strehl = 0.6
7 DL-NIRSP Case 1. DL on disk 900 nm 25 cm Strehl = 0.6
8 DL-NIRSP Case 1. DL on disk 2500 nm 80 cm Strehl = 0.6
9 DL-NIRSP Case 2. SL on disk 1600 nm 100 cm EE50 = 0.15 arcsec
10 Cryo-NIRSP Case 3 SL Coronal 1000 nm 50 cm EE50 = 0.7 arcsec
The Instrument element of the error tree (3.0) can generally be expanded to support the development of
these Instrument Specific Error Budgets:
3 Instrument
3.1 Static
3.1.1 Beam Splitters
3.1.2 Modulator Wavefront
3.1.3 Residual Design Errors
3.1.4 Instrument Tolerance Errors
3.1.5 Camera MTF
3.2 Dynamic
3.2.1 Beam Path Non-Common Path
3.2.2 Internal Instrument Non-Common Path
3.2.3 Modulator Wobble
A more detailed description of each of these is also included at the end of Appendix A.
7. DELIVERED IMAGE QUALITY ERROR BUDGETS
7.1 SCIENCE CASE 1, DIFFRACTION LIMITED DELIVERED IMAGE QUALITY
The DKIST Science Requirements Document places the following requirement on diffraction-limited
observations with adaptive optics:
The DKIST shall provide diffraction-limited observations (at the detector plane) with high Strehl
(S>0.6 required, S>0.7 goal) at 630 nm and above during excellent seeing conditions (r0 (630
nm) > 20 cm) and S > 0.3 at 500 nm and above during good seeing (r0 (500 nm) = 7 cm).
The specific error budget contained in this statement for median seeing has the following
parameters (DIQ Case 1a):
Wavelength: 500 nm
r0: variable, = 7 cm
Active Optics: Closed Loop
Adaptive Optics: Closed Loop
S > 0.3
A second version of this error budget (DIQ Case 1b) applies the same requirements to the “excellent”
seeing conditions at 630 nm:
Wavelength: 630 nm
r0: variable, = 20 cm
Active Optics: Closed Loop
Adaptive Optics: Closed Loop
S > 0.6
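For budgeting purposes, these Strehl requirements can be translated into approximate allowances on total RMS wavefront error using the extended Maréchal approximation, S ≈ exp(−(2πσ/λ)²). This is a standard rule of thumb, not a DKIST-specific formula:

```python
import math

def allowed_rms_wfe_nm(strehl, wavelength_nm):
    """Invert the Marechal approximation S = exp(-(2*pi*sigma/lambda)^2)
    to get the allowed total RMS wavefront error sigma, in nm."""
    return wavelength_nm / (2.0 * math.pi) * math.sqrt(-math.log(strehl))

sigma_case_1a = allowed_rms_wfe_nm(0.3, 500.0)   # about 87 nm RMS
sigma_case_1b = allowed_rms_wfe_nm(0.6, 630.0)   # about 72 nm RMS
```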
7.2 SCIENCE CASE 2, SEEING-LIMITED DELIVERED IMAGE QUALITY
The DKIST Science Requirements Document places the following requirement on seeing-limited image
quality in the near infrared:
[For] excellent seeing conditions, (r0 at 1.6 micron ≥ 100 cm)… Minimum requirement: 50%
Encircled Energy Diameter < 0.15 arcsec.
“Seeing Limited” implies that the adaptive optics loop is not closed. We presume, however, that
wavefront sensing is available to perform fast tip-tilt corrections and slow active optics. Hence, the
specific error budget contained in this statement has the following parameters:
Wavelength: 1600 nm
r0: variable, ≥ 100 cm
Active Optics: Closed Loop
Adaptive Optics: Open Loop
Tip/tilt: Closed loop
7.3 SCIENCE CASE 3, SEEING-LIMITED CORONAL DELIVERED IMAGE QUALITY
The DKIST Science Requirements Document places the following requirement on seeing-limited image
quality during coronal observations:
Off-pointing up to 1.5 solar radii, wavelength 1 micron, excellent seeing conditions: r0 (1 micron)
≥ 50 cm, FWHM of seeing-limited PSF ≤ 0.4 arcsec. The minimum resolution required for coronal
magnetometry is 2 arcsec. The Telescope shall deliver the following image quality:
50% Encircled Energy Diameter < 0.7 arcsec
85% Encircled Energy Diameter < 2 arcsec
The pointing and tracking science requirements for the coronal case suggest that coronal exposures will
typically last approximately one hour. This places additional constraints on subsystems that will be
running open loop during coronal observations, as noted in the discussions below.
Wavelength: 1000 nm
r0: variable, ≥ 50 cm
Active Optics: Look-up tables only
Adaptive Optics: Open Loop
Tip/tilt: Closed loop if performed by the Cryo-NIRSP
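The 0.4 arcsec seeing-limited PSF quoted above is consistent with the standard long-exposure seeing estimate FWHM ≈ 0.98 λ/r0. This is a rule-of-thumb cross-check using the document's own numbers, not part of the budget itself:

```python
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0

# Coronal case: wavelength 1 micron, r0 = 50 cm, per the SRD excerpt above.
# Long-exposure atmospheric seeing FWHM ~ 0.98 * lambda / r0 (radians).
fwhm_arcsec = 0.98 * 1.0e-6 / 0.50 * ARCSEC_PER_RAD   # about 0.40 arcsec
```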
Note that in RPT-0021, DKIST Site Survey Working Group Final Report, "excellent seeing" was
defined to be r0(500 nm) > 12 cm, not 25 cm as was assumed in
8. POLARIMETRIC ERROR LIST
The error terms for polarimetry consist of several components that must be treated in detail. In this
section, we create a list of known polarization issues that must be mitigated by DKIST to perform
precision polarimetry. This list will form the basis of procurement, test, and mitigation activities. As with
image quality and wavefront metrics, there are many specific configurations, each of which would require
independent consideration of many terms contributing to the polarimetric error. We list in this
section a high-level summary of polarimetric error terms and reference ongoing activities to predict,
estimate, model, measure, test, and mitigate these errors. The intent is to list high-level topic summaries
here and track the details of progress in TN-0245 Polarization Systems Engineering.
Some error items reflect polarimetric properties of the basic optical design from M3 down to coudé.
Errors associated with optical calibration techniques must also be included in a systems-level error list
(e.g. group model technique limitations, errors from calibration optic design and fabrication at GOS, field
of view effects, calibration by slit-scanning instruments, etc.). Some error items reflect uncertainty over the
methods used to calibrate the primary and secondary mirrors (correlation method, daytime sky method,
etc.). Many polarimetric performance items depend greatly on dynamic system performance. For example,
seeing-induced cross-talk and residual "speckles" from imperfect adaptive optics correction introduce
limitations to the temporal, spatial & spectral data that depend on wind, atmospheric conditions, WFS
target contrast, modulation speed, etc. (See the treatment of seeing-induced cross-talk in the study "On the
Detection of Polarized Light: A Case Study for the DKIST," Tritchler 2011, Parts a, b & c.) The time to smooth
speckles and the corresponding spatial sampling (and delivered optical resolution) required to demonstrate
polarimetric accuracy all drive the polarization error budget.
Under most observing cases, we can reduce the spatial and spectral resolution via optics, different masks,
data-processing, temporal averaging and other averaging techniques to achieve high polarimetric
sensitivities. In most science use cases, SPEC-0001 and the associated science descriptions simply state
magnetic field orientation and some basic requirements on derived data product properties without
connecting the measured field through any particular inversion techniques to specific instrument
configurations. The creation of a polarimetric error term list requires considering the science deliverables
on a case-by-case basis to flow down high-level magnetic field measurables into specific polarimetric
instrument performance requirements. To keep this document high level and practical, we simply
summarize the several types of polarimetric errors and point to the references documenting ongoing
efforts.
From SPEC-0001, we can compile a listing of some highest priority deliverables noting that they are very
unspecific about instrumentation used to measure the various solar features:
3.1.1 High res. observations of the photosphere - (Magneto) Convection: 10^-4 Ic (visible)
3.1.2 Flux emergence and disappearance
Intrinsic field strengths for stronger fields (>200 G) should be determined to within +/- 100 G. This
requirement is driven by the need to view changes in weak, horizontal fields as they emerge, migrate, and
intensify. Orientation for stronger fields within +/- 20 deg -- discern the approximate inclination to
vertical. In the visible: 10^-3 Ic for stronger fields, 10^-4 Ic for weak internetwork fields. In the near IR
these can be relaxed by a factor of 3.
3.1.3 Dynamics of Kilogauss Flux Tubes
kG flux tube formation from an equipartition field (400-500 G) requires a precision of +/- 50 G for intrinsic
field strength measurements at each temporal and spatial data point. The clarification of the origin of
Stokes profile asymmetries requires precise knowledge of field inclination in the range of +/- 10 deg.
Critical: interference between neighboring magnetic features results in strong asymmetric profiles. 99%
of the polarimetric signal shall be contained within 0.3 arcsec. This requires very high Strehl ratios (see
section 5.1.1). Measurements of Doppler velocities in the immediate non-magnetic surroundings (a few 10 km) of a
flux tube are required (canopy effect). Such velocity measurements must not be contaminated by
surrounding granular flows through stray light. Requirement: < 1% scattered light from the surrounding
photosphere.
3.1.4 Internal Structure of Flux Tubes / Irradiance Variations: I, V: 10^-4. U, Q: 10^-3. Small-scale fields
predominantly vertical in the photosphere.
3.1.5 Turbulent / Weak Fields: Both Hanle observations and Zeeman observations will require the
highest possible sensitivity, which should be limited by photon statistics only. A sensitivity of at least
10^-5 should be reached in all four Stokes parameters. It is unlikely that Zeeman observations will be
performed in Q and U because the linear polarization decreases much more rapidly with field strength
than the circular polarization. Hanle observations will mostly focus on linear polarization, but cross-talk
from circular polarization should be adequately suppressed by minimizing the instrumental cross-talk to
less than 1% and correcting offline to less than 10^-3.
3.1.6 Hanle Diagnostics: The relative polarimetric sensitivity should be limited only by photon statistics
down to a level of at least 10^-5 Ic. This means that the random noise in the Stokes images should be smaller
than this value when sufficient trade-offs with spatial and temporal resolution are made to achieve the
required photon statistics.
3.1.7 Magneto-convection in Sunspots: 10^-3 photosphere. 10^-4 chromosphere.
3.1.8 Generation of Acoustic Oscillations: No polarimetric requirements stated.
From the SPEC-0001 stated science cases above (which do not call out specific instrument
configurations, spectral lines, integration times, seeing conditions, etc.), there are specific observing
scenarios that flow into the individual instrument requirements documents (ISRDs). These science cases
can be broken out into typical observing conditions that specify the target brightness, AO system
performance (if applicable), wavelengths, expected signal strengths, spectral resolutions, etc., for the
purpose of identifying limiting polarimetric errors.
From the above AO use cases, we have these basic image quality scenarios relevant to polarimetry. Each
case represents different system configurations, active control by an adaptive optics system and specific
observing targets:
Case 1, Diffraction Limited Delivered Image Quality (ON DISK)
Case 2, Seeing-limited Delivered Image Quality (ON DISK)
Case 3, Seeing-limited Coronal Delivered Image Quality (CORONA)
ViSP and DL-NIRSP mostly have diffraction-limited use cases with Strehl ratios above 10% (a well-
developed image core, small spatial sampling, and fast-moving plasma creating short pixel-crossing times
at the reimaged focal planes).
8.1 STATIC OPTICAL POLARIMETRIC ERROR TERMS
There are several sources of error that do not have fast temporal dependence (>1 Hz) and can be treated as
quasi-static terms that can be dealt with through calibration, design, and modulation techniques. These
static errors relate mostly to basic optical design issues, optical coatings, fabrication errors, static
polarization calibration errors, and the various techniques used to fit for some (reduced) number of
variables describing the system polarization response as functions of all relevant variables (wavelength,
telescope pointing, field of view, modulation state, grating configuration, etc).
In this section, we identify several major contributors to the polarimetric error of a quasi-static system
(ignoring atmospheric seeing, vibration, timing jitter, or other "fast" phenomena).
8.1.1 Temporal stability of coatings and mirror polarization properties
FIDO optics are interchangeable and must be replaced in a repeatable manner. The optics must also be
stable in time such that the beam footprint samples the optics consistently enough to produce a stable
polarization calibration.
8.1.2 Depolarization caused by converging beams on retarders and mirrors
There are 16 degrees of freedom in a physical Mueller matrix, 9 of which correspond to depolarization
terms. These terms may require inclusion in a system-level polarization model. For instance, the
calibration retarders are ~0.5% depolarizing in the visible due to averaging over the f/13 beam's range
of incidence angles. The M1+M2 mirror group produces 0.1% to 0.2% diagonal depolarization in the
visible for similar reasons (Sueoka 2015; Harrington et al. 2017, JATIS; Chipman et al.; Noble et al. 2012).
8.1.3 Non-uniformity sensitivity caused by converging beam footprint variations
Optical coatings with non-uniform properties can cause depolarization and/or introduce time-
dependence into the calibrations. The variation in polarization response across the footprint is strongly
non-uniform from the f/2 primary mirror, creating sensitivity to vignetting and transmission issues in
downstream optics.
8.2 CALIBRATION OPTICS AT GOS
The DKIST design includes calibration optics at the Gregorian Optical Focus (GOS). These include
separate super-achromatic retarders (SARs) with different properties for the instruments ViSP, DL-
NIRSP, and Cryo-NIRSP. These SARs will be used in conjunction with one (or more) polarizers in a
Polarization State Generator (PSG) type system. These optics are used during observations of the sun and/
or of calibration light sources to inject known polarization states from after M2 through the instruments.
Creating a quality calibration source and injecting assumed "known polarization states" into the
optical system allows for accurate calibration of the system. However, such an optical unit has
several possible errors.
8.2.1 Calibration optic orientation uncertainties
A Polarization State Generator (PSG) generally rotates the polarizer and retarder through sequences of
known orientations. Any uncertainty in the orientation of the optic carries both statistical errors through a
repeatability tolerance and a systematic error through an accuracy tolerance (positional instabilities or
offsets).
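The effect of an orientation error can be illustrated with standard Mueller calculus: rotate an ideal linear polarizer's Mueller matrix by the nominal angle and by a slightly offset angle, and compare the injected Stokes vectors. The 0.1 degree offset below is an illustrative tolerance, not a DKIST specification:

```python
import numpy as np

def rot(theta):
    """Mueller rotation matrix for rotating an element by theta (radians)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[1, 0,  0, 0],
                     [0, c, -s, 0],
                     [0, s,  c, 0],
                     [0, 0,  0, 1.0]])

# Ideal horizontal linear polarizer (Mueller matrix, unit transmission).
POL = 0.5 * np.array([[1, 1, 0, 0],
                      [1, 1, 0, 0],
                      [0, 0, 0, 0],
                      [0, 0, 0, 0.0]])

def psg_state(theta):
    """Stokes vector injected by the rotated polarizer for unpolarized input."""
    m = rot(theta) @ POL @ rot(-theta)
    return m @ np.array([1.0, 0.0, 0.0, 0.0])

ideal = psg_state(np.deg2rad(45.0))           # [0.5, 0, 0.5, 0]: +U at 45 deg
offset = psg_state(np.deg2rad(45.1))          # 0.1 deg orientation error
state_error = np.abs(ideal - offset).max()    # roughly 2e-3 in Q
```

The injected-state error grows linearly with the orientation error (twice the angle error, in radians, times the polarized fraction), which is why both repeatability and accuracy tolerances matter.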
8.2.2 Calibration polarizer contrast ratio
The contrast ratio (extinction ratio) of the polarizer in the PSG limits the calibration fidelity. PSG
calibration techniques typically rely on the assumption that the polarizer creates 100.00% linearly
polarized light. Typical achievable contrast ratios are over 1000, but this is a major limiting assumption:
the contrast at all wavelengths and angles of incidence, the uniformity across the clear aperture, and the
performance over all operating temperatures must all be considered.
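The departure from the 100% assumption follows directly from the contrast ratio: for unpolarized input, a polarizer with extinction ratio C transmits light with degree of polarization (C − 1)/(C + 1). A quick check at the contrast of 1000 cited above:

```python
def degree_of_polarization(contrast):
    """Degree of linear polarization transmitted by a real polarizer with
    the given extinction (contrast) ratio, for unpolarized input."""
    return (contrast - 1.0) / (contrast + 1.0)

p = degree_of_polarization(1000.0)   # about 0.998
calib_error = 1.0 - p                # about 0.2% departure from 100% polarization
```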
8.2.3 Calibration retarder uniformity
The calibration retarder will be located at or near Gregorian focus. This creates a substantial performance
difference between the on-axis beam that is centered on the optic and off-axis field points that have
footprints decentered on the optic. As the calibration retarder is rotated through a typical calibration
sequence, the on-axis beam always samples the same patch of the optic while the off-axis beams sample
different patches of the optic. This creates a field-dependent impact from polishing non-uniformities.
See TN-0226, Impact of polishing non-uniformities on Cryo-NIRSP SAR & PCM Retarders.
8.2.4 Calibration retarder temperature rise & gradients
The calibration retarders will be used in a 150 W to 300 W beam above the field stop at Gregorian focus.
This creates several heating issues. Any retarder typically has temperature-dependent properties. Crystal
and/or liquid crystal retarders respond to uniform rises in temperature as well as to temperature gradients
across multi-element stacks. Temperature rises and temperature gradients would impact both Pancharatnam-
style super-achromatic retarders made of multiple crystal plates and polychromatic ferro-electric
liquid crystal designs based on sandwiches of zero-order retarders and matched liquid crystals. For crystal-based
optics, as the calibration retarder is rotated through a typical calibration sequence, the on-axis beam
always samples the same patch of the optic while the off-axis beams sample different patches of the optic.
This also creates a field-dependent impact for any temperature variations. See TN-0219, PA&C Calibration
Retarder Performance Considering Thermal Gradient.
8.2.5 Calibration retarder stress birefringence
The calibration retarders will be used in a 150 W to 300 W beam above the field stop at Gregorian focus.
This creates several heating issues. Any retarder can succumb to heat-induced stress effects that cause
stress birefringence of the optical materials. Stress birefringence can be both time dependent and spatially
dependent across the optic. As the calibration retarder is rotated through a typical calibration sequence,
the on-axis beam always samples the same patch of the optic while the off-axis beams sample different
patches of the optic. This creates a field-dependent impact for stress birefringence. See TN-0225, Thermal
Calculations for PA&C Optics.
8.2.6 Calibration retarder and polarizer design: angle of incidence effects
Retarders typically have properties that vary with angle of incidence. Near a focal plane, the different
field points have different average angles of incidence in addition to a footprint that contains rays over a
range of incidence angles. Design considerations must include these AOI effects to ensure the
performance is consistent across the AOI and FOV range for the optic (Sueoka Dissertation).
8.2.7 Calibration retarder interference fringe suppression
There are several coherent effects that give rise to intensity variations through polarizing optics. Coherent
internal reflections in multi-crystal SARs can cause intensity fringes that need to be reduced, smoothed
over time and optical footprint, averaged out by calibration procedures, and/or filtered through post-
processing data reduction algorithms. In addition, for typical calibration observations a polarizer is used
upstream, creating a highly polarized input beam. In this limit, retardance fringes can also become
limiting at high spectral resolution. These intensity fringes and retardance fringes must be considered for
calibration in the GOS beam and minimized via anti-reflection coatings, choices of optical materials, or
other techniques. Data processing techniques can also mitigate some fringes (TN-0256).
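The spectral period of such etalon-style fringes in a plane-parallel plate is approximately Δλ = λ²/(2nd), which shows why they matter mainly at high spectral resolution. The plate thickness and refractive index below are illustrative assumptions, not the properties of an actual DKIST retarder:

```python
def fringe_period_nm(wavelength_nm, refractive_index, thickness_mm):
    """Approximate spectral period of etalon-style interference fringes in a
    plane-parallel plate: delta_lambda = lambda^2 / (2 * n * d)."""
    thickness_nm = thickness_mm * 1.0e6
    return wavelength_nm ** 2 / (2.0 * refractive_index * thickness_nm)

# Illustrative crystal plate: 2 mm thick, index 1.55, evaluated at 600 nm.
period = fringe_period_nm(600.0, 1.55, 2.0)   # about 0.06 nm fringe period
```

Fringes this narrow are smoothed out at low spectral resolution but remain visible to high-resolution spectro-polarimeters, consistent with the discussion above.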
8.2.8 Calibration retarder beam deflection and displacement
The calibration retarders are known to have some beam deflection from the manufacturing process. This
deflection causes the beam footprints to change with rotation of the retarder. This beam wobble can
introduce possible differences between the calibration illumination and the observation illumination on
downstream optics. Displacement of the beam caused by imperfect alignment of the optic causes a similar
effect.
8.2.9 Calibration Optic Cleanliness
The optics accumulate dust and contaminants, which degrade performance. Contaminated optics scatter
light, create spatially variable transmission across the optical path, and can suffer coating damage if not
kept clean.
8.3 SYSTEM MODELS & CALIBRATION METHOD ERRORS
Typical instrument calibrations require removing the signature of the optical system from the
observational data over a wide range of conditions. The altitude-azimuth dependence of the telescope
optics during tracking, the field of view dependence of all optics, the wavelength dependence, the spatial
dependence across sensors or optical footprints (flat fielding), and any other static terms must all be
measured and removed to some tolerance.
8.3.1 Group model for the telescope feed optics
The 4x4 Mueller matrices for each optic in the path, for all optics under all system configurations for all
instruments, present an overwhelming number of variables. Even with calibration optics at many locations
in the optical path there are severe uniqueness, stability, and feasibility issues. In most complex optical
systems, the degeneracies of calibration parameters make fitting for more than just a few variables
impractical or impossible. For DKIST, the polarization calibration process must reduce the dimensionality
of the system polarization model to a few simple parameters that can be accurately fit with calibration
observations. Relevant variables are wavelength, altitude, azimuth, table angle, beam splitter
configuration, field angle, and any optical settings inside each instrument. Mueller matrices of optics that
remain in a fixed static orientation with respect to one another can possibly be treated as a single grouped
Mueller matrix with limiting assumptions. The number of variables in this group model, the choice of
the physical optical groupings, and the stability of the group model across the relevant dependencies (field
angle, wavelength, time) must provide a calibration of the system to within tolerances. Introduction of
optics (e.g. a window at the f/50 focus to isolate the coudé lab) could introduce several more variables and
dependencies that complicate the calibration requirements. Instruments have scanning mirrors or moving
slits (Cryo-NIRSP & ViSP) that change the orientation of internal instrument optics relative to external
optics.
8.3.2 Decompositions of group Mueller matrices for the optics
Mueller matrices can generally be decomposed into components with a reduced number of variables.
Typical parametrizations include diattenuation, linear retardance, and circular retardance. The number of
variables in a Mueller matrix model for each optical group must be sufficient to capture the measured
behavior of the optics as functions of field, wavelength, etc., to within some error tolerance.
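The grouping idea can be sketched numerically: optics that stay fixed with respect to one another are multiplied into a single group Mueller matrix, and only the rotations between groups carry the pointing dependence. The simplified mirror model and the diattenuation/retardance values below are illustrative assumptions, not DKIST coating data:

```python
import numpy as np

def mirror(diattenuation, retardance):
    """Simplified Mueller matrix of a fold mirror in its own plane of
    incidence, with the given diattenuation and retardance (radians);
    overall transmission is ignored."""
    d, c, s = diattenuation, np.cos(retardance), np.sin(retardance)
    return np.array([[1, d,  0,  0],
                     [d, 1,  0,  0],
                     [0, 0, -c, -s],
                     [0, 0,  s,  c]])

def rot(theta):
    """Mueller rotation matrix (theta in radians)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[1, 0,  0, 0],
                     [0, c, -s, 0],
                     [0, s,  c, 0],
                     [0, 0,  0, 1.0]])

# Optics in fixed relative orientation collapse into one group matrix...
group = mirror(0.02, 0.1) @ mirror(0.01, 0.05) @ mirror(0.02, 0.1)

# ...and only the rotation between groups (e.g., an altitude angle) varies.
def system(altitude):
    return rot(altitude) @ group @ rot(-altitude)

m_low = system(np.deg2rad(20.0))
m_high = system(np.deg2rad(70.0))   # same group, different pointing
```

Fitting only the few group parameters and the known geometric rotations is the dimensionality reduction described in 8.3.1.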
8.3.3 Calibration of M1 & M2 via techniques under development
The DKIST primary and secondary mirrors are too large, inaccessible, and highly powered (f/2) for
effective calibration by independently fabricated optical systems. Several techniques are in development
and/or in use at other observatories. Techniques could include cross-correlation of detected magnetic
features to identify polarization cross-talk (the correlation method). Other telescopes use the
daytime sky as a linearly polarized source to measure cross-talk under certain circumstances (the
daytime sky method). Zemax predictions (TN-0220) estimate a relatively benign Mueller matrix at the
GOS focus for DKIST: the IQ and QI terms will be present at 0.03% levels, and the UV & VU terms will be
present at the 4% level. Field dependence is fairly small across the full 5 arc minute field. However, the
various techniques to calibrate M1 and M2 as functions of wavelength, field, and time (coating degradation)
are still research topics and carry risk. RPT-0045 outlines many work packages, some presently in progress,
to validate techniques and quantify the static polarization calibration errors introduced by uncertainty in
the M1 & M2 polarization performance.
8.3.4 Dichroic beam splitter multi-layer coatings (FIDO) The beam splitter coating formulas required to isolate specific wavelength band passes are certainly
multi-layer stacks that can have strong polarization responses, intensity fringes at high spectral frequency,
exacerbated non-uniformities, strong angle of incidence effects, field of view effects and temporal
degradation. These terms represent static (but configuration dependent) performance metrics that impact
the amplitude of the system Mueller matrix elements. These multi-layer dielectric coatings should be
treated independently from a polarization perspective to ensure any artifacts can fit into the group model,
field dependence model and be mitigated to within tolerances.
8.3.5 Birefringence & depolarization (static or induced) of upstream optics
Transmissive optics in the beam upstream of the modulator are also polarizing, requiring calibration and
inclusion in the “group model”. These optics can potentially introduce intrinsic birefringence from the
substrates, stress birefringence (mounting or thermal), or retardance, or otherwise polarimetrically impact
the beam. Examples include dichroic beam splitter substrates, coudé isolation windows at the f/50 focus, the
AO wave front sensor beam splitter (WFS-BS1).
8.3.6 Birefringence & depolarization (static or induced) of downstream optics
Transmissive optics in the beam downstream of the modulator change the modulation efficiency of the
system and can also cause spatially variable effects impacting calibration. Examples include the gratings,
dewar windows, fold mirrors and powered relay optics.
8.4 POLARIZATION MODULATOR PERFORMANCE
The modulators are designed to have efficient polarimetric modulation as a stand-alone optic rotating
through several (8+) orientations within instrument sub-systems. These modulators are designed to be
efficient through wide wavelength regions. Coupling modulators into complex feed optics introduces
several possible errors.
8.4.1 Modulation retarder uniformity
The modulators in the instruments are typically located at or near a focal plane (e.g. slit or field stop).
This creates substantial performance differences between the on-axis beam that is centered on the optic
and off-axis field points that have footprints decentered on the optic. As the modulator is rotated through
a typical calibration sequence, the on-axis beam always samples the same patch of the optic while the off-
axis beams sample different patches of the optic. This creates a field dependent impact from polishing
non-uniformities. The modulation matrix can be fit independently across the field, slit position, etc during
the calibration process. The modulation matrix model must be determined within tolerances; see TN-0226,
Impact of polishing non-uniformities on Cryo-NIRSP SAR & PCM Retarders.
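The per-field-point fit described above amounts to a linear least-squares problem: known calibration Stokes inputs and measured intensities determine the modulation matrix at each field or slit position. A sketch, where the modulation waveforms, calibration inputs, and noise level are illustrative assumptions rather than instrument values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 8-state modulation matrix (intensity response to I, Q, U, V).
theta = np.pi * np.arange(8) / 4.0
O_true = np.column_stack([np.ones(8),
                          0.8 * np.cos(theta),
                          0.8 * np.sin(theta),
                          0.6 * np.cos(2.0 * theta)])

# Known calibration inputs (Stokes vectors produced by the calibration unit).
S_cal = np.array([[1, 0, 0, 0], [1, .9, 0, 0], [1, 0, .9, 0],
                  [1, 0, 0, .9], [1, -.9, 0, 0], [1, 0, -.9, 0]], dtype=float)

# Measured intensities for one field point: I = O S, plus noise.
I_meas = O_true @ S_cal.T + 1e-4 * rng.standard_normal((8, len(S_cal)))

# Least-squares fit of the modulation matrix; repeat per field/slit position.
O_fit = np.linalg.lstsq(S_cal, I_meas.T, rcond=None)[0].T
```

Fitting each field point independently is what lets polishing non-uniformities enter the model as a field-dependent modulation matrix.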
8.4.2 Modulator interference fringe suppression
There are several coherent effects that give rise to intensity variations through polarizing optics. Coherent
internal reflections in multi-crystal modulating retarders can cause intensity fringes that need to be
reduced, smoothed over time and optical footprint, averaged out by calibration procedures and / or filtered
through post-processing data reduction algorithms. In addition, for some instrument configurations, the
optics are modulating a somewhat polarized input beam. In the limit of high polarization, retardance
fringes are also possibly observable at high spectral resolution. These intensity fringes and retardance
fringes must be considered for calibration in the GOS beam and minimized via anti-reflection coatings,
choices of optical materials, or other techniques. Data processing techniques can also mitigate some
fringes.
8.4.3 Efficiency reduction from instrument feed optics
Modulator efficiency was designed and computed for the modulator as a stand-alone optic. In the actual
instrument designs, there are multiple polarizing optics between the modulator and dual-beam analyzer
(polarizing beam splitter). Coupling the modulator into complex optical paths leads to reduced
modulation efficiency and possible extra calibration impacts to the field dependence.
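Modulation efficiency can be quantified with a standard definition (del Toro Iniesta & Collados 2000): taking the demodulation matrix as the pseudo-inverse of the modulation matrix, the efficiency per Stokes parameter follows from the row norms. The 4-state matrix below is an idealized balanced scheme used only for illustration:

```python
import numpy as np

def modulation_efficiency(O):
    """Polarimetric efficiency per Stokes parameter for an n x 4 modulation
    matrix O: with optimal demodulation D = pinv(O),
    eff_k = 1 / sqrt(n * sum_i D[k, i]**2)."""
    n = O.shape[0]
    D = np.linalg.pinv(O)
    return 1.0 / np.sqrt(n * np.sum(D * D, axis=1))

# Idealized balanced 4-state scheme: each state analyzes an equal mix of
# Q, U, and V (illustrative, not a DKIST design).
a = 1.0 / np.sqrt(3.0)
O_ideal = np.array([[1,  a,  a,  a],
                    [1,  a, -a, -a],
                    [1, -a,  a, -a],
                    [1, -a, -a,  a]])

eff = modulation_efficiency(O_ideal)   # eff[0] = 1; eff[1:] = 1/sqrt(3) ~ 0.577
```

Inserting the Mueller matrices of the feed optics between modulator and analyzer modifies the effective rows of O, which is how the efficiency reduction discussed above would show up in this metric.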
8.4.4 Cleanliness of modulator optics
Modulators are located near or around focal planes for the DKIST polarimetric instruments. Scattered
light reduces the performance of the optic. Contamination on optics near focal planes causes spatially
variable transmission across the beam path, mimicking polarimetric modulation.
8.4.5 Wedge & Beam deflection impacts
Modulators have a beam deflection caused by imperfections in manufacturing of the optic. As a wedged
optic rotates, the beam footprints on all downstream optics move. At the focal plane, this wobble of the
beam causes image motion. As pointed out in ViSP-5263-TN-9202, “it is possible to combine the effects
of beam deviation and beam displacement, in order to reduce the amplitude of the image shift at the
camera”. However, this beam wobble is not minimized on all optics, just at the camera focal plane. The
wedge will cause the beam to move across all optics during the rotation cycle.
8.4.6 Modulator temperature stability
The modulator design should be resistant to temperature changes in the sense that temperature changes do
not ‘substantially’ change the Mueller matrix of the optic. However, temperature gradients, bulk
temperature changes and temperature spatial variations can cause instabilities in the Mueller matrix of the
optic. Thermal estimates show very small impact from the absorption of heat in these optics, but
temperature stability and uniformity caused by the environment and / or mounting should be considered.
8.5 INSTRUMENTATION & SENSOR POLARIZATION ERROR TERMS
The instruments themselves have active optics, electronic sensors, opto-mechanical control and
instabilities that can cause polarimetric errors. Many of these instrument performance issues couple into
the static polarimetric error terms.
8.5.1 Detector non-uniformity
All sensors are subject to (wavelength dependent) efficiency errors. These pixel-to-pixel efficiency
corrections (sometimes called gain corrections, gain tables, or part of flat fielding) must be derived and
applied.
8.5.2 Detector linearity
All sensors are (potentially) subject to linearity corrections across the full well of the sensor. Different
use cases may image high contrast ratio scenes or spectra (deep spectral lines, dark features next to bright
features, etc). Continuum polarization is typically computed near maximal intensity levels which can vary
substantially across deep spectral lines. Polarimetric modulation of 10% polarized signals can also imprint
some polarization-dependent artifacts from non-linearities.
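A toy calculation shows how a non-linearity imprints a polarized artifact. The quadratic detector response and its coefficient below are hypothetical, chosen only to make the effect visible:

```python
# Two ideal modulation states of a 10% Q-polarized beam: I+/- = I0 * (1 +/- q).
I0, q = 1.0, 0.10
I_plus, I_minus = I0 * (1 + q), I0 * (1 - q)

def detector(counts, gamma=0.02):
    # Hypothetical quadratic non-linearity: response rolls off at high signal.
    return counts - gamma * counts**2

# Demodulate the non-linear counts and compare with the true polarization.
q_meas = (detector(I_plus) - detector(I_minus)) / (detector(I_plus) + detector(I_minus))
bias = q_meas - q   # non-zero: the non-linearity biases the measured polarization
```

With these assumed numbers the measured degree of polarization is biased low by a few parts in 10^-3, i.e. a few percent of the signal itself, which is why linearity corrections matter for high-accuracy polarimetry.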
8.5.3 Detector substrate fringes
At longer wavelengths, multiple internal reflections inside the detector create interference fringes that
depend on wavelength, band pass, location, etc. These effects are typically worse in silicon-based sensors
in the NIR but can be present in other sensors provided the absorption depth in the sensor substrate is long
compared to the charge storage region depth.
8.5.4 Detector electronic noise
Detector control electronics can imprint noise sources across a sensor during readout. In some cases, this
noise can look like varying spatial offsets, high frequency interference patterns or other kinds of
background issues.
8.5.5 Detector cosmetics
Some sensors have poor manufacturing yield leading to high rates of damaged or underperforming pixels.
These cosmetic issues are typically compensated in the data reduction and / or post-processing.
8.5.6 "Slow" dynamic optical issues (scanning mirrors)
CryoNIRSP (and other instruments) use field scanning mirrors that change the beam footprints on
upstream optics to select a sub-region of the field at the present telescope pointing. The beam footprint
on downstream optics is no longer matched to upstream optics. Other
instruments have mechanized stages that are unstable and unrepeatable at some level. DKIST has a fast
steering mirror (FSM, M5) that also corrects tip/tilt. Beam footprints are "averages" over a dynamic
configuration. Often these effects are kept well below typical limiting error levels by building systems
with good stability and precise positioning. Otherwise, an error term may enter the static error budget as
residual variation due to mismatch between calibration configurations and as-observed configurations.
8.5.7 Triggering jitter and modulator synchronization
Timing jitter is a fast, highly dynamic phenomenon that represents the timing errors between the actual
position of the modulator, triggering of a camera integration start or stop and other errors in the associated
control systems. However, when averaging over many repeated integration cycles in a stable system, the
timing jitter essentially introduces a static term that averages the modulation matrix over slightly different
parts of the optic during the rotation cycle. The camera to modulator synchronization and drift was
outlined in TN-0207. The exposure time is also slightly different due to timing jitter, giving a variable
intensity to different modulated images. This error term contributes to the static errors through the drift
and instability in the modulation matrix.
8.5.8 Internal opto-mechanical stability
Internal motion control and even static optics (grating turrets, filter wheels, movable boom arms, etc.)
can introduce flexure and drifts that cause calibration changes. Instrumentation should be investigated for
internal stability (e.g. the DL-NIRSP fiber stability requirement for flat fielding).
8.5.9 Data extraction stability
Every instrument has a data reduction pipeline that imperfectly calibrates the flat field and imperfectly
removes field dependent effects, wavelength scale errors, etc. Post-processing, fitting, and correlating can
provide some (imperfect) matching and aligning to achieve specific magnetic field sensitivities.
8.6 B-FIELD DERIVATION ERROR TERMS
To compute an uncertainty in magnetic field strength, orientation, or other derived plasma properties, the
higher level science products combine (multiple) measurements from multiple instruments (sometimes at
multiple wavelengths) and (sometimes) average with time, wavelength, and across the spatial domain
(often across multiple images from multiple instruments). Errors include variations in forward & inverse
modeling techniques (used to derive B-fields from Stokes vectors), data post-processing algorithms (e.g.
PSF fitting and speckle reconstruction errors), spectral extraction techniques (wavelength solution jitter
from instrument instabilities), intra- and inter- instrument optical registrations and other artifacts. These
issues all play a role in deriving magnetic fields with associated uncertainties from "calibrated measured
Stokes vectors". To produce an estimate of magnetic field orientation uncertainty (in degrees, projected
onto the sun) with an associated magnetic field strength uncertainty (in Gauss), several additional errors
must be considered.
8.6.1 Inversion technique
What techniques are used to combine which Stokes profiles from which instruments, at what sampling,
into which magnetic field configurations?
8.6.2 Intra- and inter-instrument registration
Science cases that require combining lines measured from different scans from different instruments
suffer from alignment errors, registration variation, and timing jitter. The intra- and inter-instrument
registrations needed for multi-line analysis of magnetic fields are implied in SPEC-0001.
8.6.3 Post-processing technique
Error is introduced by any techniques used to do advanced post-processing or forward models (e.g.
speckle removal, image / instrument profile reconstruction). When fitting spectral data, removal of
instrument spectral response profile is usually accomplished in the forward-model by including an
instrument profile assuming instrument parameters (with error associated in the fit).
8.7 DYNAMIC POLARIZATION ERROR TERMS
There are several polarimetric issues presented by dynamic optical systems. Dynamics can range from
atmospheric motions on the r_0 / v crossing time changing the point spread function substantially faster
than 100 Hz to the “dynamic” evolution of mirror coatings that degrade at ~5×10⁻⁶ Hz (months). In
practice, repeatability and stability of mechanical systems often limit the stability of a system. For
DKIST, multi-layer dichroic-coated substrates are going to be moved and adjusted with intrinsic
repeatability error. M3 is dynamic, translates the pupil and changes footprints on all downstream optics.
M5 is a fast steering mirror at a pupil, repointing the downstream optics against the upstream optics (also
changing footprints, hence changing polarization properties through calibration instabilities).
Seeing-induced cross-talk is one of the more common dominating errors limiting the temporal stability of
the delivered optical resolution. AO performance depends on several external factors. The contrast of the
target scales the AO correction efficiency. The offset angle between the AO lock point and the field point
imaged by each instrument scales the Strehl ratio through several atmospheric and instrumental
performance parameters. The isoplanatic patch size varies with ground layer & high altitude turbulence
properties with AO performance generally degrading with increasing field angle. Often, telescope shake
caused by wind loading and other mechanical motion causes higher amplitude tip/tilt variation. Speckle
motion inside the PSF core (within AO bandwidth) evolves differently than the lower amplitude (higher
speed) speckles outside the AO control bandwidth. There are several parameters that depend on weather
(r_0, wind speed, ground layer turbulence, high altitude turbulence, telescope pointing jitter {wind
loading}). Others depend on the signal morphology itself (line depth, polarization morphology, spatial
intensity contrast of the scene, etc). Many of these effects can be mitigated by the active and adaptive
optics, but residual errors depend on many control system performance parameters. The 3-part study "On
the Detection of Polarized Light: A Case Study for the DKIST" (Tritchler 2011, Parts a, b & c) discusses
cross-talk induced when varying a few active parameters in spectral instruments. In aggregate, most of
these control system performance parameters combine to set the time-scale, spatial-scale and wavelengths
over which observations need to be combined to create “smooth, stable” polarization measurements to
call “calibrated”. In DKIST, temporal modulation is provided by the rotating modulators. The speed of
this modulation couples with the system performance and the atmospheric conditions to set the amplitude
of polarimetric artifacts. Dual-beam polarimetry, beam swapping and data analysis techniques can reduce
but not completely remove the seeing induced polarization artifacts.
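A minimal single-beam sketch illustrates the mechanism: when the transmitted intensity fluctuates between modulation states, a truly unpolarized input demodulates to a non-zero spurious polarization. The 2% rms jitter level and the modulation waveform below are assumed, illustrative values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 8-state Q-modulation pattern of a rotating retarder.
n = 8
mod = np.cos(np.pi * np.arange(n) / 2.0)   # [1, 0, -1, 0, 1, 0, -1, 0]

# Seeing makes the transmitted intensity fluctuate between states;
# an assumed 2% rms jitter on an unpolarized (q = 0) input beam.
trials = 10_000
I_k = 1.0 + 0.02 * rng.standard_normal((trials, n))

# Single-beam demodulation: project the measured intensities onto the
# modulation pattern (proportional to a demodulated Q estimate).
q_spurious = (I_k @ mod) / I_k.sum(axis=1)
rms_single = q_spurious.std()   # spurious polarization, ~0.5% for these numbers
```

Dual-beam polarimetry largely cancels this first-order intensity term, which is why it reduces (but, as noted above, does not completely remove) seeing-induced artifacts.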
For the “dynamic” polarization calibration, the slow terms (<<1Hz) can typically be compensated by
planning calibrations with coarse enough spectral/spatial sampling at a frequent cadence. Configuration
changes may trigger re-calibration if optical repeatability issues are found. For the fast terms (>1Hz),
planning the modulation rate, spatial/spectral averaging, observing only under specific atmospheric
conditions, AO system performance and target availability will all drive the calibration requirements.
8.7.1 Averaging over dynamic phenomena (impact on sampling/resolution)
Most of the “fast” dynamic terms imply a requirement to average over many statistically independent
realizations of the dynamic phenomena. Seeing-induced cross-talk should average to stable nominal
cross-talk values. Active and adaptive optics center about the nominal aberration-free optical
configuration. The polarization calibrations need to be specified to average over all relevant dynamic
phenomena. The calibrations are only valid if the calibration configuration shares the same “average” as
the observation configuration (e.g. putting a constant phase pattern on the DM does introduce a
systematic non-zero average difference between calibration configuration and observation configuration).
8.7.2 Changed modulation speed modifying seeing / jitter requirements
Depending on individual science case requirements, calibrations at the diffraction limit require fast
modulation. For example, ViSP maintains a 400Hz windowed mode that can substantially reduce seeing-
induced cross-talk at the diffraction limit though at reduced flux-per-pixel-per-exposure. The nominal
modulation rate of the rotating retarders is much slower than typical atmospheric timescales so DKIST
modulation is subject to atmospheric artifacts. SNR calculations should include consideration of
atmospheric conditions and AO system performance creating polarization artifacts when quoting “time to
SNR” at specific spatial samplings for specific science cases (e.g. seeing limited vs diffraction limited
sampling).
8.7.3 Coating degradation over long timescales
Coatings evolve with time, exposure to oxygen, exposure to UV flux, thermal cycles, etc. Calibrations
must be planned to the accuracy levels required considering measured coating degradation timescales.
Coating sample measurements (or direct measurements using in-house ellipsometry) may be necessary to
assess temporal stability of the system polarization model.
8.7.4 Optical uniformity for Active Systems
Coatings and flat-fielding calibrations need to be uniform in active systems over the varying footprints.
Changing footprints caused by repointing of the active optics should not cause any variation in
transmission, polarization properties (through angle-of-incidence variations or non-uniformities) or other
optical performance properties. Most active systems “average over” many realizations of the beam
footprint variation. However, calibration stability should be considered (e.g., moving M3 to
compensate beam wobble in GOS optics translates the pupil, which may cause variations between
observation / calibration footprints through wobble on the ViSP shadow-cast grating).
8.7.5 Interpolation between calibrations (time, wavelength, space, etc)
There are several possible interpolations in the calibration schemes. For example, the “correlation
method” relies on specific spectral lines being used to derive the Mueller matrix of M1 & M2.
Uncertainty is introduced through interpolation between wavelengths in the group model.
8.7.6 Coudé angle (slit orientation projected during observations)
Several science cases will use image / slit orientations fixed on the sky, requiring constant motion of the
coudé angle to preserve projected focal plane orientation. The group model, provided enough dynamic
variables, should account for the dynamic optical configuration changes during a long data collection
series tracking the sun. Uncertainty from an imperfect group model (e.g. field edge variation from a
reduced number of variables when fitting for FOV) may add error.
8.7.7 Coudé flexure from rotation angle
The coudé platform is known to flex as a function of rotation angle. The flexure will cause some
differences in illumination of the optics. This optical instability should be considered as a possible error
source.
8.8 TABLE OF MUELLER MATRIX ERRORS
The DKIST Science Requirements Document places requirements on polarimetric observations. Knowing
the system Mueller matrix is a component of providing polarization calibration. To describe these Mueller
matrix requirements, one option is to use the method of Ichimoto et al. (Polarization Calibration of the
Solar Optical Telescope onboard Hinode, Solar Physics, Volume 249, No 2, 2008). The DL-NIRSP and
Cryo-NIRSP ISRDs elaborate on polarimetric accuracy using a table of errors sometimes called the “error
matrix”. This table is not a matrix in any mathematical sense but is a list of errors for each Mueller matrix
element. The amplitude of each element Mij of the end-to-end system Mueller matrix M (telescope plus
polarimeter) must be known to an accuracy ±σij at all times (Mij ± σij). The error table, σ, lists the
required maximum uncertainty in knowledge of each Mueller matrix element. The error table values for
DL-NIRSP, CryoNIRSP, and some of the calibration optics are derived using the method of Ichimoto et
al. (2008), using a required accuracy of 5×10⁻⁴, maximum input Q, U, and V of 10%, and flat fielding
to 10%.
10⁻²      10⁻²      10⁻²      10⁻²
5×10⁻⁴    10⁻²      5×10⁻³    5×10⁻³
5×10⁻⁴    5×10⁻³    10⁻²      5×10⁻³
5×10⁻⁴    5×10⁻³    5×10⁻³    10⁻²
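The error table above can be encoded directly and used as an elementwise acceptance check. A minimal sketch (the values are transcribed from the table; the check logic itself is an assumed illustration, not a documented procedure):

```python
import numpy as np

# Required knowledge accuracy for each normalized Mueller matrix element,
# transcribed from the error table above.
sigma_req = np.array([
    [1e-2, 1e-2, 1e-2, 1e-2],
    [5e-4, 1e-2, 5e-3, 5e-3],
    [5e-4, 5e-3, 1e-2, 5e-3],
    [5e-4, 5e-3, 5e-3, 1e-2],
])

def meets_requirement(sigma_est):
    """Elementwise check of estimated 1-sigma uncertainties against the table."""
    return bool(np.all(np.asarray(sigma_est) <= sigma_req))

# A calibration knowing every element to 4e-4 passes; a uniform 1e-3
# uncertainty fails the tighter first-column (I -> Q, U, V) requirements.
ok = meets_requirement(np.full((4, 4), 4e-4))
bad = meets_requirement(np.full((4, 4), 1e-3))
```

Note the first column carries the tightest tolerances: intensity-to-polarization cross-talk is the hardest term to calibrate out downstream.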
The terms of the table above define the crosstalk between polarization states that are the primary source
of calibration error. For the system polarization calibration there are several contributing groups of optics.
The primary and secondary mirror (M1 & M2) is a large source of uncertainty. Due to the lack of a
controllable polarized light source for their calibration, M1 and M2 carry substantial calibration
uncertainty.
The optics from M3 to the instrument represent the error associated with the stability of the telescope
polarization signature. Since calibration optics reside at the GOS, uncertainty in calibration of these
telescope polarization groups is estimated to be less than for M1 and M2.
Uncertainties exist in the individual components which are used to calibrate the polarization signature of
the telescope and perform the polarization measurement.
To perform a system level polarization error model, all sources of error need to be estimated under the
range of intended observing conditions for the specific designs under consideration. In many cases,
current best manufacturing practices must be employed to minimize error sources. Several work
packages and research projects are ongoing to improve the state of the art in calibration of large optics
using astronomical sources, new optics and new techniques.
8.9 POLARIZATION ERROR LIST SUMMARY
The error sources listed in sections 8.1 through 8.8 include the known contributors to polarimetric errors
caused by instruments in typical solar observations. This list is not a budget. This list does contain a set
of error terms that must be reduced below the desired polarization accuracy, before that accuracy can be
achieved. Hopefully in the near future, methods of doing proper polarimetric error budgeting will allow
for trading of error allocations between items on this list as befitting a proper budget. This list will be
used to define significant areas of effort. The list identifies where design choices, modeling efforts,
laboratory and on-summit test activities will impact schedule, budget, performance, etc. TN-0245 will
contain more detailed and up-to-date information on the progress of testing and modeling efforts,
referencing this high level summary of the error list topics.
9. DEFINITION OF TERMS
AO – Refers to adaptive optics, which for DKIST presumes a high-order (approximately 1000 degree-
of-freedom) deformable mirror.
aO – Refers to the active optics correction provided by the M1 mirror support system. This system is
run closed loop for on-disk observations, but must be run open loop (based on best available
information in a look-up table) during coronal observations.
DM – The deformable mirror is the part of the adaptive optics system used to dynamically correct the
wavefront. It is generally located at or very close to a pupil plane.
DoF – Degrees of freedom, initially assumed to be of order 1000 for DKIST.
EE – See “Encircled Energy.”
Encircled Energy – The angular diameter (usually expressed in arcsec) of a circle containing the specified
percentage of the total energy in an image (often 50% EE or 85% EE).
FEA – Refers to finite element analysis, a method used to model the characteristic dynamic vibration
modes of mechanical systems.
Influence Function – The actual shape of a deformable mirror when a single actuator pushes it. For a
continuous faceplate DM, the shape is roughly Gaussian.
Monte Carlo Simulation – A method of estimating system errors by sampling each individual error
distribution at random (i.e., according to its own probability distribution which need not be
normal) and looking at the resulting system error distribution after many of these random
manifestations.
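The Monte Carlo procedure defined above can be sketched in a few lines. The error terms, their distributions, and their magnitudes below are illustrative assumptions, not DKIST budget values:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Three illustrative error terms, each sampled from its own distribution
# (which need not be normal), per the Monte Carlo definition above.
wind_shake   = rng.normal(0.0, 0.05, N)              # arcsec, Gaussian
local_seeing = rng.lognormal(np.log(0.04), 0.5, N)   # arcsec, skewed
alignment    = rng.uniform(-0.06, 0.06, N)           # arcsec, tolerance band

# One system-error realization per sample (simple quadrature combination
# assumed here as the system model).
system = np.sqrt(wind_shake**2 + local_seeing**2 + alignment**2)

# The result is a distribution of system error, summarized by percentiles.
p50, p90 = np.percentile(system, [50, 90])
```

The point of the method is visible in the output: because the inputs are not all Gaussian, the resulting system distribution is asymmetric, and percentiles carry more information than a single RSS number.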
PSF – The point spread function of the system, which is the power within the image of a point source
plotted as a function of either position in the focal plane or angle subtended on the sky.
RSS – A method of combining statistically based error values, taking the square root of the sum of the
squares of the individual errors. In the strictest sense this is only appropriate when the errors
have a normal distribution, equal weight, and are uncorrelated. It is often extended to other types
of error as well to maintain consistency.
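As defined above, the RSS combination is a one-line computation; the three error values in the usage example are illustrative, not budget allocations:

```python
import numpy as np

def rss(errors):
    """Root-sum-square combination of independent error terms."""
    e = np.asarray(errors, dtype=float)
    return float(np.sqrt(np.sum(e * e)))

# Combining three illustrative, independent image-quality terms (arcsec):
total = rss([0.05, 0.04, 0.06])
```

As the definition notes, this is strictly valid only for uncorrelated, equally weighted, normally distributed terms; the Monte Carlo method above is the fallback when those assumptions fail.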
See also Appendix A: Delivered Image Quality Elements
10. REFERENCES
Cho, M. K., Stepp, L, Kim, S., “Wind buffeting effects on the Gemini 8m primary mirrors,” in
Optomechanical Design and Engineering, Alson E. Hatheway, Ed., Proc. SPIE 4444, pp. 302-314,
2001.
Bierent, R., Rimmele, T., and Marina, J., “Assessment of local seeing within a telescope lab
environment,” Larry M. Stepp, ed., Proc. SPIE 7012, 2008.
Hardy, J. W., 1998. Adaptive Optics for Astronomical Telescopes, Oxford University Press, Oxford, New
York.
Hubbard, R. P., Oschmann, J. M., “Monte Carlo telescope performance modeling,” in Modeling and
Systems Engineering for Astronomy, Simon C. Craig and Martin J. Cullum, eds., Proc. SPIE 5497, pp.
129-139, 2004.
Hubbard, R. P. et al, “Controlling wavefront distortions across a thermal boundary”, in Ground-based and
Airborne Telescopes, Larry M. Stepp, editor, Proc SPIE 6267 [6267-22], 2006.
Hudgin, R., 1977. Wave-front compensation error due to finite corrector element size. J. Opt. Soc. Am.
67, 393-395.
Racine, R., et al. Mirror, Dome, and Natural Seeing at CFHT. PASP 103, 1020-1032, September 1991.
Schroeder, D. J., 2000. Astronomical Optics. Second Edition, Academic Press, San Diego.
Tyson, R. K., 1991. Principles of Adaptive Optics. Second Edition, Academic Press, San Diego.
Upton. “Active alignment and figure correction of the Advanced Technology Solar Telescope.” In
Modeling, Systems Engineering, and Project Management for Astronomy II, Editors: M. J. Cullum,
G. Z. Angeli. Proc SPIE 6271, 2006.
2004 (VLT-PF report, p102, SPIE paper).
DKIST Science Requirements Document, SPEC-0001, Thomas Rimmele and the DKIST Science Team.
ATST Enclosure Seeing Report, RPT-0004, Nathan Dalrymple.
ATST Site Survey Working Group Final Report, RPT-0021, the Site Survey Working Group.
TMA Finite Element Analysis, TN-0056, Myung Cho.
High Order Adaptive Optics System Reference Design Performance Modeling, TN-0073, Thomas
Rimmele and Jacqueline Roche.
Recent TNs:
TN-0219 PA&C Calibration Retarder Performance Considering Thermal Gradient
TN-0220 Comparison of Zemax and IDL Mueller matrices for a reflection.
TN-0223 Spectroscopic Flux Budget Explanation
TN-0225 Thermal Calculations for PA&C Optics
TN-0226 Impact of polishing non-uniformities on Cryo-NIRSP SAR & PCM Retarders
TN-0231 Evaluating Laboratory Spectropolarimeter Performance
TN-0238 Artificial Light Source Trade Study
TN-0239 GOS polarizer back reflection analysis
TN-0242 GOS wedged retarder beam deflection analysis
TN-0245 Polarization Systems Engineering
TN-0251 Characterizing the Upgraded NLSP+
TN-0253 Coating Absorption for Retarders
TN-0254 FIDO Dichroic Coating Options
TN-0256 Polarization Fringes & Power Spectra
TN-0257 Investigating Group Models: Polarization optical modeling
TN-0260 Retarder Acceptance Testing
TN-0261 Wire Grid Polarizer Investigations
Retarder Drawings & Descriptions:
TN-0180 - Cryo-NIRSP Modulator Design Performance
TN-0181 - Cryo-NIRSP Calibration Retarder Design Performance
TN-0182 - ViSP Modulator Design Performance
TN-0183 - ViSP Calibration Retarder Design Performance
TN-0191 - DL-NIRSP Modulator Design Performance
TN-0192 - DL-NIRSP Calibration Retarder Design Performance
Older But Relevant TNs:
TN-0207 Camera and Modulator Timing Variation Sensitivity Analysis
TN-0155 - Measured and Modeled Mirror Reflectivity
TN-0194 - DKIST Telescope and Instrument Photon Flux Model
TN-0215 - Polarization Optics Modeling and Optimization
TN-0153 - Thermal Testing of Bi-crystalline Achromatic Retarder
TN-0137 - Polarization Issues
TN-0115 - Telescope Polarization Model
TN-0105 - Configurations for ATST Coude Room beam separation
APPENDIX A – DELIVERED IMAGE QUALITY ERROR TREE ELEMENTS
Atmosphere:
Atmospheric Seeing – Turbulence in the earth’s atmosphere, excluding the immediate environment of the
telescope, will degrade the delivered image quality. When mitigation can be applied (for example
with AO running closed loop), this value is reduced to the RSS of the residual errors due to fitting
errors, AO Servo Response, and Tip/Tilt Response (see below).
Mitigation: M2 Tip/Tilt, Fast Tip/Tilt, AO
Fitting Errors (1.1) – This is the residual error remaining in the wavefront correction because of the
inability to measure the wavefront exactly, and fit the wavefront with a finite number of degrees of
freedom for a specific influence function.
Mitigation: None
Reconstruction Error (CIBOLA) (1.2) – In Revision F of the error budgets, four separate errors related to
adaptive optics were rolled up into this one item. Previously this was divided among Aliasing
Errors (1.2), AO Servo Response (1.3), Tip/Tilt Response (1.4), and WFS Measurement Errors
(2.2.5.1). CIBOLA is a Matlab toolbox developed by Brent Ellerbroek.
Mitigation: None
Telescope:
Diffraction and Optical Design (2.1.1) – Optical diffraction from the finite entrance aperture of the
telescope sets a wavelength dependent upper limit on performance. Even if the optics are
manufactured and aligned perfectly, however, a wavefront error above and beyond the diffraction
limit will still result from limitations in the optical design, which vary as a function of telescope
configuration (altitude, azimuth, and coudé orientation) and field point. This has also been included in
this error-budget value.
Mitigation: None for diffraction, aO and AO for optical design limitations.
M1 (Primary) (2.1.2) – Quasi-static figure errors in the primary mirror caused by polishing residuals,
support print-through, gravity-induced flexure, or thermal distortion will degrade image quality.
Mitigation: aO, AO
M2 (Secondary) (2.1.3) – Quasi-static figure errors in the secondary mirror caused by polishing residuals,
support residuals, gravity-induced flexure, or thermal distortion will degrade image quality.
Mitigation: aO, AO
Telescope Transfer Optics (2.1.4) – Quasi-static figure errors in the feed mirrors (M3-M6) caused by
polishing residuals, support residuals, gravity-induced flexure, or thermal distortion will degrade
image quality.
Mitigation: aO, AO
Coudé Lab Optics (2.1.5) – Quasi-static figure errors in the mirrors and beam splitters used to distribute
the beam to instruments in the coudé lab. Analogous to the Telescope Transfer Optics, these errors
are caused by polishing residuals, support residuals, gravity-induced flexure, or thermal distortion that
degrades image quality.
Mitigation: aO, AO
Active Optics System (2.1.6) – Quasi-static errors that account for the negative effects of running the aO
system. Because this tree element is split between two subsystems (WFC and M1) it is subdivided
into two budget values:
Wavefront Correction aO Errors (2.1.6.1) – These errors include WFS measurement errors, correctable
errors in the aO optical path not common to the science path, and errors in the calibration of
wavefront errors in the aO optics. It does not include fitting errors, as these are accounted for in the
static errors associated with M1, M2, and the transfer optics.
Mitigation: AO, if there is a separate WFS.
M1 aO Errors (2.1.6.2) – This includes load cell calibration errors, matrix mapping errors, actuator
stability, and latency errors.
Mitigation: AO
Quasi-static Optical Alignment (2.1.7) – Errors in the relative positions of the telescope mirrors will
introduce optical aberrations (including defocus) that degrade the image. Note that this quasi-static
error budget addresses mechanical positional tolerances, and slow, gravity-induced alignment errors.
It explicitly excludes dynamic alignment problems (e.g., windshake), which are listed separately.
Mitigation: M2 Hexapod motions, M3 and M6 for boresight, aO, and AO for residuals
Wind Buffeting (2.2.1) – Wind pressure will dynamically deform the shape of a thin meniscus mirror
introducing aberrations, primarily astigmatism.
Mitigation: Wind-flow throttling, AO
Seeing (2.2.2) – This is the “self seeing” introduced by the enclosure and the telescope. It includes the
effects introduced by the following eight subcategories.
Enclosure Exterior-Shell Seeing (2.2.2.1.1) – Seeing due to temperature differences (relative to the
ambient air) of the outer surface of the enclosure will cause image degradation. This and the
following telescope-seeing error items are treated separately from atmospheric seeing because their
spatial and temporal frequency spectra are likely different from those of the earth’s atmosphere. As a
result, the effects of mitigation will also differ in detail.
Mitigation: M2 Tip/Tilt, Fast Tip/Tilt, AO, wind flow, active cooling
Enclosure Interior (2.2.2.1.2) – This is seeing induced by the enclosure interior and floor, or in the case of
a “non-enclosure,” just the floor.
Mitigation: M2 Tip/Tilt, Fast Tip/Tilt, AO, active and passive cooling
Mirror Seeing (2.2.2.2.1) – This is seeing due to convection in the vicinity of the telescope mirrors
(M1–M13), caused by the insolation on these mirrors.
Mitigation: M2 Tip/Tilt, Fast Tip/Tilt, AO, active and passive cooling
Heat Stop Seeing (2.2.2.2.2) – This is seeing due to convection in the vicinity of the heat stop caused by
the elevated temperature of its reflecting surface.
Mitigation: M2 Tip/Tilt, Fast Tip/Tilt, AO, active and passive cooling
Mount Seeing (2.2.2.2.3) – This is seeing due to convection in the vicinity of insolated elements of the
telescope mount.
Mitigation: M2 Tip/Tilt, Fast Tip/Tilt, AO, active and passive cooling
Beam Path Seeing (2.2.2.2.4) – Seeing due to convection along the vertical path from M6 to the coudé
room(s), which includes the position where the beam makes the transition from the ambient outside
temperature to the “shirtsleeve” environment of the coudé room.
Mitigation: M2 Tip/Tilt, Fast Tip/Tilt, AO, air-knife isolation, temperature equalization, window
Occulter Seeing (2.2.2.2.5) – Seeing due to convection above the occulter when it is in use for coronal
observations.
Mitigation: M2 Tip/Tilt with error signal generated by the limb sensor arrays associated with the
occulter.
Field Stop Seeing (2.2.2.2.6) – Seeing caused by residual heat of the field stop at the Gregorian focus
where the 5-arcmin FOV transmitted by the prime-focus heat stop is reduced to 2.8 arcmin.
Mitigation: Active cooling of the stop. Note that HOAO cannot correct for this as it is in a focal
plane.
Dynamic Optical Alignment (2.2.3) – Errors in the relative positions of the telescope mirrors will
introduce optical aberrations (including defocus) that degrade the image. Note that this element of
the dynamic error budget addresses wind-induced alignment problems, and explicitly excludes
mechanical positional tolerances and slow gravity-induced alignment errors, which are covered as
quasi-static errors above.
Mitigation: M2 Tip/Tilt, Fast Tip/Tilt, AO
Wind Shake (2.2.4.1) – These are rapid changes of telescope pointing due to wind and other induced
vibrations of the telescope structure.
Mitigation: M2 Tip/Tilt, Fast Tip/Tilt, AO
Altitude, Azimuth, and Coudé Drive Jitter (2.2.4.2) – All servo loops, including the telescope and coudé
drives, will have residual jitter due to their finite bandwidths and other servo performance limitations,
thus causing image jitter that is separate and distinct from driving forces (like the wind) that induce
vibrations in the telescope structure.
Mitigation: M2 Tip/Tilt, Fast Tip/Tilt, AO
Thermal Control Jitter (2.2.4.3) – The impingement cooling and liquid cooling of the telescope mirrors
are a source of image jitter. This jitter has three primary sources: the air impingement itself,
vibrations transmitted by the liquid flows, and motor vibrations from the associated fans and pumps
that may couple into the telescope structure.
Mitigation: M2 Tip/Tilt, Fast Tip/Tilt, AO
Adaptive Optics Errors (2.2.5) – These include the negative effects of running the AO system as outlined
in the next seven subcategories.
Mitigation: None
WFS Measurement Errors (2.2.5.1) – This represents the limitations of the wavefront sensor due to
detector efficiency limitations and read noise. These should include scintillation errors, isoplanatic
angle errors, and cross-coupling between the two.
Mitigation: None
DM Residuals (2.2.5.2) – After flattening the DM, residual aberrations on spatial scales smaller than the
inter-actuator spacing remain.
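The size of this residual is commonly estimated with the Kolmogorov fitting-error scaling. The sketch below is illustrative only: the coefficient, actuator pitch, and r0 values are assumptions, not DKIST budget inputs.

```python
def fitting_error_rad2(actuator_pitch_m, r0_m, kappa=0.28):
    """Residual ("fitting error") wavefront variance in rad^2 after an
    ideal DM correction: sigma^2 = kappa * (d / r0)^(5/3).

    kappa ~ 0.28 is a commonly quoted coefficient for continuous-facesheet
    mirrors; the true value depends on the actuator influence functions.
    """
    return kappa * (actuator_pitch_m / r0_m) ** (5.0 / 3.0)

# Illustrative numbers only: 10 cm actuator pitch projected onto the
# pupil, with r0 = 12 cm at the science wavelength.
print(fitting_error_rad2(0.10, 0.12))  # ~0.21 rad^2
```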
WFS-DM misalignment (2.2.5.3) – Dynamic misalignment between the WFS lenslet array and the DM
actuator grid can occur because of mechanical and/or thermal flexure of optical and mount systems
that are located in the optical path between the DM and the wavefront sensor lenslet array. This
misalignment leads to reconstruction errors that cause spurious high frequency waves on the
reconstructed wavefront, which in turn produce faint ghost images and hence reduced Strehl.
Mitigation: The QSA system is expected to keep the image of the pupil aligned with respect to the
DM (M9) actuator grid. Residual static misalignment between the DM and wavefront sensor lenslet
array can be calibrated out by the interaction matrix.
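The Strehl loss from a residual wavefront error like the one described above can be sketched with the extended Maréchal approximation. The 50 nm residual used below is purely illustrative, not a DKIST budget value.

```python
import math

def strehl_from_rms(rms_wfe_nm, wavelength_nm):
    """Extended Marechal approximation: S ~ exp(-(2*pi*sigma/lambda)^2).

    Reasonable for small residuals (Strehl above roughly 0.1); both
    arguments are in nanometers.
    """
    phase_rms = 2.0 * math.pi * rms_wfe_nm / wavelength_nm
    return math.exp(-phase_rms ** 2)

# Illustrative only: 50 nm RMS of residual wavefront error at 500 nm
# costs roughly a third of the Strehl.
print(strehl_from_rms(50.0, 500.0))  # ~0.67
```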
Non Common Path (NCP) Errors (2.2.5.4) – Non-common path aberrations are caused by optical
elements that feed the science instrument and are not common to the wavefront sensor optical path.
Mitigation: Phase Diversity
Noise on Reference Slopes (2.2.5.5) – The reference slopes are measured using a back-illuminated pinhole
placed at the entrance aperture stop of the wavefront sensor. While the pinhole provides the maximum
possible subaperture image contrast, some residual wavefront sensor noise affects this measurement.
Mitigation: None
Noise on Interaction Matrix (IM) (2.2.5.6) – The interaction matrix is recorded by poking actuators and
measuring the response of the wavefront sensor, with the back-illuminated pinhole again serving as the
target. This budget term is the noise inherent in that measurement.
Mitigation: None
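The poke-and-measure calibration described above, and the way measurement noise propagates into the reconstructor, can be sketched with a toy least-squares model. The dimensions and noise level are assumptions for illustration, not DKIST parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a real system has hundreds of actuators/subapertures.
n_act, n_slopes = 4, 12

# "True" system response: WFS slopes produced per unit actuator poke.
true_response = rng.normal(size=(n_slopes, n_act))

# Record the interaction matrix column by column: poke each actuator
# and measure the slopes. Noise on each poke measurement propagates
# directly into the recorded matrix (the error this budget entry covers).
noise_rms = 0.01
im = np.empty((n_slopes, n_act))
for j in range(n_act):
    poke = np.zeros(n_act)
    poke[j] = 1.0
    im[:, j] = true_response @ poke + rng.normal(scale=noise_rms, size=n_slopes)

# Least-squares command (reconstruction) matrix via the pseudo-inverse.
cmat = np.linalg.pinv(im)

# Reconstructing a known actuator pattern exposes the residual error
# introduced by the noisy calibration.
target = rng.normal(size=n_act)
recovered = cmat @ (true_response @ target)
print(np.abs(recovered - target).max())  # small, but nonzero
```

Averaging repeated pokes (or increasing the poke amplitude within the linear range) reduces this term, at the cost of calibration time.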
Anisoplanatism (2.2.5.7) – Also known as generalized anisoplanatism, this error is caused by our
sampling of a 10-arcsec square field of view. Instead of measuring a point source at the center of the
field of view, our HOAO wavefront sensor averages over a finite inverted pyramid of angles. While
this has a negligible impact within the ground layer, high-altitude seeing at high telescope zenith
distance will cause this term to be significant.
Mitigation: None short of MCAO.
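The classical scaling for this term is sigma^2 = (theta / theta0)^(5/3). The sketch below uses illustrative assumed values for the field offset and isoplanatic angle, not measured site parameters.

```python
def aniso_error_rad2(offset_arcsec, theta0_arcsec):
    """Classical anisoplanatic wavefront variance: (theta / theta0)^(5/3)."""
    return (offset_arcsec / theta0_arcsec) ** (5.0 / 3.0)

# Illustrative only: a point 5 arcsec off-axis (half the 10-arcsec WFS
# field) with an assumed isoplanatic angle of 8 arcsec. theta0 shrinks
# as high-altitude turbulence strengthens and as zenith distance grows,
# which is why this term matters at large zenith distances.
print(aniso_error_rad2(5.0, 8.0))  # ~0.46 rad^2
```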
Instrument:
Beam Splitters (3.1.1) – Each beam splitter in the optical train will introduce static wavefront errors due
to manufacturing tolerance.
Mitigation: Static errors for one instrument can be measured and programmed onto the DM, but this
would be at the expense of the performance of any other instrument in simultaneous use.
Modulator Wavefront (3.1.2) – Wavefront errors introduced by imperfect modulators. These will be static
for some types of modulators and dynamic for others.
Mitigation: Static errors for one instrument can be measured and programmed onto the DM, but this
would be at the expense of the performance of any other instrument in simultaneous use.
Residual Design Errors (3.1.3) – Static wavefront errors introduced by an imperfect base optical design,
including both the DKIST telescope design and the instrument design. These two sources are
inseparable because the instrument designs largely compensate for residual errors in the telescope
design. In practice this comes from the end-to-end analysis of the ZEMAX file for each instrument
plus telescope.
Mitigation: Static errors for one instrument can be measured and programmed onto the DM, but this
would be at the expense of the performance of any other instrument in simultaneous use. Reportedly,
several sources can now produce refractive static-error correctors that, when placed at a pupil, can
address a large fraction of these errors if the errors can be measured and remain stable.
Instrument Tolerance Errors (3.1.4) – Includes both manufacturing tolerance errors and misalignment
tolerance errors on the instrument feed optics and internal optics.
Mitigation: Static errors for one instrument can be measured and programmed onto the DM, but this
would be at the expense of the performance of any other instrument in simultaneous use. Reportedly,
several sources can now produce refractive static-error correctors that, when placed at a pupil, can
address a large fraction of these errors if the errors can be measured and remain stable.
Camera MTF (3.1.5) – The modulation transfer function of a typical CMOS device is not perfect. The
primary cause is that not all electrons generated by a photon interaction with a pixel will be collected
within that pixel. A significant amount of charge will diffuse to adjacent pixels. This problem
becomes worse at high incident angles (fast focal ratios) and at redder wavelengths where the photons
penetrate farther.
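The combined effect of the finite pixel aperture and charge diffusion can be sketched as the product of a sinc term and a Gaussian roll-off. The diffusion width used below is an assumed, illustrative value, not a measured detector property.

```python
import math

def pixel_mtf(f_cyc_per_pix, diffusion_sigma_pix):
    """System MTF at spatial frequency f (cycles/pixel) for a square
    pixel aperture plus Gaussian charge diffusion.

    The geometric pixel aperture contributes sinc(f); Gaussian charge
    diffusion of width sigma (pixels) adds exp(-2*(pi*sigma*f)^2).
    sigma is an assumption here; it grows at redder wavelengths, where
    photons convert deeper in the silicon.
    """
    if f_cyc_per_pix == 0:
        sinc = 1.0
    else:
        sinc = math.sin(math.pi * f_cyc_per_pix) / (math.pi * f_cyc_per_pix)
    diffusion = math.exp(-2.0 * (math.pi * diffusion_sigma_pix * f_cyc_per_pix) ** 2)
    return abs(sinc) * diffusion

# At Nyquist (0.5 cyc/pixel) the ideal pixel already passes only ~64%
# contrast; an assumed 0.4 px of charge diffusion roughly halves that.
print(pixel_mtf(0.5, 0.0))
print(pixel_mtf(0.5, 0.4))
```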
Beam Path Non-Common Path (3.2.1) – These are non-common path seeing errors between the optics
internal to the wavefront sensor and the beam path from the wavefront-correction beam splitter to the
entrance aperture of the instrument (e.g., slit jaws where applicable).
Mitigation: Possible mitigation by tip-tilt or other low-order wavefront sensing at the instrument.
While this is not currently planned, it is an upgrade option should the need arise.
Internal Instrument Non-Common Path (3.2.2) – These are non-common path seeing errors between the
optics internal to the wavefront sensor and the beam path within the instrument.
Mitigation: None
Modulator Wobble (3.2.3) – The image within an instrument will wobble in the focal plane if the
modulator is a rotating retarder located at a pupil position. The cause would be manufacturing and
alignment tolerances.