Project Documentation Document SPEC-0107
Rev A
Advanced Technology Solar Telescope 950 N. Cherry Avenue Tucson, AZ 85719 Phone 520-318-8102 [email protected] http://atst.nso.edu Fax 520-318-8500
Visible Broadband Imager Critical Design Definition
William McBride, Scott Gregory, Andrew Ferayorni, Friedrich Wöger
VBI Instrument Group
September 12, 2012
SPEC-0107 VBI CDD
SPEC-0107, Rev A Page i
REVISION SUMMARY:
1. Date: December 16, 2010 Revision: Draft 1 Changes: Initial document - started with SPEC-0089 VBI Preliminary Design Definition.
2. Date: June 30, 2011 Revision: Draft 2 Changes: Major Clean-up
3. Date: July 20, 2011 Revision: Draft 3 Changes: Prepared for VBI Blue CDR
4. Date: September 12, 2012 Revision: Rev A Changes: Prepared for VBI Red CDR – move compliance matrix to CMX-0001. Initial formal release.
Table of Contents
1 OVERVIEW .... 1
  1.1 SCOPE OF THE DOCUMENT .... 1
  1.2 PDR RECOMMENDATIONS .... 1
  1.3 CDR DELIVERABLES .... 1
  1.4 RELATED DOCUMENTS .... 3
  1.5 RELATED ATST PROJECT DOCUMENTS .... 3
  1.6 INTERFACE CONTROL DOCUMENTS AND DRAWINGS .... 3
  1.7 SPECIFIC DEFINITIONS AND TERMINOLOGY .... 3
  1.8 VBI TEAM ORGANIZATION .... 5
  1.9 COMPLIANCE MATRIX .... 5
2 INTRODUCTION TO THE VBI DESIGN .... 6
3 VBI OPTICAL DESIGN .... 7
  3.1 VBI OPTICAL DESIGN REQUIREMENTS .... 8
  3.2 INTERFACE TO VBI .... 8
    3.2.1 Image Quality .... 10
    3.2.2 Angle of Incidence .... 10
    3.2.3 Pupil Footprints on Filters .... 12
    3.2.4 Grid Distortion .... 12
  3.3 OPTICAL TOLERANCE ANALYSIS AND ERROR BUDGET .... 14
    3.3.1 The Zemax Tolerance Model .... 14
    3.3.2 Monte Carlo Results .... 16
  3.4 OPTICAL ALIGNMENT .... 18
4 VBI HARDWARE DESIGN .... 20
  4.1 CAMERA .... 22
  4.2 FILTER WHEEL .... 22
    4.2.1 Filter Wheel Repeatability Testing .... 25
    4.2.2 Filter Wheel Vibration Testing .... 27
    4.2.3 Modeling and Design Analysis .... 28
  4.3 CAMERA STAGE .... 29
    4.3.1 Modeling and Design Analysis .... 32
  4.4 FOCUS STAGE .... 32
    4.4.1 Modeling and Design Analysis .... 34
  4.5 OBJECTIVE LENS MOUNT .... 35
  4.6 FOLD MIRROR #1 MOUNT .... 35
  4.7 FOLD MIRROR #2 MOUNT .... 37
  4.8 FIELD & IMAGING LENS MOUNTS .... 37
5 CONTROL SYSTEM DESIGN .... 38
  5.1 MOTOR DRIVES .... 39
  5.2 FILTER WHEEL DRIVE .... 39
    5.2.1 Overview .... 39
    5.2.2 Analysis .... 39
  5.3 CAMERA STAGE DRIVE .... 40
    5.3.1 Overview .... 40
    5.3.2 Analysis .... 40
  5.4 REGENERATION .... 40
    5.4.1 Overview .... 40
    5.4.2 Analysis .... 40
  5.5 POWER FEED .... 40
    5.5.1 Overview .... 40
    5.5.2 Analysis .... 41
6 THERMAL SYSTEMS .... 42
7 SOFTWARE DESIGN .... 43
  7.1 INTRODUCTION .... 43
  7.2 TERMINOLOGY .... 44
    7.2.1 General Terminology .... 44
    7.2.2 VBI Instrument Controller Terminology .... 44
  7.3 DESIGN OVERVIEW .... 46
    7.3.1 Context .... 46
    7.3.2 System Modules .... 46
    7.3.3 Instrument Control .... 47
    7.3.4 Data Processing and Display .... 48
    7.3.5 User Interfaces .... 49
  7.4 SOFTWARE SYSTEM USE CASES .... 51
    7.4.1 Check Observing Task Configuration Feasibility .... 51
    7.4.2 Simulate Observing Task Configuration .... 52
    7.4.3 Input VBI Observing Task Parameters .... 53
    7.4.4 Execute VBI Observing Task Configuration .... 54
    7.4.5 Process Data .... 60
    7.4.6 Verify Data Quality .... 61
    7.4.7 View Final Data .... 62
    7.4.8 Control / Monitor Instrument .... 63
  7.5 GRAPHICAL USER INTERFACES DESIGN .... 64
    7.5.1 Overview .... 64
    7.5.2 VBI Explorer .... 64
    7.5.3 OCS Instrument Tabs .... 65
    7.5.4 Engineering GUI .... 66
    7.5.5 Image Data Displays .... 68
  7.6 INSTRUMENT CONTROLLER DESIGN .... 70
    7.6.1 Introduction .... 70
    7.6.2 Decomposition Description .... 71
    7.6.3 Dependency Description .... 72
    7.6.4 Interface Description .... 76
    7.6.5 Detailed Design .... 79
  7.7 OBSERVING TASK SCRIPTS DESIGN .... 100
  7.8 DATA PROCESSING PIPELINE DESIGN .... 115
    7.8.1 Introduction .... 115
    7.8.2 Decomposition Description .... 116
    7.8.3 Dependency Description .... 136
    7.8.4 Interface Description .... 150
    7.8.5 Detailed Design .... 184
  7.9 OTHER DELIVERABLES .... 221
    7.9.1 Documentation .... 221
    7.9.2 Security .... 221
  7.10 SOFTWARE ANALYSIS .... 222
    7.10.1 Real-time Performance for Time Critical Actions .... 222
    7.10.2 Synchronization and Timing for VBI Observing Use Cases .... 223
    7.10.3 Speckle Image Reconstruction .... 230
    7.10.4 Using jCUDA to bridge Java to CUDA C libraries .... 237
8 HAZARD ANALYSIS .... 240
9 COST AND SCHEDULE ESTIMATES .... 241
  9.1 COST ESTIMATE .... 241
    9.1.1 Final Design .... 242
    9.1.2 Construction Phase Labor .... 242
    9.1.3 Construction Phase Non-labor .... 242
    9.1.4 Construction Phase Totals .... 243
    9.1.5 Detailed Budget Items .... 243
    9.1.6 Contingency .... 244
  9.2 PROJECT SCHEDULE .... 245
  9.3 RISK ASSESSMENT .... 246
    9.3.1 VBI Risk Register .... 248
    9.3.2 Risk Description and Mitigation Plan .... 250
10 CONSTRUCTION PHASE PLANNING .... 253
  10.1 FABRICATION PLAN .... 253
  10.2 QUALITY CONTROL AND QUALITY ASSURANCE PLAN .... 253
    10.2.1 Definitions .... 253
    10.2.2 Quality Control Tasks .... 253
    10.2.3 Quality Assurance Tasks .... 255
  10.3 VERIFICATION TEST PLAN .... 257
    10.3.1 Unit Tests .... 257
    10.3.2 General Verification of VBI ISRD requirements .... 262
    10.3.3 Science Verification Plan .... 264
  10.4 TRANSPORTATION PLAN .... 267
  10.5 IT&C SUPPORT PLAN .... 267
1 OVERVIEW
1.1 SCOPE OF THE DOCUMENT
The VBI Critical Design Definition (CDD) describes the design of the VBI instrument as conceived and
developed by the VBI instrument design team. The design decisions were developed and presented in
SPEC-0089, the VBI Preliminary Design Definition, and reviewed in the Preliminary Design Review in
Sunspot, NM on December 14th and 15th, 2010. The recommendations of the design review committee
are incorporated into the VBI critical design as presented in this document.
The design presented in the CDD is based upon the requirements in SPEC-0090 - VBI Design
Requirements Document (DRD) which itself flows from SPEC-0054 - Instrument Science Requirements
Document (ISRD). The ISRD captures the purpose and intent of the appropriate instrument requirements
as defined in SPEC-0001 - ATST Science Requirements.
1.2 PDR RECOMMENDATIONS
The VBI Preliminary Design Review (PDR) was conducted in December 2010. The review committee
made recommendations to the VBI team. These recommendations can be found in the
ATST_VBI_PDR_Final_Report.
The recommendations are summarized below:
Ensure that the VBI is capable of operating in parasitic, stand-alone, and customized modes
Investigate a possible burn or fire hazard at the instrument focus
Clarify the Speckle reconstruction project plan and cost
Clarify the Speckle system and DHS interfaces
Define engineering product deliverables for CDR
Include labor inflation costs in the budget
Provide external schedule dependencies in the instrument schedule
Provide the review committee with an organizational chart
Consider an x-y filter stage design instead of the filter wheel design
Consider the need for an enclosure or scattered light baffling
Include more examples of realistic observational scenarios
Provide more detail in the budgets and schedules presented
1.3 CDR DELIVERABLES
The list of recommended deliverables for CDR is detailed in PMCS-0017 - Instrument Management Plan.
The deliverables for the VBI CDR are primarily contained in two documents: the Design Requirements
Document (DRD) and the Critical Design Definition (CDD), the document you are reading. In addition,
the Interface Control Documents (ICDs) are included; the ICDs define the VBI interfaces to the
project in detail and serve as negotiated contracts between the VBI team and the project.
The DRD contains the flow-down of requirements from the Instrument Science Requirements Document
(ISRD), along with the development and explanation of all requirements for the VBI.
Requirements are then captured and summarized in the CMX-0001 compliance matrix, along with all
external project requirements (see 1.9 Compliance Matrix).
The CDD contains the details of the VBI design. The CDD contains the following deliverables:
VBI Optical Design
o Optical tolerance analysis
o Optical error budget
o Optical alignment plan
VBI hardware design
o Motion stages
o Optical mounts
o Error budget
Control system design
o Delta Tau control system
o Test results
Thermal system
Software design
o Use cases
o Speckle image reconstruction
o Interfaces
o Detailed design
Hazard Analysis
Budgets
Schedule
VBI risk assessment
Construction Phase
o Fabrication Plan
o Quality control and Quality assurance plan
o Verification test plan
o Transportation plan
o IT&C support plan
The document CMX-0001 VBI Compliance Matrix shows the traceability of CDD design elements back
to their source DRD requirements.
1.4 RELATED DOCUMENTS
SPEC-0001 - Science Requirements Document (SRD)
SPEC-0054 - VBI Instrument Science Requirements Document (ISRD)
SPEC-0089 – VBI Preliminary Design Definition
SPEC-0090 - VBI Design Requirements Document (DRD)
1.5 RELATED ATST PROJECT DOCUMENTS
SPEC-0005 - Software Design Requirements
SPEC-0012 - ATST Acronym List and Glossary
SPEC-0013 - Software Concepts Definitions
SPEC-0014 - Software Design
SPEC-0022 - Software Users’ Manual
SPEC-0023 - ICS Specification
SPEC-0036 - Operational Concepts Definitions
SPEC-0037 - Risk Management Plan
SPEC-0045 - Contingency Management Plan
SPEC-0061 - ATST Hazard Analysis Plan
SPEC-0063 - Interconnects and Services Specification Document
PMCS-0017 - Instrument Management Plan
TN-0065 - Data Handling System Reference Design Study and Analysis
TN-0089 - Java Engineering Screens Users Manual
TN-0102 - Instrument Control System Design Document
TN-0114 - ASI Design
TN-0154 – Motion Controller Performance
1.6 INTERFACE CONTROL DOCUMENTS AND DRAWINGS
ICD 3.1.3/3.2 Coudé Station to VBI
ICD 3.1.4/3.2 Instrument Control System to VBI
ICD 3.1.4/3.6 ICS to Camera Systems
ICD 3.2/3.6 VBI to Camera Systems
1.7 SPECIFIC DEFINITIONS AND TERMINOLOGY
Acronym Meaning
ATST Advanced Technology Solar Telescope
AO Adaptive Optics
CWL Central Wavelength
DRD Design Requirements Document
FWHM Full Width at Half Maximum
FOV Field-of-view
fps Frames per second
ISRD Instrument Science Requirements Document
MFBD Multi-Frame Blind Deconvolution
ms millisecond, 10⁻³ second
nm nanometer, 10⁻⁹ meter
OCD Operational Concepts Definition
PD Phase Diversity
PDD Preliminary Design Definition
pm picometer, 10⁻¹² meter
SRD Science Requirements Document
VBI Visible Broadband Imager
1.8 VBI TEAM ORGANIZATION
1.9 COMPLIANCE MATRIX
The requirements in this document all trace back to the Visible Broadband Imager Design Requirements
Document, SPEC-0090 (DRD), through the compliance matrix in document CMX-0001 VBI Compliance
Matrix. CMX-0001 serves two purposes. First, it lists all DRD requirement numbers and traces them to
their original source documents and source requirement numbers, providing clear visibility into where
the DRD requirements originated. Second, it traces all design requirements from the DRD to the
section(s) of the design documents that confirm compliance with each requirement. Figure 2 below
illustrates the flow-down from requirements, to DRD, to CDD, to compliance matrix.
Figure 1: VBI Team Org Chart

Figure 2: Requirements Flow-Down (Various ATST Project Specifications, the VBI ISRD SPEC-0054, and the
VBI OCD SPEC-0106 feed the Developed Requirements; these flow into the VBI DRD SPEC-0090, then the VBI
CDD SPEC-0107, and finally the CMX-0001 Compliance Matrix, which provides Final Design Traceability.)
2 INTRODUCTION TO THE VBI DESIGN
The VBI is a first-light imaging instrument for the ATST. The VBI will provide images and movies of
solar features at the highest spatial resolution achievable with the ATST and at high temporal cadence.
Once commissioned, the VBI will provide solar images at visible wavelengths with considerably higher
spatial resolution than any previous solar observations, over a field of view of 2 arc minutes.
There are two separate channels in the VBI, the blue channel and the red channel, each capable of
imaging four discrete wavelengths and operated either simultaneously or independently. The future of the
red channel is still being assessed, and the remainder of this document focuses only on the blue channel
design.
The VBI is located in the Coudé Lab and receives a light feed from a beam splitter immediately following
the adaptive optics deformable mirror. The beam splitter provides blue light to the VBI and passes the
remaining light to other instruments, making it possible to operate the telescope with multiple instruments
simultaneously. While the VBI is a non-polarimetric instrument, it can address specific scientific
questions regarding small-scale solar features that must be observed at high spatial and temporal
resolution. In addition, its large field of view makes it ideally suited to the role of context viewer
for other instruments.
In order to achieve the highest possible image quality, the VBI utilizes both the adaptive optics system of
the ATST and post-facto speckle image reconstruction. Having available near real-time image
reconstruction will give personnel at the ATST the ability to observe fine detail in solar structures in near
real-time (e.g. for target selection), and will provide relief to the data handling system by eliminating the
need to store raw data (although the option of storing raw data for a limited time period will be available).
The VBI design was made as simple as possible, consistent with meeting requirements for high image
quality and cadence. The VBI consists of four lenses, a filter wheel with four filter positions, two fold
mirrors, and a camera. The field of view of the VBI requires a fast acquisition sensor with a format that is
larger than what is currently available. Thus, the camera has been mounted on an x-y stage so that it can
scan the entire field of view. This allows the option of covering the entire 2 arc-minute field of view of
the instrument by taking a mosaic of images that can later be stitched together. As camera formats
increase, it will be feasible to replace the camera. The VBI will require a 12K × 12K camera to take a 2
arc-minute, diffraction-limited image in one exposure at its bluest wavelength.
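As a rough plausibility check on the 12K format quoted above, the numbers can be worked directly. The aperture and sampling assumptions below are not stated in this section: a 4 m ATST aperture and Nyquist sampling of the diffraction limit are assumed.

```python
import math

# Rough plausibility check for the 12K x 12K camera format.
# Assumptions (not stated in this section): 4 m ATST aperture,
# Nyquist sampling of the diffraction limit (pixel scale = lambda / (2 D)).
D = 4.0                 # telescope aperture [m] (assumed)
lam = 390e-9            # bluest VBI wavelength [m]
fov_arcsec = 120.0      # 2 arc-minute field of view

rad_to_arcsec = 180.0 / math.pi * 3600.0
diffraction_limit = lam / D * rad_to_arcsec   # angular resolution, arcsec
pixel_scale = diffraction_limit / 2.0         # Nyquist sampling, arcsec/pixel
pixels_needed = fov_arcsec / pixel_scale      # pixels across the full FOV

print(f"diffraction limit: {diffraction_limit:.4f} arcsec")
print(f"pixels across FOV: {pixels_needed:.0f}")
```

Under these assumptions the result is roughly 12,000 pixels across the field, consistent with the "12K" figure in the text.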
3 VBI OPTICAL DESIGN
Figure 3: VBI Optical Design
The primary driver for the VBI optical design choice was to keep the instrument as simple as possible
consistent with achieving the science requirements. The most technically challenging aspect of the design
is the set of filters, since in order to produce scientifically useful data, the design must achieve very
narrow bandwidths. The use of interference filters was mandated by the budget, but the filters required
are of the largest diameter that can be produced by filter vendors. Considering the field of view of the
VBI, the use of smaller diameter filters is not possible because the angle of incidence of the beam on the
filters would cause a prohibitive shift of the filter central wavelength towards the outer parts of the field
of view. A balance was found among limiting the angle of incidence (AOI) on the filter, the f-ratio of
the instrument, and the filter central wavelength shift. Even with this balance, the field of view of the
VBI had to be limited to 2 arc minutes square to prevent the filter size from increasing beyond 70 mm,
which is the largest manufacturable size without initiating a design effort beyond the scope of the
VBI effort.
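The wavelength shift that drives this balance can be sketched with the standard interference-filter AOI model, λ(θ) = λ₀·√(1 − (sin θ / n_eff)²). This is a generic model, not the VBI filter specification, and the effective index n_eff below is an assumed typical value.

```python
import math

# Illustrative model of interference-filter central-wavelength (CWL) shift
# with angle of incidence: lambda(theta) = lambda0 * sqrt(1 - (sin(theta)/n_eff)**2).
# n_eff is an assumed typical effective index, not a VBI filter parameter.
def cwl_shift_pm(lambda0_nm, aoi_deg, n_eff=1.5):
    """Blue shift of the central wavelength, in picometers."""
    s = math.sin(math.radians(aoi_deg)) / n_eff
    shifted = lambda0_nm * math.sqrt(1.0 - s * s)
    return (lambda0_nm - shifted) * 1000.0  # nm -> pm

# Shift at the VBI's maximum filter acceptance angle of ~1.4 degrees:
print(f"{cwl_shift_pm(430.0, 1.4):.0f} pm")
```

Under these assumptions the shift at the 1.4° acceptance-angle limit is a few tens of picometers, small compared with typical narrowband filter passbands; at larger angles the shift grows quadratically, which is why the AOI had to be constrained.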
The chosen solution was to put the filter at a pupil; this allows the use of filter diameters that can be
manufactured. A further complication of placing the filter at the pupil (beyond the AOI
constraints) is that the design becomes very sensitive to the transmitted wavefront error of the filter. This
error is primarily caused by aberrations in the coating of the filter and is difficult for the filter
manufacturer to control. The VBI team met with filter vendors and determined that the filters can be
manufactured within the budget constraints of the VBI.
Figure 3 shows the four-lens design, which includes two doublet and two singlet lenses. The collimator
lens compensates the axial chromatic aberration for the various filters and travels a total of 3.2 mm
between the longest and shortest wavelengths. The F/31 focal plane is tilted 6.54° (due to the tilted field
provided by the ATST) and is fixed for all wavelengths. Focusing is accomplished by translating the
collimator lens along the optical axis.
The objective lens forms an F/13 focal plane ~2600 mm downstream. The doublet is composed of S-TIL6
and BK7 glass and has one conic surface. The silica field lens works with the collimator to form a
65 mm diameter pupil at the location of the filter. The collimator is another silica singlet whose main
function is to collimate the F/13 focal plane. The image doublet is very similar in composition to the relay
doublet and images the F/31 focal plane at the camera detector.
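The relay geometry described above can be sanity-checked from the F-numbers and the pupil diameter via f/# = f / D_pupil. The focal lengths computed below are inferred for illustration only; they are not quoted VBI design values.

```python
# Back-of-the-envelope check of the relay geometry:
# an F/13 beam collimated into a 65 mm pupil, then reimaged at F/31.
# Focal lengths are inferred from f-number = focal_length / pupil_diameter;
# they are illustrative, not quoted design values.
pupil_mm = 65.0

f_collimator = 13.0 * pupil_mm   # focal length needed to collimate the F/13 beam
f_imager = 31.0 * pupil_mm       # focal length forming the F/31 focal plane
magnification = 31.0 / 13.0      # plate-scale change between the two foci

print(f"collimator focal length ~ {f_collimator:.0f} mm")
print(f"imaging focal length   ~ {f_imager:.0f} mm")
print(f"relay magnification    ~ {magnification:.2f}x")
```

The inferred collimator focal length of roughly 845 mm is also consistent with the ~2600 mm separation of the F/13 focal plane from the objective quoted above being much longer than the collimated leg of the relay.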
3.1 VBI OPTICAL DESIGN REQUIREMENTS:
The following were the requirements for the VBI optical design:
FOV: ≈ 2.8 arcmin (round)
Performance: Diffraction limited over the FOV
F#: 31 at the detector plane
Wavelength range: 390 to 490 nm
Optimized wavelength: 430 nm
Chromatic focal shift: less than 25 mm at the detector between 390 and 490 nm
Filter acceptance angle: ≤ 1.4 degrees
The filter must be located near a pupil within a collimated field.
Filter clear aperture: 65 mm
3.2 INTERFACE TO VBI
The optical interface to VBI is the facility beamsplitter shown in the red circle below. This interface is
described in detail in the Coudé Station to VBI Interface Control Document (ICD 3.1.3 to 3.2).
Figure 4: VBI-ATST Optical Interface
3.2.1 Image Quality
Notes on Spot Diagrams (Figure 5):
Spot diagrams at focus, FOV: 2.8 arcmin.
Circles indicate Airy disk sizes for corresponding wavelengths.
3.2.2 Angle of Incidence
The field angles produce the central ray tilt, and collimation errors add further ray tilts. In this
design the largest central ray tilt is 1.383°, with a maximum possible tilt of 1.399° (Figure 6).
Figure 5: Image Quality represented by Spot Diagrams. Left: Spot Diagram for 390 nm. Right: Spot Diagram for 490
nm. The circles represent the Airy disc.
Figure 6: Angular Aberration on the VBI Blue Filters
3.2.3 Pupil Footprints on Filters
Figure 7: Pupil Footprints on all Filters (all wavelengths). Filter CA is 65 mm, Filter Diameter is 70 mm.
3.2.4 Grid Distortion
The maximum grid distortion is below 0.16% at all wavelengths (Figure 8, Figure 9).
Figure 9: Grid Distortion Tables for 390 nm (top), 430 nm (middle), and 490 nm (bottom)
Figure 8: Distortion Grid for 430 nm. The numbering of grid points (see tables below) follows the scheme "left to
right" then "top to bottom".
3.3 OPTICAL TOLERANCE ANALYSIS AND ERROR BUDGET
The tolerance analysis has been done by two independent optical engineers. Following the VBI PDR,
two fold mirrors were added and the second tolerance analysis was completed. The second analysis gave
the team confidence that the tolerances are well defined. The top-down error budget given in the DRD is
½ wave P-V at 430 nm. This error budget includes the optical interface, optical manufacturing error,
mechanical misalignment, and the filter wavefront error.
The most critical tolerances found in the recent analysis include:
The flatness of the two fold mirrors. They need to be 1/8 wave P-V at 633 nm.
The homogeneity of the glass, particularly the objective lens elements.
The radii on the objective lens elements.
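A top-down error budget of this kind is typically rolled up by combining independent contributors in quadrature (RSS). The four contributor names below come from the text above; the individual allocations are hypothetical illustrations, not the actual VBI budget breakdown.

```python
import math

# Illustrative error-budget roll-up: independent contributors combined in
# quadrature (RSS). Contributor names come from the text; the equal 0.25-wave
# allocations are hypothetical, not the actual VBI allocations.
budget_total_waves = 0.5          # top-down budget: 1/2 wave P-V at 430 nm
contributors = {                  # hypothetical allocations [waves P-V]
    "optical interface": 0.25,
    "optical manufacturing error": 0.25,
    "mechanical misalignment": 0.25,
    "filter wavefront error": 0.25,
}

rss = math.sqrt(sum(v * v for v in contributors.values()))
print(f"RSS of contributors: {rss:.3f} waves P-V (budget {budget_total_waves})")
```

With four equal allocations, each contributor may be as large as half the total budget while the quadrature sum still meets it, which is the usual motivation for RSS budgeting over straight addition.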
3.3.1 The Zemax Tolerance Model
The table below is the Zemax model used in the Monte Carlo analysis performed by ASE Optics. The
table shows the defined nominal, min, and max ranges of the compensators and tolerances. It uses just the
key tolerances that individually cause more than about 0.005 increase in RMS WFE. The table is used to
generate the Monte Carlo analysis shown in Figure 10.
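The Monte Carlo procedure itself can be sketched schematically: each toleranced parameter is drawn uniformly within its [min, max] range, a merit function is evaluated, and statistics are accumulated over many trials. The real analysis ray-traces the Zemax model; the parameter names and linear wavefront-error sensitivities below are purely illustrative stand-ins.

```python
import math
import random

# Schematic Monte Carlo tolerancing sketch. Each parameter is perturbed
# uniformly within its range; per-parameter WFE contributions are combined
# in quadrature. Sensitivities are invented stand-ins, NOT Zemax results.
tolerances = {                    # name: (min, max) perturbation range
    "objective_radius_1": (-22.0, 22.0),
    "objective_radius_2": (-8.0, 8.0),
    "collimator_decenter": (-2.0, 2.0),
    "fold_mirror_flatness": (-0.125, 0.125),
}
sensitivity = {                   # assumed d(RMS WFE)/d(param), waves per unit
    "objective_radius_1": 0.0004,
    "objective_radius_2": 0.001,
    "collimator_decenter": 0.004,
    "fold_mirror_flatness": 0.05,
}

def trial(rng):
    """One Monte Carlo trial: RSS of the individual WFE contributions."""
    return math.sqrt(sum(
        (sensitivity[name] * rng.uniform(lo, hi)) ** 2
        for name, (lo, hi) in tolerances.items()))

rng = random.Random(0)            # seeded for repeatability
wfe = sorted(trial(rng) for _ in range(10_000))
print(f"median RMS WFE:  {wfe[5000]:.4f} waves")
print(f"90th percentile: {wfe[9000]:.4f} waves")
```

The statistic reported from such a run is typically a high percentile of the merit distribution, which is then compared against the error budget, as in the Monte Carlo results of Figure 10.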
OPER #  Type  Surf #  Nominal    Min        Max       Comment

COMPENSATORS
1       TOFF  -       -          -          -         Element spacing compensators:
2       COMP  68      -2533.092  -300       300       Objective focus
3       COMP  74      -885.102   -200       200       Collimator focus
4       COMP  92      1449.872   -300       300       Image focus
5       TOFF  -       -          -          -         Image lens x-y position compensator:
6       CPAR  81      0.04       -3         3         x position
7       CPAR  81      0.474      -3         3         y position
8       TOFF  -       -          -          -         Focal plane tilt compensator:
9       CPAR  93      6.795      -3         3         x tilt
10      CPAR  93      -0.025     -3         3         y tilt
11      TOFF  -       -          -          -         Config 2,3 focus compensator:
12      CMCO  6       -1.812     -10        10        Config 2 focus
13      CMCO  6       -3.204     -10        10        Config 3 focus
14      TOFF  -       -          -          -
15      TWAV  -       -          0.633      -         Default test wavelength.
16      TOFF  -       -          -          -         Pupil filter wavefront model:
17      TEZI  80      0          -5.40E-05  5.40E-05

TOLERANCES
18      TOFF  -       -          -          -         Glass homogeneity tolerances:
19      TEZI  63      0          -1.69E-05  1.69E-05  Objective LLF1 grade H3
20      TEZI  65      0          -2.97E-05  2.97E-05  Objective BK7 grade H3
21      TEZI  82      0          -1.04E-05  1.04E-05  Image Lens PSK3 grade H3
22      TEZI  85      0          -7.68E-06  7.68E-06  Image Lens LLF1 grade H3
23      TOFF  -       -          -          -         Tolerances on surface radii:
24      TRAD  63      2132.8     -22        22        Objective
25      TRAD  64      752.6      -8         8         Objective
26      TRAD  65      -3692      -40        40        Objective
27      TRAD  83      133.9      -1.5       1.5       Image lens
28      TRAD  84      133.9      -1.5       1.5       Image lens
29      TOFF  -       -          -          -         Tolerance on conic constants:
30      TCON  65      -16.657    -0.2       0.2       Objective
31      TCON  83      -2.385     -0.03      0.03      Image lens
32      TCON  84      -2.385     -0.03      0.03      Image lens
33      TOFF  -       -          -          -         Index of refraction tolerances:
34      TIND  63      1.548      -1.00E-03  1.00E-03  Objective
35      TIND  64      1.517      -1.00E-03  1.00E-03  Objective
36      TIND  82      1.552      -1.00E-03  1.00E-03  Image lens
37      TIND  84      1.548      -1.00E-03  1.00E-03  Image lens
38      TOFF  -       -          -          -         Element tilt/decenter tolerances:
39      TETX  63      0          -0.1       0.1       Objective lens tilt
40      TEDX  76      0          -2         2         Collimator lens decenter
41      TETX  76      0          -0.2       0.2       Collimator lens tilt
42      TOFF  -       -          -          -         Surface tilt/decenter tolerances:
43      TSTX  63      0          -0.2       0.2       Objective tilt X
44      TSTX  65      0          -0.1       0.1       Objective tilt X
45      TSTX  71      0          -0.2       0.2       Field lens tilt X
46      TSTY  71      0          -0.2       0.2       Field lens tilt Y
47      TSTX  76      0          -0.2       0.2       Collimator lens tilt X
48      TSTY  85      0          -0.4       0.4       Image lens tilt Y
49      TOFF  -       -          -          -         Irregularity tolerances:
50      TIRR  63      0          -0.5       0.5       Objective surface 1
51      TIRR  65      0          -0.5       0.5       Objective surface 3
52      TIRR  76      0          -0.25      0.25      Collimator surface 1
53      TIRR  77      0          -0.25      0.25      Collimator surface 2
54      TIRR  82      0          -0.25      0.25      Image lens surface 1
55      TIRR  83      0          -1         1         Image lens surface 2
56      TIRR  84      0          -1         1         Image lens surface 3
57      TIRR  85      0          -0.25      0.25      Image lens surface 4
58      TOFF  -       -          -          -         Fold mirror irregularity tolerances:
59      TIRR  67      0          -0.125     0.125     Fold mirror 1
60      TIRR  91      0          -0.125     0.125     Fold mirror 2
61      TOFF  -       -          -          -

Table 1: Zemax model used in the Monte Carlo analysis
3.3.2 Monte Carlo Results
The Monte Carlo results indicate that the design will meet the top-down error budget of ½ wave P-V at 430 nm 99% of the time.
Figure 10: Monte Carlo Results
Figure 11: Ranges for the Compensators
3.4 OPTICAL ALIGNMENT
By simplifying the optical design to an all-refractive design, we have significantly mitigated alignment risk; in our earlier reflective designs, the alignment sensitivity of the off-axis parabolas was far more severe.
The dichroic beamsplitter that sends blue light to the VBI is the optical interface and is provided by the facility. It is the responsibility of the appropriate ATST staff to ensure that this interface is properly aligned before the VBI alignment begins.
It is highly likely that the VBI objective and large fold mirror will share an optical table with other facility
fore-optics. Thus, we expect this table to be properly aligned when installing the objective and fold
mirror. The remaining optics and detector will most likely reside on another optical bench.
Installation and alignment will begin by installing the objective at the proper distance from the dichroic beamsplitter. All lenses will have masks that define the center of the lens. With the mask installed, we will use a pinhole at the Gregorian Optical Station (GOS) to center the lens in the optical beam, and will adjust tilt by aligning the reflection from the lens with the beam on the fore-optics. The large fold mirror will be installed next. The optical bench for the remaining optics and detector will then be installed and aligned to a row of threaded holes on the bench, again using the GOS pinhole. The remaining lenses will be installed and aligned using the same procedure.
This type of alignment procedure has been used successfully at the DST for many years. In addition, we
anticipate using the facility wavefront sensor, fiber interferometer, and the VBI detector to optimize the
focal plane optical performance.
Zemax exercises have demonstrated that a fair amount of misalignment (tilt and decenter) of the objective lens and collimator lens is easily correctable by tilting and decentering the field lens and the image doublet. In the example below, the objective and collimator are decentered in X-Y by 2 mm and tilted in X-Y by 0.1 and 0.2 degrees respectively; the field lens and image lenses were then allowed to compensate by moving a small amount in decenter and tilt.
Figure 12: Nominal alignment (left), Misaligned (center), Realigned (right)
4 VBI HARDWARE DESIGN
A general overview of the VBI hardware design is presented in this section. For a detailed discussion of
performance calculations, modeling, and design decisions that led to the current hardware design, see
SPEC-0089 – VBI Preliminary Design Document.
The VBI hardware consists of an optical bench with optical mounts, a linear stage for focusing with the
collimator lens, a rotary stage for changing the filter, and an x-y stage for positioning the camera within
the field of view as shown in Figure 13. Figure 14 and Figure 15 show the VBI instrument in relation to
the ATST feed optics and position on the Coudé floor respectively.
In designing the hardware for the VBI, the design philosophy and requirements of the ATST were carefully considered. The ATST design philosophy dictates the following:
- performance and functionality are obtained through elegance of design;
- part counts are minimized as much as possible;
- off-the-shelf components are preferred;
- efficient and effective manufacturing processes are used;
- a modular design approach aids servicing;
- parts needing alignment are mounted with adjustment screws and locking mechanisms.
Each element’s analysis is presented within its own subsection.
Figure 13: VBI Layout
Figure 14: VBI Layout including ATST Light Feed
Figure 15: VBI Layout on Coude Platform (top down)
4.1 CAMERA
Instrument cameras are being provided by the ATST facility. Currently, no commercial cameras are available with a large enough format to cover the entire field of view of the VBI instrument. Because camera technology evolves rapidly, the ATST approach is to defer the facility camera selection as long as possible to allow the technology to mature. This approach creates the risk that the selected camera's pixel scale may differ from the pixel scale used in the optical instrument design. In the VBI design, this risk has been minimized by the selection of the all-lens design, since the only change needed to adapt to a different pixel scale is replacing the image doublet lens.
The current facility baseline camera is the 5.5-megapixel scientific CMOS PCO Edge. The image size of the VBI at the focal plane is 71.7 mm wide by 71.4 mm high, whereas the camera sensor is 16.6 mm wide by 14.0 mm high. To image the entire 2.0 arc-minute field of view using this camera, it will be necessary to take a mosaic of camera images five wide by six high to produce one full-field image. Larger-format cameras are expected to become available over time, reducing the number of images needed to cover the field of view; it is hoped that a 12K × 12K (150-megapixel) image sensor will eventually be developed so that the entire VBI field can be imaged at once.
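The five-by-six mosaic follows directly from the image and sensor dimensions quoted above; as a quick check (an illustrative Python sketch, ignoring any overlap between mosaic tiles):

```python
import math

# Image size at the VBI focal plane vs. the PCO Edge sensor size (mm).
image_w, image_h = 71.7, 71.4
sensor_w, sensor_h = 16.6, 14.0

# Tiles needed in each direction, with no tile-to-tile overlap assumed.
tiles_wide = math.ceil(image_w / sensor_w)
tiles_high = math.ceil(image_h / sensor_h)
print(tiles_wide, tiles_high)  # 5 6
```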
An alternate baseline camera choice is the 5.5 Megapixel Andor Scientific sCMOS camera.
4.2 FILTER WHEEL
The VBI requires a filter wheel that is larger and faster than commercial off-the-shelf units, so an in-house design was developed (Figure 16, Figure 17). The filter wheel uses a direct-drive servo motor with an absolute ring encoder, as shown in Figure 18.
Figure 16: Filter Wheel Assembly
The motor is an Aerotech S-76-149, 14 pole, frameless, brushless, slotless torque motor. The motor
specifications are shown in Figure 19. The rotor, wheel, and encoder are all mounted to a single, direct
drive shaft, which rotates in a pair of duplex angular contact bearings at the wheel end and a single
floating ball bearing at the other end. A Renishaw 18-bit absolute angle encoder provides ±10 arc-second accuracy with ±5 arc-second resolution. The frame is made of copper because of its high thermal conductivity and its suitability for furnace brazing. The heat generated by the stator coils will be removed by
cooling fluid channels in the copper frame of the motor.
The wheel and other major mechanical components are machined from aluminum and black anodized.
The wheel assembly has been light-weighted while retaining rigidity and is capable of holding either
75mm or 80mm filters (see Figure 18).
Figure 17: Filter Wheel Assembly - side and top section
Figure 18: Wheel Assembly - ring encoder in light gray
Several move times can be achieved; examples are shown in Figure 20.
Figure 19: Aerotech S-76-149-A Motor Specifications
4.2.1 Filter Wheel Repeatability Testing
An experiment was set up to test the prototype filter wheel repeatability while under closed loop servo
control. The test was intended to measure the angular repeatability of the filter wheel by using a laser
reflected from a mirror mounted on the edge of the filter wheel onto a Newport PSD9 laser position
sensor. This sensor can detect laser position changes at the micron level. The sensor was set up at 1
meter from a mirror attached to the edge of the filter wheel as shown in Figure 21. The x-axis of the
sensor was oriented in a horizontal plane parallel to the optical bench and perpendicular to the rotation
direction of the wheel with the y-axis oriented in a vertical plane along the rotation direction of the wheel.
With everything secured to the bench and sitting at rest, there was some fluctuation in the laser position readout that is attributed to local seeing conditions. The magnitude of this fluctuation along the non-moving x-axis at the time this data was recorded was 7 µm; a graph of this data is shown in Figure 22. The spikes toward zero in both graphs represent ±90° moves out of position and back to the position at which the mirror was mounted. Figure 23 shows a plot of the data recorded along the y-axis, in the direction of filter wheel rotation. The data shows the accuracy, repeatability, and stability of the wheel position to be within a total of 14 µm. Half of this fluctuation could be attributed to local seeing conditions and an additional half to the angle doubling at the reflecting mirror, but even taking the total deviation as the positional error of the wheel yields sin⁻¹(0.014 mm / 1000 mm) ≈ 3 arc seconds. This demonstrates that the filter wheel is easily held to a positional accuracy within one encoder count, which equates to a maximum linear error of sin(5″) × 91.5 mm ≈ 0.002 mm.

Figure 20: Comparison of Filter Wheel Move Times
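The small-angle arithmetic above can be reproduced directly; this is an illustrative Python check using the numbers from the text (14 µm total deviation at a 1 m lever arm, and one 5 arc-second encoder count at the 91.5 mm distance quoted):

```python
import math

# Angular error implied by the 14 um total deviation at the 1 m sensor distance.
angle_rad = math.asin(0.014 / 1000.0)
angle_arcsec = math.degrees(angle_rad) * 3600.0
print(f"{angle_arcsec:.1f} arc seconds")  # 2.9, i.e. about 3

# Linear error of one 5 arc-second encoder count over 91.5 mm.
count_rad = math.radians(5.0 / 3600.0)
linear_mm = math.sin(count_rad) * 91.5
print(f"{linear_mm:.4f} mm")  # 0.0022
```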
Figure 21: Filter Wheel Position Test Setup
4.2.2 Filter Wheel Vibration Testing
Vibration of the optical bench by the filter wheel was a concern of the VBI team due to the nature of the
split-bench arrangement that has the objective lens on one bench and the remaining optical train on
another bench. With the fast motions of the filter wheel, vibrations of the primary bench were a concern.
The laser detector has a readout rate of only 5 Hz, so it was unsuitable for vibration testing. An inexpensive accelerometer was therefore obtained and used to measure the acceleration of the optical bench; it can be seen mounted to the bench in Figure 24.
Figure 23: Filter Wheel Detector Position in Direction of Rotation. (Plot of position sensor Y-axis reading, in mm, versus sample number; readings span roughly -0.12 to 0.00 mm.)

Figure 22: Local Seeing in Detector Axis Perpendicular to Rotation Direction. (Plot of position sensor X-axis reading, in mm, versus sample number; readings span roughly 1.20 to 1.32 mm.)
The accelerometer was programmed to output data at a 1 kHz rate, and the filter wheel was set for 90-degree moves with a 200 ms move time. The sensitivity of the accelerometer is 1 mg (milli-g, or 0.0098 m/s²) and its frequency response is 1 kHz.
A data set was collected and analyzed. The low-frequency moves could be detected in the power spectrum, but were largely buried in the noise. Resonances were seen at around 1 kHz; these were traced to the servo system locking in after the move, and even these were slight. No vibrations were identified that would impact instrument performance.
4.2.3 Modeling and Design Analysis
4.2.3.1 Motor and Filter Wheel Analysis
The filter wheel and motor design is fundamentally unchanged since the preliminary design effort. See
SPEC-0089 VBI Preliminary Design Definition for detailed analyses, design choices and trade-offs that
led to the current filter wheel and motor capabilities.
Filter Wheel Opto-Mechanical Design Requirements Compliance
Req. Description Requirement Goal As Designed Value
Move time 0.54 sec. 0.34 sec. 0.115 sec.
Accuracy ±0.01 mm ±0.05 mm 0.020 mm (See Note 1)
X/Y Tilt 0.05º (See Note 2)
Repeatability ±0.05 mm 0.018 mm (See Note 1)
Cell diameter 70 mm min. 70 mm
Clear aperture 65 mm min. 68 mm
Note 1: This is maximum theoretical mechanical error. Actual testing shows accuracy and repeatability to be within
a single encoder count (5 arc seconds or 0.002mm).
Note 2: Wheel will be assembled and inspected to comply with required tilt angle.
Figure 24: Accelerometer mounted to the optical bench
4.3 CAMERA STAGE
The camera stage consists of two linear stages mounted in an x-y configuration to move the camera to
image any part of the 2 arc minute field of view. The camera stage is shown in Figure 28 (left).
To cover the full field of view using the baseline cameras currently under consideration, the camera will have to be scanned across the focal plane in both the x and y directions to acquire a mosaic of images. (The coordinate system used to describe the VBI image plane has y vertical, z along the beam axis, and x horizontal and perpendicular to the beam.) Since focus is accomplished with the collimator lens stage, the camera stage does not require a focus axis. With the current baseline camera choices, the stages need a travel of 7 cm in both x and y to cover the field of view. The stages must complete a 20 mm move within a required 0.54 seconds, with a goal of 0.34 seconds, and there is a further requirement that camera stage speed not be the limiting factor in instrument cadence. The camera stage is required to have an accuracy of 10 µm with a goal of 5 µm, and a repeatability of 5 µm.
Commercially available off-the-shelf stages meet these targets.
A design using Parker 404XR stages with 100 mm of travel is shown in Figure 28 (left). These stages are available with brushless, slotless servo motors and built-in rotary encoders; a power-off brake will ensure the camera is held in place during exposures. The encoder resolution is 5000 line counts, with built-in 4× interpolation for a final output of 20,000 counts per revolution. With a 5 mm screw pitch this gives a resolution of 0.25 µm, which is adequate to ensure the repeatability requirement of 5 µm.
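The stage resolution arithmetic above can be sketched as follows (values from the text):

```python
# Linear resolution of the 404XR stage from encoder and screw parameters.
lines = 5000        # encoder line count per revolution
interp = 4          # built-in 4x interpolation
pitch_mm = 5.0      # ball screw pitch

counts_per_rev = lines * interp                     # 20,000 counts/rev
resolution_um = pitch_mm * 1000.0 / counts_per_rev  # microns per count
print(counts_per_rev, resolution_um)  # 20000 0.25
```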
An off-the-shelf right angle adapter is also available for the vertical stage. The stages are available pre-
assembled in the X-Y configuration from the manufacturer with a 30 arc second orthogonality. This
would be a repeatable error which could be compensated for in motion control parameters.
If the camera format increases, the camera stage may be required to travel up to half the width of the image plane, about 36 mm, in the required time. The Parker stage with the baseline camera is capable of moving 65 mm in 200 ms, so although this eventuality is not in the requirements, the stage would still meet it.
The only piece of the camera stage assembly that must be manufactured is the adapter plate that attaches the camera to the stage platform. A light stop, CCD chip mask, and some form of light baffle tube attached to either the camera or its stage will also likely be needed to control scattered light, but these details cannot be finalized until the final camera selection is made.
The specifications of the Parker linear stage are shown in Figure 25 and the specifications for the standard
Parker SM232 motor are shown in Figure 26.
Figure 27 gives a summary of the move times.
Figure 25: 404XR Linear Stage Specifications.
Figure 26: SM232A Motor Specifications.
Figure 27: Summary of Camera Stage Move Times.
4.3.1 Modeling and Design Analysis
See SPEC-0089 VBI Preliminary Design Definition for initial design considerations.
Camera Stage Opto-Mechanical Design Requirements Compliance

Req. Description   Requirement       Goal              As Designed Value
Move range         7 cm              -                 10 cm
Velocity           20 mm/0.54 sec.   20 mm/0.34 sec.   20 mm/0.2 sec.
Accuracy           ±10 µm            ±5 µm             ±8 µm (See Note 1)
X Tilt             ±1.6º             -                 (See Note 2)
Y Tilt             ±1.3º             -                 0.35º max. (See Note 3)
Repeatability      ±5 µm             -                 ±1.3 µm
FOV                2’ square         -                 2’+
FOV                2’ round          -                 2’+

Note 1: Setting appropriate motion controller parameters should yield better accuracy, since each scan is a repetition of the same move positions and the encoder resolution is 0.25 µm.
Note 2: Dependent on actual placement of the mount at the focal plane.
Note 3: This is the maximum tolerance stack-up of manufacturing dimensions and the Parker 30 arc-second orthogonality specification.
4.4 FOCUS STAGE
Instrument focus is achieved by translating the position of the collimating lens along the optical axis. The
collimating lens is mounted on a linear stage to accomplish the focusing.
Figure 28: X-Y Camera Stage (left) and Focus Stage (right)
In order to focus the full wavelength range of the VBI blue channel, the collimating lens will have to be
translated a short distance (<10 mm) along the optical axis. The focus stage is required to have an
accuracy of 10µm with a goal of 5µm and a repeatability of 5µm. A commercially available off-the-shelf
stage meets these targets.
The design, using a Parker 404XR linear stage with 100 mm of travel, is shown in Figure 28 (right). The same stage, motor, and encoder as the camera stages are used for compatibility, ease of motion-control programming, spares, etc. As with the camera stages, the encoder resolution is 5000 line counts with built-in 4× interpolation, for a final output of 20,000 counts per revolution; with a 5 mm screw pitch this gives a resolution of 0.25 µm, which is adequate to ensure the repeatability requirement of 5 µm.
The focus stage specifications are given in Figure 29 and a comparison of move times is shown in Figure
30.
Figure 29: 404XR Linear Stage Specifications.
4.4.1 Modeling and Design Analysis
See SPEC-0089 VBI Preliminary Design Definition for initial design considerations.
Collimator Focus Stage Opto-Mechanical Design Requirements Compliance

Req. Description   Requirement       Goal              As Designed Value
Move range         20 mm             -                 100 mm
Velocity           20 mm/0.54 sec.   20 mm/0.34 sec.   20 mm/0.2 sec.
Accuracy           ±10 µm            ±5 µm             ±8 µm (See Note 1)
X Tilt             ±0.2º             -                 (See Note 2)
Y Tilt             ±0.2º             -                 (See Note 3)
X Decenter         ±2 mm             -                 (See Note 2)
Y Decenter         ±2 mm             -                 ±0.39 mm max. (See Note 4)
Repeatability      ±5 µm             -                 ±1.3 µm

Note 1: Setting appropriate motion controller parameters should yield better accuracy, since each wavelength focus position is a repetition of the same move positions and the encoder resolution is 0.25 µm.
Note 2: Dependent on actual placement of the mount within the optical beam.
Note 3: Mount will be assembled and inspected to comply with the required tilt angle.
Note 4: This is the maximum tolerance stack-up of manufacturing dimensions.

Figure 30: Comparison of Focus Stage Move Times.
4.5 OBJECTIVE LENS MOUNT
The objective lens mount will be a fixed position 'tombstone' type mount. No adjustments are needed for
this mount. The objective lens mount is shown in Figure 31 (top left).
Objective Lens Mount Opto-Mechanical Design Requirements Compliance
Req. Description Requirement Goal As Designed Value
X Tilt ±0.1º (See Note 1)
Y Tilt ±0.1º (See Note 2)
X Decenter ±2 mm (See Note 1)
Y Decenter ±2 mm ±0.28 mm max. (See
Note 3)
Note 1: Dependent on actual placement of mount within optical beam.
Note 2: Mount will be assembled and inspected to comply with required tilt angle.
Note 3: This is the maximum tolerance stack-up of manufacturing dimensions.
4.6 FOLD MIRROR #1 MOUNT
The first fold mirror needs to be ~350 mm in diameter in order to make the 90-degree fold after the objective lens. Known commercially available mirror mounts in this size range generally have a centerline height greater than the specified Coudé beam height of 250 mm above the optical benches, so a custom mount will be made. This custom fold mirror mount uses THK preloaded crossed-roller bearings for its ALT-AZ movements and Newport 100-thread-per-inch screws for fine adjustment. The mount design of the first fold mirror is shown in Figure 31 (top right).
Fold Mirror 1 Mount Opto-Mechanical Design Requirements Compliance
Req. Description Requirement Goal As Designed Value
X Tilt Adjustable (See Note 1) 0.01 mm (See Note 2)
Y Tilt Adjustable (See Note 1) 0.01 mm (See Note 2)
Note 1: Requirement is to steer beam to filter wheel within ±0.05 mm over a distance of >3 m.
Note 2: Pointing resolution per degree of adjusting screw over 3000 mm.
Figure 31: Objective Lens Mount (top left), Fold Mirror #1 Mount (top right), Fold Mirror #2 Mount (bottom left),
and Field and Imaging Lens Mount (bottom right).
4.7 FOLD MIRROR #2 MOUNT
The beam at the second fold mirror is small enough that a standard Newport model 605-4 gimbal optic mount will work. A riser plate will be made to position the mirror centerline at the 250 mm beam height. The mount and riser plate are shown in Figure 31 (bottom left).
Fold Mirror 2 Mount Opto-Mechanical Design Requirements Compliance
Req. Description Requirement Goal As Designed Value
X Tilt Adjustable (See Note 1) 0.02 mm (See Note 2)
Y Tilt Adjustable (See Note 1) 0.02 mm (See Note 2)
Note 1: Requirement is to steer beam to image plane within ±1.0 mm over a distance of >1.4 m.
Note 2: Pointing resolution per degree of adjusting screw over 1400 mm.
4.8 FIELD & IMAGING LENS MOUNTS
The optical tolerance analysis shows that if the objective is held to within 2.0 mm decenter and 0.1 degrees tilt, and the collimating lens to within 2.0 mm decenter and 0.2 degrees tilt, the optical errors can be compensated by decentering and tilting the field and imaging lenses by equal tolerance amounts. This is the logical approach, since these are the smallest and easiest lenses to manipulate. These lens mounts will therefore have x-y-z and tip/tilt adjustments, implemented with Newport stages and mounts as shown in Figure 31 (bottom right). The x-y-z stages are Newport model M-426 crossed-roller-bearing linear stages and the tip/tilt stages are Newport model U300 optic mounts.
Field Lens Mount Opto-Mechanical Design Requirements Compliance
Req. Description Requirement Goal As Designed Value
X Tilt ±0.2º 0.0005º (See Note 1)
Y Tilt ±0.2º 0.0005º (See Note 1)
X Decenter 2 mm 0.0007 mm (See Note 1)
Y Decenter 2 mm 0.0007 mm (See Note 1)
Note 1: Per degree of adjusting screw.
Imaging Lens Mount Opto-Mechanical Design Requirements Compliance
Req. Description Requirement Goal As Designed Value
X Tilt ±0.2º 0.0005º (See Note 1)
Y Tilt ±0.2º 0.0005º (See Note 1)
X Decenter 2 mm 0.0007 mm (See Note 1)
Y Decenter 2 mm 0.0007 mm (See Note 1)
Note 1: Per degree of adjusting screw.
5 CONTROL SYSTEM DESIGN
The VBI control system consists of a Linux computer that performs all communication with the ATST computer network through the ICS (see the software design, Section 7). The control computer interfaces to the motion control computer (the Delta Tau Power PMAC), which handles the details of the motion control routines and status reporting back to the control computer.
The Delta Tau Power PMAC card will be mounted in a UMAC chassis (an example can be seen in Figure
33 (top)) along with the encoder readers, I/O, and PWM drives. The UMAC chassis provides a flexible
and expandable system using plug-in I/O cards.
Figure 32: VBI Control System Hardware Layout. (Block diagram: the VBI instrument computer connects via Ethernet to the Instrument Control System network and to the UMAC chassis, which houses the power supply, the Power PMAC motion controller, Acc 24E3 axis expansion cards, an Acc 84E serial encoder interface, and a U36E analog input for temperature sensing. A separate motor drive chassis contains the 3U042 dual 4/8 A and 3U081 single 8/16 A PWM amplifiers, the regeneration resistor, and a Copley JSP-090-10 amplifier for the proportioning valve. Encoder, limit, and PWM signals connect to the camera Az and El stages, the focus stage, and the filter wheel.)
The motor drive amplifiers (Figure 33 (bottom)) will be mounted in a separate UMAC chassis to keep the
high powered electronics separate from the low level electronics.
5.1 MOTOR DRIVES
The motor drives receive low-level PWM signals from the axis expansion cards and provide the electrical energy to move the actuators. All of the VBI actuators will use UMAC drives, with the exception of the coolant flow drive, which will be an inexpensive Copley drive that is compatible with the PWM signals provided by the Delta Tau.
5.2 FILTER WHEEL DRIVE
5.2.1 Overview
The filter wheel drive will be a Delta Tau 3U081 PWM amplifier, mounted in a second UMAC chassis
that contains only motor drives.
5.2.2 Analysis
The motor drive for the VBI filter wheel requires a peak current of 2.8 to 5.3 Amps (depending on move profile and curve shaping) for a 200 ms move. The Delta Tau 3U081 is rated for a continuous current of 8 Amps and a peak current of 16 Amps. The de-rating factor for motor drives at the 11,000 ft elevation of Haleakala is 74.4%, so the 3U081 remains capable of about 6 Amps continuous and 12 Amps peak, making it a good match for the filter wheel motor.
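The de-rating arithmetic, for reference (values from the text):

```python
# Altitude de-rating of the Delta Tau 3U081 drive at 11,000 ft.
derate = 0.744                 # de-rating factor from the text
continuous_a = 8 * derate      # de-rated continuous current, ~6 A
peak_a = 16 * derate           # de-rated peak current, ~12 A
print(round(continuous_a, 1), round(peak_a, 1))  # 6.0 11.9
```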
Figure 33: Delta Tau UMAC Chassis (top) and Delta Tau UMAC Motor Drive Amplifiers (bottom)
5.3 CAMERA STAGE DRIVE
5.3.1 Overview
The camera stage and focus stage drives will be Delta Tau 3U042 dual-axis PWM amplifiers, mounted in the UMAC motor drive chassis.
5.3.2 Analysis
The motor drives for the camera stage actuators and the collimating lens actuator will require peak currents on the order of 1 Amp for a 200 ms move. The Delta Tau 3U042 is a two-axis 4 Amp (8 Amp peak) amplifier that suits this requirement well. With de-rating applied, the 3U042 is capable of providing 3 Amps continuous.
5.4 REGENERATION
5.4.1 Overview
Power regeneration is the ability of the motor drive to recover energy from decelerating motors for use in
the next acceleration. The motors in linear stages have very little stored mechanical energy and so
regeneration is of little concern, but the filter wheel has a move profile of a quick acceleration followed
by a quick deceleration of a high inertial load, making it advantageous to utilize regeneration.
5.4.2 Analysis
When directly rectified, 208V produces a DC motor bus voltage of 294VDC. The filter wheel requires
about 90 Watt-seconds of drive energy (or less, depending on the speed of the move) during each
acceleration, and then returns the energy back to the drive’s DC motor bus during deceleration. This will
increase the voltage of the DC motor bus by about 80VDC (depending on the size of the filter capacitor)
raising the voltage to 374VDC. Above 380V, the motor drive will begin dumping energy into a
regeneration resistor to prevent damage to the electronics. So the bus voltage provided by 208V three-
phase is ideal for the filter wheel in that it is high enough to provide full torque to the motor, yet low
enough to take full advantage of regeneration. Saving 90 Watt-seconds per move will decrease the power
consumption of the VBI by an average of 30W during normal operation.
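A quick check of the bus-voltage headroom during regeneration, using the figures quoted above:

```python
# Worst-case DC bus voltage after one filter-wheel deceleration.
bus_vdc = 294.0            # rectified 208 V three-phase
regen_rise_vdc = 80.0      # rise from returned deceleration energy
dump_threshold_vdc = 380.0 # drive starts dumping to the regen resistor here

peak_vdc = bus_vdc + regen_rise_vdc
headroom_vdc = dump_threshold_vdc - peak_vdc  # margin before energy is dumped
print(peak_vdc, headroom_vdc)  # 374.0 6.0
```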
5.5 POWER FEED
5.5.1 Overview
The VBI will be provided a 208V three-phase circuit and will have a dedicated distribution panel. This
panel will provide filtered 208V three-phase power along with filtered 120V single phase power. The
power distribution panel will include logic to provide push button on/off switches to enable/disable the
294V motor bus. For electrical safety reasons, the circuitry includes an interlock that will be used to
automatically remove motor bus power from the system in the event that a motor connector is unplugged.
Also included in the interlock system is an over-temperature switch for the filter wheel regeneration
resistor.
Due to the high frequency switching currents generated by the motor drive amplifiers, filtering has been
included in the power distribution unit. Details of the power distribution unit can be found in the
drawings appendix.
5.5.2 Analysis
The choice of power feed to motor drives is important. Motor bus voltage is obtained by direct
rectification of the AC power line voltage, therefore a 117V circuit will produce a motor bus voltage of
117Vrms * √2 = 165VDC and a 208V circuit will produce a bus voltage of 208Vrms * √2 = 294VDC.
The motor bus voltage must be high enough to produce the necessary torque, but cannot be higher than
the maximum rating of the motor.
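The rectification arithmetic can be sketched as follows (ideal full-wave rectification, ignoring ripple and voltage droop):

```python
import math

# DC motor-bus voltage from direct rectification of the AC line voltage.
for vrms in (117.0, 208.0):
    vdc = vrms * math.sqrt(2)
    print(f"{vrms:.0f} Vrms -> {vdc:.0f} VDC")  # 117 -> 165, 208 -> 294
```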
6 THERMAL SYSTEMS
The Coudé Lab of the ATST maintains a downward flow of air at ½ m/s that is controlled to a tolerance
of 0.5˚C. This cool air will aid with the cooling of the optics and motors.
The thermal requirements in the Coudé Lab prohibit unmitigated thermal loads of more than 20 W, or surface temperatures more than 1.5˚C above or 3˚C below ambient.
The linear stage motors will have a surface temperature rise of less than 1˚C which is not expected to be a
problem. The filter wheel surface temperature is expected to be about 7.5˚C above ambient without
active cooling; this exceeds the requirements for the Coudé Lab and may also pose a problem with the
beam path due to the proximity of the motor to the beam.
The thermal facility supplies chilled liquid coolant to the VBI which will be used to cool the filter wheel
motor frame which contains coolant passages for this purpose. Control of the coolant flow will be
accomplished by a proportioning valve and a temperature sensor on the motor coolant outlet port. The
Delta Tau controller will read the temperature sensor and control the proportioning valve to maintain an
outlet temperature of 20 ˚C.
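The coolant loop described above is a simple single-sensor feedback loop. A minimal proportional-control sketch is shown below; the function name, gain, and clamping are hypothetical illustrations, not the instrument's actual tuned values (the real loop runs on the Delta Tau controller):

```python
def valve_command(outlet_temp_c: float,
                  setpoint_c: float = 20.0,
                  gain: float = 0.25) -> float:
    """Map outlet-temperature error to a valve opening in [0, 1].

    A hotter-than-setpoint outlet opens the valve to increase coolant flow.
    The gain and clamp limits here are illustrative only.
    """
    error = outlet_temp_c - setpoint_c
    return max(0.0, min(1.0, gain * error))

print(valve_command(24.0))  # 1.0 (fully open, 4 degC over setpoint)
print(valve_command(21.0))  # 0.25
print(valve_command(19.0))  # 0.0 (closed, below setpoint)
```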
Camera thermal control will also be needed and is provided by the project. Since the final camera
selection will not be made until mid-2014, the final camera cooling system is unknown at this point.
Figure 34: Proportioning Valve
7 SOFTWARE DESIGN
7.1 INTRODUCTION
The purpose of the VBI is to achieve a subset of the top level science requirements, specifically to record
images at the highest possible spatial and temporal resolution of the ATST at a number of scientifically
important visible wavelengths. The VBI software system achieves this by coordinating and controlling
the activities of the instrument’s mechanical, detector, and image processing components under
instruction from the Instrument Control System (ICS).
The following sections describe the design for the Visible Broadband Imager software system. The
intention of these sections is to describe the structure of the software that constitutes the ATST Visible
Broadband Imager software system, how it interfaces to the remainder of the ATST software systems, and
how the functional and behavioral software requirements expressed in the VBI DRD are met.
The layout of the following sections is as follows:
Section 7.2 defines terminology used in describing the VBI software systems. Section 7.3 introduces the
VBI software system modules in the context of other ATST systems. Section 7.4 reviews the operational
use cases for the VBI and discusses the software modules that will support them. Section 7.5 presents the
design for the many graphical user interfaces of the VBI software system. Section 7.6 presents the critical
design of the VBI Instrument Controller. Section 7.7 reviews the design of the observing task scripts and
provides an example script. Section 7.8 presents the critical design for the VBI Data Processing Pipeline.
Section 7.9 reviews other non-source code related deliverables of the VBI. Finally, Section 7.10 presents
results of software analysis done during the time frame between the PDR and CDR.
A compliance matrix can be found in CMX-0001, cross-referencing how the design described in this
document meets the VBI design requirements definition.
7.2 TERMINOLOGY
7.2.1 General Terminology
The following set of terminology applies to the VBI instrument from the user perspective.
7.2.1.1 Observation
An observation with the VBI is defined as the process of placing mechanism and detector components in
a fixed configuration and acquiring one or more frames of data.
7.2.1.2 Cycle (of Observations)
A cycle is defined as the process of executing a predefined sequence of observations. A cycle may be
performed one or more times.
7.2.1.3 Field Sampling
Field sampling is defined as the process of repeating an observation at different camera mount positions,
thus allowing the entire FOV to be imaged. The user specifies the pattern of camera mount positions to
follow.
7.2.1.4 Observation Cadence
Observation cadence is defined by the time intervals between the start of observations in a cycle.
7.2.1.5 Cycle Cadence
Cycle cadence is defined by the constant time interval between the start of each cycle when executing a
cycle multiple times.
7.2.2 VBI Instrument Controller Terminology
The following set of terminology applies to the VBI Control System Software instrument controller.
7.2.2.1 Observing Task Configuration
An observing task configuration is defined as the input given by the ICS to the public interface of the VBI
control system. This input consists of an observing task and a set of parameters to be used in that task.
7.2.2.2 Observation Parameter Set
An observation parameter set is defined as the user-specified information needed to set up mechanism and
camera components such that the desired observation is performed.
7.2.2.3 Cycle (of Observation Parameter Sets)
A cycle is defined as a user-specified sequence of observation parameter sets to be executed one or more
times.
7.2.2.4 Fixed Observation Cadence
Fixed observation cadence is defined as the process of maintaining an equidistant time interval between
the start of data acquisition for each observation in a cycle.
7.2.2.5 Loose Observation Cadence
Loose observation cadence is defined as the process of maintaining an equidistant time interval between
the start of data acquisition for an observation from one cycle to the next cycle.
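The distinction between fixed and loose observation cadence can be illustrated with a small scheduling sketch; the intervals and offsets below are hypothetical values, and times are in seconds:

```python
def fixed_cadence_starts(n_obs, interval_s, t0=0.0):
    """Fixed observation cadence: equidistant start times between successive
    observations within a single cycle."""
    return [t0 + i * interval_s for i in range(n_obs)]

def loose_cadence_starts(n_cycles, cycle_interval_s, offset_s, t0=0.0):
    """Loose observation cadence: a given observation keeps an equidistant
    interval from one cycle to the next; its offset within each cycle is the
    same here, but only the cycle-to-cycle spacing is constrained."""
    return [t0 + c * cycle_interval_s + offset_s for c in range(n_cycles)]

print(fixed_cadence_starts(3, 10.0))           # [0.0, 10.0, 20.0]
print(loose_cadence_starts(3, 60.0, 12.0))     # [12.0, 72.0, 132.0]
```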
7.2.2.6 Mechanism Configuration
A mechanism configuration is defined as the input given by the VBI IC to the controller of a mechanism
(i.e. filter, focus, etc.) that represents the demand action to be performed. This input consists of a
controller mode and a set of parameters to be used in that mode.
7.2.2.7 Camera Configuration
A camera configuration is defined as the input given by the VBI IC to the Virtual Camera Controller
(VCC) that represents the demand settings. This input consists of the camera mode and a set of
parameters to be used in that mode.
7.3 DESIGN OVERVIEW
7.3.1 Context
The ATST control system consists of four principal systems: the telescope control system (TCS), the
observatory control system (OCS), the data handling system (DHS) and the instrument control system
(ICS). The OCS is responsible for high level observatory operations like scheduling, allocating resources
and running experiments. Experiments consist of a series of observations with a particular
instrumentation setup. The OCS uses the ICS for management of the instruments during an observation.
The data from these experiments is stored and displayed by the DHS.
The VBI software system consists of several modules, some of which are used on-mountain for operation
of the instrument, and others which are used off-mountain for analysis purposes. Figure 35 shows the
modules of the VBI software system as yellow boxes, and illustrates how they are distributed throughout
the ATST principal systems.
Figure 35: VBI Software Systems Context
The following sections will introduce each of the VBI software system modules in more detail.
7.3.2 System Modules
The VBI blue channel software system is composed of several modules that work together to provide a
solution that meets the design requirements definition. These modules can be grouped into three different
functional areas: Instrument Control, Data Processing and Display, and User Interface. The deployment
diagram for the VBI Blue Channel is shown in Figure 36 below. The <<device>> boxes that are colored
black represent the computer systems hardware that will be delivered by the VBI software team. The
<<artifact>> boxes represent the software deliverables and are color coded blue, green, and yellow to
represent the Instrument Control, Data Processing and Display, and User Interface functional areas
respectively.
The next few sections will provide an overview of these functional areas and the software modules that
will support each of them. The design of each deliverable module is covered in the detailed design
sections of this document along with specifications for the hardware devices.
7.3.3 Instrument Control
One of the primary functions of the VBI software system is to provide users the ability to configure,
control, and monitor the sub-systems of the instrument. The major deliverables in this area are as
follows:
7.3.3.1 Instrument Controller
The Instrument Controller (IC) is a hierarchical set of software components that provide command and
control of the mechanism and detector elements of the VBI. The VBI application specification defines
the name and type of each component in the IC and also establishes the hierarchical relationship between
the components. The VBI properties definition provides the information (name, type, range, etc.)
necessary to register a property with the CSF component it belongs to. The VBI software team will
deliver the IC software components, application specification, and properties definition to provide an
instrument control solution. The detailed design for the Instrument Controller can be found in section 7.6
of this document.

Figure 36: VBI Blue Channel Deployment Diagram
7.3.3.1.1.1 Observing Task Scripts
It is often desirable to perform a sequence of coordinated actions with the VBI sub-systems in order to
produce a desired result. Scripting provides the capabilities necessary to achieve this coordination, and
helps maintain flexibility in how the instrument can be used. The VBI software team will deliver a set of
observing task scripts that produce the required system behavior for each observing task. The Instrument
Controller will provide support for execution of observing task scripts. For design information about the
observing task scripts please refer to section 7.7 of this document.
7.3.3.1.1.2 Motion Programs
The VBI contains several motion stages that move optical and imaging components of the instrument into
different configurations. These motion stages will be run under closed loop control using the Delta Tau
motion control system described in section 5 of this document. In addition, custom motion programs will
be executed on the Delta Tau to ensure accurate control of the VBI motion stages to the specifications for
the VBI. The VBI software team will deliver these custom motion programs and provide the ability for
users to specify their input parameters through the Instrument Controller.
7.3.3.1.1.3 Motor Configurations
The motion stages of the VBI (filter, focus, camera x and y) must be set up in the Delta Tau system as
motor configurations. The VBI software team will perform the setup and tuning necessary to operate the
motion stages to the required precision and performance.
7.3.3.1.1.4 General Functionality
The VBI IC will be a sub-system of the ICS, and therefore must follow all ATST software standards as
defined in the ATST Software Design Requirements. The VBI IC is built using the ICS SIF, and
therefore inherits the general functionality (logging, health, default state, etc.) supported by its
components and controllers.
7.3.4 Data Processing and Display
Processing of image data collected from the instrument cameras is one of the key elements of the VBI
software system. The major deliverables in this area are as follows:
7.3.4.1 Quick Look Display
The Quick Look Display allows the user to view data being produced from any node of the VBI camera
line. It is provided by the ATST software group as part of the DHS.
7.3.4.2 Detailed Processing Plug-In and Display
Users often wish to view image data after it has been calibrated using the current mask, gain, and dark
calibration files. In addition, several sub-images taken to capture a field larger than the camera sensor
size should be stitched together to provide a view of the whole field. These features will be provided via
the VBI Detailed Display. The DHS supports any custom instrument data processing by providing a
framework for a data processing pipeline (DPP). The Detailed Processing plug-in is part of this pipeline
and when used in conjunction with a Quick Look Display provides the required detailed display
capabilities. The VBI software team will deliver the detailed display plug-in as part of the VBI camera
line DPP. For detailed design information on this plug-in please refer to section 7.8 of this document.
7.3.4.3 Frame Selection Plug-In
Data reduction is an important part of the VBI camera line. The DHS supports any custom instrument
data reduction processing by providing a framework for a data processing pipeline (DPP). The Frame
Selection plug-in provides the ability to select a subset of frames in a frameset based on input parameters
to a selection algorithm. The VBI software team will deliver the Frame Selection Plug-In as part of the
VBI camera line DPP. For detailed design information about this plug-in please refer to section 7.8 of
this document.
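The selection algorithm and its parameters are specified in the detailed design; purely as an illustration, a common approach for solar imaging data is to rank frames by RMS contrast (a proxy for sharpness) and keep the best N. The sketch below uses flat pixel lists and hypothetical names:

```python
def rms_contrast(frame):
    """RMS intensity contrast, a common sharpness metric for solar frames."""
    n = len(frame)
    mean = sum(frame) / n
    var = sum((p - mean) ** 2 for p in frame) / n
    return (var ** 0.5) / mean

def select_frames(frameset, n_keep):
    """Keep the n_keep frames with the highest contrast (illustrative criterion
    only; the actual VBI selection algorithm is defined in Section 7.8)."""
    ranked = sorted(frameset, key=rms_contrast, reverse=True)
    return ranked[:n_keep]

frames = [[10, 10, 10, 10], [5, 15, 5, 15], [8, 12, 8, 12]]
print(select_frames(frames, 1))  # [[5, 15, 5, 15]]
```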
7.3.4.4 Speckle Reconstruction Plug-In
The VBI must be capable of producing images at the highest possible spatial resolution, preserving the
Strehl ratio of the telescope over the FOV of the instrument. In order to achieve this goal the VBI must
provide facilities for image reconstruction to complement the telescope’s AO system. The DHS supports
any custom instrument data processing by providing a framework for building a data processing pipeline
(DPP). The Speckle Reconstruction plug-in provides near real-time reconstruction of images produced
by the VBI. The VBI software team will deliver the Speckle Plug-In as part of the VBI camera line DPP.
For detailed design information about this plug-in please refer to sections 7.8 and 7.10.3.
7.3.4.5 Gain Calibration Plug-In
The VBI will use gain calibration data for flat fielding the camera and post facto processing of raw
frames. The Gain Calibration plug-in will use input frames to produce gain calibration files and save
them to the calibration store. The VBI software team will deliver the Gain Calibration plug-in as part of
the VBI camera line DPP. For detailed design information about this plug-in please refer to section 7.8 of
this document.
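As an illustration of what such a plug-in computes, a standard flat-field construction averages the dark-subtracted input frames and normalizes to unit mean, yielding a per-pixel gain table. The sketch below assumes 1-D pixel lists and is not the actual plug-in code:

```python
def make_gain_table(flat_frames, dark_frame):
    """Average the dark-subtracted flat frames and normalize to unit mean
    (a standard flat-field construction; the plug-in's exact algorithm is
    defined in the detailed design)."""
    n = len(flat_frames)
    npix = len(dark_frame)
    avg = [sum(f[i] for f in flat_frames) / n - dark_frame[i] for i in range(npix)]
    mean = sum(avg) / npix
    return [v / mean for v in avg]

flats = [[111.0, 91.0], [111.0, 91.0]]   # two flat frames, two pixels each
dark = [1.0, 1.0]                        # dark level per pixel
print(make_gain_table(flats, dark))      # [1.1, 0.9] -> relative pixel sensitivities
```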
7.3.4.6 Dark Calibration Plug-In
The VBI will use dark calibration data for post facto processing of raw frames. The Dark Calibration
plug-in will use input frames to produce dark calibration files and save them to the calibration store. The
VBI software team will deliver the Dark Calibration plug-in as part of the VBI camera line DPP. For
detailed design information about this plug-in please refer to section 7.8 of this document.
7.3.4.7 Data Distribution
Data produced by the VBI camera lines must be put into FITS format and transferred off the mountain to
the NSO digital library (NDL). The DHS provides a customizable Data Distribution Node (DDN)
component that supports FITS file creation and transfer to external locations such as NDL. The VBI
software team will deliver a Data Distribution Node to meet these requirements.
7.3.5 User Interfaces
User interfaces provide a means for gathering inputs needed for instrument control as well as displaying
output from data processing components. The major deliverables in this area are as follows:
7.3.5.1 VBI Instrument Tabs
Experiments in the OCS may be composed of one or more observations. Each observation will specify an
observing task, the instruments to be used, and the input parameters for each instrument. The VBI must
therefore provide screens that allow the user to specify the input parameter values for each observing task.
The VBI software team will deliver the VBI Instrument Tabs as integrated UI screens in the OCS for this
purpose.
7.3.5.2 VBI Engineering User Interface
Most users will operate the VBI as part of OCS experiment observations. However, the interface provided
by the VBI Instrument Tabs in the OCS is limited to high level settings and a more granular interface is
still needed for engineering purposes. The VBI software team will deliver the VBI Engineering GUI to
support setup, troubleshooting, direct control, and detailed monitoring of the instrument.
7.4 SOFTWARE SYSTEM USE CASES
The VBI Operation Concepts Definition (OCD) document provides a high level view of the users, their
roles, how they will use the VBI software systems, and when the use will occur. This information helps
us understand the main use cases of the VBI software system, and in turn visualize which software
modules will support them. Figure 37 below shows the use cases derived from the OCD and the
supporting software modules. Those use cases shown in grey are provided by other systems (OCS, ICS,
etc.) while those shown in white are specific to the VBI software systems.
This high level view of the software design will help the reader more easily see which VBI software system
modules provide the functionality needed to support each use case. The following sections will discuss
each of the VBI specific use cases briefly and explain which software modules support them. References
to the appropriate detailed design sections are also provided.
7.4.1 Check Observing Task Configuration Feasibility
Access to the VBI control system will initially be limited to those at the ATST summit and those located
at the ATST base facility. As a result potential users of the VBI will not be able to test configurations
without the aid of an instrument scientist who has access to a simulated VBI system (see section 7.4.2).
Thus it is required that an off-line tool be provided that can aid potential users in understanding the
available observing tasks, configurations, and attributes of the VBI, as well as performing limited
validation of user inputs. To meet this requirement the VBI software team will provide the VBI Explorer
tool.
Figure 37: VBI Software System Use Cases
The VBI Explorer tool will be a packaged executable Java application that can easily be run on any modern
operating system with a Java Runtime Environment (JRE) installed. This tool will not require other
ATST software such as CSF and therefore can easily be installed via web download. Where possible, the
tool will utilize configuration files and validation code libraries from the actual VBI control system to
ensure consistency and ease of maintenance.
For more details on the design of the VBI Explorer Tool please refer to Section 7.5.2.
7.4.2 Simulate Observing Task Configuration
The VBI control system will provide the ability to simulate the execution of an observation task
configuration. This will allow scientists and engineers to test configurations on the VBI without actually
operating its mechanical and detector hardware elements. To facilitate this functionality the VBI control
system will utilize the simulated component feature provided by CSF.
The CSF provides the ability to deploy a component as a simulated component. When this feature is used
CSF will automatically flag the component as simulated (.isSimulated=true), tag all events as originating
from a simulated component (<event>.simulated=true), and automatically prevent the component from
submitting configurations to non-simulated controllers. These features are built into CSF and help to
prevent undesired system behavior resulting from interactions between simulated and non-simulated
components.
When deployed as simulated, the components that comprise the VBI control system will fully validate
configurations. However, please note that validation of camera configurations will require the virtual
camera components to also be deployed as simulated. If the configuration is valid, the VBI control
system will immediately return successfully to the caller. Therefore execution of the configuration is
limited only to validation. A simulated VBI control system will initially only be available at the ATST
summit and base facilities. For more information on simulated components please refer to the CSF user’s
manual.
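The simulated-component guard behavior described above can be mimicked with a toy model. This mirrors only the documented semantics (the simulated flag, event tagging, and the submission block); it is not the CSF API:

```python
class Component:
    """Toy stand-in for a CSF component, mirroring the simulated-component
    behavior described in the text (not the real CSF classes)."""
    def __init__(self, name, simulated=False):
        self.name = name
        self.is_simulated = simulated

    def make_event(self, payload):
        # CSF tags every event from a simulated component as simulated
        return {"source": self.name, "simulated": self.is_simulated, **payload}

    def submit(self, target, config):
        # A simulated component must not drive a non-simulated controller
        if self.is_simulated and not target.is_simulated:
            raise RuntimeError("simulated component cannot submit to a real controller")
        return "accepted"

sim_ic = Component("vbi.ic", simulated=True)
sim_cam = Component("vbi.vcc", simulated=True)
print(sim_ic.submit(sim_cam, {"mode": "setup"}))           # accepted
print(sim_ic.make_event({"state": "ready"})["simulated"])  # True
```

Submitting from `sim_ic` to a component constructed without `simulated=True` would raise, which is the interaction CSF is described as preventing.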
7.4.3 Input VBI Observing Task Parameters
The VBI control software can be operated in any one of a set of observing tasks. Each observing task
requires user input parameters specifying how the system should be configured to produce the desired
results. The observing task and related input parameters are referred to as attributes of a VBI observing
task configuration.
During creation of an experiment in the OCS, the instrument scientist may need to define an observing
task configuration for the VBI. When defining the observing task configuration, the instrument scientist
must have the ability to define a sequence of one or more observations for the VBI to execute. Each of
the observations in the sequence is defined as a parameter set and must be specified by the user,
validated, and saved as part of the observing task configuration in the experiment before execution can
occur. In addition, users must be able to load observing task configurations and parameter sets for re-use
or editing at a later time. To support this functionality the VBI software team will provide a set of VBI
instrument tabs, one for each observing task, that are integrated into the OCS.
The VBI instrument tabs will be built using the Java Engineering Screens (JES) framework provided by
the ATST software group. The OCS is also built with JES and therefore integration of the VBI
instrument tabs into the OCS will be seamless. Figure 38 below shows a prototype VBI instrument tab
for the Setup observing task built using the JES framework.
Although a majority of the VBI instrument tabs will be built using the standard widgets provided by JES,
customized JES widgets may be developed to support unique needs of the VBI. The VBI software team
will work with the ATST software team to publish standards and guidelines for the look and feel of
instrument tabs. This effort will help increase usability of the OCS by ensuring common functionality of
different instrument tabs is presented to the user in the same manner. For more details on the design of
the VBI instrument tabs please refer to Section 7.5.3.
Figure 38: VBI Instrument Tab (Setup)
7.4.4 Execute VBI Observing Task Configuration
During creation of an experiment in the OCS, the instrument scientist will use the VBI instrument tabs
(see 7.5.3) to specify an observing task configuration, which may contain a sequence of VBI instrument
configurations to be executed. Execution of these configurations will occur as planned during the overall
experiment execution. When an observing task configuration is submitted to the VBI control system for
execution, it will first be validated to ensure the required attributes have been specified and that their
values are valid. The behavior of the system will depend on the given observing task and attribute values
of the configuration. In the next few sections we will discuss the observation related use cases of the VBI
control system and the observing tasks and inputs used to support them.
7.4.4.1 Setup Instrument
Setup of the VBI instrument can occur during different phases of an experiment. The most common uses
include testing instrument configurations on the VBI to determine feasibility, testing instrument
configurations just prior to use in a scheduled observation so that adjustments can be made for current
conditions, and finally to prepare the VBI control system for observations after a re-boot or critical
system error. The Setup observing task is provided by the VBI control system to support this
functionality.
In the Setup observing task the VBI control system will accept the following input as part of the
configuration:
Sequence of parameter sets:
This input is an ordered list of parameter sets that will be executed sequentially. Each parameter set
consists of attributes that will be used to configure the filter wheel, focus, camera mount stages,
camera, and data processing plug-ins. For a list of available attributes please refer to the ICS to VBI
ICD.
Sequence execution rate:
This is the rate in Hz at which the sequence of parameter sets will be executed. The default will be to
execute the sequence as fast as possible based on the longest exposure time of any parameter set in the
sequence. However if the user wishes to run the sequence at a slower rate, this input parameter may
be used.
Number of sequence cycles:
This input specifies the number of times the VBI control system should cycle through the given
sequence of parameter sets before exiting.
Data collection indicator:
This input specifies whether or not data should be sent to the DHS camera line where it can be viewed
via the Quick Look display.
If the given Setup observing task configuration is valid, the VBI control system will cycle the mechanism
and detector sub-systems through the sequence of parameter sets at the specified sequence execution
rate. If the data collection indicator is set to true it will instruct the camera to send data to the DHS
camera line where it can be inspected via Quick Look Display. When the data collection option is
invoked, execution of the sequence will be repeated indefinitely until the user cancels the operation.
Otherwise execution will continue for the specified number of sequence cycles.
The functionality of the Setup observing task will be implemented using a Jython script. For more
information on this script please refer to Section 7.7. For more information on Quick Look Display
please refer to Section 7.5.5.2
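The Setup task flow described above can be sketched in plain Python. Every name below is illustrative, not the actual Jython script API; `apply_config` stands in for the real IC submission call, and data-collection cycling is bounded so the sketch terminates:

```python
def run_setup_task(param_sets, rate_hz=None, n_cycles=1, collect_data=False,
                   apply_config=print):
    """Sketch of the Setup observing task: cycle the mechanism and detector
    sub-systems through the parameter sets at the demand rate."""
    # Default rate: as fast as the longest exposure in the sequence allows
    if rate_hz is None:
        rate_hz = 1.0 / max(p["exposure_s"] for p in param_sets)
    interval_s = 1.0 / rate_hz
    schedule = []
    # With data collection the real task repeats until cancelled; bound it here
    limit = n_cycles if not collect_data else 3
    for cycle in range(limit):
        for i, params in enumerate(param_sets):
            t = (cycle * len(param_sets) + i) * interval_s
            schedule.append((t, params["filter"]))
            apply_config(params)   # submit mechanism/camera configuration
    return schedule

sets = [{"filter": "G-band", "exposure_s": 0.02},
        {"filter": "Ca II K", "exposure_s": 0.05}]
sched = run_setup_task(sets, n_cycles=2, apply_config=lambda p: None)
print(len(sched))  # 4 observations over 2 cycles
```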
7.4.4.2 Perform Dark Calibration
Darks can be generated in the telescope by shutting off the light path from the Gregorian down to the
instrument. This is done as part of the TCS to PAC interface during the OCS Dark operational mode.
Data taken by the VBI detector during this task will yield information about false light contributions and
must be saved to the calibration store for later reference. The Dark observation task is provided by the
VBI control system to support this functionality.
In the Dark observing task the VBI control system will accept the following input as part of the
configuration:
Frame Exposure Time:
This is the exposure time that will be used for each frame in the burst.
Number of Frames:
This is the number of frames to take in the burst.
If the given Dark observing task configuration is valid, the VBI control system will automatically place
the mechanism sub-systems (filter wheel, focus, and camera mount x-y stages) into pre-determined
positions for collecting darks. It will then configure the camera based on the given frame exposure time
and number of frames for the burst. Once a signal from the ICS is received that the TCS is in position
(i.e. dark shutter is deployed in PA&C), the burst will be executed. Data collected by the camera will be
tagged as darks and sent to the DHS camera pipeline where the VBI dark calibration plug-in will process
it and save it to the calibration data store.
The functionality of the Dark observing task will be implemented using a Jython script. For more
information on this script please refer to Section 7.7.1.4. The VBI dark calibration plug-in will be
implemented as a DHS processing pipeline plug-in. For more information on this plug-in please refer to
Section 7.8.
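The Dark task sequence above can be sketched as follows. The three callables stand in for the real mechanism, ICS, and camera interfaces; all names, signatures, and the "pre-determined" positions are illustrative only:

```python
def run_dark_task(exposure_s, n_frames, move_stage, wait_for_ics, camera_burst):
    """Sketch of the Dark observing task sequence described above."""
    # 1. Pre-determined mechanism positions for collecting darks (hypothetical)
    for stage, pos in (("filterWheel", "dark"), ("focus", "nominal"),
                       ("cameraX", 0.0), ("cameraY", 0.0)):
        move_stage(stage, pos)
    # 2. Block until the ICS reports the dark shutter deployed in PA&C
    wait_for_ics("tcsInPosition")
    # 3. Execute the burst; frames are tagged as darks for the DHS plug-in
    return camera_burst(exposure_s, n_frames, tag="dark")

log = []
frames = run_dark_task(
    0.02, 10,
    move_stage=lambda s, p: log.append((s, p)),
    wait_for_ics=lambda sig: log.append(sig),
    camera_burst=lambda exp, n, tag: [{"tag": tag, "exp": exp}] * n)
print(len(frames), frames[0]["tag"])  # 10 dark
```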
7.4.4.3 Perform Alignment
To properly align the VBI camera mount, pinhole target images should be taken. The user must first
request deployment of the lower GOS pinhole target as part of an OCS Align operational mode. During
this operational mode the VBI may be aligned via a manual or automated routine. The Align observing
task is provided by the VBI control system to support this functionality.
In the Align observing task the VBI control system will accept the following input as part of the
configuration:
Frame Exposure Time:
This input allows the user to specify a demand camera frame exposure time to use when collecting
frames during alignment. This will allow the signal to noise ratio to be adjusted to ensure optimal
input to the automated alignment algorithm.
Alignment type:
This input is used to specify the type of alignment routine to execute. The user may choose between
manual and automatic alignment types.
If the given Align observing task configuration is valid, the VBI control system will start by automatically
configuring the filter wheel and focus stages to pre-determined positions, and place the camera x-y stages
in the last known FOV center position. The camera will also be configured to continuously take frames
of the specified frame exposure time. Once the signal is received from the ICS that the TCS is in
position (i.e. pinhole target is deployed at PA&C) it will instruct the camera to send frames to the DHS
camera pipeline where they can be viewed via Quick Look display.
If the user has chosen manual alignment type, he/she must use the VBI engineering GUI to make
adjustments to the camera mount x and y stage positions until the pinhole is centered.
If the automatic alignment type was selected, the VBI control system will use camera engineering images
as input to an alignment algorithm, and use the algorithm results to make adjustments to the camera
mount x and y stage positions. This process is continued until the algorithm determines the pinhole has
been centered successfully. Total execution time for this automated alignment routine will be less than
one minute.
The functionality of the Align observing task will be implemented using a Jython script. For more
information on this script please refer to Section 7.7.1.9. The process for having camera engineering
images returned to the VBI control system is discussed in Section 7.6.3.1.3. For more information on the
Quick Look Display please refer to Section 7.5.5.2.
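The automatic alignment routine described above amounts to a measure-and-correct loop; a minimal sketch, with hypothetical measurement and motion hooks in place of the engineering-image algorithm and the x-y stage controllers:

```python
def auto_align(measure_offset, move_xy, tol=0.5, max_iter=20):
    """Sketch of the automatic alignment loop: measure the pinhole offset
    from an engineering image, command a corrective x-y move, and repeat
    until the pinhole is centered within tolerance (units arbitrary)."""
    for _ in range(max_iter):
        dx, dy = measure_offset()
        if abs(dx) <= tol and abs(dy) <= tol:
            return True       # pinhole centered within tolerance
        move_xy(-dx, -dy)     # drive the camera mount to cancel the offset
    return False              # did not converge within max_iter

# Toy model: the "pinhole" starts off-center; each move cancels the offset
pos = [10.0, -4.0]
def measure_offset():
    return tuple(pos)
def move_xy(dx, dy):
    pos[0] += dx; pos[1] += dy

print(auto_align(measure_offset, move_xy))  # True
```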
7.4.4.4 Perform Focusing
Focusing of the VBI will occasionally need to be performed by the user. The WCCS will be used as the
frame of reference for focusing. The user must first determine if the light beam from the Sun or an
artificial light source should be used and configure the lower GOS accordingly as part of an OCS Focus
operational mode. During this operational mode focusing for each of the VBI wavelength bands may be
performed via a manual or automated routine. The Focus observing task is provided by the VBI control
system to support this functionality.
In the Focus observing task the VBI control system will accept the following input as part of the
configuration:
Frame Exposure Time for each Wavelength:
This input allows the user to specify for each wavelength the demand camera frame exposure time to
use during focus. This will allow the signal to noise ratio to be adjusted to ensure optimal input to the
automated focus algorithm.
Focus Type:
This input is used to specify the type of focus routine to execute. The user may choose between
manual and automatic focus types. In the case of automatic focusing, the user may also specify the
type of focusing algorithm to be used.
If the given Focus observing task configuration is valid, the VBI control system will start by
automatically configuring the filter wheel and focus stages for the first filter wheel position, and place the
camera x-y stages in the last known FOV center position. The camera will also be configured to
continuously take frames of the specified frame exposure time for that filter and send them to the DHS
camera pipeline where they can be viewed via Quick Look display.
If the user has chosen manual focus type, the VBI engineering GUI must be used to make adjustments to
the focus lens stage position until the desired focus is achieved. At that time the engineering GUI is used
to signal the VBI software to save the focus position and move to the next filter position.
If the automatic focus type was selected, the VBI control system will use camera images as input to a
focus algorithm, and use the algorithm results to make adjustments to the focus lens stage position. This
continues until the algorithm determines the optimal focus has been obtained, at which point the focus
position will be saved. This process is repeated for each filter position. Total execution time for
automated focus of all wavelengths will be less than one minute.
The functionality of the Focus observing task will be implemented using a Jython script. For more
information on this script please refer to Section 7.7.1.8. The process for having camera engineering
images returned to the VBI control system is discussed in Section 7.6.3.1.3.
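The automatic focus loop described above can be sketched as a simple hill climb on a sharpness metric. This is an illustrative sketch only: the function names, step size, and metric are assumptions, not the actual VBI Jython API.

```python
def auto_focus(measure, start, step, max_iter=50):
    """Hill-climb focus sketch: keep stepping the focus position while the
    sharpness metric improves, reverse direction once, and stop at the
    local maximum.  `measure(pos)` is assumed to move the stage, grab a
    camera image, and return a scalar sharpness value."""
    pos = start
    best = measure(pos)
    direction = 1
    reversed_once = False
    for _ in range(max_iter):
        candidate = pos + direction * step
        value = measure(candidate)
        if value > best:
            pos, best = candidate, value
        elif not reversed_once:
            direction, reversed_once = -direction, True
        else:
            break  # worse in both directions: optimal focus reached
    return pos, best
```

In practice the real algorithm would also bound the stage travel and average several frames per position to suppress seeing noise.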
7.4.4.5 Perform Gain Calibration
VBI filtergrams will need to be corrected for transmission irregularities caused by dust particles close to
focal planes, by inhomogeneities of optical elements, and differences of the properties of individual
pixels. Transmission irregularities are typically determined by flat field data, which can be generated by
using lamp flats, defocusing of the telescope, moving the solar image, or un-flattening the deformable
mirror. The choice of a source for flat field data must be specified by the user in the OCS Gain
operational mode. In this operational mode, the VBI control system may be instructed to collect gain
frames, process them, and save them in the calibration store. The Gain observing task is provided by the
VBI control system to support this functionality.
In the Gain observing task the VBI control system will accept the following input as part of the
configuration:
Frame Exposure Time for each Wavelength
This input allows the user to specify for each wavelength the demand camera frame exposure time to
use during gain calibration. This will allow the signal to noise ratio to be adjusted to ensure optimal
input to the gain calibration plug-in.
Number of Frames
This is the number of frames to take in the burst.
Number of sequence cycles:
This input specifies the number of times the VBI control system should cycle through the
wavelengths and produce gain calibrations.
Continue flag:
This input indicates whether the VBI control system should repeat execution of the gain calibration
cycles or stop and wait for the next observing task configuration.
If the given Gain observing task configuration is valid, the VBI control system will start by automatically
configuring the camera mount x-y stages to the FOV center position and placing the filter wheel and
focus in position for the first filter. The camera will be configured to continually take bursts of the
specified number of frames with each frame having the given frame exposure time. Once the signal
from the ICS is received indicating the TCS is in position (i.e. gain light source deployed) the VBI will
signal the camera to send the next burst frame set to the DHS. Data sent to the DHS camera pipeline will
be tagged as gain data and processed by the VBI gain calibration plug-in and saved to the calibration data
store. This process will be repeated for each wavelength. Execution of the gain observing task will
continue for the specified number of sequence cycles after which the continue flag will determine if the
cycles should be repeated or not.
The functionality of the Gain observing task will be implemented using a Jython script. For more
information on this script please refer to Section 7.7.1.3. The VBI gain calibration plug-in will be
implemented as a DHS processing pipeline plug-in. For more information on this plug-in please refer to
Section 7.8.
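As a sketch of the arithmetic a gain calibration plug-in performs (not the actual DHS plug-in interface), the burst of flat frames can be averaged pixel-by-pixel and normalized by the frame mean:

```python
def gain_table(burst):
    """Compute a normalized gain table from a burst of flat frames.
    `burst` is a list of equal-length pixel lists; the result is the
    per-pixel average divided by the frame mean, so a perfectly uniform
    detector yields 1.0 everywhere."""
    n = len(burst)
    avg = [sum(px) / n for px in zip(*burst)]
    mean = sum(avg) / len(avg)
    return [v / mean for v in avg]
```

The saved table would then divide out transmission irregularities in science frames taken at the same wavelength.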
7.4.4.6 Calculate Pixel Scale
Target data are used to calculate and verify the plate (pixel) scale in the detector. This is commonly done
by using the knowledge of the image scale in the prime focus. The user must first request the desired
lower GOS target during the OCS Target operational mode. During this operational mode the VBI may
be instructed to collect data under different configurations, tag it with the selected target, and save it to the
DHS data store. Additionally the VBI may perform an automated plate scale calculation for each
configuration based on the selected lower GOS target. The Target observing task is provided by the VBI
control system to support this functionality.
In the Target observing task the VBI control system will accept the following input as part of the
configuration:
Sequence of Parameter Sets:
This input is an ordered list of parameter sets to be executed sequentially. It allows the user to create
a custom sequence of parameter sets with which to collect target data. Each parameter set consists of
attributes that will be used to configure the mechanism and detector components of the system.
Sequence execution rate:
This is the rate in Hz at which the sequence of parameter sets will be executed. The default will be to
execute the sequence as fast as possible based on the longest exposure time of any instrument
configuration in the sequence. However if the user wishes to run the sequence at a slower rate, this
input parameter may be used.
Number of sequence cycles:
This input specifies the number of times the VBI control system should cycle through the given
sequence of instrument configurations before exiting.
Continue flag:
This input indicates whether the VBI control system should repeat execution of the sequence of
instrument configurations or stop and wait for the next observing task configuration.
If the given Target observing task configuration is valid, the VBI control system will cycle the
mechanism and detector sub-systems through the given sequence of parameter sets at the specified
sequence execution rate. Data collected by the camera will be tagged as target data and sent to the DHS
data store. Execution will continue for the specified number of sequence cycles after which the continue
flag will determine if the cycles should be repeated or not.
Automated plate scale calculation routines may be established in the system if desired. The system will
support registering one automated routine per lower GOS target. If this option is invoked, the VBI
control system will additionally request an engineering image from the camera and use it as input to the
plate scale algorithm associated with the selected target. The resulting plate scale will be saved in a
persistent store as a property of the filter wavelength used to collect the data. This pixel scale calculation
routine will be completed in less than one minute.
The functionality of the Target observing task will be implemented using a Jython script. For more
information on this script please refer to Section 7.7.1.10.
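The plate scale calculation can be illustrated with a minimal sketch, assuming the selected GOS target provides two features of known angular separation; the function name and inputs are hypothetical:

```python
import math

def plate_scale(angular_sep_arcsec, x1, y1, x2, y2):
    """Arcseconds per pixel, derived from two target features of known
    angular separation measured at detector pixel coordinates
    (x1, y1) and (x2, y2)."""
    pixel_sep = math.hypot(x2 - x1, y2 - y1)
    return angular_sep_arcsec / pixel_sep
```

The result would be stored as a property of the filter wavelength used to collect the data, as described above.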
7.4.4.7 Perform Observe
Once all preparations have been completed the user will commence collection of scientific data with the
VBI. The VBI may be instructed to collect scientific data over a sequence of different parameter sets.
Each parameter set provides the flexibility to specify how the mechanism, detector, and data processing
elements of the VBI system shall be used to capture the desired scientific data. The Observe observing
task is provided by the VBI control system to support this functionality.
In the Observe observing task the VBI control system will accept the following input as part of the
configuration:
Sequence of parameter sets:
This input is an ordered list of parameter sets that will be executed sequentially. Each parameter set
consists of attributes that will be used to configure the mechanism, detector, and data processing
elements of the VBI as follows:
Bandpass filter name
User will select the desired bandpass filter by name (e.g. 393.4, 430.5, 450.4, 486.1). The VBI
control system will use this information to deploy the corresponding filter and set focus
accordingly.
Field sampling mode
User may specify a desired field sampling pattern by name (e.g. Center, LeftRight, Star, Spiral).
Camera settings
User will specify a limited set of camera settings such as exposure time, number of frames,
binning, and region of interest. Please refer to the ICS to Camera Systems ICD for a list of all
available camera settings.
Exposure Mode
The user will be able to select whether the exposure time should be auto-calculated by the VBI
control system based on test images, or follow the specified fixed exposure times. An option to
auto-update the exposure time throughout the day based on zenith angle will also be supported.
Active plug-ins and their parameters
User will indicate which data processing plug-ins are active (e.g. Speckle, Frame Selection) and
provide any required input parameters.
Sequence execution rate:
This is the rate in Hz at which the sequence of parameter sets will be executed. The default will be to
execute the sequence as fast as possible based on the longest exposure time of any parameter set in the
sequence. However if the user wishes to run the sequence at a slower rate, this input parameter may
be used.
Number of sequence cycles:
This input specifies the number of times the VBI control system should cycle through the given
sequence of parameter sets before exiting.
Continue flag:
This input indicates whether the VBI control system should repeat execution of the sequence of
parameter sets or stop and wait for the next observing task configuration.
If the given Observe observing task configuration is valid, the VBI control system will cycle the
mechanism and detector sub-systems through the sequence of parameter sets at the specified sequence
execution rate. Under each parameter set, data collected by the camera will be sent to the DHS
processing pipeline where active plug-ins will process the data before it is written to the data store. If a
field sampling mode is specified the camera mount will cycle through the sub-fields in the specified
pattern, collecting and processing data for each. Execution will continue for the specified number of
sequence cycles after which the continue flag will determine if the cycles should be repeated or not.
The functionality of the Observe observing task will be implemented using a Jython script. For more
information on this script please refer to Section 7.7.1.2.
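The default pacing rule above (as fast as possible, bounded by the longest exposure in the sequence) can be sketched as follows; the attribute name `exposure_s` is an assumption:

```python
def effective_period(param_sets, demand_rate_hz=None):
    """Seconds between parameter-set executions.  The floor is the
    longest exposure anywhere in the sequence; a user-demanded rate
    (in Hz) can only stretch the period, never shrink it below the
    exposure floor."""
    floor = max(p["exposure_s"] for p in param_sets)
    if demand_rate_hz is None:
        return floor            # default: as fast as possible
    return max(floor, 1.0 / demand_rate_hz)
```

This mirrors the text: the optional rate input may slow the sequence down but cannot outrun the longest exposure.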
7.4.5 Process Data
Data processing is an important element of the VBI software system and the instrument as a whole. Data
produced by a VBI camera during execution of an observing task configuration will be processed on the
summit by different data processing “plug-ins” located in the DHS. The plug-ins applied will depend on
the observing task and data processing input parameters. Plug-ins for frame selection, speckle image
reconstruction, detailed display, gain calibration, and dark calibration will be available.
The ATST Data Handling System (DHS) provides several constructs that can be used to support
instrument data processing needs. The first of these is a camera line, which consists of a Data Transfer
Node (DTN) for receiving raw frames from the camera and an optional Data Processing Pipeline (DPP)
for applying any instrument specific processing to the raw frames. A DHS camera line may contain only
one Data Processing Pipeline (DPP). The DPP is a directed graph containing one or more Data
Processing Nodes (DPN) that may each provide some element of data processing.
The VBI will utilize a separate camera line for each channel. The DPP in the camera line will contain a
DPN for each data processing plug-in required. The DPNs will be written with logic to determine
whether they should process data based on the observing task and input parameters. Each node will either
process the data or not, and then pass it to the next node in the graph.
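The pass-through behavior of a DPN can be sketched as follows; the class shape and the task tag are illustrative assumptions, not the DHS API:

```python
class DataProcessingNode:
    """Sketch of the documented DPN behavior: each node inspects the
    frame's observing-task tag, processes the frame only if the tag
    applies to it, and always hands the frame to the next node in the
    graph."""
    def __init__(self, handles_task, process, downstream=None):
        self.handles_task = handles_task
        self.process = process
        self.downstream = downstream

    def accept(self, frame):
        if frame["task"] == self.handles_task:
            frame = self.process(frame)
        if self.downstream is not None:
            return self.downstream.accept(frame)
        return frame
```

A chain built this way lets every frame traverse the whole graph while only the applicable nodes touch it.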
Figure 39 below shows the planned topology of the VBI Blue DPP network. For more information on the
VBI DPP and the individual DPNs please refer to Section 7.8.
Figure 39: VBI Blue Data Processing Pipeline
[Figure content: the virtual camera feeds the VBI Blue Data Processing Pipeline, whose Data Processing Nodes (Dark, Gain, Frame Selection, Detailed Display, and Speckle Input, Speckle Slave 1…n, and Speckle Output) are connected by topics (main, raw, dark, gain, detail, speckle, slave, output) and write to the camera store, calibration store, and transfer store.]
7.4.5.1 Off-summit Data Processing
All of the DHS plug-ins discussed in section 7.4.5 will also be available off the summit so that they can
be used to process raw data. This VBI data processing package will allow the user to run a specific
processing plug-in against a set of data files. The DHS will not be required to run the data processing
package off-summit. This will be accomplished by replacing the interface between the plug-in and the
DHS with one that lets the plug-in read the data from files on the user’s system.
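The interface swap described above can be sketched as follows; all names are hypothetical. The point is that a plug-in written against a frame-source interface runs unchanged whether frames arrive from the DHS or from files on disk:

```python
class FileFrameSource:
    """Off-summit stand-in for the DHS delivery interface: yields frames
    loaded from files instead of the live camera line."""
    def __init__(self, paths, loader):
        self.paths = paths
        self.loader = loader  # callable mapping a path to a frame dict

    def frames(self):
        for path in self.paths:
            yield self.loader(path)

def run_plugin(plugin, source):
    """Apply one processing plug-in to every frame from any source that
    exposes a frames() iterator."""
    return [plugin(frame) for frame in source.frames()]
```

On the summit the same `run_plugin` call would be fed by a DHS-backed source instead.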
7.4.6 Verify Data Quality
During execution of an observing task the VBI may be configured to collect and send raw data to the
DHS camera line. In addition, based on the data processing attributes of the instrument configuration the
VBI camera line may be required to perform data pipeline processing tasks. Data sent to the DHS camera
line (whether saved or not) may need to be viewed for quality or control purposes at various stages. To
support this functionality the VBI software system will employ the services provided by the DHS Quality
Assurance Support (QAS) system.
The DHS QAS system follows a source and sink model for quality assurance data delivery and
presentation respectively. A QAS source can be established at any component in the camera pipeline and
serves as a configurable probe whose sole purpose is delivering quality assurance data as efficiently as
possible. Configurable elements of the source include the image size, thus allowing the user to reduce the
amount of image data being sent for QA purposes. The data transfer node for camera raw data is the most
common implementation of a QAS source. A QAS sink is a configurable subscriber of the data delivered
by the QAS source and is concerned with presenting the data as efficiently as possible. Configurable
elements of the sink might include specifying that only every nth frame delivered by the source will be
presented. A quick look display is the most common implementation of a QAS sink.
For the VBI, the following usage of the QAS system is planned:
QAS source will be implemented at the data transfer node so that raw images from the camera can be
delivered for quality and control purposes.
QAS sink will be implemented as a DS9 quick look display that subscribes to the QAS source at the
data transfer node. This will allow users to view camera raw data for quality and control purposes.
QAS source will be implemented at each of the data processing nodes of the data processing
pipeline.
QAS sink will be implemented for each data processing node as a DS9 quick look display that
subscribes to the QAS source at the data processing node.
Figure 40 illustrates how the QAS elements can be used with Data Transfer Nodes and Data Processing
Nodes. Notice that the Quick Look Display can be connected to any QAS source probe, thus allowing
inspection of the data at various points in the camera line.
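The every-nth-frame sink configuration mentioned above can be sketched as a simple decimating subscriber; the class is illustrative, not the DHS QAS API:

```python
class QasSink:
    """Sketch of a configurable QAS sink: present only every nth frame
    delivered by the source, discarding the rest to keep the display
    responsive."""
    def __init__(self, every_nth=1):
        self.every_nth = every_nth
        self.count = 0
        self.presented = []

    def deliver(self, frame):
        self.count += 1
        if self.count % self.every_nth == 0:
            self.presented.append(frame)
```

A quick look display configured this way trades temporal completeness for presentation efficiency, as the text describes.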
7.4.7 View Final Data
The VBI camera line will produce both raw data and processed data which can be viewed by the user.
The type of data determines how long, where, and in what format the data will be available. The
VBI will use the DHS to transport, process, and store the acquired data.
Raw data will be saved in the camera store on the facility Data Storage System (DSS) for one day, after
which it will be purged. Should a significant solar event occur, this raw data can be obtained and moved
to a separate temporary storage device (e.g. an SSD) for further analysis.
Processed data will be written to a transfer store by the Data Transfer Node (DTN) of the VBI DPP. Data
residing in the transfer store will be further processed by the VBI Data Distribution Node (DDN). The
DDN consults the header database to obtain related header data information, combines the header and
data for each data item into a single FITS file, and transfers that file to the NSO Digital Library (either
directly, or via removable media). Figure 41 below illustrates how the DDN will take on-mountain data,
process it into FITS files, and transfer it off-mountain.
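The DDN merge step (consult the header database, combine header and data into one record per FITS file) can be sketched as follows; the record layout and names are assumptions, and a real implementation would write an actual FITS file rather than a dictionary:

```python
def build_record(header_db, data_item):
    """Sketch of the DDN merge: look up the stored header information
    for a data item and combine it with the pixel data into a single
    record destined for one FITS file."""
    header = dict(header_db.get(data_item["id"], {}))
    header["NAXIS1"] = len(data_item["data"])  # derived from the data itself
    return {"header": header, "data": data_item["data"]}
```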
Data residing at the NSO Digital Library will be made available to users per NSO distribution policy.
Once a user gains access to the data, analysis can be performed using the processing tool of choice (e.g.
IDL).
Figure 41: VBI Data Distribution Node
Figure 40: VBI Software System use of the DHS QAS and DPP Elements
7.4.8 Control / Monitor Instrument
Under normal circumstances the VBI will be operated by configurations built using the OCS VBI
instrument tabs and submitted for execution through the ICS. However, certain activities such as
troubleshooting, adjusting, and overriding the VBI sub-systems settings require operation outside the
standard OCS-ICS-VBI control hierarchy. For this purpose the VBI software team will provide a VBI
engineering user interface.
The VBI engineering user interface will allow the user to directly monitor and control all elements of the
VBI software system. The communication path between the VBI engineering GUI and other VBI
software systems will be based on CSF but will not pass through other principal systems such as the OCS
and ICS. In addition, the interface will provide access to lower level sub-systems and their attributes
which are not available through the standard control hierarchy. For example, the engineering GUI will
allow the user to set the focus position to be used for each bandpass filter. These attributes will support
activities such as stage adjustments, updating system settings, and parasitic participation in experiments.
The VBI engineering user interface will be built using the Java Engineering Screens (JES) framework
provided by the ATST software group. The JES framework provides standard widgets for GUI
construction and communications with other systems through CSF (events, peer-to-peer).
Although a majority of the VBI engineering user interface will be built using the standard widgets
provided by JES, customized JES widgets may be developed to support unique needs of the VBI. The
VBI software team will work with the ATST software team to publish standards and guidelines for the
look and feel of instrument engineering GUIs. This effort will help increase usability of these interfaces
by ATST staff by ensuring common elements are presented to all users in the same manner.
For more details on the design of the VBI engineering user interface please refer to Section 7.5.4.
7.5 GRAPHICAL USER INTERFACES DESIGN
7.5.1 Overview
The VBI provides several graphical user interfaces (GUI) for the purpose of gathering inputs and
monitoring system status. These interfaces play an important role in the user’s experience with the VBI
and therefore much consideration has gone into their design. In the next few sections we will discuss the
design elements of these GUIs.
7.5.2 VBI Explorer
Potential users of the VBI will need to easily access information about the instrument such as
specifications, contact information, and simulators. Currently, the ATST intranet provides a project book
web page for the VBI as shown in Figure 42. This web page provides access to documentation for the
VBI that has been approved as well as contact information.
Figure 42: VBI Project Book Web Page
The ATST intranet is built on the Drupal open source content management platform. This platform
provides all of the web site administration services such as user authentication, content management, and
access control. In addition, it provides templates for common web site features such as forums and
documentation libraries. As the VBI project progresses, the ATST VBI project book page will be
expanded to provide additional functionality as follows.
Forums
Several forums will be created for different topics such as optics, observing, and software. Users
of the site will be able to post their questions to the forum. When a post is made, members of the
VBI team will be notified and then able to respond to the post. Users may also search the
forum to see if their question has already been answered in a previous thread.
VBI Explorer Download
Potential users of the VBI will want to check observing configurations to determine if they are
feasible on the VBI. The VBI Explorer will be a downloadable tool that can be used for this
purpose. The VBI Explorer will provide an interface into which users can enter input parameters.
The interface will guide users through the required inputs and their range of values. It will then
validate input configuration and provide feedback to the user as to whether they are valid.
7.5.3 OCS Instrument Tabs
The OCS provides the primary interface for user control of the ATST. One of the key functions it
provides is user support for specification, execution, and monitoring of experiments. These experiments
may consist of one or more steps that involve various telescope and instrument configurations.
Instrument configurations are classified into different observing tasks, each task having unique input
requirements and performing distinct behaviors based on those inputs. Therefore the OCS must provide
access to instrument specific user interfaces where tasks and inputs for the instrument can be defined for
each step of the experiment.
To support defining the instrument specific tasks and inputs for each step of an experiment, the OCS
provides hooks to instrument specific input screens called tabs. These tabs must be implemented using
the Java Engineering Screens (JES) framework which is the ATST standard for user interface
development. The JES framework provides standard widgets with built in hooks to CSF services such as
the event service (publish and subscribe) and connection service (peer-to-peer communications).
The VBI provides one OCS tab for each observing task. Most of the VBI tabs are built using the
standard JES framework widgets and only require configuration as needed for the VBI. However there
are a few instances where custom JES widgets are needed to support non-standard functionality required
by the VBI.
7.5.3.1 Common VBI OCS Instrument Tab Design Elements
There are several design elements that are common to all VBI OCS tabs. This section will provide the
details for those elements. Some of these elements may eventually be provided by the OCS instrument
tab framework, while others will require development by the VBI software team. Unless otherwise noted
in the tab specific sections that follow, all tabs support these features using the same design elements.
7.5.3.1.1 Configuration Manager
The Configuration Manager provides methods for reading and writing observing task configurations from
the persistent store. Each tab must have the ability to manage the observing task configurations
associated with the tab as follows:
Create new observing task configuration
Open existing observing task configuration by name
Edit observing task configuration
Save observing task configuration by name
Delete existing observing task configuration by name
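The manager operations listed above can be sketched as an in-memory class; the real implementation would read and write the ATST persistent store rather than a dictionary:

```python
class ConfigurationManager:
    """Sketch of the per-tab configuration manager: create, open, save,
    and delete observing task configurations by name."""
    def __init__(self):
        self._store = {}

    def save(self, name, config):
        self._store[name] = dict(config)  # copy so later edits don't alias

    def open(self, name):
        return dict(self._store[name])

    def delete(self, name):
        del self._store[name]
```

The Parameter Set Manager described below would follow the same shape for parameter sets.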
7.5.3.1.2 Parameter Set Manager
The Parameter Set Manager provides methods for reading and writing parameter sets from the persistent
store. Each tab will have the ability to manage the parameter sets associated with the tab as follows:
Create new parameter set
Open existing parameter set by name
Edit parameter set
Save parameter set by name
Delete existing parameter set by name
7.5.3.1.3 VBI Instrument Configuration Validation Component
The VBI Instrument Configuration Validation Component provides a common service for performing
validation of VBI instrument configurations. By using a common service for validations, we can be
certain that all VBI software systems are using the latest validation rules. This component will be
implemented as a re-usable class that takes configurations, validates them, and responds with the results
of the validation.
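A minimal sketch of such a shared validation class follows. The rule set shown is illustrative, though the filter names match those listed elsewhere in this document; the real rules would live in one place so every VBI software system applies the same, latest version:

```python
class VbiConfigValidator:
    """Sketch of the re-usable validation component: takes a
    configuration, applies the shared rules, and responds with a
    pass/fail result plus the reasons for any failure."""
    RULES = [
        ("exposure_s", lambda v: 0.0 < v <= 10.0, "exposure out of range"),
        ("filter", lambda v: v in ("393.4", "430.5", "450.4", "486.1"),
         "unknown bandpass filter"),
    ]

    def validate(self, config):
        errors = [msg for key, ok, msg in self.RULES
                  if key not in config or not ok(config[key])]
        return (not errors, errors)
```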
7.5.3.1.4 Camera Configuration Validation Component
The Camera Configuration Validation Component provides a common service for performing validation
of camera configurations. By using a common service for validations, we can be certain that all VBI
software systems are using the latest validation rules. This component would be provided by the ATST
Camera Systems team in the form of a CSF component or re-usable class.
7.5.3.2 Instrument Tab Prototypes
Prototypes were developed for some of the VBI instrument tabs to obtain feedback from clients and help
the development team understand the capabilities of the JES framework. Figure 38 presented earlier
shows a prototype screen for the Setup tab. Figure 43 below shows a prototype screen for the Focus tab.
7.5.4 Engineering GUI
The VBI engineering GUI provides monitoring and control capabilities for all the sub-systems of the
instrument. This allows users to perform engineering diagnostics, troubleshooting, and repair on the
system during operations. The following sections discuss the main groups of functionality provided by
the engineering GUI.
Figure 43: Prototype of the VBI Focus Tab
7.5.4.1 Monitor System Status
One of the primary functions of the VBI engineering GUI is to provide real-time monitoring of all the
VBI software system components. Detailed status information is needed at all times to support operators
and engineers in the setup, troubleshooting, and adjustment of the instrument. The following is a list of
the status information that will be provided in the engineering GUI:
High Level
Health of instrument controller (based on health of sub-systems)
Observation task
Observing task configuration (list of attributes)
Current script name and version
Current script status
Current script percentage complete
Motion Stages
Health of motion stages
Mode
Current position
Demand position
Position Error
Current velocity
Cameras
Health of camera
Mode
Camera configuration
Percentage complete
Data Processing
Health of data processing component
Active/Inactive status
Current configuration
Percentage complete
7.5.4.2 Adjust Motion Stages
The engineering GUI will enable the user to directly control all motion stages. This capability is
important to allow adjustments to be made to motion stages during manual setup routines as well as
during observations. The following adjustments will be supported:
Motion Stages
Demand position
Demand velocity
7.5.4.3 Build and Submit Configurations
The engineering GUI will provide the tools necessary for creating and submitting configurations to the
VBI. It will support building all of the observing task configurations in the same manner as the VBI
instrument tabs. In addition it will support building and submitting configurations directly to the motion
control stages of the instrument as discussed in Section 7.6.
7.5.4.4 Reporting
The engineering GUI will provide the capability to view reports showing metrics on performance of the
VBI software system components. At this time no specific reporting requirements have been established.
7.5.5 Image Data Displays
7.5.5.1 Overview
Image data collected by the VBI that resides in the DHS will be available for display to the user for
quality assurance and control. The DHS supports generic and customized displays for this purpose and
we will discuss how each will be used by the VBI in the next few sections.
7.5.5.2 Quick Look Display
The VBI will utilize the DHS Quick Look Display for image data quality assurance and control. This
display will provide the images at the required rate (5Hz) and within the required latency tolerance (0.2s).
The source-sink model design of the DHS enables the Quick Look Display to tap into the VBI camera
line at any output point from a DTN or DPN. This flexibility will enable the display to easily adapt to new
DPNs being added to the DPP over time. In addition, it supports a uniform display that will help with the
general usability of the system. Figure 40 shows an example of how a DHS Quick Look Display can
utilize the sink-source model to display VBI data at various points in the camera line. The ATST DHS
team has selected SAOImage ds9 as the display package for the Quick Look Display. Figure 44 below
shows a screenshot from the SAOImage ds9 display package.
7.5.5.3 Detailed Display
The VBI will provide a detailed display that allows users to view calibrated images. The detailed display
will be comprised of two parts: image processing and image display. The image processing software that
produces calibrated images will be implemented in the ATST DHS as a Data Processing Node (DPN) in
the VBI Data Processing Pipeline (DPP). For more information on the Detailed Display plug-in DPN,
please refer to Section 7.8. Data output from this DPN may be displayed using the DHS Quick Look
Display. Although no custom display controls are currently planned, they may be added if necessary to
control the flow of data from the DPN to the Quick Look Display.
Figure 44: Prototype of the ATST Quick Look Display
7.6 INSTRUMENT CONTROLLER DESIGN
7.6.1 Introduction
The VBI instrument requires a sophisticated control system capable of positioning mechanical and
detector components within specified error tolerances. In addition, the system must monitor and report
status on these components throughout an observation. To provide this functionality the VBI software
team will deliver the VBI Instrument Controller.
7.6.1.1 Purpose
The purpose of this section is to provide the critical design definition for the VBI Blue Instrument
Controller software system. The critical design documentation will be presented using four different
design views: Decomposition Description, Dependency Description, Interface Description, and
Detailed Design. Each design view represents a separate concern about the software system. Together
these views provide a comprehensive description of the design in a form that allows users to access
desired information quickly.
7.6.1.2 Scope
The scope of this section is to provide the critical design documentation for the VBI Blue Instrument
Controller software system.
7.6.1.3 Definitions and Acronyms
The following definitions and acronyms are useful in understanding and discussing aspects of the VBI
Blue Instrument Controller software system.
CS/CSS – Camera Systems / Camera System Software
DC – Detector Controller
DHS – Data Handling System
IC – Instrument Controller
ICS – Instrument Control System
MC – Mechanism Controller
OCS – Observatory Control System
TRADS – Time Reference and Distribution System
VCC – Virtual Camera Controller
7.6.2 Decomposition Description
7.6.2.1 Module Decomposition
7.6.2.1.1 Instrument Controller
The VBI Blue Instrument Controller (IC) is a group of software components that work together to provide
control of the VBI Blue sub-systems. The IC will provide the interface, logic, motion control, and
detector control capabilities required for that channel. The ICs and their components will be implemented
using CSF and will utilize the SIF provided by the ICS. Figure 45 below shows the controller layout for
the VBI blue channel container (VBI BLUE).
The various components of the system will be implemented as ATST controllers (sub-classing from base
controllers) providing the command/action/response behavior needed to handle configurations. Details of
the controller model in general and the particular controllers and components used within the VBI can be
found in the CSF user’s manual.
Each of these controllers will be initialized and then started by the container manager via the init and
startup methods. During this phase the VBI components will attempt to make connections to
the other ATST components with which they need to communicate and will retrieve their initial state
from the runtime database through the use of the property service.
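The init/startup sequence above can be sketched as follows. This is a minimal stand-in, not the real CSF API: the base-class name, lifecycle method names, and property handling are simplified assumptions made for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical, simplified model of a CSF-style controller lifecycle:
// the container manager calls init() first (retrieve initial state),
// then startup() (connect to peer ATST components).
abstract class BaseController {
    protected final Map<String, String> properties = new HashMap<>();
    private String state = "loaded";

    public final void init() {
        loadProperties();          // stand-in for the property service lookup
        state = "initialized";
    }

    public final void startup() {
        if (!"initialized".equals(state))
            throw new IllegalStateException("startup before init");
        connectPeers();            // stand-in for opening peer connections
        state = "running";
    }

    public String getState() { return state; }

    protected abstract void loadProperties();
    protected abstract void connectPeers();
}

class FilterWheelController extends BaseController {
    @Override protected void loadProperties() {
        // In the real system these come from the runtime database.
        properties.put("stowPos", "0.0");
    }
    @Override protected void connectPeers() {
        // Would open the connection to the motion hardware here.
    }
}
```

Calling `init()` and then `startup()` on an instance leaves it in the running state; calling `startup()` first fails, mirroring the ordering enforced by the container manager.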
Figure 45: VBI Blue Channel Control System Layout
[Diagram: the VBI IC container (VBI BLUE) holding the Instrument Sequencer (atst.ics.vbiBlue); the Mechanism Controller (atst.ics.vbiBlue.mc) with filter (atst.ics.vbiBlue.mc.filter), focus (atst.ics.vbiBlue.mc.focus), camera x/y stage (atst.ics.vbiBlue.mc.x, atst.ics.vbiBlue.mc.y), thermal (atst.ics.vbiBlue.mc.thermal), and auxiliary (atst.ics.vbiBlue.mc.aux) sub-controllers driving the Delta Tau PPMAC/PLC hardware, thermal sensors, and auxiliary sensors; the Detector Controller (atst.ics.vbiBlue.dc) with its Virtual Camera Controller (atst.ics.vbiBlue.dc.vcc) and the VBI camera; and the Time Base Controller (atst.ics.vbiBlue.time) connected to the TRADS hardware via the TRADS PTP timing bus. The container is commanded by the OCS/ICS (experiment GUI, VBI instrument tabs) and the VBI engineering GUI via ICD 3.1.4/3.2 and ICD 3.1.4/3.6.]
7.6.3 Dependency Description
7.6.3.1 Inter-module dependencies
7.6.3.1.1 IC and VCC
The VBI Blue will have its own camera for collecting image data. This camera will be operated using the
Camera Systems (CS) provided by ATST. The CS includes the Camera System Software (CSS) which
contains the Virtual Camera Controller (VCC). The VCC provides an interface to the ICS as defined in
ICD 3.1.4/3.6. This interface will allow the user to specify values for camera settings such as
exposure time, number of frames, binning, and ROI. The VBI will use this interface to configure and
control the camera during execution of its observing tasks.
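The camera-settings interface can be illustrated with a small configuration builder. The attribute names below are invented for illustration only; the authoritative attribute names and types are defined in ICD 3.1.4/3.6.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative builder for a VCC camera configuration. Attribute names
// (".exposureTime", ".numFrames", etc.) are hypothetical examples.
class CameraConfigBuilder {
    private final Map<String, String> attrs = new LinkedHashMap<>();
    private final String prefix;

    CameraConfigBuilder(String prefix) { this.prefix = prefix; }

    CameraConfigBuilder exposureTimeMs(double ms) {
        attrs.put(prefix + ".exposureTime", Double.toString(ms)); return this;
    }
    CameraConfigBuilder numFrames(int n) {
        attrs.put(prefix + ".numFrames", Integer.toString(n)); return this;
    }
    CameraConfigBuilder binning(int x, int y) {
        attrs.put(prefix + ".binning", x + "x" + y); return this;
    }
    CameraConfigBuilder roi(int x, int y, int w, int h) {
        attrs.put(prefix + ".roi", x + "," + y + "," + w + "," + h); return this;
    }
    Map<String, String> build() { return attrs; }
}
```

A setup from the OCS instrument tab would then reduce to assembling one such attribute map and submitting it to the VCC as a configuration.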
For most of the observing tasks supported by the VBI, the camera settings will simply be set to those
specified by the user during setup of the observation in the OCS instrument tab or the engineering GUI.
However some observing tasks that provide automated observing (focus, align, etc.) may configure the
camera based on pre-determined settings as part of a routine.
The use cases of the VBI require that sequences of observations be performed rapidly. The amount of
time allowed between observations for re-configuring the system is 333ms. Thus, the VCC must be able
to load a new configuration and apply it in less than 333ms.
7.6.3.1.2 IC and TRADS
The ATST Time Reference and Distribution System (TRADS) will be used to achieve the required
synchronization between the VBI mechanisms and camera. There will be two TRADS interfaces in the
VBI: one in the facility camera and one in the IC that triggers the filter, focus, and camera x/y stages. The
interface to the TRADS is through the TimeBaseController software component and the TSync-PCIe-PTP
time base board. Figure 46 below shows a picture of the TSync hardware.
Figure 46: TSync-PCIe-PTP
All TSync-PCIe-PTP time base boards keep identical reference time signals accurate to 10 nanoseconds.
Therefore, through proper configuration of the time epoch, rate, and offset parameters of the time base
boards, mechanisms can be triggered to move exactly at the end of a camera exposure.
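As a concrete illustration of the timing arithmetic, consider generating per-frame trigger timestamps from a shared epoch: each mechanism trigger fires at the end of its frame's exposure. All values are nanoseconds on the common TSync time base; the real TimeBaseController interface is not shown, and the cadence/exposure figures in the test are arbitrary.

```java
// Sketch of scheduling mechanism triggers at the end of each camera
// exposure on a shared time base (all values in nanoseconds).
class TriggerSchedule {
    static long[] triggerTimes(long epochNs, long exposureNs, long cadenceNs, int frames) {
        long[] out = new long[frames];
        for (int i = 0; i < frames; i++) {
            // Frame i starts at epoch + i * cadence; the trigger fires at exposure end.
            out[i] = epochNs + i * cadenceNs + exposureNs;
        }
        return out;
    }
}
```

Because every TSync board holds the same reference time to within 10 ns, loading these timestamps into different boards yields triggers that are effectively simultaneous across the instrument.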
The TimeBaseController is a software component provided by the ATST software group that provides an
interface to the capabilities of the TSync-PCIe-PTP board. The VBI IC will use the TimeBaseController
component to perform the following functions on the TSync card:
- Obtain reference time from the TSync board
- Generate pulses on TSync output pins to trigger mechanism moves
- Record timestamps on input pin pulses received from mechanisms at move completion
7.6.3.1.3 IC and BDT
Several of the VBI observing tasks (setup, focus, align, target) require the ability to obtain frames from
the camera so that they may be processed, and results of the processing can be used to make adjustments
to the instrument configuration. To support this functionality the VBI will take the following actions
when a frame is required:
1. Send a frame request to the camera
A request will be sent from the VBI IS to the VCC in the form of a configuration. The
configuration will indicate that a frame is being requested, and provide a unique identifier for
the frame to be tagged with. Upon receipt of the configuration, the VCC will tag the next
frame with a header data element containing the unique identifier. The frame will then be sent
out as usual, which involves posting an event containing the frame data to the BDT.
2. Obtain the frame from the BDT
Once a frame request has been made by the IS to the VCC, the IS will subscribe to the
camera’s BDT event using the BDT service. As a subscriber to the VBI camera’s BDT event,
all VBI camera BDT events (i.e. frames) will be sent to the IS. The IS can then check the
header data of each frame it receives on an event until it finds the one matching the unique
identifier used in the request to VCC. The IS can then unsubscribe from the camera’s BDT
event, and process the obtained frame as needed.
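A minimal sketch of the tag-and-match step follows. The header key and the class shapes are hypothetical, and the incoming events are modeled as an in-memory list rather than a live BDT subscription.

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Sketch of the frame-request pattern: the IS tags its request with a
// unique id, then scans incoming frame events for the matching header.
class FrameGrabber {
    // Simplified frame event: header attributes plus pixel payload.
    record Frame(Map<String, String> header, byte[] data) {}

    // Returns the first frame whose header carries the requested id.
    static Optional<Frame> findTaggedFrame(List<Frame> events, String requestId) {
        for (Frame f : events) {
            if (requestId.equals(f.header().get("vbi.frameRequestId"))) {
                return Optional.of(f); // the IS would unsubscribe here
            }
        }
        return Optional.empty();
    }
}
```

Frames without the matching id are simply discarded, which is why the IS subscribes only for the duration of the request.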
Figure 47 below illustrates how the VBI control system IS would subscribe to the BDT event stream of
the VBI camera. It is important to note that this approach will require communication over a 10Gbit
Ethernet to ensure timely receipt of frame data. Thus the VBI control system computer will need to have
a separate 10Gbit network card in addition to the 1Gbit card used for the command channel.
Figure 47: VBI IS as a BDT Subscriber
7.6.3.2 System Hardware Dependency
The VBI Instrument Controller must run on hardware that meets the networking and processing demands
of the system design. To help evaluate the many hardware options, several criteria were set and
prioritized. Hardware from several vendors was considered, and after careful consideration the Rackform
iServ 350 from Silicon Mechanics was selected.
Table 2 below lists those criteria, the justification for each, and how the Rackform iServ 350 meets those
criteria.
Criteria | Justification | RackForm iServ 350
High-performance networking | ATST systems are distributed and network communications performance is essential | Intel Xeon 6-core processing architecture and dual-port 1 Gbit Ethernet
PCIe 2.0 expandability | System needs to support the TSync timer board (PCIe x4), a 10 Gbit Ethernet card (PCIe x8), and possibly a mid-level GPU (PCIe x8) for engineering image processing if desired | 2 PCIe 2.0 x16 slots that can be split into 4 PCIe 2.0 x8; 1 PCIe 2.0 x4
RAM capacity | VBI IC will be required to acquire several engineering images for processing | Motherboard has capacity for up to 192 GB RAM (12 x 16 GB)
High reliability | VBI IC must have minimal downtime | Server-class motherboard, hot-swappable drives
Minimize rack space | Rack space below the coudé lab is limited | 1U

Table 2: Criteria for VBI IC Computer
Figure 48 lists the technical specifications for the RackForm iServ 350.
Figure 48: RackForm iServ 350 Technical Specifications
7.6.4 Interface Description
The VBI is one of the facility instruments of the ATST. It must therefore work seamlessly with the other
components that make up the overall ATST control system. In particular it must accept and act on
configurations sent by the ICS as well as configure external sub-systems, such as the camera. The next
few sections will focus on these interfaces and the interaction details of each.
7.6.4.1 ICS to VBI
The software interface between the ICS and the VBI provides a control mechanism for the ICS to position
and control the mechanical and detector elements of the VBI. It also provides a status and event
mechanism for VBI information to be broadcast to interested systems (ICS, users, loggers, etc.).
The VBI shall perform the following actions under command from the ICS:
- Operate all VBI servo electronics, sensing hardware, and other electronics;
- Control the position and motion of the filter wheel, focus mirror, and camera mount x-y stages;
- Configure and control the camera using the ICS to Camera Systems interface (3.1.4/3.6);
- Record the metrology from the VBI mechanical and thermal sensors; and
- Provide up-to-date status information on all VBI equipment.
The VBI interface provides a set of observation tasks and attributes that can be used by the ICS to
command the VBI to perform certain actions. The observation tasks provide an abstract interface that
limits which attributes can be specified with each, and thus which VBI components are impacted.
Commands to the interface are provided in the form of observing task configurations, which consist of the
observation task attribute and other attributes specifying the demand settings for the VBI subsystems
relevant to that task. Observing task configurations are submitted to the VBI and if valid, the sub-systems
will be updated to match the demand configuration. For more information about the observation tasks
and related attributes refer to the ICS to VBI ICD.
To further understand this interface it is helpful to examine the sequencing of commands and the
interaction between the ICS and VBI in more detail. An example interaction sequence is shown below in
Figure 49. In this example two complete observing tasks are shown. The sequence of events is as
follows:
At the start, the OMS in the ICS sends an instrument setup configuration (cfg1) to the ICS VBI
Instrument Adapter (IA) for the first observing task (shown with red lines). Note that the parameters sent
in cfg1 may actually be sent using multiple consecutive configurations (essentially multiple lines for
cfg1).
The VBI IA sends the configuration to the VBI Instrument Sequencer (IS), which processes it, forwarding
the appropriate portions to the Mechanism Controller (MC) and Detector Controller (DC). The MC and
DC begin their actions. The MC and DC actions occur in parallel to maximize efficiency. Note: the MC
and DC sub-controllers and associated hardware are not shown in this diagram. If there is an observing
script associated with this observing task, the VBI IS downloads it from the Script Store at this time (this
action is not shown in the figure).
Next, the ICS has received confirmation that the TCS is in position, so it sends the tcsConfigured
command (tcsCfg1) to the VBI IA. The IA is still processing the previous setup configuration, so it
queues the request. The IA is purposely single-threaded to cause this behavior.
Immediately after sending the tcsConfigured command, the ICS sends a setup configuration for the next
observing task (cfg2) to the IA. The IA is still busy, so it queues the request behind the already queued
tcsConfigured command.
After the MC and DC complete their setup actions for cfg1, they report completion to the IS. The IS
reports completion to the IA, which in turn reports completion to the ICS (done cfg1).
Figure 49: Sample Sequence Diagram Showing Interaction between OCS, ICS, and VBI.
Now that the setup configuration is complete, the IA can process the queued tcsConfigured command
(tcsCfg1), which it sends to the IS (start vbi). The IS begins execution of the observing script downloaded
earlier. The script commands the MC and the DC to move devices and acquire data, as required.
When the observing script is complete, the IS reports its completion to the IA, which reports it to the ICS
(done tcsCfg1).
Now that the observing actions are complete, the IA can process the queued setup configuration for the
next observing task (cfg2). The behavior here is similar to that described above for cfg1.
Sometime after the observing actions are complete, the TCS has moved to a new observing configuration,
and the ICS sends a tcsConfigured command for the 2nd observing task (tcsCfg2) to the IA. The IA is still
busy processing the previous parameters (cfg2), so it queues the request.
After the MC and DC complete their setup actions for cfg2, they report completion to the IS. The IS
reports completion to the IA, which in turn reports completion to the ICS (done cfg2).
Now that the setup configuration is complete, the IA can process the queued tcsConfigured command
(tcsCfg2), which it sends to the IS (start vbi). The IS processes the configuration and begins executing the
observing script.
When the observing script is complete, the DC reports the completion to the IS, which continues the
reporting all the way up the chain to the ICS (done tcsCfg2).
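The queueing behavior of the single-threaded IA can be modeled with a simple FIFO, as sketched below. This is a synchronous toy model of the ordering guarantee, not the CSF implementation: a real IA executes one command at a time while later submissions wait in the queue.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Model of the Instrument Adapter's single-threaded queueing: commands
// received while busy are queued and executed strictly in arrival order.
class InstrumentAdapterModel {
    private final Queue<String> pending = new ArrayDeque<>();
    private final List<String> completed = new ArrayList<>();

    void submit(String cmd) { pending.add(cmd); }

    // Drain the queue one command at a time (single-threaded by design).
    void processAll() {
        String cmd;
        while ((cmd = pending.poll()) != null) {
            completed.add(cmd); // the real IA forwards to the IS and awaits done
        }
    }

    List<String> completionOrder() { return completed; }
}
```

Submitting cfg1, tcsCfg1, cfg2, tcsCfg2 in that order therefore always completes in that order, which is exactly the interleaving shown in Figure 49.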
7.6.4.2 Interlocks
The ICS will handle all interlock-related events. When an interlock occurs, the ICS will abort any actions
currently being performed by the VBI, and block any additional actions from being started. The ICS will
maintain this state until the interlock is cleared, at which point it will continue normal operations. For
more information on the ICS handling of interlocks please refer to the ICS design document.
7.6.5 Detailed Design
7.6.5.1 Module Detailed Design
7.6.5.1.1.1 Instrument Controller
The CSF components and controllers of the VBI blue channel control system are implemented using the
Java classes provided as part of the Standard Instrument Framework (SIF). In some cases the SIF classes
are extended to allow for VBI specific behaviors to be added. Figure 50 below shows the UML class
diagram for the VBI blue channel control system. In the next few sections we will discuss each of the
CSF components/controllers used in the system and make reference to the Java class used to implement
that component/controller.
7.6.5.1.1.1 Class Diagram
Figure 50: VBI Blue Channel Control System Class Diagram
7.6.5.1.1.2 Instrument Sequencer
7.6.5.1.1.2.1 Overview
The Instrument Sequencer (IS) is the top-level controller for the VBI blue channel control system. In the
system hierarchy the IS is given the name atst.ics.vbiBlue. The IS provides the interface to the ICS and
an interface to the instrument’s Mechanism Controller (MC) and Detector Controller (DC). The IS
responds to configurations received from the ICS and forwards attributes to the MC and DC, as
appropriate. It also provides scripting support for the instrument, enabling it to load and run observing
scripts.
7.6.5.1.1.2.2 Structure
The VBI Blue InstrumentSequencer class is implemented as an extension of the Standard Instrument
Framework (SIF) InstrumentSequencer class, adding the functional behavior specific to the VBI
application. The SIF InstrumentSequencer is implemented as an extension of the ManagementController
class, primarily adding the functionality required to interface with the ICS and manage observing script
execution. For more information on the design of the SIF IS please refer to the ICS design document.
7.6.5.1.1.2.3 Functionality
7.6.5.1.1.2.3.1 Observing Script Support
One of the most significant features the IS adds to the technical architecture of the controller is observing
script support. Scripting is a key element of maintaining flexibility in how the VBI is used, and enables
association of a specific script with an observing task. The association is made using the Script Store
database. By using the instrument name (i.e. vbiBlue) and current observing task, the IS is able to
retrieve the correct observing script. The parameters passed down with the instrument configurations are
used to provide experiment specifics to the generic observing script. The script controls all details of what
the instrument will do during an observing task. The IS provides the script engine required to execute the
script using standard CSF scripting tools.
7.6.5.1.1.2.3.2 Parameter Set Support
A parameter set is a group of attributes that can be referenced by name. At the user level they provide a
way to re-use common groups of input parameters. At the interface level they allow simplification by
replacing a larger set of attributes with a simple name reference. The IS will provide the ability to pull
them by name from a persistent store and perform validation on them. The VBI will use an ordered
sequence of parameter set names as part of the input configuration. In this case when a configuration is
submitted to the IS, the IS will pull and validate all the parameter sets as part of validating the
configuration. When an action is started for a configuration, the IS would make the parameter sets
available by name in memory for any observing script executed.
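The pull-and-validate step can be sketched as follows, with a plain map standing in for the persistent parameter-set store; the store layout and names are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of parameter-set resolution: an ordered list of names from the
// input configuration is expanded, in order, from a backing store.
class ParamSetResolver {
    static List<Map<String, String>> resolve(List<String> names,
                                             Map<String, Map<String, String>> store) {
        List<Map<String, String>> out = new ArrayList<>();
        for (String n : names) {
            Map<String, String> ps = store.get(n);
            if (ps == null) // validation fails fast: reject the configuration
                throw new IllegalArgumentException("unknown parameter set: " + n);
            out.add(ps);
        }
        return out;
    }
}
```

Resolving the whole sequence before any action starts is what lets the IS reject a bad configuration up front and then hand the expanded sets to the observing script by name.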
7.6.5.1.1.2.4 Custom Properties
The IS is provided by the SIF with a standard set of properties as described in the ICS design document.
In addition to these properties, the VBI software team will provide custom properties that can be set on
this controller as described below. Note that some of these attributes may become standard rather than
custom once the ICS design is finalized.
NOTE: In addition to the standard and custom properties, the VBI software team has defined properties
that will be used as input attributes to the observing scripts (i.e. filter, field sampling mode, exposure
mode, etc.). These properties are not listed here but are described in the ICS to VBI ICD.
Name | Obs Tasks | Param Set? | Type | Units | Comment
atst.ics.vbiBlue
.numCycles | All | N | integer | cycles | Number of execution cycles to perform
.continueFlag | All | N | boolean | N/A | Flag indicating if cycles should be repeated once complete
.specialScriptName | Special | N | string | N/A | Name of script to execute when in the Special obsTask
7.6.5.1.1.2.4.1 .numCycles
Data Type: integer
Units: N/A
Valid Values: 1…2^31
Default Value: 1
This attribute specifies the number of times to cycle through the given sequence of parameter sets
(.paramSets[]). This attribute may be used in any observation task.
7.6.5.1.1.2.4.2 .continueFlag
Data Type: boolean
Units: N/A
Valid Values: true | false
Default Value: false
This attribute specifies what action the VBI control system should take once it has completed executing
all cycles (.numCycles) of the parameter set sequence (.paramSets[]). If the value is true, the cycles will
be repeated. If the value is false, execution will end, and the VBI will wait for the next configuration.
This attribute may be used in any observation task.
7.6.5.1.1.2.4.3 .specialScriptName
Data Type: string
Units: N/A
Valid Values: Must match valid script name in database
Default Value: N/A
This attribute is used in the Special observation task and specifies the name of a script to be executed.
7.6.5.1.1.2.5 Custom Extensions
The SIF IS may be specialized as needed to support any non-standard functional needs of an instrument.
The VBI software team will extend the methods of the SIF IS in the VBI InstrumentSequencer class as
follows:
7.6.5.1.1.2.5.1 Overridden method: doSubmit(IConfiguration config)
The doSubmit method is a hook in the controller’s technical architecture that allows the functional
architecture to perform any actions when a configuration is submitted. For the VBI IS, this method will
be overridden to allow for custom validation of the input configuration. This is required because although
the CSF performs type and range checking of attributes in the configuration, custom code must be written
to check valid combinations of attributes, and make any adjustments as needed.
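As an illustration, a doSubmit-style combination check might look like the following. The one rule shown (the Special observing task requires a script name) is taken from the custom-property table in this section; the attribute handling is simplified to a string map rather than the real IConfiguration type.

```java
import java.util.Map;

// Sketch of the cross-attribute validation an overridden doSubmit could
// perform. CSF already type- and range-checks individual attributes, so
// only combination rules are checked here.
class ConfigValidator {
    static boolean validCombination(Map<String, String> cfg) {
        String task = cfg.get("obsTask");
        if ("Special".equals(task)) {
            // Special observing task requires a script name to execute.
            String script = cfg.get("specialScriptName");
            return script != null && !script.isEmpty();
        }
        return true; // other tasks have no extra combination rules here
    }
}
```

In the real override, a failed check would cause doSubmit to reject the configuration before any action is scheduled.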
7.6.5.1.1.2.5.2 Validation class
As part of the validation done in the overridden doSubmit method, we are looking at using a re-usable
VBI specific validation class. The idea here is to use an instrument specific validation library to check
configurations as early as possible, such as in the user interface. A validation library can be assigned a
unique key that would be attached to any configuration it validates successfully. Upon receipt of a pre-
validated configuration, the IS could verify that the id of the library used to validate the configuration is
the same as that currently registered with the IS. A match would indicate that the configuration can be
considered valid for this version of the IS, and thus the IS would not need to perform validation again.
The primary advantage of this approach is to reduce latency in the control system due to unnecessary
validations.
7.6.5.1.1.2.6 Events Published
The events published by the VBI IS are summarized in the table below.
Name | Rate | Comment
atst.ics.vbiBlue.cStatus | Change | Current status of VBI
atst.ics.vbiBlue.groupStart | Change | Start of frame set
atst.ics.vbiBlue.groupStop | Change | End of frame set
7.6.5.1.1.2.6.1 atst.ics.vbiBlue.cStatus
Attributes: inPosition(boolean), experimentId(string), observationId(string), obsTask(string), percentComplete(float), participantType(string), IAConnected(boolean)
Rate: 1 Hz.
This event reports the general status of the VBI. The inPosition attribute indicates whether the VBI sub-
systems are within defined tolerances or not. The participantType attribute indicates whether the
instrument is participating as a normal instrument or as a parasitic. The IAConnected attribute indicates
whether the VBI Instrument Adapter has connected to the VBI or not.
7.6.5.1.1.2.6.2 atst.ics.vbiBlue.groupStart
This event signals the start of an observation group (frame set).
7.6.5.1.1.2.6.3 atst.ics.vbiBlue.groupStop
This event signals the end of an observation group (frame set).
7.6.5.1.1.2.7 Events Subscribed
The events subscribed to by the VBI are summarized in the table below. For full details of each event, reference should be made to the appropriate system ICD.

Name | Rate | Comment
atst.ics.parasitic | 1 Hz | Experiment info for parasitic instruments
atst.tcs.mcs.cPos | 20 Hz | Current position data for telescope mount
atst.tcs.wccs.adops.cStatus | On change | Current status info for high order adaptive optics

Table 3: Events subscribed to by the VBI control system
7.6.5.1.1.2.7.1 Header events
The VBI subscribes to the appropriate header events and supplies the proper header information to the
Data Handling System in response to each. Most of the work involved in managing the subscriptions is
handled automatically by the Application base layer of the ATST Common Services Framework. See the
ATST Common Services Framework Users’ Manual for details.
7.6.5.1.1.2.7.2 atst.ics.parasitic
Attributes: experimentId(string), observationId(string), observingTask(string), tcsConfigured(boolean)
Rate: 1 Hz, and on change
The VBI subscribes to the ICS parasitic instrument event, for the case where it will participate in an
experiment as a parasitic. This event provides a parasitic instrument with the information it needs to
participate in an experiment. A periodic update is required in case an instrument comes on-line after an
on-change event has occurred.
7.6.5.1.1.2.7.3 atst.tcs.mcs.cPos
Attributes: altCPos(float)
Rate: 20 Hz
The VBI subscribes to the Mount Control System (MCS) cPos event. This event includes several
attributes, but the VBI is specifically interested in the altCPos attribute. The altCPos attribute provides
the current altitude of the telescope in degrees. The VBI can use this value to perform automatic
exposure time adjustments based on known air mass impact on light intensity at that zenith distance.
This event is defined in ICD-1.1/4.4, Telescope Mount Assembly to Telescope Control System Interface,
and is reproduced herein for information purposes only.
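The exposure-time adjustment mentioned above can be sketched as follows. Zenith distance is z = 90° − altitude, and the plane-parallel approximation gives airmass X ≈ sec(z); compensating an extinction of k magnitudes per airmass scales the exposure by 10^(0.4·k·(X−1)). Both the sec(z) model and the extinction coefficient are illustrative assumptions, not the VBI algorithm.

```java
// Sketch of airmass-based exposure scaling from the telescope altitude
// (altCPos, degrees). Plane-parallel atmosphere: X ~ sec(90 deg - alt).
class ExposureAdjust {
    static double airmass(double altitudeDeg) {
        return 1.0 / Math.cos(Math.toRadians(90.0 - altitudeDeg));
    }

    // Compensate extinction of kMagPerAirmass magnitudes per unit airmass,
    // normalized so that the zenith (X = 1) exposure is unchanged.
    static double adjustedExposureMs(double baseMs, double altitudeDeg, double kMagPerAirmass) {
        double x = airmass(altitudeDeg);
        return baseMs * Math.pow(10.0, 0.4 * kMagPerAirmass * (x - 1.0));
    }
}
```

At altitude 30° the airmass is 2, so the exposure grows by a factor of 10^(0.4k); at the zenith it is unchanged.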
7.6.5.1.1.2.7.4 atst.tcs.wccs.adops.cStatus
Attribute: controlMatrix(string)
Rate: On change
The VBI subscribes to the wave front correction system high order adaptive optics (AO) cStatus event.
This event includes several attributes; however, the VBI is specifically interested in the controlMatrix
attribute. The controlMatrix attribute provides the current control matrix file in use by the AO system.
The VBI Speckle image reconstruction plug-in must know this information to function properly.
This event is defined in ICD-2.3/4.4, Wavefront Correction Control System to Telescope Control System
Interface, and is reproduced herein for information purposes only.
7.6.5.1.1.3 Mechanism Controller
7.6.5.1.1.3.1 Overview
The Mechanism Controller (MC) is the top-level controller responsible for managing worker lifecycle
states and worker actions for VBI sub-systems related to control of mechanical components, as
well as miscellaneous hardware devices. In addition the MC will be used to provide an abstract interface
that allows the user to specify attributes at the MC level that are automatically translated into many
attributes for its workers.
In the VBI blue channel control system hierarchy the MC is given the name atst.ics.vbiBlue.mc. It will
be responsible for managing the following components of the system:
- Filter wheel
- Focus lens
- Camera mount horizontal (x) stage
- Camera mount vertical (y) stage
- Thermal system hardware
- Auxiliary system hardware
7.6.5.1.1.3.2 Structure
The VBI MechanismController (MC) class is implemented as an extension of the CSF
ManagementController class, adding functional behavior specific to the VBI application. This CSF
ManagementController class extends the CSF BaseController class by adding features for managing
worker lifecycles and states. For more information on the base design of the ManagementController
please refer to the ICS design document.
7.6.5.1.1.3.3 Functionality
The MC receives commands via CSF configurations from the IS, either directly from an IS controller or
from an observing script running in the IS. The MC responds by configuring the appropriate devices and
providing feedback to the IS as to their status and position.
7.6.5.1.1.3.4 Custom Properties
The MC is provided by the SIF with a standard set of properties as described in the ICS design document.
In addition to these properties, the VBI software team will provide custom properties that can be set on
this controller as described below.
Name | Modes | Param Set? | Type | Units | Comment
atst.ics.vbiBlue.mc
.activeFilter | All | Y | string | N/A | Demand bandpass filter to select and set focus
.activeSubField | All | Y | string | N/A | Demand subfield to center the camera mount x and y axes on
7.6.5.1.1.3.4.1 .activeFilter
Data Type: string
Units: N/A
Valid Values: deployPos[n] | 393.4 | 430.5 | 450.4 | 486.1
Default Value: N/A
The .activeFilter attribute is a named position that provides an abstract interface for configuring the
positions of the bandpass filter wheel and camera focus stages. When the .activeFilter attribute is
specified, the mechanism controller will automatically determine which bandpass filter is needed and
configure the filter wheel to place it in the light path. In addition, it will configure the camera focus
position to the correct value for the selected filter.
7.6.5.1.1.3.4.2 .activeSubField
Data Type: string
Units: N/A
Valid Values: deployPos[n] | center
Default Value: N/A
The .activeSubField attribute is a named position that provides an abstract interface for configuring the
positions of the camera mount x and y stages. When the .activeSubField attribute is specified, the
mechanism controller will automatically configure the camera mount x and y stages with the appropriate
named position (.x.namedPos, .y.namedPos).
7.6.5.1.1.3.5 Custom Extensions
The CSF ManagementController may be specialized as needed to support any non-standard functional
needs of an instrument. The VBI software team will extend the methods of the CSF
ManagementController in the VBI MechanismController class as follows:
7.6.5.1.1.3.5.1 Overridden method: makeConfig(IConfiguration config, String name)
When a configuration is started as an action (i.e. doAction) by the MC, the makeConfig method is called
for each worker the MC manages. This method is passed the configuration (config) and the name of the
worker (name). The purpose of this method is to extract from the configuration the attributes necessary to
build a configuration for the worker. Its default behavior is to extract only the attributes qualified for the
worker (i.e. Configuration attribute atst.ics.vbiBlue.mc.thermal.mode would be used for worker
atst.ics.vbiBlue.mc.thermal). However this method can be overridden to allow customization of how a
configuration is built for a worker.
The VBI control system design will override the makeConfig method of the MC to add functionality to
support the abstract interface provided by the .activeFilter and .activeSubField custom attributes. When
called to build configurations for the filter and focus motion controllers (workers) the overridden
makeConfig method will determine if the .activeFilter attribute is present, and if so set the value for the
.namedPos attribute of both workers to that of .activeFilter. Similarly, when called for the camera mount
x and y motion controllers (workers) it will determine if the .activeSubField attribute is present, and if so
set the .namedPos value for both workers to that of the .activeSubField.
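The makeConfig mapping just described can be sketched with plain string maps standing in for IConfiguration; the real CSF attribute classes and worker plumbing are omitted.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the overridden makeConfig: keep attributes qualified for the
// worker (default behavior), then map .activeFilter / .activeSubField onto
// the .namedPos attribute of the affected workers.
class McConfigMapper {
    static final String PREFIX = "atst.ics.vbiBlue.mc";

    static Map<String, String> makeConfig(Map<String, String> config, String worker) {
        Map<String, String> out = new HashMap<>();
        String wq = PREFIX + "." + worker + ".";

        // Default behavior: extract only attributes qualified for this worker.
        for (Map.Entry<String, String> e : config.entrySet())
            if (e.getKey().startsWith(wq)) out.put(e.getKey(), e.getValue());

        // Abstract-interface mapping from the section above.
        String filter = config.get(PREFIX + ".activeFilter");
        if (filter != null && (worker.equals("filter") || worker.equals("focus")))
            out.put(wq + "namedPos", filter);

        String subField = config.get(PREFIX + ".activeSubField");
        if (subField != null && (worker.equals("x") || worker.equals("y")))
            out.put(wq + "namedPos", subField);

        return out;
    }
}
```

A single `.activeFilter` value thus fans out into consistent `.namedPos` demands for both the filter wheel and the focus stage, which is the point of the abstract interface.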
7.6.5.1.1.3.6 Events Published
The events published by the VBI MC are summarized in the table below.
Name | Rate | Comment
atst.ics.vbiBlue.mc.cStatus | 1.0 Hz | Current mechanism controller status
7.6.5.1.1.3.6.1 atst.ics.vbiBlue.mc.cStatus
Attributes: status(string), inPosition(boolean), percentComplete(float), cActiveFilter(string), cActiveSubField(string)
Rate: 1 Hz
This event is an aggregate status based on the status received from all controllers the MC manages. It
reports the general status of the MC, the aggregate in position status of all mechanisms, aggregate percent
complete for the current configuration, the current active bandpass filter wavelength, and the current
active camera mount sub-field.
7.6.5.1.1.3.7 Events Subscribed
The MC will subscribe to the status events of all controllers it manages. Therefore it will subscribe to
the status events for the filter, focus, camera x, camera y, thermal, and auxiliary controllers. Details on
these events can be found in the controllers' respective detailed design sections.
7.6.5.1.1.4 Filter Wheel Stage Controller
7.6.5.1.1.4.1 Overview
The Filter Wheel Stage Controller is the controller responsible for monitoring and control of the VBI
bandpass filter wheel motion stage. In the hierarchy of the VBI blue channel control system it is given the
name atst.ics.vbiBlue.mc.filter.
7.6.5.1.1.4.2 Structure
The filter wheel controller is an instance of the MotionController class. The MotionController class
extends the HardwareController class by adding functionality that supports common motion control
commands, status reporting, and state management. It will connect and communicate with the Delta Tau
Power PMAC (PPMAC) motion controller using an instance of the DeltaTauConnection class. The
DeltaTauConnection class extends the Connection class by adding support for the PPMAC motion
command/response interface. The DeltaTauConnection class provides the basic connectivity to the
PPMAC, and will utilize a DeltaTauChannel object for communications over Ethernet. For more
information on the design of the MotionController, DeltaTauConnection, and DeltaTauChannel classes
please refer to the ICS design document.
7.6.5.1.1.4.3 Functionality
The filter wheel controller receives commands via CSF configurations from the MC or directly from the
engineering interface. A configuration consists of a mode attribute and other attributes providing the
inputs for that mode. The available modes and inputs support common motion commands such as power
on/off, brake on/off, jog, offset, move, and follow. If the configuration is valid, the controller will derive
the appropriate motion command(s), and use the DeltaTauConnection to execute the command(s) on the
PPMAC. The controller also monitors the current position, velocity, torque, and error of the motor and
reports status through the event service. For more information on the functionality provided by the
MotionController please refer to the ICS design document.
7.6.5.1.1.4.4 Custom Properties
The MotionController class is provided by the SIF with a standard set of properties as described in the
ICS design document. In addition to these properties, the VBI software team will provide custom
properties that can be set on this controller as described below.
Name Modes Type Units Comment
atst.ics.vbiBlue.mc.filter
.stowPos Any float degs Stow position
.deployPos Any float[] degs Filter wheel deployed positions
SPEC-0107 VBI CDD
SPEC-0107, Rev A Page 87 of 267
7.6.5.1.1.4.4.1 .stowPos
Data Type: float
Units: degrees
Valid Values: 0 ≤ stowPos < 360
Default Value: N/A
This is the stow position to which the filter wheel will move when sent to the .stowPos named position.
Please refer to the ICS to VBI ICD for more information on named positions. This attribute may be set
in any mode.
7.6.5.1.1.4.4.2 .deployPos
Data Type: float
Units: degrees
Valid Values: 0 ≤ deployPos < 360
Default Value: N/A
The .deployPos attribute is the array of assigned positions to which the filter wheel may move when sent
to the .deployPos[n] named position. See the ICS to VBI ICD for more information about named
positions. This attribute may be set in any mode.
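A minimal sketch of how the .stowPos and .deployPos properties might be resolved to target angles follows; the lookup logic is an assumption, while the property names and the 0-360 degree range come from this section.

```python
# Sketch: resolve a named position ("stowPos" or "deployPos[n]") to a
# filter wheel angle in degrees, using the properties defined above.

def resolve_named_position(name, stow_pos, deploy_pos):
    """Return the target angle in degrees for a named position."""
    if name == "stowPos":
        target = stow_pos
    elif name.startswith("deployPos[") and name.endswith("]"):
        index = int(name[len("deployPos["):-1])   # "deployPos[2]" -> 2
        target = deploy_pos[index]
    else:
        raise KeyError("unknown named position: %r" % name)
    if not 0.0 <= target < 360.0:                 # valid range from this section
        raise ValueError("position out of range: %r" % target)
    return target
```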
7.6.5.1.1.4.5 Custom Extensions
The MotionController may be specialized as needed to support any non-standard functional needs of an
instrument. Currently the VBI software team does not plan to extend this class. However, the following
items are still being reviewed and may result in a request to the ATST software group to add additional
standard functionality to the controller, or require a custom extension.
7.6.5.1.1.4.5.1 Execute Motion Program
The standard MotionController class supports sending basic jog and offset commands to the PPMAC.
Due to the speed at which the VBI filter wheel must move, there may be a need to develop a custom
PPMAC motion program that helps minimize the vibration settle time. In this case we will need the
capability to execute a motion program on the PPMAC.
7.6.5.1.1.4.6 Events Published
The events published by the VBI Filter Wheel Controller are those provided by the SIF MotionController
class and are summarized in the table below.
Name Rate Comment
atst.ics.vbiBlue.mc.filter.cStatus 1.0 Hz Current filter wheel motion status
atst.ics.vbiBlue.mc.filter.fault Change Current fault status of filter wheel
atst.ics.vbiBlue.mc.filter.power Change Current power status of filter wheel motion controller
7.6.5.1.1.4.6.1 atst.ics.vbiBlue.mc.filter.cStatus
Attributes: cPos(float), cVel(float), cTorque(float), cErr(float)
Rate: 1 Hz
This event publishes the current position, velocity, torque and position error of the filter wheel motion
controller.
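For illustration, the attribute set of this event could be assembled as below. This is a sketch only; the real event is built and published through the CSF event service.

```python
# Sketch of the cStatus attribute set published at 1 Hz; the attribute
# names match the event description above.

def filter_cstatus(pos, vel, torque, err):
    """Build the filter wheel cStatus attribute dictionary."""
    return {"cPos": float(pos), "cVel": float(vel),
            "cTorque": float(torque), "cErr": float(err)}
```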
7.6.5.1.1.4.6.2 atst.ics.vbiBlue.mc.filter.fault
Attributes: fault(string)
Rate: on change
This event is posted when the filter wheel encounters a fault.
7.6.5.1.1.4.6.3 atst.ics.vbiBlue.mc.filter.power
Attributes: cPower(boolean)
Rate: on change
This event gives the current power state of the filter wheel motion controller, on or off.
7.6.5.1.1.4.7 Events Subscribed
There are no custom events subscribed to by the VBI Filter Wheel Controller. The default events
subscribed to by this component are internal to the implementation of the MotionController class.
Additional details on these events can be found in the ICS design document.
7.6.5.1.1.5 Focus Stage Controller
7.6.5.1.1.5.1 Overview
The Focus Stage Controller is responsible for monitoring and control of the VBI focus lens
motion stage. In the hierarchy of the VBI blue channel control system it is given the name
atst.ics.vbiBlue.mc.focus.
7.6.5.1.1.5.2 Structure
The focus controller is an instance of the MotionController class. The MotionController class extends the
HardwareController class by adding functionality that supports common motion control commands,
status reporting, and state management. It will connect and communicate with the Delta Tau Power
PMAC (PPMAC) motion controller using an instance of the DeltaTauConnection class. The
DeltaTauConnection class extends the Connection class by adding support for the PPMAC motion
command/response interface. The DeltaTauConnection class provides the basic connectivity to the
PPMAC, and will utilize a DeltaTauChannel object for communications over Ethernet. For more
information on the design of the MotionController, DeltaTauConnection, and DeltaTauChannel classes
please refer to the ICS design document.
7.6.5.1.1.5.3 Functionality
The focus controller receives commands via CSF configurations from the MC or directly from the
engineering interface. A configuration consists of a mode attribute and other attributes providing the
inputs for that mode. The available modes and inputs support common motion commands such as power
on/off, brake on/off, jog, offset, move, and follow. If the configuration is valid, the controller will derive
the appropriate motion command(s), and use the DeltaTauConnection to execute the command(s) on the
PPMAC. The controller also monitors the current position, velocity, torque, and error of the motor and
reports status through the event service. For more information on the functionality provided by the
MotionController please refer to the ICS design document.
7.6.5.1.1.5.4 Custom Properties
The MotionController class is provided by the SIF with a standard set of properties as described in the
ICS design document. In addition to these properties, the VBI software team will provide custom
properties that can be set on this controller as described below.
Name Modes Type Units Comment
atst.ics.vbiBlue.mc.focus
.stowPos Any float mm Stow position
.deployPos Any float[] mm Focus lens deployed positions
7.6.5.1.1.5.4.1 .stowPos
Data Type: float
Units: millimeters
Valid Values: 0 ≤ stowPos ≤ 20
Default Value: N/A
This is the stow position to which the focus lens will move when sent to the .stowPos named position.
Please refer to the ICS to VBI ICD for more information on named positions. This attribute may be set
in any mode.
7.6.5.1.1.5.4.2 .deployPos
Data Type: float
Units: millimeters
Valid Values: 0 ≤ deployPos ≤ 20
Default Value: N/A
The .deployPos attribute is the array of assigned positions to which the focus lens may move when sent to
the deployPos[n] named position. Please refer to the ICS to VBI ICD for more information on named
positions. This attribute may be set in any mode.
7.6.5.1.1.5.5 Custom Extensions
The MotionController may be specialized as needed to support any non-standard functional needs of an
instrument. Currently the VBI software team does not plan to extend this class. However, the following
items are still being reviewed and may result in a request to the ATST software group to add additional
standard functionality to the controller, or require a custom extension.
7.6.5.1.1.5.5.1 Execute Motion Program
The standard MotionController class supports sending basic jog and offset commands to the PPMAC.
Due to the speed at which the VBI focus stage must move, there may be a need to develop a custom
PPMAC motion program that helps minimize the vibration settle time. In this case we will need the
capability to execute a motion program on the PPMAC.
7.6.5.1.1.5.6 Events Published
The events published by the VBI Focus Controller are those provided by the SIF MotionController class
and are summarized in the table below.
Name Rate Comment
atst.ics.vbiBlue.mc.focus.cStatus 1.0 Hz Current camera focus motion status
atst.ics.vbiBlue.mc.focus.fault Change Current fault status of focus stage
atst.ics.vbiBlue.mc.focus.power Change Current power status of focus motion controller
7.6.5.1.1.5.6.1 atst.ics.vbiBlue.mc.focus.cStatus
Attributes: cPos(float), cVel(float), cTorque(float), cErr(float)
Rate: 1 Hz
This event publishes the current position, velocity, torque and position error of the focus stage motion
controller.
7.6.5.1.1.5.6.2 atst.ics.vbiBlue.mc.focus.fault
Attributes: fault(string)
Rate: on change
This event is posted when the focus stage encounters a fault.
7.6.5.1.1.5.6.3 atst.ics.vbiBlue.mc.focus.power
Attributes: cPower(boolean)
Rate: on change
This event gives the current power state of the focus stage motion controller, on or off.
7.6.5.1.1.5.7 Events Subscribed
The events subscribed to by the VBI Focus Stage Controller are those internal to the implementation of
the SIF MotionController class. For details on these events, please refer to the ICS design document.
7.6.5.1.1.6 Camera Mount X Stage Controller
7.6.5.1.1.6.1 Overview
The Camera Mount X Stage Controller is responsible for monitoring and control of the VBI
camera mount horizontal (x) motion stage. In the hierarchy of the VBI blue channel control system it is
given the name atst.ics.vbiBlue.mc.x.
7.6.5.1.1.6.2 Structure
The camera mount x stage controller is an instance of the MotionController class. The MotionController
class extends the HardwareController class by adding functionality that supports common motion control
commands, status reporting, and state management. It will connect and communicate with the Delta Tau
Power PMAC (PPMAC) motion controller using an instance of the DeltaTauConnection class. The
DeltaTauConnection class extends the Connection class by adding support for the PPMAC motion
command/response interface. The DeltaTauConnection class provides the basic connectivity to the
PPMAC, and will utilize a DeltaTauChannel object for communications over Ethernet. For more
information on the design of the MotionController, DeltaTauConnection, and DeltaTauChannel classes
please refer to the ICS design document.
7.6.5.1.1.6.3 Functionality
The camera mount x stage controller receives commands via CSF configurations from the MC or directly
from the engineering interface. A configuration consists of a mode attribute and other attributes
providing the inputs for that mode. The available modes and inputs support common motion commands
such as power on/off, brake on/off, jog, offset, move, and follow. If the configuration is valid, the
controller will derive the appropriate motion command(s), and use the DeltaTauConnection to
execute the command(s) on the PPMAC. The controller also monitors the current position, velocity,
torque, and error of the motor and reports status through the event service. For more information on the
functionality provided by the MotionController please refer to the ICS design document.
7.6.5.1.1.6.4 Custom Properties
The MotionController class is provided by the SIF with a standard set of properties as described in the
ICS design document. In addition to these properties, the VBI software team will provide custom
properties that can be set on this controller as described below.
Name Modes Type Units Comment
atst.ics.vbiBlue.mc.x
.stowPos Any float mm Stow position
.deployPos Any float[] mm Camera mount x stage deployed positions
7.6.5.1.1.6.4.1 .stowPos
Data Type: float
Units: millimeters
Valid Values: 0 ≤ stowPos ≤ 70
Default Value: N/A
This is the stow position to which the camera mount x stage will move when sent to the .stowPos named
position. Please refer to the ICS to VBI ICD for more information on named positions. This attribute
may be set in any mode.
7.6.5.1.1.6.4.2 .deployPos
Data Type: float
Units: millimeters
Valid Values: 0 ≤ deployPos ≤ 70
Default Value: N/A
The .deployPos attribute is the array of assigned positions to which the camera mount x stage may move
when sent to the deployPos[n] named position. Please refer to the ICS to VBI ICD for more information
on named positions. This attribute may be set in any mode.
7.6.5.1.1.6.5 Custom Extensions
The MotionController may be specialized as needed to support any non-standard functional needs of an
instrument. Currently the VBI software team does not plan to extend this class. However, the following
items are still being reviewed and may result in a request to the ATST software group to add additional
standard functionality to the controller, or require a custom extension.
7.6.5.1.1.6.5.1 Execute Motion Program
The standard MotionController class supports sending basic jog and offset commands to the PPMAC.
Due to the speed at which the VBI camera mount x stage must move, there may be a need to develop a
custom PPMAC motion program that helps minimize the vibration settle time. In this case we will need
the capability to execute a motion program on the PPMAC.
7.6.5.1.1.6.6 Events Published
The events published by the VBI Camera X Stage Controller are those provided by the SIF
MotionController class and are summarized in the table below.
Name Rate Comment
atst.ics.vbiBlue.mc.x.cStatus 1.0 Hz Current camera mount x motion status
atst.ics.vbiBlue.mc.x.fault Change Current fault status of mount X stage
atst.ics.vbiBlue.mc.x.power Change Current power status of mount motion controller
7.6.5.1.1.6.6.1 atst.ics.vbiBlue.mc.x.cStatus
Attributes: cPosX(float), cVelX(float), cTorqueX(float), cErrX(float)
Rate: 1 Hz
This event publishes the current position, speed, torque and position error of the camera mount X stage.
7.6.5.1.1.6.6.2 atst.ics.vbiBlue.mc.x.fault
Attributes: faultX(string)
Rate: on change
This event is posted when the camera mount X stage encounters a fault.
7.6.5.1.1.6.6.3 atst.ics.vbiBlue.mc.x.power
Attributes: cPower(boolean)
Rate: on change
This event gives the current power state of the camera mount X motion controller, on or off.
7.6.5.1.1.6.7 Events Subscribed
There are no custom events that the VBI Camera X Stage Controller subscribes to. All events received by
this component are internal to the SIF MotionController class. Additional details on these events can
be found in the ICS design document.
7.6.5.1.1.7 Camera Mount Y Stage Controller
7.6.5.1.1.7.1 Overview
The Camera Mount Y Stage Controller is responsible for monitoring and control of the VBI
camera mount vertical (y) motion stage. In the hierarchy of the VBI blue channel control system it is
given the name atst.ics.vbiBlue.mc.y.
7.6.5.1.1.7.2 Structure
The camera mount y stage controller is an instance of the MotionController class. The MotionController
class extends the HardwareController class by adding functionality that supports common motion control
commands, status reporting, and state management. It will connect and communicate with the Delta Tau
Power PMAC (PPMAC) motion controller using an instance of the DeltaTauConnection class. The
DeltaTauConnection class extends the Connection class by adding support for the PPMAC motion
command/response interface. The DeltaTauConnection class provides the basic connectivity to the
PPMAC, and will utilize a DeltaTauChannel object for communications over Ethernet. For more
information on the design of the MotionController, DeltaTauConnection, and DeltaTauChannel classes
please refer to the ICS design document.
7.6.5.1.1.7.3 Functionality
The camera mount y stage controller receives commands via CSF configurations from the MC or directly
from the engineering interface. A configuration consists of a mode attribute and other attributes
providing the inputs for that mode. The available modes and inputs support common motion commands
such as power on/off, brake on/off, jog, offset, move, and follow. If the configuration is valid, the
controller will derive the appropriate motion command(s), and use the DeltaTauConnection to execute the
command(s) on the PPMAC. The controller also monitors the current position, velocity, torque, and error
of the motor and reports status through the event service. For more information on the functionality
provided by the MotionController please refer to the ICS design document.
7.6.5.1.1.7.4 Custom Properties
The MotionController class is provided by the SIF with a standard set of properties as described in the
ICS design document. In addition to these properties, the VBI software team will provide custom
properties that can be set on this controller as described below.
Name Modes Type Units Comment
atst.ics.vbiBlue.mc.y
.stowPos Any float mm Stow position
.deployPos Any float[] mm Camera mount y stage deployed positions
7.6.5.1.1.7.4.1 .stowPos
Data Type: float
Units: millimeters
Valid Values: 0 ≤ stowPos ≤ 70
Default Value: N/A
This is the stow position to which the camera mount y stage will move when sent to the .stowPos named
position. Please refer to the ICS to VBI ICD for more information on named positions. This attribute
may be set in any mode.
7.6.5.1.1.7.4.2 .deployPos
Data Type: float
Units: millimeters
Valid Values: 0 ≤ deployPos ≤ 70
Default Value: N/A
The .deployPos attribute is the array of assigned positions to which the camera mount y stage may move
when sent to the deployPos[n] named position. Please refer to the ICS to VBI ICD for more information
on named positions. This attribute may be set in any mode.
7.6.5.1.1.7.5 Custom Extensions
The MotionController may be specialized as needed to support any non-standard functional needs of an
instrument. Currently the VBI software team does not plan to extend this class. However, the following
items are still being reviewed and may result in a request to the ATST software group to add additional
standard functionality to the controller, or require a custom extension.
7.6.5.1.1.7.5.1 Execute Motion Program
The standard MotionController class supports sending basic jog and offset commands to the PPMAC.
Due to the speed at which the VBI camera mount y stage must move, there may be a need to develop a
custom PPMAC motion program that helps minimize the vibration settle time. In this case we will need
the capability to execute a motion program on the PPMAC.
7.6.5.1.1.7.6 Events Published
The events published by the VBI Camera Y Stage Controller are those provided by the SIF
MotionController class and are summarized in the table below.
Name Rate Comment
atst.ics.vbiBlue.mc.y.cStatus 1.0 Hz Current camera mount y motion status
atst.ics.vbiBlue.mc.y.fault Change Current fault status of mount Y stage
atst.ics.vbiBlue.mc.y.power Change Current power status of mount motion controller
7.6.5.1.1.7.6.1 atst.ics.vbiBlue.mc.y.cStatus
Attributes: cPosY(float), cVelY(float), cTorqueY(float), cErrY(float)
Rate: 1 Hz
This event publishes the current position, speed, torque and position error of the camera mount Y stage.
7.6.5.1.1.7.6.2 atst.ics.vbiBlue.mc.y.fault
Attributes: faultY(string)
Rate: on change
This event is posted when the camera mount Y stage encounters a fault.
7.6.5.1.1.7.6.3 atst.ics.vbiBlue.mc.y.power
Attributes: cPower(boolean)
Rate: on change
This event gives the current power state of the camera mount Y motion controller, on or off.
7.6.5.1.1.7.7 Events Subscribed
There are no custom events that the VBI Camera Y Stage Controller subscribes to. All events received by
this component are internal to the implementation of the SIF MotionController class. Additional details
on these events can be found in the ICS design document.
7.6.5.1.1.8 Detector Controller
7.6.5.1.1.8.1 Overview
The Detector Controller is responsible for monitoring and control of the Virtual Camera. In the
hierarchy of the VBI blue channel control system it is given the name atst.ics.vbiBlue.dc.
7.6.5.1.1.8.2 Structure
The DC is an instance of the ManagementController class. This class extends the BaseController class
by adding features for managing worker lifecycles and states. For more information on the base design of
the ManagementController please refer to the ICS design document.
7.6.5.1.1.8.3 Functionality
The DC receives commands via CSF configurations from the IS, either directly from an IS controller or
from an observing script running in the IS. The DC responds by configuring the Virtual Camera (VC)
and providing feedback to the IS on its status. At this time the VBI software team does
not plan to implement any custom functionality at the DC level. Therefore it will simply pass
configuration attributes through to the VC.
7.6.5.1.1.8.4 Custom Properties
The DC is provided by the SIF with a standard set of properties as described in the ICS design document.
At this time the VBI software team does not plan on any additional custom properties for the DC.
7.6.5.1.1.8.5 Custom Extensions
The DC may be customized with any instrument-specific functionality required. At this time the VBI
software team does not plan on adding any custom functionality to the DC.
7.6.5.1.1.8.6 Events Published
The events published by the VBI DC are summarized in the table below.
Name Rate Comment
atst.ics.vbiBlue.dc.cStatus 1.0 Hz Current detector controller status
7.6.5.1.1.8.6.1 atst.ics.vbiBlue.dc.cStatus
Attributes: status(string), exposing(boolean), percentComplete(float)
Rate: 1 Hz
This event reports the general status of the DC, whether the camera is exposing, and the percent complete
for the current observation. This information is a summary based on the information received from the
VCC status event.
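That summarization might be sketched as below, assuming the VCC event carries frame-count attributes. The framesDone and framesTotal names are hypothetical; only status, exposing, and percentComplete come from this section.

```python
# Sketch: derive the DC cStatus summary from a VCC status event.
# framesDone/framesTotal are assumed attribute names for illustration.

def summarize_vcc_status(vcc_event):
    """Reduce a VCC status event to the DC's published summary."""
    frames_total = max(1, int(vcc_event.get("framesTotal", 1)))
    frames_done = int(vcc_event.get("framesDone", 0))
    return {
        "status": vcc_event.get("status", "unknown"),
        "exposing": bool(vcc_event.get("exposing", False)),
        "percentComplete": 100.0 * frames_done / frames_total,
    }
```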
7.6.5.1.1.8.7 Events Subscribed
The events subscribed to by the VBI DC are summarized in the table below. For full details of the event,
please refer to the appropriate system ICD.
Name Rate Comment
atst.vcc.cStatus 1 Hz Current status of the Virtual Camera Controller
7.6.5.1.1.8.7.1 atst.vcc.cStatus
The VBI subscribes to the status event generated by the Virtual Camera Controller in order to monitor the
progress of observations being executed by the camera. For details on the contents of this event please
refer to the ICS to CS ICD.
7.6.5.1.1.9 Thermal Controller
7.6.5.1.1.9.1 Overview
The Thermal Controller is responsible for monitoring and control of the VBI thermal systems.
In the hierarchy of the VBI blue channel control system it is given the name atst.ics.vbiBlue.mc.thermal.
7.6.5.1.1.9.2 Structure
The thermal controller is an instance of the MotionController class. The MotionController class extends
the HardwareController class by adding functionality that supports common motion control commands,
status reporting, and state management. This type of controller can be used because the thermal flow
control valve can be driven by an analog signal and is therefore easily modeled as a motor, with position
being the target temperature.
The thermal controller will connect and communicate with the Delta Tau Power PMAC (PPMAC) motion
controller using an instance of the DeltaTauConnection class. The DeltaTauConnection class extends the
Connection class by adding support for the PPMAC motion command/response interface. The
DeltaTauConnection class provides the basic connectivity to the PPMAC, and will utilize a
DeltaTauChannel object for communications over Ethernet. For more information on the design of the
MotionController, DeltaTauConnection, and DeltaTauChannel classes please refer to the ICS design
document.
7.6.5.1.1.9.3 Functionality
The thermal controller receives commands via CSF configurations from the MC or directly from the
engineering interface. A configuration consists of a mode attribute and other attributes providing the
inputs for that mode. The available modes and inputs support common thermal control commands such
as power on/off, fixed cooling rate, and closed loop cooling. If the configuration is valid, the controller
will derive the appropriate “motion” command(s), and use the DeltaTauConnection to execute the
command(s) on the PPMAC. The controller also monitors the current temperature and error of the
thermal sensors and reports status through the event service.
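Because the flow control valve is modeled as a motor axis with temperature playing the role of position, the closed-loop cooling mode can be pictured as a simple feedback law. The proportional form, gain, and clamping below are assumptions for illustration, not the delivered control law.

```python
# Sketch of closed-loop cooling: command a valve opening (percent of max
# cooling flow) from the temperature error. Gain and clamping are assumed.

def cooling_valve_command(target_temp_c, current_temp_c, gain=10.0):
    """Return a valve opening in percent of maximum cooling flow."""
    error = current_temp_c - target_temp_c    # positive when too warm
    opening = gain * error                    # proportional response
    return max(0.0, min(100.0, opening))      # clamp to 0..100 % flow
```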
7.6.5.1.1.9.4 Custom Properties
The MotionController class and its parent classes are provided by the SIF with a standard set of properties
as described in the ICS design document. The VBI software team does not plan to add any custom
properties for this controller.
7.6.5.1.1.9.5 Custom Extensions
The MotionController class may be specialized as needed to support any non-standard functional needs of
an instrument. Currently the VBI software team does not plan to extend this class for use as a thermal
controller.
7.6.5.1.1.9.6 Events Published
The events published by the VBI Thermal Controller are summarized in the table below.
Name Rate Comment
atst.ics.vbiBlue.mc.thermal.cStatus 1.0 Hz Current thermal system status
7.6.5.1.1.9.6.1 atst.ics.vbiBlue.mc.thermal.cStatus
Attributes: cMode(string), inPosition(boolean), cFilterTemp(float), cFilterPctCoolFlow(float),
cXTemp(float), cXPctCoolFlow(float), cYTemp(float), cYPctCoolFlow(float), cFocusTemp(float),
cFocusPctCoolFlow(float), cCameraTemp(float), cCameraPctCoolFlow(float)
Rate: 1 Hz
This event publishes the current VBI thermal control system mode and in position status. It also provides
the temperature (in degrees Celsius) and percentage max cooling flow for each component it manages.
7.6.5.1.1.9.7 Events Subscribed
The events subscribed to by the VBI thermal controller are summarized in the table below. For full details
of the event, reference should be made to the appropriate system ICD. Additional events subscribed to by
this component are internal to the implementation of the SIF MotionController class. Details on those
events can be found in the ICS design document.
Name Rate Comment
atst.fcs.coude.thermal.cStatus 1 Hz Current status info for coude thermal control
7.6.5.1.1.9.7.1 atst.fcs.coude.thermal.cStatus
Attribute: ambientTemp(float)
Rate: 1 Hz
The VBI subscribes to the coude thermal system cStatus event. This event includes several attributes;
however, the VBI is specifically interested in the ambientTemp attribute. The ambientTemp attribute
provides the current ambient temperature of the coude room. The VBI thermal control system must know
this information to function properly.
This event is defined in ICD-X.X/4.4, FCS to Observatory Control System Interface, and is reproduced
herein for information purposes only.
7.6.5.1.1.10 Power Controller
7.6.5.1.1.10.1 Overview
The Power Controller is responsible for monitoring and control of the VBI 24V power
supply. In the hierarchy of the VBI blue channel control system it is given the name
atst.ics.vbiBlue.mc.power.
7.6.5.1.1.10.2 Structure
The power controller is implemented with the PowerController class. The PowerController class extends
the HardwareController class, adding the specific command and logic elements needed to control the
24V power supply via USB. The power controller will connect and communicate with the 24V power
supply using an instance of the USBPowerConnection class. The USBPowerConnection class extends the
Connection class by adding support for the 24V power supply USB command/response interface. The
USBPowerConnection class provides the basic connectivity to the 24V power supply hardware, and will
utilize a USBChannel object for communications over USB.
7.6.5.1.1.10.3 Functionality
The power controller receives commands via CSF configurations from the MC or directly from the
engineering interface. A configuration consists of a mode attribute and other attributes providing the
inputs for that mode. The available modes and inputs support turning the power supply on or off and
adjusting the output voltage. If the configuration is valid, the controller will derive the appropriate
command(s), and use the USBPowerConnection to execute the command(s) on the 24V Power Supply.
The controller also monitors the power supply and reports status through the event service.
7.6.5.1.1.10.4 Properties
The HardwareController class uses a standard set of properties as described in the ICS design document.
In addition to these properties, the VBI software team will provide custom properties for the
PowerController class as described below.
Name Modes Type Units Comment
atst.ics.vbiBlue.mc.power
.mode N/A string N/A Mode indicating on/off
.deployPos on integer Volts Power supply deployed voltage
7.6.5.1.1.10.4.1 .mode
Data Type: string
Units: N/A
Valid Values: on|off
Default Value: off
This is the mode of the power controller. When in the off mode, the 24V power supply will be powered
down and no voltage will be delivered to the VBI. When in the on mode, the 24V power supply will be
powered up and the output voltage set to 24V by default, or the value given by the deployPos attribute.
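The interaction between .mode and .deployPos described above can be sketched as follows. This models the logic only; the real controller issues the settings over the USB command interface.

```python
# Sketch: derive the commanded output voltage from the .mode and
# .deployPos attributes described above.

def output_voltage(mode, deploy_pos=None):
    """Return the supply output voltage for a given mode and deployPos."""
    if mode == "off":
        return 0                       # supply powered down
    if mode == "on":
        volts = 24 if deploy_pos is None else int(deploy_pos)
        if not 0 <= volts <= 24:       # valid range from the .deployPos spec
            raise ValueError("deployPos out of range: %d" % volts)
        return volts
    raise ValueError("invalid mode: %r" % mode)
```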
7.6.5.1.1.10.4.2 .deployPos
Data Type: integer
Units: Volts
Valid Values: 0 ≤ deployPos ≤ 24
Default Value: N/A
This is the output voltage to which the power supply will be set when powered on. This attribute may be
used in the “on” mode.
7.6.5.1.1.10.5 Custom Extensions
The HardwareController may be specialized as needed to support any non-standard functional needs of
an instrument. Currently the VBI software team plans to extend the doSubmit() and doAction() methods
to allow for VBI specific functional behavior to be added.
7.6.5.1.1.10.6 Events Published
The events published by the VBI Power Controller are summarized in the table below.
Name Rate Comment
atst.ics.vbiBlue.power.cStatus 1.0 Hz Current status of VBI 24V power supply
7.6.5.1.1.10.6.1 atst.ics.vbiBlue.power.cStatus
Attributes: cStatus(string), cMode(string), cOutput(integer)
Rate: 1 Hz.
This event reports the general status of the VBI power controller. The cStatus attribute indicates whether
the controller is in good health. The cMode attribute reports the current mode of the controller and the
cOutput attribute reports the current output voltage.
7.6.5.1.1.10.7 Events Subscribed
There are no custom events subscribed to by the Power Controller. The default events received by this
component are those internal to the implementation of the SIF HardwareController. For more
information on these events please refer to the ICS design document.
7.6.5.1.1.11 Auxiliary Controller
7.6.5.1.1.11.1 Overview
The Auxiliary Controller is responsible for monitoring and control of all VBI auxiliary
systems. In the hierarchy of the VBI blue channel control system it is given the name
atst.ics.vbiBlue.mc.aux.
7.6.5.1.1.11.2 Structure
The auxiliary controller is implemented with the AuxiliaryController class. The AuxiliaryController class
extends the DigitalIOController class, adding the specific command and logic elements needed to control
the auxiliary components of the VBI system. The DigitalIOController extends the HardwareController
class by providing generic command and logic elements for controlling digital input and output. The
auxiliary controller will connect and communicate with the Delta Tau Power PMAC (PPMAC) using an
instance of the DeltaTauConnection class. The DeltaTauConnection class extends the Connection class
by adding support for the PPMAC command/response interface. The DeltaTauConnection class provides
the basic connectivity to the PPMAC, and will utilize a DeltaTauChannel object for communications over
Ethernet. For more information on the design of the DigitalIOController, DeltaTauConnection, and
DeltaTauChannel classes please refer to the ICS design document.
7.6.5.1.1.11.3 Functionality
The auxiliary controller receives commands via CSF configurations from the MC or directly from the
engineering interface. A configuration consists of a mode attribute and other attributes providing the
inputs for that mode. The available modes and inputs support control of mechanical switches and sensors.
If the configuration is valid, the controller will derive the appropriate command(s), and use the
DeltaTauConnection to execute the command(s) on the PPMAC. The controller also monitors the
sensors and reports status through the event service.
7.6.5.1.1.11.4 Custom Properties
The DigitalIOController class and its parent classes are provided by the SIF with a standard set of
properties as described in the ICS design document. In addition to these properties, the VBI software
team will provide custom properties for the AuxiliaryController as described below. At this time there
are no custom properties defined as details on auxiliary functions have not been specified.
7.6.5.1.1.11.5 Custom Extensions

The DigitalIOController may be specialized as needed to support any non-standard functional needs of an instrument. Currently the VBI software team plans to extend this class with the AuxiliaryController class described above.
7.6.5.1.1.11.6 Events Published

The events published by the VBI auxiliary controller are summarized in the table below.

Name Rate Comment
atst.ics.vbiBlue.aux.cStatus Change Current status of the VBI auxiliary controller
7.6.5.1.1.11.6.1 atst.ics.vbiBlue.aux.cStatus

Attributes: cStatus (string), auxInputs[] (boolean)
Rate: 1 Hz
This event reports the general status of the VBI auxiliary controller. The cStatus attribute indicates
whether the controller is in good health. The auxInputs[] array provides the current status (true=active,
false=inactive) for each digital IO input being monitored by the controller.
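A consumer of this event might unpack it as below. This is a plain-Python stand-in: the actual subscription mechanics and callback signature come from the CSF event service and are not shown here.

```python
def handle_aux_status(event):
    """Hypothetical handler for the aux cStatus event.  Attribute names
    follow the table above; the callback shape is an assumption."""
    health = event["cStatus"]        # e.g. "good" / "bad"
    inputs = event["auxInputs"]      # list of booleans, one per digital input
    active = [i for i, v in enumerate(inputs) if v]
    return "health=%s active_inputs=%s" % (health, active)
```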
7.6.5.1.1.11.7 Events Subscribed

There are no custom events subscribed to by the Auxiliary Controller. The default events received by this component are those internal to the implementation of the SIF DigitalIOController. For more information on these events, please refer to the ICS design document.
7.7 OBSERVING TASK SCRIPTS DESIGN
Scripting is a key element in maintaining flexibility in how the VBI is used. The Instrument Sequencer provides scripting support by loading and running observing scripts written in the Jython programming language. Jython is a Java implementation of the popular Python language. Jython scripts support the same syntax and programming capabilities as Python but, because they are compiled and run in the Java Virtual Machine, they can work directly with other Java objects.
The VBI will associate a specific script with each observing task. The association is made using the Script
Store database. By using the instrument name and current observing task, the IS is able to retrieve the
correct observing script. The parameters passed down with the observing task configuration are used to
provide experiment specifics to the generic observing script. The script controls all details of what the
instrument will do during observing. The IS provides the script engine required to execute the script,
using standard CSF scripting tools.
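The lookup described above, keyed on instrument name plus observing task, can be sketched as follows. The dictionary stands in for the actual Script Store database, and the script file names are invented for the example.

```python
# Minimal stand-in for the Script Store lookup: the IS keys observing
# scripts on (instrument, observing task).  The real store is a database
# accessed through CSF; these entries are illustrative.
SCRIPT_STORE = {
    ("vbiBlue", "observe"): "vbi_observe.py",
    ("vbiBlue", "setup"): "vbi_setup.py",
}

def get_observing_script(instrument, task):
    """Return the script registered for this instrument and task."""
    key = (instrument, task)
    if key not in SCRIPT_STORE:
        raise KeyError("no script registered for %s/%s" % key)
    return SCRIPT_STORE[key]
```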
The VBI observing scripts will support the start, pause, resume, cancel, and abort actions required by the
ICS to VBI interface. These actions allow the OCS/ICS user to signal the instrument controller to
perform the action as needed during the current observation. The scripts will be written to check for these
signals periodically to ensure they are handled as required.
The next few sections will discuss the general behavior of each standard observing task script of the VBI.
It is expected that variations of these scripts and other custom scripts will be written to meet changing
observation needs over time.
7.7.1.1 Setup

The Setup observing task is used to test VBI configurations on the actual system hardware and/or under current observing conditions. It is also used to initialize (index/home) the motion stages of the VBI. If the configuration is valid, the associated VBI control settings will be updated and remain unchanged until another configuration or engineering update is processed. The Setup script will then execute the observations defined by the given sequence of parameter sets (.paramSets[]). The Setup task also allows the user to optionally command the VBI to send data to the DHS (.collectExpFlag) so that it can be evaluated. When this option is invoked, the observation sequence is repeated indefinitely until the user cancels the operation. Figure 51 shows the control logic flowchart for the Setup observing task script.
Figure 51: Flowchart for Setup Observing Task Script
[Flowchart graphic not reproduced in this text version. Annotations from the chart — PRE: script inputs assigned; system clock synced to TRADS. TODO: handling of automatic exposure time calculations.]
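The Setup task's outer loop, as described above, runs the parameter-set sequence once, or repeats it until cancelled when data is being sent to the DHS. A plain-Python sketch (the callables stand in for the real script machinery):

```python
def run_setup(param_sets, collect_exp, is_canceled, do_observation):
    """Sketch of the Setup task's outer loop.  Runs each parameter set
    once; if data collection to the DHS is requested (collect_exp),
    the sequence repeats until the user cancels."""
    while True:
        for ps in param_sets:
            if is_canceled():          # user requested cancel
                return "canceled"
            do_observation(ps)
        if not collect_exp:            # single pass when not sending to DHS
            return "done"
```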
7.7.1.2 Observe

The Observe observing task is used to start the collection of science data. If the configuration is valid, the associated VBI control settings will be updated and remain unchanged until another configuration or engineering update is processed. The Observe script will then execute the sequence of observations defined by the given list of parameter sets (.paramSets[]). During execution of the sequence, the VBI will adjust filter and focus mechanical components as well as camera settings (exposure time, frame acquisition mode, etc.) as needed between observations.
This sequence of observations will be repeated based on the user specified number of cycles
(.numCycles). Upon completing the last data collection cycle, the VBI will stop and wait for additional
commands unless the user specifies the data collection process to be automatically repeated
(.continueFlag=true) until another configuration is received.
Figure 52 shows the control logic flowchart of the Observe task script.
Figure 52: Flowchart for Observe Task Script
[Flowchart graphic not reproduced in this text version. Annotations from the chart — PRE: script inputs assigned; system clock synced to TRADS.]
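The cycle logic described above can be sketched as follows. The callables stand in for the real observation and signalling machinery, and stop_requested models the arrival of a new configuration while repeating.

```python
def run_observe(param_sets, num_cycles, continue_flag, do_observation,
                stop_requested):
    """Sketch of the Observe task's cycle logic: run the sequence
    num_cycles times, then keep repeating only if continue_flag is set,
    until a stop (new configuration) is requested."""
    cycles = 0
    while True:
        for ps in param_sets:
            do_observation(ps)
        cycles += 1
        if cycles >= num_cycles and not continue_flag:
            return cycles              # stop and wait for further commands
        if stop_requested():           # another configuration received
            return cycles
```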
7.7.1.2.1 Example Script

To help illustrate what an observing script will look like, the following example Observe script was written. Please note that it is by no means a complete and bug-free script; it is only an example of how a Jython script can interact with Java and the distributed components of the IC via CSF.

# Inputs:
#   paramSet: An array of ordered configurations

# Validate input configuration
if not Misc.checkDims(paramSet, ["filters", "expTime", "expCount"]):
    return Action.ERROR

# Connect to dc, mc, and time base controller
dc = App.connect("atst.ics.vbiBlue.dc")
mc = App.connect("atst.ics.vbiBlue.mc")
tbc = App.connect("atst.ics.vbiBlue.time")

# Build the real-time move configuration
move = Configuration()
move.insert(Attribute(mc.getName() + ".mode", "move"))

# Add each filter to the real-time move config
for filter in paramSet.getStringArray("filters"):
    move.insert(Attribute(mc.getName() + ".nposArray", filter))

# Check for cancel / abort
if currentAction.isCanceled():
    return CANCELED
if currentAction.isAborted():
    return ABORTED

# Submit mc configuration but don't wait for completion
remoteAction = Misc.submit(currentAction, mc, move)

# While we wait for the mechanisms to pre-configure,
# build the camera capture configuration
# without a start time/signal
capture = Misc.makeDCConfig(paramSet)

# Now check the mechanism config result
remoteAction.waitForDone()
if remoteAction.wasAborted():
    return ABORTED

# Submit config to camera and wait for acknowledgment
# that it was valid
Misc.submitAndWait(currentAction, dc, capture)
if currentAction.wasFailed():
    return FAIL

# Get absolute start time
getAbsTime = Configuration()
getAbsTime.insert(Attribute(tbc.getName() + ".mode", "getTime"))
Misc.submitAndWait(currentAction, tbc, getAbsTime)
if currentAction.wasFailed():
    return FAIL

# Save the absolute reference time
absTime = currentAction.result.getAtstDate("absTime")

# Calculate configuration for tsync
tsync = Misc.buildTSyncConfiguration(paramSet, absTime)

# Add start parameters to the camera capture configuration
capture = Misc.buildDCStartConfiguration(capture, absTime)

# Bail if the TCS isn't configured
if not paramSet.contains("tcsConfigured"):
    return ACTION_OK

# Start the camera
remoteAction1 = Misc.submit(currentAction, dc, capture)

# Start the real-time move triggers
remoteAction2 = Misc.submit(currentAction, tbc, tsync)

# Wait for the camera to finish
remoteAction1.waitForDone()
if remoteAction1.wasAborted():
    remoteAction2.abort()
    return ABORTED
7.7.1.3 Gain

This observation task is used to obtain images that can be analyzed for flat fielding the VBI camera. It behaves the same way as the Observe task but limits the settings that can be specified and tags the collected data as gain related. The settings that must be specified are the camera settings (.dc.vcc.<vccAttrib>).
Figure 53 shows the control logic flowchart for the Gain observing task script.
Figure 53: Flowchart for Gain observing task script
[Flowchart graphic not reproduced in this text version. Annotations from the chart — TODO: 1) handling of pause/resume/cancel/abort; 2) header data prior to exposure? MC error?]
7.7.1.4 Dark

This observation task is used to obtain images that can be analyzed for dark calibration of the VBI camera. It behaves the same way as the Observe task but limits the settings that can be changed and tags the collected data as dark related. The settings that must be specified are the camera settings (.dc.vcc.<vccAttrib>).
Figure 54 shows the control logic flowchart for the Dark observing task script.
Figure 54: Flowchart for Dark Observing Task Script
[Flowchart graphic not reproduced in this text version. Annotations from the chart — TODO: 1) handling of pause/resume/cancel/abort; 2) header data prior to exposure? MC error?]
7.7.1.5 PolCal

This task is used during collection of polarimetry calibration data for instruments. The VBI does not take polarimetric data measurements and therefore does not require any polarimetry calibration activities. In addition, during the PolCal task the PA&C may place polarization optics into the light path, rendering the light unusable for VBI purposes. Therefore, when the PolCal observational task is received, the VBI will simply accept the configuration and perform no operation.
7.7.1.6 TelCal

This observation task is used during collection of polarimetry calibration data for the telescope. The VBI does not take polarimetric data measurements and therefore does not play a role in the telescope calibration activities. In addition, during the TelCal task the PA&C may place polarization optics into the light path, rendering the light unusable for VBI purposes. Therefore, when the TelCal observational task is received, the VBI will simply accept the configuration and perform no operation.
7.7.1.7 WaveCal

The WaveCal observation task is used to perform wavelength calibrations for the VBI. This task behaves exactly the same as the Gain task; please refer to the Gain task section for more details.
7.7.1.8 Focus

The Focus observation task is used to support focus activities for the VBI. In this task the configuration must specify the focus type (.focusType) to use and what target is currently placed in the light path at the PA&C lower GOS carousel (atst.tcs.pac.target.namedPos). If the configuration is valid, the VBI control system will execute a focus routine to determine the optimal focus position (.mc.focus.deployPos[]) for each wavelength based on the given focus type and target. For each wavelength, the focus routine repeats a process of collecting exposures, evaluating them, and adjusting the focus position (.mc.focus.pos or .mc.focus.oPos) until the exposure evaluation output meets a specific criterion. All data captured during this process can be viewed in the quick look display but is not saved. Figure 55 shows the control logic flowchart for the Focus observing task script.
Figure 55: Flowchart for Focus Observing Task Script
[Flowchart graphic not reproduced in this text version. Design notes from the chart: this version 1) uses a dedicated Java object that implements the BDT processor interface for processing incoming images, and 2) adjusts the focus stage after each output of the image processing algorithm.]
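The collect-evaluate-adjust loop of the focus routine can be sketched as a simple search over candidate focus positions. The sharpness metric and the threshold-based stopping rule are illustrative assumptions; the real routine evaluates exposures through a BDT image processor.

```python
def focus_search(positions, sharpness, threshold):
    """Sketch of the iterative focus routine: step through candidate
    focus positions, evaluate a sharpness metric on an exposure taken
    at each, and stop when the metric meets the acceptance threshold."""
    for pos in positions:
        score = sharpness(pos)   # expose and evaluate (via BDT processor)
        if score >= threshold:
            return pos           # focus achieved at this position
    return None                  # no position met the criterion
```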
7.7.1.9 Align

The Align observation task is used to support alignment activities for the VBI. In this task the configuration must specify the alignment type (.alignType) to use and what target is currently placed in the light path at the PA&C lower GOS carousel (atst.tcs.pac.target.namedPos). If the configuration is valid, the VBI control system will execute an alignment routine based on the given alignment type and target. The alignment routine collects exposures, evaluates them, and adjusts the camera mount x and y stages until the exposure evaluation output meets a specific criterion. All data captured during this process can be viewed in the quick look display but is not saved.
Figure 56 shows the control logic flowchart for the Align observing task script.
Figure 56: Flowchart for Align Observing Task Script
[Flowchart graphic not reproduced in this text version. Design notes from the chart: this version 1) uses a dedicated Java object that implements the BDT processor interface for processing incoming images, 2) adjusts the camera x-y stages after each output of the image processing algorithm, and 3) performs alignment at only one filter position.]
7.7.1.10 Target

This observation task serves as a catch-all for any other calibration or alignment needs of the VBI. This task behaves the same as the Observe task and accepts all the same configuration input parameters. In addition, this task requires the configuration to specify the currently active PA&C lower GOS target (atst.tcs.pac.target.namedPos), which is then used to tag the collected data. If the specified PA&C lower GOS target is the Line Grid, the VBI will automatically calculate and update its pixel scale. All data captured during this process can be viewed in the quick look display and will be saved.
Figure 57 shows the control logic flowchart for the Target observing task script.
Figure 57: Flowchart for Target Observing Task Script
[Flowchart graphic not reproduced in this text version. Design notes from the chart: this version 1) uses a dedicated Java object that implements the BDT processor interface for processing incoming images, and 2) finishes image processing before the next observation.]
7.7.1.11 Special

This observation task serves as a catch-all for any other observing scripts that need to be run but are not covered by the established observation tasks. This task requires the user to specify the name of the script to execute (.specialScriptName). Any attributes required by the special script that are not covered in this document must be validated by the script itself.
7.7.1.12 Handling Pause, Resume, Cancel, and Abort

During script execution, users may request that the script be paused, resumed, cancelled, or aborted. Should the user request one of these actions, the ICS will set a flag for the IC that indicates the action requested. The observing task scripts must therefore be written to check for this flag and handle these scenarios. The VBI scripts will use a common handler for this purpose: scripts call the handler at strategic points to check for and handle any request to pause, resume, cancel, or abort. Figure 58 shows the control logic flowchart for the handler.
Figure 58: Flowchart for Pause-Resume-Cancel-Abort Handler
[Flowchart graphic not reproduced in this text version. Precondition from the chart: the IS provides the P/R/C/A flags. Note from the chart: if the current state is already paused, the subsystem configurations were cancelled earlier, so the handler only needs to update the state so the caller can continue (resume), exit (cancel/abort), or wait (still paused).]
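The handler's decision logic described above can be sketched in plain Python. The state names and callback parameters are assumptions; the real handler works against the IS-provided flags and CSF actions.

```python
def prca_handler(demand, current, send_cancel, send_abort):
    """Sketch of the pause/resume/cancel/abort handler.  When not
    already paused, a pause or cancel demand cancels the subsystem
    configurations and an abort demand aborts them; in all cases the
    current state becomes the demanded state for the caller to act on."""
    if current != "paused":              # paused => configs already cancelled
        if demand in ("pause", "cancel"):
            send_cancel()
        elif demand == "abort":
            send_abort()
    return demand                        # new current state
```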
7.8 DATA PROCESSING PIPELINE DESIGN
7.8.1 Introduction
7.8.1.1 Purpose

The purpose of this section is to provide the critical design definition for the VBI Blue Data Processing Pipeline software system. The critical design documentation is presented using four design views: Decomposition Description, Dependency Description, Interface Description, and Detailed Design. Each design view represents a separate concern about the software system; together these views provide a comprehensive description of the design in a form that allows users to access desired information quickly.
7.8.1.2 Scope

The scope of this section is to provide the critical design documentation for the VBI Blue Data Processing Pipeline software, which includes the Dark, Gain, Frame Selection, Speckle Image Reconstruction, and Detailed Display packages.
7.8.1.3 Definitions and Acronyms

The following definitions and acronyms are useful in understanding and discussing aspects of the Data Processing Pipeline software system.
BDT – Bulk Data Transport
CSF – Common Services Framework
DDN – Data Distribution Node
DHS – Data Handling System
DPN – Data Processing Node
DPP – Data Processing Pipeline
DTN – Data Transfer Node
Frame – A full 4kx4k frame
Frame Set – A set of frames delivered sequentially by the camera
Macro Tile – A large region of a frame used to break frame processing into large sub-problems
Macro Tile Cube – A set of macro tiles from the same region of a set of frames
7.8.2 Decomposition Description
7.8.2.1 Module Decomposition
The VBI Blue Data Processing Pipeline can be decomposed into seven major modules: Dark Data Processing Node, Gain Data Processing Node, Frame Selection Data Processing Node, Speckle Input Data Processing Node, Speckle Slave Data Processing Node(s), Speckle Output Data Processing Node, and Detailed Display Data Processing Node. These modules work together to provide the functionality needed to meet the VBI Blue data processing requirements. The following sections provide details on the purpose and functionality of each of these modules, as well as a look at the major components that comprise each of them.
7.8.2.1.1 Dark Data Processing Node
7.8.2.1.1.1 Identification

The Dark Data Processing Node, or Dark DPN, will be the name used to identify the VBI camera line module that handles an incoming burst of dark frames and produces an output dark calibration frame.
7.8.2.1.1.2 Type

The Dark DPN is a Data Processing Node (DPN) in the VBI Blue camera line Data Processing Pipeline (DPP). Figure 59 below shows a high-level view of the Dark DPN in the context of the other VBI Blue camera line modules.
Figure 59: Dark DPN Context Diagram
[Context diagram not reproduced in this text version. It shows the Virtual Camera feeding the VBI Blue Data Processing Pipeline, with the Dark, Gain, Frame Selection, Detailed Display, Speckle Input, Speckle Slave 1..n, and Speckle Output Data Processing Nodes connected via the topics raw, dark, gain, detail, main, speckle, slave, and output, together with the Camera, Calibration, and Transfer Stores.]
7.8.2.1.1.3 Purpose

The purpose of the Dark DPN is to provide the capability to generate a dark calibration frame on the summit as part of the VBI Blue camera line.
7.8.2.1.1.4 Function

The primary function of the Dark DPN is to acquire a set of frames taken during the Dark task, process them, and produce a single output dark calibration frame. The functional steps involved in this process are as follows:
Receive incoming frames via the Bulk Data Transport (BDT) interface
Interpret meta-data for each frame
Calculate dark calibration frame as output
Save output to calibration store
In addition, the VBI requirements state that the Dark DPN must accept input frames and produce an
output dark calibration frame in real-time.
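The dark-frame computation itself is not specified in this document; a conventional choice is a pixel-by-pixel average over the burst, which the real node performs on the GPU. A plain-Python sketch under that assumption:

```python
def dark_calibration_frame(frames):
    """Average a burst of dark frames pixel by pixel.  Frames are lists
    of rows; the real node does this per pixel on the GPU, and simple
    averaging is an assumed (typical) choice of dark estimator."""
    n = float(len(frames))
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / n for c in range(cols)]
            for r in range(rows)]
```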
7.8.2.1.1.5 Subordinates

The Dark DPN module is comprised of several sub-modules that work together to meet the functional requirements. Figure 60 shows the hierarchical relationship between the objects that comprise the Dark DPN.
Figure 60: Dark DPN Module Decomposition
The DHS framework provides the technical architecture for receipt and delivery of image data. It
provides services such as subscription to topics, sub-topics, and events. The DarkHandler extends the
DHS framework and provides the functional behavior specific to the Dark DPN application. It uses the
jCUDA library as an interface to the low level CUDA GPU driver, thus allowing it to perform functions
on the GPU hardware such as loading data, unloading data, and executing code on the device.
Using the jCUDA library, the DarkHandler stores received frame data into a buffer in the GPU memory
space. When the appropriate data has been loaded to the GPU, the DarkHandler will execute the Dark
GPU Software on the device. The Dark GPU Software consists of a kernel written in C that performs the
pixel parallel algorithm for calculating a dark calibration frame.
[Module decomposition diagram not reproduced in this text version. It shows the Dark Handler built on the DHS Framework and Utilities, with a Dark Frame Buffer and a GPU Interface to the Dark GPU Software.]
For more information on the design of these software components please refer to the detailed design in
section 7.8.5.1.1.
7.8.2.1.2 Gain Data Processing Node
7.8.2.1.2.1 Identification

The Gain Data Processing Node, or Gain DPN, will be the name used to identify the VBI camera line module that handles an incoming burst of gain frames and produces an output gain calibration frame.
7.8.2.1.2.2 Type

The Gain DPN is a Data Processing Node (DPN) in the VBI Blue camera line Data Processing Pipeline (DPP). Figure 61 below shows a high-level view of the Gain DPN in the context of the other VBI Blue camera line modules.
Figure 61: Gain DPN Context Diagram
7.8.2.1.2.3 Purpose

The purpose of the Gain DPN is to provide the capability to generate a gain calibration frame on the summit as part of the VBI Blue camera line.
7.8.2.1.2.4 Function

The primary function of the Gain DPN is to acquire a set of frames taken during the Gain task, process them, and produce a single output gain calibration frame. The functional steps involved in this process are as follows:
Receive incoming frames via the Bulk Data Transport (BDT) interface
Interpret meta-data for each frame
Calculate gain calibration frame as output
Save output to calibration store
In addition, the VBI requirements state that the Gain DPN must accept input frames and produce an
output gain calibration frame in real-time.
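The gain-frame algorithm is likewise not spelled out in this document; a conventional approach is to dark-subtract the averaged gain burst, normalize by the mean over good pixels, and zero out pixels flagged in the binary mask. A plain-Python sketch under those assumptions:

```python
def gain_calibration_frame(flat_avg, dark, mask):
    """Sketch of a gain-frame computation: dark-subtract the averaged
    flat, normalize by the mean over unmasked pixels, and zero pixels
    flagged bad (0) in the binary mask.  The exact algorithm is an
    assumed, conventional choice."""
    rows, cols = len(flat_avg), len(flat_avg[0])
    diff = [[flat_avg[r][c] - dark[r][c] for c in range(cols)]
            for r in range(rows)]
    good = [diff[r][c] for r in range(rows) for c in range(cols)
            if mask[r][c]]
    mean = sum(good) / float(len(good))
    return [[(diff[r][c] / mean if mask[r][c] else 0.0)
             for c in range(cols)] for r in range(rows)]
```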
7.8.2.1.2.5 Subordinates

The Gain DPN module is comprised of several sub-modules that work together to meet the functional requirements. Figure 62 shows the hierarchical relationship between the objects that comprise the Gain DPN.
Figure 62: Gain DPN Module Decomposition
The DHS framework provides the technical architecture for receipt and delivery of image data. It
provides services such as subscription to topics, sub-topics, and events. The GainHandler extends the
DHS framework and provides the functional behavior specific to the Gain DPN application. It uses the
jCUDA library as an interface to the low level CUDA GPU driver, thus allowing it to perform functions
on the GPU hardware such as loading data, unloading data, and executing code on the device.
Using the jCUDA library, the GainHandler stores received gain frames into a gain frame buffer in the
GPU memory space. It also keeps a copy of the binary mask calibration data in a buffer in the GPU
memory space for use in gain calibration processing. When the appropriate data has been loaded to the
GPU, the GainHandler will execute the Gain GPU Software on the device. The Gain GPU Software
consists of a kernel written in C that performs the pixel parallel algorithm for calculating a gain
calibration frame.
For more information on the design of these software components please refer to the detailed design in
section 7.8.5.1.2.
[Module decomposition diagram not reproduced in this text version. It shows the Gain Handler built on the DHS Framework and Utilities, with a Gain Frame Buffer, a Binary Mask Buffer, and a GPU Interface to the Gain GPU Software.]
7.8.2.1.3 Frame Selection Data Processing Node
7.8.2.1.3.1 Identification

The Frame Selection Data Processing Node, or Frame Selection DPN, will be the name used to identify the VBI camera line module that handles an incoming burst of frames and selects the best N of M, based on the desired selection algorithm, for output.
7.8.2.1.3.2 Type

The Frame Selection DPN is a Data Processing Node (DPN) in the VBI Blue camera line Data Processing Pipeline (DPP). Figure 63 below shows a high-level view of the Frame Selection DPN in the context of the other VBI Blue camera line modules.
Figure 63: Frame Selection DPN Context Diagram
7.8.2.1.3.3 Purpose

The purpose of the Frame Selection DPN is to provide the capability to apply a quality metric to a burst of images and select only the best images for output.
7.8.2.1.3.4 Function

The primary function of the Frame Selection DPN is to acquire a set of frames taken during the Observe task, apply a quality metric to them, and output only the best frames. The functional steps involved in this process are as follows:
Receive incoming frames via the Bulk Data Transport (BDT) interface
Interpret meta-data for each frame
Calibrate ROI using latest calibration images and binary mask
Apply quality metric to select best N of M frames based on user parameters
Update meta-data for selected frames
In addition, the VBI requirements state that the Frame Selection DPN must accept input frames and
produce selected output frames in real-time.
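The best-N-of-M selection step can be sketched as below. The quality metric is pluggable, and returning the survivors in their original temporal order is an assumption about the desired output ordering.

```python
def select_best_frames(frames, n, metric):
    """Sketch of best-N-of-M selection: score each calibrated frame with
    the quality metric and keep the n highest-scoring frames, returned
    in their original temporal order."""
    ranked = sorted(range(len(frames)), key=lambda i: metric(frames[i]),
                    reverse=True)
    keep = sorted(ranked[:n])            # restore temporal order
    return [frames[i] for i in keep]
```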
7.8.2.1.3.5 Subordinates

The Frame Selection DPN module is comprised of several sub-modules that work together to meet the functional requirements. Figure 64 shows the hierarchical relationship between the objects that comprise the Frame Selection DPN.
Figure 64: Frame Selection DPN Module Decomposition
The DHS framework provides the technical architecture for receipt and delivery of image data. It provides services such as subscription to topics, sub-topics, and events. The FrameSelectionHandler extends the DHS framework and provides the functional behavior specific to the Frame Selection DPN application. It uses the jCUDA library as an interface to the low-level CUDA GPU driver, thus allowing it to perform functions on the GPU hardware such as loading data, unloading data, and executing code on the device.

Using the jCUDA library, the FrameSelectionHandler stores received frames into a frame buffer in the GPU memory space. It also keeps a copy of the latest gain, dark, and binary mask calibration data in buffers in the GPU memory space for use in ROI calibration processing. When the appropriate data has been loaded to the GPU, the FrameSelectionHandler will execute the Frame Selection GPU Software on the device. The Frame Selection GPU Software consists of a kernel written in C that performs the pixel parallel calibration and applies the frame selection quality metric.
For more information on the design of these software components please refer to the detailed design in
section 7.8.5.1.3.
7.8.2.1.4 Detailed Display Data Processing Node
7.8.2.1.4.1 Identification The Detailed Display Data Processing Node, or Detailed Display DPN, will be the name used to identify
the VBI camera line module that handles incoming frames and performs the processing required to
present the frames in a detailed display.
7.8.2.1.4.2 Type The Detailed Display DPN is a Data Processing Node (DPN) in the VBI Blue camera line Data
Processing Pipeline (DPP). Figure 65 below shows a high level view of the Detailed Display DPN in the
context of the other VBI Blue camera line modules.
Figure 65: Detailed Display DPN Context Diagram
7.8.2.1.4.3 Purpose The purpose of the Detailed Display DPN is to apply data processing steps to incoming frames so they
can be presented to the user in a detailed display.
7.8.2.1.4.4 Function The primary function of the Detailed Display DPN is to acquire frames, calibrate those frames, and output
them to a detailed display for the user to view. The functional steps involved in this process are as
follows:
Receive incoming frames via the Bulk Data Transport (BDT) interface
Interpret meta-data for each frame
Calibrate using latest calibration images and binary mask
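The calibration step above can be sketched as follows. The exact per-pixel formula lives in the GPU kernel described in the detailed design; dark subtraction followed by a multiplicative gain table, with the binary mask zeroing invalid pixels, is assumed here as an illustration:

```java
/** Sketch of the per-pixel calibration applied before display, assuming
 *  dark subtraction, a multiplicative gain table, and a binary mask that
 *  zeroes pixels outside the valid aperture. The exact formula is defined
 *  in the detailed design; this is an illustrative assumption. */
public class CalibrationSketch {
    static float[] calibrate(short[] raw, float[] dark, float[] gain, byte[] mask) {
        float[] out = new float[raw.length];
        for (int i = 0; i < raw.length; i++) {
            // '& 0xFFFF' treats the 16-bit camera value as unsigned
            out[i] = mask[i] == 0 ? 0.0f
                   : ((raw[i] & 0xFFFF) - dark[i]) * gain[i];
        }
        return out;
    }
}
```

In the deployed node this loop is the pixel-parallel GPU kernel; one GPU thread handles each pixel independently.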
In addition, the VBI requirements state that the Detailed Display DPN must accept input frames and
produce calibrated output frames in real-time.
7.8.2.1.4.5 Subordinates The Detailed Display DPN module comprises several sub-modules that work together to meet the
functional requirements. Figure 66 shows the hierarchical relationship between the objects that comprise
the Detailed Display DPN.
Figure 66: Detailed Display DPN Module Decomposition
The DHS framework provides the technical architecture for receipt and delivery of image data. It
provides services such as subscription to topics, sub-topics, and events. The DetailedDisplayHandler
class extends the DHS framework and provides the functional behavior specific to the Detailed Display
DPN application. It uses the jCUDA library as an interface to the low level CUDA GPU driver, thus
allowing it to perform functions on the GPU hardware such as loading data, unloading data, and executing
code on the device.
Using the jCUDA library, the DetailedDisplayHandler stores received frames into a frame buffer in the
GPU memory space. It also keeps a copy of the latest gain, dark, and binary mask calibration data in a
buffer of the GPU memory space for use during calibration processing. When the appropriate data has
been loaded to the GPU, the DetailedDisplayHandler will execute the Detailed Display GPU Software on
the device. The Detailed Display GPU Software consists of a kernel written in C that performs the pixel-parallel calibration algorithm on the incoming frames.
For more information on the design of these software components please refer to the detailed design in
section 7.8.5.1.4.
7.8.2.1.5 Speckle Input Data Processing Node
7.8.2.1.5.1 Identification The Speckle Input Data Processing Node, or Speckle Input DPN, will be the name used to identify the
VBI camera line module that handles incoming full frames requiring Speckle image reconstruction.
7.8.2.1.5.2 Type The Speckle Input DPN is a Data Processing Node (DPN) in the VBI Blue camera line Data Processing
Pipeline (DPP). Figure 67 below shows a high level view of the Speckle Input DPN in the context of the
other Speckle related modules and VBI Blue camera line modules.
Figure 67: Speckle Input DPN Context Diagram
7.8.2.1.5.3 Purpose The purpose of the Speckle Input DPN is to act as the main data entry point for frames requiring Speckle
image reconstruction. It provides the acquisition and slave distribution functionality needed for the first
step in the Speckle solution pipeline as shown in Figure 68 below.
Figure 68: Speckle Pipeline - Stage 1
7.8.2.1.5.4 Function The primary function of the Speckle Input DPN is to act as the router for incoming VBI camera frames to
the Speckle Slave Data Processing Nodes. The functional steps involved in this process are as follows:
Receive incoming frames via the Bulk Data Transport (BDT) interface
Interpret meta-data for each frame
Split frames into macro-tiles
Distribute macro-tiles to slave processing nodes
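The splitting step above can be sketched as computing the bounds of each macro-tile. The 4 x 4 grid shown here is an illustrative assumption; the 64-pixel overlap follows from the 50%-of-128x128 overlap rule given in the data decomposition section:

```java
/** Sketch of splitting a full frame into an nTiles x nTiles grid of
 *  macro-tiles that overlap their neighbors by 'overlap' pixels on each
 *  shared edge (64 px: 50% of the expected 128 x 128 speckle tile).
 *  Returns {x0, y0, width, height} per macro-tile, clamped to the frame. */
public class MacroTileSketch {
    static int[][] tileBounds(int frameSize, int nTiles, int overlap) {
        int core = frameSize / nTiles;   // core (non-overlapping) tile size
        int[][] bounds = new int[nTiles * nTiles][];
        for (int ty = 0; ty < nTiles; ty++) {
            for (int tx = 0; tx < nTiles; tx++) {
                int x0 = Math.max(0, tx * core - overlap);
                int y0 = Math.max(0, ty * core - overlap);
                int x1 = Math.min(frameSize, (tx + 1) * core + overlap);
                int y1 = Math.min(frameSize, (ty + 1) * core + overlap);
                bounds[ty * nTiles + tx] = new int[]{x0, y0, x1 - x0, y1 - y0};
            }
        }
        return bounds;
    }
}
```

For a 4k frame and a 4 x 4 grid this yields 1088- to 1152-pixel-wide tiles, consistent with the "1k x 1k plus overlap" macro-tiles assumed in the dependency analysis.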
In addition, the Speckle Input DPN will receive, process, and distribute co-temporal AO data to each
Speckle Slave DPN for use in the image reconstruction process. The functional steps involved in this
process are as follows:
Ingest co-temporal AO data
Compute covariance matrices
Distribute covariance matrices to Speckle Slave DPNs
The VBI requirements state that the speckle reconstruction process must be performed in near real-time
based on the use case of a 3.0 second observation duration producing 80 4k x 4k input frames at 30Hz.
Since the reconstruction process requires all 80 frames to first be acquired, the pipelined approach shown
in Figure 68 allows 3.0s for acquisition, slave distribution, pre-processing, and download to the GPU
hardware. Thus the acquisition and slave distribution steps performed on the Speckle Input DPN must be
completed with enough time remaining for the pre-processing of final frames and download of data to the
GPU hardware to complete on the Speckle Slave DPN.
7.8.2.1.5.5 Subordinates The Speckle Input DPN module comprises several sub-modules that work together to meet the
functional requirements. Figure 69 below shows the hierarchical relationship between the elements
comprising the Speckle Input DPN.
Figure 69: Speckle Input DPN Module Decomposition
The DHS framework provides the technical architecture for receipt and delivery of image data. It
provides services such as subscription to topics, sub-topics, and events.
The Master Input Handler extends the DHS framework, providing the functional behavior
specific to the Speckle Input DPN application. It acts as a router for incoming frames, splitting them into
macro-tiles and distributing them to slave processing nodes. It also receives co-temporal AO data,
processes it, and distributes it to appropriate slave nodes. The Master Input Handler also makes use of a
re-usable Utilities library that contains common routines for working with frame data and AO data.
For more details on these components and other related software please refer to the detailed design in
section 7.8.5.1.5.
7.8.2.1.6 Speckle Slave Data Processing Node
7.8.2.1.6.1 Identification The Speckle Slave Data Processing Node, or Speckle Slave DPN, will be the name used to identify each
of the VBI camera line modules that perform Speckle image reconstruction processing on a subset of the Speckle Input DPN data set, using GPU-enabled hardware.
7.8.2.1.6.2 Type The Speckle Slave DPN is a Data Processing Node (DPN) in the VBI Blue camera line Data Processing
Pipeline (DPP). Figure 70 below shows a high level view of the Speckle Slave DPN in the context of the
other VBI Blue camera line modules.
Figure 70: Speckle Slave DPN Context Diagram
7.8.2.1.6.3 Purpose The purpose of the Speckle Slave DPN is to perform computationally intensive data processing steps on
image data using GPU hardware. To increase data processing throughput, multiple Speckle Slave DPNs are used in parallel, each processing a subset of the overall image data. The
processing steps performed include the loading of data to the GPU hardware, pre-processing, and Speckle
processing. Figure 71 shows where these steps reside relative to the other steps of the Speckle solution
pipeline.
Figure 71: Speckle Pipeline - Stage 2
7.8.2.1.6.4 Function The primary function of the Speckle Slave DPN is to take an input set of macro-tiles and apply a series of
data processing steps to produce a single output macro-tile that is of diffraction limited quality. These
data processing steps are as follows:
Functionality related to “Pre-processing and Load GPU”
Receive macro-tiles and AO data from Speckle Input DPN via BDT
Load macro-tiles and AO data to GPU
Functionality related to “Speckle Processing”
Convert macro-tile data from 16-bit quantization to 32-bit floating point
Calibrate incoming frames (dark/gain)
Compute relative light level for each frame based on the intensity statistics of the frame set (burst of frames)
Segmentation into sub-frames of the approximate size of the isoplanatic patch
Phase reconstruction using triple correlation within sub-frame
Amplitude reconstruction using Labeyrie within sub-frame
Interpret lock point location
Compute sub-frame dependent transfer functions from covariance matrices and AO reconstruction matrix
Compute noise filter
Re-assemble sub-frames into macro-tile output
Functionality related to “Unload GPU and Transfer”
Unload macro-tile output from GPU
Update meta-data
Transfer to BDT
The Speckle image reconstruction process must be performed in near real-time based on the use case of a
3.0 second observation duration producing 80 4k x 4k input frames at 30Hz.
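The first two processing steps above can be sketched as follows. This illustrates only the 16-bit-to-float expansion and the relative light level computation from the burst intensity statistics; calibration, segmentation, and the reconstruction kernels are omitted:

```java
/** Sketch of Speckle Slave pre-processing: expand 16-bit macro-tile data to
 *  32-bit floats and compute each frame's relative light level against the
 *  mean intensity of the whole burst. Illustrative only; the deployed
 *  versions run as GPU kernels and Utilities routines. */
public class PreprocessSketch {

    /** Expand unsigned 16-bit camera values to 32-bit floats. */
    static float[] toFloat(short[] raw) {
        float[] out = new float[raw.length];
        for (int i = 0; i < raw.length; i++) out[i] = raw[i] & 0xFFFF;
        return out;
    }

    /** Relative light level of each frame: frame mean / burst mean. */
    static double[] relativeLightLevels(float[][] burst) {
        double[] levels = new double[burst.length];
        double burstMean = 0.0;
        for (int f = 0; f < burst.length; f++) {
            double m = 0.0;
            for (float p : burst[f]) m += p;
            levels[f] = m / burst[f].length;
            burstMean += levels[f];
        }
        burstMean /= burst.length;
        for (int f = 0; f < burst.length; f++) levels[f] /= burstMean;
        return levels;
    }
}
```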
7.8.2.1.6.5 Subordinates The Speckle Slave DPN module comprises several sub-modules that work together to meet the
functional requirements. Figure 72 shows the hierarchical relationship between the objects that comprise
the Speckle Slave DPN.
Figure 72: Speckle Slave DPN Module Decomposition
The DHS framework provides the technical architecture for receipt and delivery of image data. It
provides services such as subscription to topics, sub-topics, and events.
The Slave Input Handler extends the DHS framework and provides the functional behavior specific to the
Speckle Slave DPN application. It uses the jCUDA library as an interface to the low level CUDA GPU
driver, thus allowing it to perform functions on the GPU hardware such as loading data, unloading data,
and executing code on the device.
Using the jCUDA library, the Slave Input Handler stores received macro-tiles, calibration data, and AO
data into their respective buffers in the GPU memory space. When the appropriate data has been loaded
to the GPU, the Slave Input Handler will execute the Speckle GPU Software on the device.
The Speckle GPU Software consists of kernels written in C for each of the main functions of the Speckle
image reconstruction algorithm. It also employs several re-usable routines from the Utilities library for
performing tiling, calibration, and light level calculations on incoming macro-tiles.
For more information on the design of these software components please refer to the detailed design in
section 7.8.5.1.5.2.
7.8.2.1.7 Speckle Output Data Processing Node
7.8.2.1.7.1 Identification The Speckle Output Data Processing Node, or Speckle Output DPN, will be the name used to identify
the VBI camera line component that collects reconstructed macro-tile output from the slave nodes, re-
assembles them, and produces a single full frame output.
7.8.2.1.7.2 Type The Speckle Output DPN is a Data Processing Node (DPN) in the VBI Blue camera line Data
Processing Pipeline (DPP). Figure 73 below shows a high level view of the Speckle Output DPN in the context
of the other VBI Blue camera line modules.
Figure 73: Speckle Output DPN Context Diagram
7.8.2.1.7.3 Purpose The purpose of the Speckle Output DPN is to receive reconstructed macro-tile output from each Speckle Slave DPN and re-assemble those macro-tiles into a single full frame output. The re-assembly and output
steps serve as the third and final stage of the Speckle solution pipeline as shown in Figure 74 below:
Figure 74: Speckle Pipeline - Stage 3
7.8.2.1.7.4 Function The primary function of the Speckle Output DPN is to take reconstructed macro-tile output from each
Speckle Slave DPN and to apply processing steps to re-assemble those macro-tiles into a single full frame
output. These data processing steps are as follows:
Receive reconstructed macro-tile output from each slave processing node
Re-assembly of macro-tiles to full frame
Update meta-data for reconstructed frame
Publish reconstructed frame
The Speckle image reconstruction process must be performed in real-time based on the use case of a 3.0
second observation duration producing 80 4k x 4k input frames at 30Hz. This requirement primarily
impacts the processing budget of the Speckle Slave DPN. However, it is also important that the re-assembly and output stage performed by the Speckle Output DPN complete quickly to ensure timely delivery of data to the user. The output frame will be saved to the DHS transfer store.
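The re-assembly step can be sketched as follows, assuming each slave output has been cropped back to its non-overlapping core region before placement (the treatment of the overlap regions is defined in the detailed design, so this is an illustrative assumption):

```java
/** Sketch of re-assembling reconstructed macro-tiles into a full frame.
 *  Assumes each slave output is already cropped to its non-overlapping core
 *  region; overlap handling may differ in the detailed design. */
public class ReassemblySketch {
    static float[] reassemble(float[][] coreTiles, int nTiles, int coreSize) {
        int frameSize = nTiles * coreSize;
        float[] frame = new float[frameSize * frameSize];
        for (int ty = 0; ty < nTiles; ty++) {
            for (int tx = 0; tx < nTiles; tx++) {
                float[] tile = coreTiles[ty * nTiles + tx];
                for (int y = 0; y < coreSize; y++) {
                    // copy one tile row into its row-major frame position
                    System.arraycopy(tile, y * coreSize,
                            frame, (ty * coreSize + y) * frameSize + tx * coreSize,
                            coreSize);
                }
            }
        }
        return frame;
    }
}
```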
7.8.2.1.7.5 Subordinates The Speckle Output DPN module comprises several sub-modules that work together to meet the functional requirements. Figure 75 shows the hierarchical relationship between the objects that comprise the Speckle Output DPN.
Figure 75: Speckle Output DPN Module Decomposition
The DHS framework provides the technical architecture for receipt and delivery of image data. It
provides services such as subscription to topics, sub-topics, and events.
The Master Output Handler extends the DHS framework, providing the functional behavior specific to the Speckle Output DPN application. It acquires input frames from the Speckle slave nodes, re-assembles
them, and stores the full frame result into the output buffer. Once all slave outputs have been re-
assembled it publishes the resulting full frame to the BDT.
7.8.2.2 Data Decomposition
7.8.2.2.1 BDT Data All data transferred on the BDT will be delivered to the subscriber of the topic using an IBdtBuffer
object. This object consists of meta-data and a data buffer as shown in Figure 76.
Figure 76: BDT Data Decomposition
The meta-data is represented using an AttributeTable object. The data buffer is represented using a byte
array. The next few sections will provide details on the data elements delivered to the DPNs using the
BDT and how they are represented via the IBdtBuffer object.
7.8.2.2.1.1 Raw Frame VBI Raw frame input consists of the meta-data and 4k x 4k 16-bit quantization pixel data. The following
sections describe how this information is represented by the IBdtBuffer object.
7.8.2.2.1.1.1 Meta-data Meta-data is stored in the IBdtBuffer object as an AttributeTable object. A reference to this object can be
obtained using the .getMetaData method. Once a reference is obtained, attributes can be inserted,
updated, or deleted from the attribute table. There are many attributes available in the meta-data,
however only a sub-set of them are utilized by the DPNs. For more information on the required attributes
please refer to the module interface descriptions in section 7.8.4.1.
7.8.2.2.1.1.2 Pixel Data Raw frame pixel data is stored in row major order as a byte array (byte[]) by the IBdtBuffer object. The
byte array is accessed using the .getData and .setData methods of the IBdtBuffer object. For a 4k x 4k
frame represented in 16-bit quantization the byte array will occupy 33,554,432 bytes of memory.
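As an illustration, 16-bit pixel values can be read back out of the byte array through a ByteBuffer view. The big-endian byte order shown is an assumption; the actual ordering is defined by the BDT interface specification:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

/** Sketch of accessing 16-bit pixels in the row-major byte array returned
 *  by .getData. Big-endian (network) byte order is assumed here; the real
 *  ordering is set by the BDT interface specification. */
public class RawPixelSketch {

    /** Unsigned 16-bit pixel at (row, col) of a width-wide row-major frame. */
    static int pixelAt(byte[] data, int width, int row, int col) {
        ByteBuffer buf = ByteBuffer.wrap(data).order(ByteOrder.BIG_ENDIAN);
        return buf.getShort((row * width + col) * 2) & 0xFFFF;
    }

    /** Size in bytes of a width x height frame of 16-bit pixels. */
    static int rawFrameBytes(int width, int height) {
        return width * height * 2;   // 4096 x 4096 -> 33,554,432 bytes
    }
}
```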
7.8.2.2.1.2 Calibration Frame Calibration frames delivered from the gain and dark plug-ins consist of the meta-data and 4k x 4k 32-bit
floating point pixel data. Several DPNs will subscribe to this data using sub-topics to ensure they receive
the latest calibration files when they become available. The following sections describe how this
information is represented by the IBdtBuffer object.
7.8.2.2.1.2.1 Meta-data Meta-data is stored in the IBdtBuffer object as an AttributeTable object. A reference to this object can be
obtained using the .getMetaData method. Once a reference is obtained, attributes can be inserted,
updated, or deleted from the attribute table. For more information on the individual attributes please refer
to the module interface descriptions in section 7.8.4.1.
7.8.2.2.1.2.2 Pixel Data Calibration frame pixel data is stored in row major order as a byte array (byte[]) by the IBdtBuffer object.
The byte array is accessed using the .getData and .setData methods of the IBdtBuffer object. For a 4k x
4k frame represented in 32-bit floating point, the byte array will occupy 67,108,864 bytes of memory.
7.8.2.2.1.3 AO Covariance Data The AO system will generate covariance data and its associated meta-data and transmit it via BDT to
subscribers. The Speckle Input DPN will subscribe to this data via sub-topic and use it for the Speckle
reconstruction process. The following sections will describe how this data is represented by the
IBdtBuffer object.
7.8.2.2.1.3.1 Meta data Meta-data associated with the AO covariance data is stored in the IBdtBuffer object as an AttributeTable
object. A reference to this object can be obtained using the .getMetaData method. Once a reference is
obtained, attributes can be inserted, updated, or deleted from the attribute table. For more information on
the individual attributes please refer to the module interface descriptions in section 7.8.4.1.
7.8.2.2.1.3.2 Covariance Matrix Data The covariance matrix data will be stored in row major order as a byte array (byte[]) in the IBdtBuffer
object. The byte array is accessed using the .getData and .setData methods of the IBdtBuffer object.
7.8.2.2.1.4 Macro-Tile A macro-tile will be the unit of data transmitted between the Speckle Input DPN and the Speckle Slave
DPNs, as well as the Speckle Slave DPNs and the Speckle Output DPN.
The Speckle Input DPN will break input raw frames into macro-tiles and distribute them to slave nodes for processing via the BDT.
The Speckle Slave DPN will receive macro-tiles from the BDT in the form of an IBdtBuffer.
The Speckle Slave DPN will collect and pre-process several (e.g., 80) of these macro-tiles into what is called a macro-tile cube. This macro-tile cube will be used as input to the Speckle reconstruction algorithm.
The Speckle reconstruction algorithm will process and reduce this data into a single output macro-tile. This reconstructed macro-tile will be sent to the Speckle Output DPN via the BDT.
The Speckle Output DPN will receive the reconstructed macro-tile from the BDT in the form of an IBdtBuffer. The Speckle Output DPN will collect these outputs from all the Speckle Slave DPNs so they can be re-assembled to form the final full frame output.
Macro-tiles will consist of meta-data and 16-bit quantization pixel data. The size of a macro-tile will
depend on the number of slave nodes required to achieve the required computation throughput (TBD).
Macro-tiles will overlap with one another by 50% of the tile size used for Speckle phase reconstruction
(expected to be 128x128).
Macro-tiles will be transmitted between DPNs using the BDT. The BDT will deliver data to the
subscriber as an IBdtBuffer object. The following sections describe how the macro-tile data will be
represented by the IBdtBuffer object.
7.8.2.2.1.4.1 Meta-data Meta-data is stored in the IBdtBuffer object as an AttributeTable object. A reference to this object can be
obtained using the .getMetaData method. Once a reference is obtained, attributes can be inserted,
updated, or deleted from the attribute table. The attributes that will be available for the macro-tiles are all
those for the raw frame as well as those specific to the macro-tile. For more information on the specific
attributes please refer to the Speckle Slave DPN module interface description in section 7.8.4.1.6.
7.8.2.2.1.4.2 Pixel Data Macro-tile pixel data is stored in row major order as a byte array (byte[]) by the IBdtBuffer object. The
byte array is accessed using the .getData and .setData methods of the IBdtBuffer object. For a 1k x 1k
macro-tile represented in 16-bit quantization, the byte array will contain 2,097,152 elements and occupy
2,097,152 bytes of memory.
7.8.2.2.1.5 Processed Frame The primary output of a DPN will be a 4k x 4k image and its corresponding meta-data. This output frame
will be placed in the outgoing BDT image data buffer as single precision floating point (32 bit) pixel data.
The meta-data for the output frame will consist of name/value pairs that correspond to the single output
frame. These meta-data elements will be derived based on the meta-data values of the input frames that
were processed. The following sections describe how this information is represented by the IBdtBuffer
object.
7.8.2.2.1.5.1 Meta-data Meta-data is stored in the IBdtBuffer object as an AttributeTable object. A reference to this object can be
obtained using the .getMetaData method. Once a reference is obtained, attributes can be inserted,
updated, or deleted from the attribute table. For example, DPNs that reduce data, such as the frame
selection and speckle DPNs, will update this meta data to reflect the reduction in data.
7.8.2.2.1.5.2 Pixel Data Processed frame pixel data is stored in row major order as a byte array (byte[]) by the IBdtBuffer object.
The byte array is accessed using the .getData and .setData methods of the IBdtBuffer object. For a 4k x
4k frame represented in 32-bit floating point, the byte array will contain 67,108,864 elements and occupy
67,108,864 bytes of memory.
7.8.3 Dependency Description
The dependency description design view specifies the relationships between system entities, describes
their coupling, and identifies the required resources. This information is useful for evaluating the impact
of requirements and design changes, isolating maintenance issues, and integration test planning.
7.8.3.1 Inter-module Dependencies Several key dependencies exist between the major modules of the VBI DPP software
system. The next few sections will describe these dependencies in detail.
7.8.3.1.1 Bulk Data Transport and DPNs The Bulk Data Transport (BDT) will be used for transferring frame data to and from DPNs as well as
between DPNs. The following sections will discuss the resource requirements of the BDT as they apply
to its use for the VBI Blue DPP solution.
7.8.3.1.1.1 Raw Frame Data Transfer The VBI camera system will produce 4k x 4k raw frames at a maximum rate of 30 Hz. This equates to a
required data transfer rate of around 1 GByte/s, or 8 Gbit/s. The BDT is built upon 10 Gbit/s
Ethernet and thus will be able to meet this requirement. The BDT will therefore be used to transfer raw
frames from the VBI camera system to the data processing nodes of the VBI camera line.
7.8.3.1.1.2 Macro-Tile Data Transfer Macro-tiles are subsets of the full frame data that are distributed by the Speckle Input DPN to the Speckle
Slave DPNs for processing. The Speckle Slave DPNs will process a group of macro-tiles referred to as a
macro-tile cube and produce a single Speckle reconstructed macro-tile. This output macro-tile is
transferred from the Speckle Slave DPN to the Speckle Output DPN for re-assembly with other slave
outputs to a full frame.
The transfer of macro-tiles between Speckle DPNs is done via the BDT. Therefore, a dependency exists
on the BDT resource to transfer the data at the required rates. The following sections will discuss the
resource requirements for each use case of the BDT by the Speckle DPNs.
7.8.3.1.1.2.1 Speckle Input DPN to Speckle Slave DPN The Speckle Input DPN will split incoming raw frames into overlapping macro-tiles and distribute them to Speckle Slave DPNs for processing. Macro-tiles will remain in the 16-bit quantization format during this process. The required data transfer rate between the Speckle Input DPN and a single Speckle Slave DPN can therefore be calculated as:

R = Xmt x Ymt x 16 x Fmax

Where:
R = max data transfer rate (in bits/s) between the Input DPN and a Slave DPN
Xmt = number of pixels in X for a macro-tile
Ymt = number of pixels in Y for a macro-tile
Fmax = max rate (in FPS) of raw frames produced from the VBI camera
Thus if 16 macro-tiles are used the amount of data being transferred between the Speckle Input DPN and
a single Speckle Slave DPN will be on the order of 70 MB/s. The total data transfer rate required for 16
slave nodes would therefore be 1.1 GByte/s (about 9 Gbit/s), which is very close to the 10 Gbit/s limitation of the
BDT.
To ensure BDT rates between the Speckle Input DPN and Speckle Slave DPNs do not exceed the
10Gbit/s capability of the BDT, we will employ two 10Gbit/s Ethernet network interface cards on the
Speckle Input DPN host machine. The Ethernet link between the Speckle Input DPN and Speckle Slave
DPNs can be split into two subnets. The max BDT rate on one of these subnets would then be limited to
about 500MB/s, which is well under the 10 Gbit/s limitation of the BDT.
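As a cross-check, the rate formula can be evaluated for the full 4k x 4k raw frame stream and for an assumed 1k x 1k macro-tile core; this is an illustrative sketch, and the ~70 MB/s per-slave figure quoted above is slightly higher than the core-only result because macro-tiles also carry overlap pixels:

```java
/** Illustrative cross-check of the BDT rate estimates in this section,
 *  assuming 1k x 1k macro-tile cores (overlap pixels add to these figures). */
public class BandwidthSketch {

    /** Transfer rate in bits/s for a 16-bit pixel stream: X * Y * 16 * rate. */
    static long macroTileRateBps(long x, long y, long fps) {
        return x * y * 16 * fps;
    }

    public static void main(String[] args) {
        long rawFrameBps = macroTileRateBps(4096, 4096, 30);  // full raw frames
        long perSlaveBps = macroTileRateBps(1024, 1024, 30);  // one slave's stream
        System.out.printf("raw frames: %.2f Gbit/s%n", rawFrameBps / 1e9);
        System.out.printf("per slave: %.0f Mbit/s (x16 slaves = %.1f Gbit/s)%n",
                perSlaveBps / 1e6, 16 * perSlaveBps / 1e9);
    }
}
```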
7.8.3.1.1.2.2 Speckle Slave DPN to Speckle Output DPN The Speckle Slave DPNs will transfer a reconstructed macro-tile output to the Speckle Output DPN. These macro-tiles will be in 32-bit floating point format. The required data transfer rate between a Speckle Slave DPN and the Speckle Output DPN can therefore be calculated as:

R = Xmt x Ymt x 32 x Fburst

Where:
R = max data transfer rate (in bits/s) between a Slave DPN and the Output DPN
Xmt = number of pixels in X for a macro-tile
Ymt = number of pixels in Y for a macro-tile
Fburst = max rate (in bursts/s) of frame bursts produced from the VBI camera
Thus using the 3.0s per observation VBI use case and assuming 1k x 1k macro-tiles, the rate of data being
transferred between a Speckle Slave DPN and the Speckle Output DPN will be on the order of 100
Mbits/s. The total data transfer rate required for a 16 slave node configuration would therefore be 1.6
Gbits/s, which is well under the 10 Gbit/s limitation of the BDT.
7.8.3.1.1.2.3 Speckle Processed Frame Data Transfer The Speckle Output DPN will produce a 4k x 4k frame every 3.0s. This output frame is represented in
32-bit floating point format and includes related meta-data. This equates to a required data transfer rate of
around 180 Mbits/s. The BDT is built upon a 10 Gbit/s Ethernet and thus will be able to meet this
requirement. The BDT will therefore be used to transfer Speckle processed full frame from the Speckle
Output DPN to the data transfer store.
7.8.3.1.2 DPNs dependency on the Calibration Store The DPNs of the VBI Blue DPP must be able to access the calibration store at all times to retrieve the latest binary mask, dark calibration, and gain calibration files.
7.8.3.1.3 DPNs dependency on Dark DPN
7.8.3.1.3.1 Dark Calibration Frame The Gain, Frame Selection, Speckle Input, and Detailed Display DPNs require that the Dark DPN publish
new dark calibration frames on a BDT topic. This will allow the interested DPNs to configure a sub-topic
subscription and automatically receive the dark calibration frames when they become available rather
than having to check the calibration store.
7.8.3.1.4 DPNs dependency on the Gain DPN
7.8.3.1.4.1 Gain Calibration Frame The Frame Selection, Speckle Input, and Detailed Display DPNs require that the Gain DPN publish new
gain calibration frames on a BDT topic. This will allow the interested DPNs to configure a sub-topic
subscription and automatically receive the gain calibration frames when they become available rather than
having to check the calibration store.
7.8.3.1.5 Speckle Input DPN and Speckle Slave DPN
7.8.3.1.5.1 Performance The Speckle Input DPN acts as a router for incoming frames, splitting them into macro-tiles and
distributing them to the appropriate Speckle Slave DPNs for processing. Therefore, the Speckle Input
DPN can be thought of as a producer of data and the Speckle Slave DPN can be thought of as a consumer.
Due to the large data volume and near real-time requirements of the Speckle data processing it is not
feasible to allow the producer to get ahead of the consumer, and expect the consumer to “catch-up” at
some point. Instead we employ the use of double buffering in the Speckle Slave DPN, which allows the
incoming macro-tile cube (n) to be loaded to the GPU while the previous macro-tile cube (n-1) is being
processed. Figure 77 below illustrates this double buffering over the course of three macro-tile cubes.
Figure 77: Timing Diagram for Speckle Slave DPN Double Buffering
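The buffering scheme of Figure 77 can be modeled as follows. This is an illustrative sketch only: the class and method names are invented, and the real implementation manages GPU device buffers rather than Python objects.

```python
# Minimal model of the two-buffer scheme: while one buffer's cube is being
# processed, the other buffer is free to receive the next macro-tile cube.
class DoubleBuffer:
    def __init__(self):
        self.buffers = [None, None]   # two macro-tile cube slots
        self.load_idx = 0             # slot that receives incoming data

    def load(self, cube):
        """Load cube (n) into the free slot while cube (n-1) is processed."""
        if self.buffers[self.load_idx] is not None:
            raise RuntimeError("no free buffer: consumer has fallen behind")
        self.buffers[self.load_idx] = cube
        self.load_idx = 1 - self.load_idx  # alternate between the two slots

    def unload(self, idx):
        """Free a slot once processing of its cube has finished."""
        cube, self.buffers[idx] = self.buffers[idx], None
        return cube

db = DoubleBuffer()
db.load("cube-0")          # goes to buffer 0; processing of cube-0 starts
db.load("cube-1")          # goes to buffer 1 while cube-0 is processed
processed = db.unload(0)   # cube-0 finished; buffer 0 is free again
db.load("cube-2")          # the next cube can now be loaded
```

Attempting a fourth load before unloading buffer 1 would raise the error above, which is exactly the back-pressure condition the text describes: the producer must never get ahead of the consumer.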
With double buffering in place, the Speckle Slave DPN must now ensure that a buffer is available to
receive incoming data when it arrives. Using the VBI use case of an 80-frame burst at 30 Hz every 3.0 s,
there will be 333 ms between the end of data acquisition of one macro-tile cube and the beginning of the
next. Therefore, the unloading of the macro-tile output must be completed promptly after processing is
finished to ensure a buffer is available for loading the next data set. Based on our prototype tests,
unloading the processed macro-tile output takes only a couple of milliseconds, comfortably within the
333 ms budget.
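The 333 ms figure follows directly from the burst parameters:

```python
# Burst parameters from the VBI use case above.
frames_per_burst = 80
frame_rate_hz = 30.0
burst_cadence_s = 3.0

acquisition_s = frames_per_burst / frame_rate_hz   # ~2.67 s of data taking
idle_gap_s = burst_cadence_s - acquisition_s       # gap before the next burst

print(round(idle_gap_s * 1000))  # milliseconds available for the buffer swap
```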
7.8.3.1.6 Speckle Slave DPN and Speckle Output DPN
7.8.3.1.6.1 Performance
The Speckle Slave DPN processes a set of macro-tiles, called a macro-tile cube, and produces a single
reconstructed macro-tile output. This output macro-tile is transferred to the Speckle Output DPN where it
will be re-assembled with other slave outputs to create a full frame result. Therefore, the Speckle Slave
DPN can be thought of as a “producer” of reconstructed macro-tiles and the Speckle Output DPN can be
thought of as a “consumer”.
Due to the large data volume and near real-time requirements of the Speckle data processing, it is not
feasible to allow the producer to get ahead of the consumer and expect the consumer to “catch up” at
some point. Instead we must ensure that the Speckle Output DPN can collect reconstructed macro-tiles,
re-assemble them, and send them off to the transfer store before the next set of slave outputs arrives.
Based on the VBI use case where a burst of 80 frames is taken every 3.0 s, we can expect that the Speckle
Output DPN will receive a new set of slave outputs to re-assemble every 3.0s. Thus the maximum
processing budget allowed for the Speckle Output DPN to complete processing of an input set of slave
outputs is 3.0 s. However, it is desirable to minimize the processing time to ensure
timely delivery of the output frame to the user. Based on our prototype testing, the steps performed by
the Speckle Output DPN will be completed in less than 3.0s, and in fact are expected to be completed in
130ms or less.
Figure 78 below illustrates the timeframe in which the Speckle Output DPN will acquire slave outputs, re-
assemble them, and output a full frame result. Assuming a slave output macro-tile size of 1k x 1k and 32
bit quantization, it should take a slave node about 3ms (1024*1024*32 / 10^10) to transfer an output to
the Speckle Output DPN. If this occurs for 16 slave nodes, we would expect about 50ms (3ms * 16) total
time to transfer and receive all slave outputs. The re-assembly of slave outputs to a single full frame and
updating of meta-data is a straightforward process and should take on the order of tens of milliseconds.
Finally, the transfer of the full frame output to the transfer store will again take around 50ms
(4096*4096*32 / 10^10).
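The 3 ms and 50 ms estimates above can be reproduced from the 10 Gbit/s link rate (ideal wire time only, ignoring protocol overhead):

```python
LINK_BPS = 10e9                      # 10 Gbit/s BDT link

def transfer_ms(width, height, bits_per_pixel):
    """Ideal wire time for one image over the link, in milliseconds."""
    return width * height * bits_per_pixel / LINK_BPS * 1000

tile_ms = transfer_ms(1024, 1024, 32)        # one 1k x 1k, 32-bit macro-tile
all_slaves_ms = 16 * tile_ms                 # 16 slave outputs in sequence
full_frame_ms = transfer_ms(4096, 4096, 32)  # re-assembled 4k x 4k frame

# tile_ms is ~3.4 ms and the two totals are ~54 ms each, slightly above the
# rounded 3 ms / 50 ms figures quoted in the text.
print(round(tile_ms, 1), round(all_slaves_ms), round(full_frame_ms))
```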
Figure 78: Timing Diagram for Speckle Output DPN
7.8.3.2 Inter-process Dependencies
Within each DPN there exist sub-modules and processes that have dependencies between one another.
These inter-process dependencies must be maintained in order for the system to perform as expected and
meet its requirements. The next few sections will detail some of the key inter-process dependencies that
exist in the modules of the VBI Blue DPP software system.
7.8.3.2.1 Speckle Slave DPN Concurrent Threads
7.8.3.2.1.1 Buffer State
The SlaveInputHandler supports concurrent threads on calls to the process method. This allows each
calling thread to perform all appropriate GPU tasks based on the overall state of the current macro-tile
cube. For example, when the process method is called by a thread for the last macro-tile in the cube, it
will not only load the macro-tile to the GPU buffer as usual, but it will also invoke the kernel for starting
the Speckle processing of the whole macro-tile cube buffer. That thread will therefore be active for up to
3 seconds, during which other threads will begin to call the process method as macro-tiles for the next
burst begin to arrive.
It is therefore important that the state of each macro-tile cube buffer be managed in a thread-safe manner.
This can be provided by representing the state of the buffers using an object that provides synchronized
methods for reading and writing the state.
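A minimal sketch of such a state object, using Python-style locking in place of Java synchronized methods (the class and method names here are invented):

```python
import threading

class BufferState:
    """Tracks the fill state of one macro-tile cube buffer.  All reads and
    writes go through a single lock, mirroring synchronized methods."""
    def __init__(self, tiles_per_cube):
        self._lock = threading.Lock()
        self._tiles_per_cube = tiles_per_cube
        self._loaded = 0

    def tile_loaded(self):
        """Record one loaded macro-tile; return True exactly once, when the
        cube is complete and the processing kernel should be invoked."""
        with self._lock:
            self._loaded += 1
            return self._loaded == self._tiles_per_cube

    def reset(self):
        with self._lock:
            self._loaded = 0

# 80 threads calling process() would each call tile_loaded(); only the call
# that completes the cube triggers the kernel.
state = BufferState(tiles_per_cube=80)
kernel_started = [state.tile_loaded() for _ in range(80)].count(True)
```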
7.8.3.2.2 Speckle Slave DPN: Timing of Pre-processing and Load GPU
The SlaveInputHandler receives macro-tiles at a maximum rate of 30 Hz. These macro-tiles are pre-processed
and loaded to a macro-tile cube buffer on the GPU. Once the entire macro-tile cube has been loaded to
the GPU, the SlaveInputHandler will begin invoking the Speckle GPU Software kernels to process the
macro-tile cube.
Based on the VBI use case of 80 frames @ 30Hz every 3.0s, the processing budget for the “Pre-process
and Load GPU” step is at minimum 2.66 s and at most 3.0 s. Completion in less than 2.66 s is not possible
because not all of the data will have been received yet. Completion in more than 3.0s will not allow the
SlaveInputHandler to consume inputs at the rate they are being produced.
Despite having a max processing budget of 3.0s it is important to minimize the processing time to ensure
overall timely delivery of data to the user. The faster we can complete the “Pre-Process and Load to
GPU” step, the sooner we can start the Speckle GPU software. Overall this will result in faster delivery
of data from the time the images were taken to the time the user receives the Speckle reconstructed
output.
7.8.3.3 Data Dependencies
The DPN modules of the VBI Blue DPP solution each have their own data dependencies that must be
maintained to ensure the system performs correctly. These dependencies exist between DPNs and
external systems as well as between the DPNs themselves. The next few sections will describe the data
dependencies of each DPN in more detail.
7.8.3.3.1 Dark Data Processing Node
The Dark DPN relies on several input data sources for the information needed to configure and drive the
overall solution. Figure 79 shows the inputs required by the Dark DPN to perform its duties. These
include the input raw frames, properties, and events. The next few sections will provide details on each
of these data dependencies.
Figure 79: Dark DPN Data Dependencies
7.8.3.3.1.1 Raw Frame Data
The Dark DPN performs acquisition of raw frame data from the atst.dhs.vbiBlue.dark.in topic of the BDT.
For each input frame received from the BDT, the DHS interface will provide the Dark Input DPN an
IBdtBuffer object which contains the image data buffer and a set of associated meta-data.
The image data buffer will contain the 4k x 4k frame data in 16-bit quantization format as produced by
the camera. The meta-data will contain a set of name/value pairs representing information needed for
processing of the frame. Some of these meta-data elements are generated by the camera with each frame
(e.g., timestamp) and others are passed from the VBI IC to the plug-in through the camera interface
(i.e. TBD). Please refer to section 7.8.4.1.1.2.1.1 for a list of required meta-data elements and the source
system for each.
Additional details on how the frame and meta-data elements are obtained through the DHS interface and
the objects used to represent them can be found in section 7.8.2.2.1 of this document.
7.8.3.3.1.2 Properties
The Dark DPN is configurable through a set of properties stored in the property database. A property is
simply a name/value pair that contains information needed by the DPN for initialization and for
processing of incoming data. For example, the Dark DPN uses a property called topicName to identify
the BDT topic it should subscribe to in order to receive frame data from the camera. In general,
properties remain fixed during operation and would only be changed to adjust the system after
engineering analysis.
For more information on the properties available for the Dark DPN please refer to module interfaces
section 7.8.4.1.1.1 of this document.
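The name/value property pattern can be illustrated as follows; only topicName appears in this document, and the other property names are hypothetical:

```python
# A DPN's configuration as plain name/value pairs, read once at startup.
# Only topicName is named in this document; the other entries are invented.
properties = {
    "topicName": "atst.dhs.vbiBlue.dark.in",
    "frameWidth": "4096",   # hypothetical property
    "frameHeight": "4096",  # hypothetical property
}

def get_property(props, name, convert=str):
    """Fetch one property, applying a type conversion; fail loudly if absent."""
    if name not in props:
        raise KeyError(f"required property missing: {name}")
    return convert(props[name])

topic = get_property(properties, "topicName")
width = get_property(properties, "frameWidth", int)
```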
7.8.3.3.1.3 Events
The DHS interface provides the ability for a DPN to subscribe to events. The events subscribed to can be
from any system. This allows the DPN to be notified by a system when an event occurs, and take
appropriate actions as needed. At this time we do not anticipate the Dark DPN subscribing to any
external system events. However the functionality is provided should it be found necessary to do so.
7.8.3.3.2 Gain Data Processing Node
The Gain DPN relies on several input data sources for the information needed to configure and drive the
overall solution. Figure 80 shows the inputs required by the Gain DPN to perform its duties. These
include the input raw frames, properties, calibration data, and events. The next few sections will provide
details on each of these data dependencies.
Figure 80: Gain DPN Data Dependencies
7.8.3.3.2.1 Raw Frame Data
The Gain DPN performs acquisition of raw frame data from the atst.dhs.vbiBlue.gainIn topic of the BDT.
For each input frame received from the BDT, the DHS interface will provide the Gain Input DPN an
IBdtBuffer object which contains the image data buffer and a set of associated meta-data.
The image data buffer will contain the 4k x 4k frame data in 16-bit quantization format as produced by
the camera. The meta-data will contain a set of name/value pairs representing information needed for
processing of the frame. Some of these meta-data elements are generated by the camera with each frame
(e.g., timestamp) and others are passed from the VBI IC to the plug-in through the camera interface
(i.e. TBD). Please refer to section 7.8.4.1.2.2.1.1 for a list of required meta-data elements and the source
system for each. Additional details on how the frame and meta-data elements are obtained through the
DHS interface and the objects used to represent them can be found in section 7.8.2.2.1 of this document.
7.8.3.3.2.2 Properties
The Gain DPN is configurable through a set of properties stored in the property database. A property is
simply a name/value pair that contains information needed by the DPN for initialization and for
processing of incoming data. For example, the Gain DPN uses a property called topicName to identify
the BDT topic it should subscribe to in order to receive frame data from the camera. In general,
properties remain fixed during operation and would only be changed to adjust the system after
engineering analysis.
For more information on the properties available for the Gain DPN please refer to module interfaces
section 7.8.4.1.2.1 of this document.
7.8.3.3.2.3 Events
The DHS interface provides the ability for a DPN to subscribe to events. The events subscribed to can be
from any system. This allows the DPN to be notified by a system when an event occurs, and take
appropriate actions as needed. At this time we do not anticipate the Gain DPN subscribing to any
external system events. However the functionality is provided should it be found necessary to do so.
7.8.3.3.2.4 Binary Mask Calibration Data
The Gain DPN is responsible for acquiring the binary mask calibration file from the calibration store
during initialization. Access to the calibration store will be provided through a DHS service.
7.8.3.3.2.5 Dark Macro-tile Data
The Gain DPN is responsible for acquiring dark calibration frames published by the Dark DPN to the
atst.dhs.vbiBlue.darkOut topic. This allows the Gain DPN to automatically be notified when a new dark
calibration has been performed, and update its local dark calibration buffer accordingly. The BDT will
deliver the dark calibration frame as an IBdtBuffer object that contains a data buffer and meta-data. The
meta-data will be used to identify that this is a dark calibration frame.
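Routing an incoming buffer by its meta-data might look like the following sketch; the frameType key and its values are assumptions, not names defined by this document:

```python
def classify_buffer(metadata):
    """Route an incoming IBdtBuffer-style record by its meta-data.
    The 'frameType' key and its values are hypothetical."""
    frame_type = metadata.get("frameType")
    if frame_type == "dark":
        return "update-dark-buffer"    # refresh the local dark calibration
    if frame_type == "gain":
        return "update-gain-buffer"    # refresh the local gain calibration
    return "process-science-frame"     # ordinary raw frame

action = classify_buffer({"frameType": "dark"})
```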
7.8.3.3.3 Frame Selection Data Processing Node
The Frame Selection DPN relies on several input data sources for the information needed to configure and
drive the overall solution. Figure 81 shows the inputs required by the Frame Selection DPN to perform
its duties. These include the input raw frames, properties, calibration data, and events. The next few
sections will provide details on each of these data dependencies.
Figure 81: Frame Selection DPN Data Dependencies
7.8.3.3.3.1 Raw Frame Data
The Frame Selection DPN performs acquisition of raw frame data from the atst.dhs.vbiBlue.selectIn topic of
the BDT. For each input frame received from the BDT, the DHS interface will provide the Frame
Selection DPN an IBdtBuffer object which contains the image data buffer and a set of associated meta-
data. The image data buffer will contain the 4k x 4k frame data in 16-bit quantization format as produced
by the camera. The meta-data will contain a set of name/value pairs representing information needed for
processing of the frame. Some of these meta-data elements are generated by the camera with each frame
(e.g., timestamp) and others are passed from the VBI IC to the plug-in through the camera interface
(i.e. TBD). Please refer to section 7.8.4.1.3.2.1.1 for a list of required meta-data elements and the source
system for each. Additional details on how the frame and meta-data elements are obtained through the
DHS interface and the objects used to represent them can be found in section 7.8.2.2.1 of this document.
7.8.3.3.3.2 Properties
The Frame Selection DPN is configurable through a set of properties stored in the property database. A
property is simply a name/value pair that contains information needed by the DPN for initialization and
for processing of incoming data. For example, the Frame Selection DPN uses a property called
topicName to identify the BDT topic it should subscribe to in order to receive frame data from the
camera. In general, properties remain fixed during operation and would only be changed to adjust the
system after engineering analysis. For more information on the properties available for the Frame
Selection DPN please refer to module interfaces section 7.8.4.1.3.1 of this document.
7.8.3.3.3.3 Events
The DHS interface provides the ability for a DPN to subscribe to events. The events subscribed to can be
from any system. This allows the DPN to be notified by a system when an event occurs, and take
appropriate actions as needed. At this time we do not anticipate the Frame Selection DPN subscribing to
any external system events. However the functionality is provided should it be found necessary to do so.
7.8.3.3.3.4 Binary Mask Calibration Data
The Frame Selection DPN is responsible for acquiring the binary mask calibration file from the
calibration store during initialization. Access to the calibration store will be provided through a DHS
service.
7.8.3.3.3.5 Dark Calibration Data
The Frame Selection DPN is responsible for acquiring dark calibration images published by the Dark
DPN to the atst.dhs.vbiBlue.darkOut topic. This allows the Frame Selection DPN to automatically be
notified when a new dark calibration has been performed, and update its local dark calibration buffer
accordingly. The BDT will deliver the dark calibration frame as an IBdtBuffer object that contains a data
buffer and meta-data. The meta-data will be used to identify that this is a dark calibration frame.
7.8.3.3.3.6 Gain Calibration Data
The Frame Selection DPN is responsible for acquiring gain macro-tile calibration images published by the
Gain DPN to the atst.dhs.vbiBlue.gainOut topic. This allows the Frame Selection DPN to automatically
be notified when a new gain calibration has been performed, and update its local gain calibration buffer
accordingly. The BDT will deliver the gain calibration frame as an IBdtBuffer object that contains a data
buffer and meta-data. The meta-data will be used to identify that this is a gain calibration frame.
7.8.3.3.4 Detailed Display Data Processing Node
The Detailed Display DPN relies on several input data sources for the information needed to configure
and drive the overall solution. Figure 82 shows the inputs required by the Detailed Display DPN to
perform its duties. These include the input raw frames, properties, calibration data, and events. The next
few sections will provide details on each of these data dependencies.
Figure 82: Detailed Display DPN Data Dependencies
7.8.3.3.4.1 Raw Frame Data
The Detailed Display DPN performs acquisition of raw frame data from the atst.dhs.vbiBlue.detailIn topic of
the BDT. For each input frame received from the BDT, the DHS interface will provide the Detailed
Display DPN an IBdtBuffer object which contains the image data buffer and a set of associated meta-
data. The image data buffer will contain the 4k x 4k frame data in 16-bit quantization format as produced
by the camera. The meta-data will contain a set of name/value pairs representing information needed for
processing of the frame. Some of these meta-data elements are generated by the camera with each frame
(e.g., timestamp) and others are passed from the VBI IC to the plug-in through the camera interface
(i.e. TBD). Please refer to section 7.8.4.1.4.2.1.1 for a list of required meta-data elements and the source
system for each. Additional details on how the frame and meta-data elements are obtained through the
DHS interface and the objects used to represent them can be found in section 7.8.2.2.1 of this document.
7.8.3.3.4.2 Properties
The Detailed Display DPN is configurable through a set of properties stored in the property database. A
property is simply a name/value pair that contains information needed by the DPN for initialization and
for processing of incoming data. For example, the Detailed Display DPN uses a property called
topicName to identify the BDT topic it should subscribe to in order to receive frame data from the
camera. In general, properties remain fixed during operation and would only be changed to adjust the
system after engineering analysis. For more information on the properties available for the Detailed
Display DPN please refer to module interfaces section 7.8.4.1.4.1 of this document.
7.8.3.3.4.3 Events
The DHS interface provides the ability for a DPN to subscribe to events. The events subscribed to can be
from any system. This allows the DPN to be notified by a system when an event occurs, and take
appropriate actions as needed. At this time we do not anticipate the Detailed Display DPN subscribing to
any external system events. However the functionality is provided should it be found necessary to do so.
7.8.3.3.4.4 Binary Mask Calibration Data
The Detailed Display DPN is responsible for acquiring the binary mask calibration file from the
calibration store during initialization. Access to the calibration store will be provided through a DHS
service.
7.8.3.3.4.5 Dark Calibration Data
The Detailed Display DPN is responsible for acquiring dark calibration images published by the Dark
DPN to the atst.dhs.vbiBlue.darkOut topic. This allows the Detailed Display DPN to automatically be
notified when a new dark calibration has been performed, and update its local dark calibration buffer
accordingly. The BDT will deliver the dark calibration frame as an IBdtBuffer object that contains a data
buffer and meta-data. The meta-data will be used to identify that this is a dark calibration frame.
7.8.3.3.4.6 Gain Calibration Data
The Detailed Display DPN is responsible for acquiring gain macro-tile calibration images published by
the Gain DPN to the atst.dhs.vbiBlue.gainOut topic. This allows the Detailed Display DPN to
automatically be notified when a new gain calibration has been performed, and update its local gain
calibration buffer accordingly. The BDT will deliver the gain calibration frame as an IBdtBuffer object
that contains a data buffer and meta-data. The meta-data will be used to identify that this is a gain
calibration frame.
7.8.3.3.5 Speckle Input Data Processing Node
The Speckle Input DPN relies on several input data sources for the information needed to configure and
drive the overall Speckle solution. Figure 83 shows the inputs required by the Speckle Input DPN to
perform its duties. These include the input raw frames, properties, events, calibration data, and AO data.
The next few sections will provide details on each of these data dependencies.
Figure 83: Speckle Input DPN Data Dependencies
7.8.3.3.5.1 Raw Frame Data
The Speckle Input DPN performs acquisition of raw frame data from the atst.dhs.vbiBlue.speckle topic of the
BDT. For each input frame received from the BDT, the DHS interface will provide the Speckle Input
DPN an IBdtBuffer object which contains the image data buffer and a set of associated meta-data.
The image data buffer will contain the 4k x 4k frame data in 16-bit quantization format as produced by
the camera. The meta-data will contain a set of name/value pairs representing information needed for
processing of the frame by the Speckle processing solution. Some of these meta-data elements are
generated by the camera with each frame (e.g., timestamp) and others are passed from the VBI IC to
the plug-in through the camera interface (i.e. TBD). Please refer to section 7.8.4.1.5.2 for a list of
required meta-data elements and the source system for each. Additional details on how the frame and
meta-data elements are obtained through the DHS interface and the objects used to represent them can be
found in section 7.8.2.2.1 of this document.
7.8.3.3.5.2 Properties
The Speckle Input DPN is configurable through a set of properties stored in the property database. A
property is simply a name/value pair that contains information needed by the DPN for initialization and
for processing of incoming data. For example, the Speckle DPN uses a property called topicName to
identify the BDT topic it should subscribe to in order to receive frame data from the camera. In general,
properties remain fixed during operation and would only be changed to adjust the system after
engineering analysis. For more information on the properties available for the Speckle Input DPN please
refer to module interfaces section 7.8.4.1.5.1 of this document.
7.8.3.3.5.3 Events
The DHS interface provides the ability for a DPN to subscribe to events. The events subscribed to can be
from any system. This allows the DPN to be notified by a system when an event occurs, and take
appropriate actions as needed. At this time we do not anticipate the Speckle Input DPN subscribing to
any external system events. However the functionality is provided should it be found necessary to do so.
7.8.3.3.5.4 Gain Calibration Data
The Speckle Input DPN is responsible for acquiring the latest gain calibration image and distributing the
appropriate gain macro-tile calibration image to each Speckle Slave DPN. The Speckle Input DPN will
subscribe to a sub-topic that provides updated gain calibration files when they become available. Thus
when the Gain DPN produces a new calibration file, it will publish that information on a BDT topic that
the Speckle Input DPN has subscribed to as a sub-topic. This allows the Speckle Input DPN to
automatically be notified when a new gain calibration has been performed, and update its local calibration
file accordingly.
7.8.3.3.5.5 Dark Calibration Data
The Speckle Input DPN is responsible for acquiring the latest dark calibration images and distributing the
appropriate dark macro-tile calibration image to each Speckle Slave DPN. The Speckle Input DPN will
subscribe to a sub-topic that provides updated dark calibration files when they become available. Thus
when the Dark DPN produces a new calibration file, it will publish that information on a BDT topic that
the Speckle Input DPN has subscribed to as a sub-topic. This allows the Speckle Input DPN to
automatically be notified when a new dark calibration has been performed, and update its local calibration
file accordingly.
7.8.3.3.5.6 AO Covariance Data
As part of the Speckle reconstruction process, co-temporal data from the AO system must be ingested and
used to compute covariance matrices and the subsequent sub-image transfer functions. The AO matrix
data is expected to be produced by the WCCS at a rate of 2 kHz. The Speckle Input DPN will therefore
subscribe to this data stream using a sub-topic and process it in parallel with the camera frame data based
on timestamp matching. The result of the AO matrix data processing will be sent to each Speckle Slave
DPN where it will be used in the phase/amplitude combination step of the Speckle reconstruction
algorithm.
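The timestamp matching between the 2 kHz AO stream and the 30 Hz camera frames could be sketched as a nearest-timestamp lookup. This is a simplified illustration only; the real WCCS record format and matching policy are not specified here.

```python
import bisect

def match_ao_record(frame_ts, ao_timestamps):
    """Return the index of the AO record closest in time to frame_ts.
    ao_timestamps must be sorted (the 2 kHz stream arrives in order)."""
    i = bisect.bisect_left(ao_timestamps, frame_ts)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(ao_timestamps)]
    return min(candidates, key=lambda j: abs(ao_timestamps[j] - frame_ts))

ao_ts = [k * 0.0005 for k in range(6000)]   # 2 kHz samples spanning 3 s
frame_ts = 1.0 / 30                          # second camera frame, ~33.3 ms
idx = match_ao_record(frame_ts, ao_ts)       # nearest AO sample to the frame
```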
7.8.3.3.6 Speckle Slave Data Processing Node
The Speckle Slave DPN relies on several input data sources for the information needed to execute the
steps of the Speckle reconstruction algorithm. Figure 83 shows the inputs required by the Speckle Slave
DPN to perform its duties. These include the input macro-tiles, properties, events, macro-tile calibration
data, and macro-tile AO covariance data. The next few sections will provide details on each of these data
dependencies.
7.8.3.3.6.1 Macro-tile
The Speckle Slave DPN performs acquisition of macro-tiles from the atst.dhs.vbiBlue.speckleN topic of
the BDT, where N is the slave node number. For each input macro-tile received from the BDT, the DHS
interface will provide the Speckle Slave DPN an IBdtBuffer object which contains the image data buffer
and a set of associated meta-data. The image data buffer will contain the macro-tile data in 16-bit
quantization format. The meta-data will contain a set of name/value pairs representing information
needed for processing of the frame by the Speckle processing solution. Some of these meta-data elements
are generated by the camera with each frame (e.g., timestamp), some are passed from the VBI IC to the
plug-in through the camera interface (i.e. TBD), and others are added by the Speckle Input DPN to
provide attributes of the macro-tile.
Additional details on how the frame and meta-data elements are obtained through the DHS interface and
the objects used to represent them can be found in section 7.8.2.2.1 of this document.
7.8.3.3.6.2 Properties
The Speckle Slave DPN is configurable through a set of properties stored in the property database. A
property is simply a name/value pair that contains information needed by the DPN for initialization and
for processing of incoming data. For example, the Speckle Slave DPN uses a property called topicName
to identify the BDT topic it should subscribe to in order to receive macro-tile data from the camera. In
general, properties remain fixed during operation and would only be changed to adjust the system after
engineering analysis.
For more information on the properties available for the Speckle Slave DPN please refer to module
interfaces section 7.8.4.1.6.1 of this document.
7.8.3.3.6.3 Events
The DHS interface provides the ability for a DPN to subscribe to events. The events subscribed to can be
from any system. This allows the DPN to be notified by a system when an event occurs, and take
appropriate actions as needed. At this time we do not anticipate the Speckle Slave DPN subscribing to
any external system events. However the functionality is provided should it be found necessary to do so.
7.8.3.3.6.4 Gain Macro-tile Data
The Speckle Slave DPN is responsible for acquiring gain macro-tile calibration images published by the
Speckle Input DPN to the atst.dhs.vbiBlue.speckle topic. This allows the Speckle Slave DPN to
automatically be notified when a new gain calibration has been performed, and update its local gain
calibration buffer accordingly. The BDT will deliver the gain macro-tile as an IBdtBuffer object that
contains a data buffer and meta-data. The meta-data will be used to identify that this is a gain macro-tile
calibration image.
7.8.3.3.6.5 Dark Macro-tile Data
The Speckle Slave DPN is responsible for acquiring dark macro-tile calibration images published by the
Speckle Input DPN to the atst.dhs.vbiBlue.speckleN topic, where N is the slave node number. This allows
the Speckle Slave DPN to automatically be notified when a new dark calibration has been performed, and
update its local dark calibration buffer accordingly. The BDT will deliver the dark macro-tile as an
IBdtBuffer object that contains a data buffer and meta-data. The meta-data will be used to identify that
this is a dark macro-tile calibration image.
7.8.3.3.6.6 AO Macro-tile Covariance Data
As part of the Speckle reconstruction process, co-temporal data from the AO system must be ingested and
used to compute covariance matrices and the subsequent sub-image transfer functions. The AO matrix
data is expected to be produced by the WCCS at a rate of 2 kHz. The Speckle Input DPN handles this
data and distributes to each Speckle Slave DPN the appropriate AO covariance data based on the macro-
tile each slave is responsible for. The AO matrix data will be used by each Speckle Slave DPN in the
phase/amplitude combination step of the Speckle reconstruction algorithm.
7.8.3.3.7 Speckle Output Data Processing Node
The Speckle Output DPN relies on several input data sources for the information needed to execute the
re-assembly and transfer of the full frame reconstructed image. Figure 83 shows the inputs required by
the Speckle Output DPN to perform its duties. These include the input reconstructed macro-tiles,
properties, and events. The next few sections will provide details on each of these data dependencies.
7.8.3.3.7.1 Reconstructed Macro-tile
The Speckle Output DPN performs acquisition of reconstructed macro-tiles from the
atst.dhs.vbiBlue.speckleOut topic of the BDT. For each input reconstructed macro-tile received from the
BDT, the DHS interface will provide the Speckle Output DPN an IBdtBuffer object which contains the
image data buffer and a set of associated meta-data. The image data buffer will contain the reconstructed
macro-tile data in 32-bit floating point format. The meta-data will contain a set of name/value pairs
representing information needed for inclusion with the final full frame output.. Some of these meta-data
elements are generated by the camera with each frame (i.e. timestamp), some are passed from the VBI IC
to the plug-in through the camera interface (i.e. TBD), and others are added by the Speckle Input DPN to
provide attributes of the macro-tile.
Additional details on how the reconstructed macro-tile and meta-data elements are obtained through the
DHS interface and the objects used to represent them can be found in section 7.8.2.2.1 of this document.
7.8.3.3.7.2 Properties

The Speckle Output DPN is configurable through a set of properties stored in the property database. A property is simply a name/value pair that contains information needed by the DPN for initialization and for processing of incoming data. For example, the Speckle Output DPN uses a property called topicName to identify the BDT topic it should subscribe to in order to receive macro-tile data from the camera. In general, properties remain fixed during operation and would only be changed to adjust the system after engineering analysis.
For more information on the properties available for the Speckle Output DPN please refer to module
interface section 7.8.4.1.7.1 of this document.
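The name/value property mechanism described above can be sketched as a simple typed lookup with defaults. This is only an illustration; the real DPNs read their properties from the ATST property database, whose API is not shown in this document.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of name/value property lookup for a DPN; illustrative
// only, standing in for the ATST property database interface.
public class DpnProperties {
    private final Map<String, String> values = new HashMap<>();

    public void set(String name, String value) {
        values.put(name, value);
    }

    /** Return the property value, or the supplied default if unset. */
    public String get(String name, String defaultValue) {
        return values.getOrDefault(name, defaultValue);
    }

    /** Integer-valued properties such as maxData or histogramBinNum. */
    public int getInt(String name, int defaultValue) {
        String v = values.get(name);
        return (v == null) ? defaultValue : Integer.parseInt(v);
    }
}
```

A DPN would then resolve, for example, its subscription topic with `get("topicName", ...)` during initialization.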
7.8.3.3.7.3 Events

The DHS interface provides the ability for a DPN to subscribe to events. The events subscribed to can be from any system. This allows the DPN to be notified when an event occurs and to take appropriate action as needed. At this time we do not anticipate the Speckle Output DPN subscribing to any external system events; however, the functionality is provided should it be found necessary to do so.
7.8.4 Interface Description
7.8.4.1 Module Interfaces

The DPNs of the VBI Blue DPP solution each have their own interface. Each interface is a contract between the user and that module, indicating the exposed functions, the required inputs for each function, and the output that can be expected from each function under different input sets. The next several sections provide details on the interface for each DPN.
7.8.4.1.1 Dark Data Processing Node
7.8.4.1.1.1 Properties

The Dark DPN will be configurable using a set of properties. The following table lists the properties and is followed by a detailed explanation of each:

Name             Type     Units   Value                              Comment
topicName        string   N/A     atst.dhs.vbiBlue.darkIn            Topic to subscribe to
cameraLine       string   N/A     atst.dhs.vbiBlue                   Camera line where plug-in resides
maxData          integer  bytes   33554432 (4096x4096x2)             Max bytes of data as input to this plug-in
maxPluginData    integer  bytes   33554432 (4096x4096x2)             Max bytes of data produced by this plug-in
dpnHandlerClass  string   N/A     atst.dhs.vbiBlue.dark.DarkHandler  Name of class that implements this DPN
7.8.4.1.1.1.1 .topicName
Data Type: string
Units: N/A
Valid Values: darkIn
Default Value: N/A
The topicName property represents the DHS BDT topic name that the DPN will subscribe to.
7.8.4.1.1.1.2 .cameraLine

Data Type: string
Units: N/A
Valid Values: atst.dhs.vbiBlue
Default Value: N/A
The cameraLine property represents the camera line of the instrument in which the topicName will be
available.
7.8.4.1.1.1.3 .maxData

Data Type: integer
Units: N/A
Valid Values: .maxData >= 0
Default Value: 33554432 (4096 X 4096 X 2 bytes)
The maxData property represents the maximum bytes of data that the DPN will accept as input.
7.8.4.1.1.1.4 .maxPluginData

Data Type: integer
Units: N/A
Valid Values: .maxPlugInData >= 0
Default Value: 33554432 (4096 X 4096 X 2 bytes)
The maxPluginData property represents the maximum bytes of data that the DPN will produce as output.
7.8.4.1.1.1.5 .dpnHandlerClass

Data Type: string
Units: N/A
Valid Values: N/A
Default Value: atst.dhs.vbiBlue.dark.DarkHandler
The dpnHandlerClass property specifies the name of the class that provides the plug-in functionality.
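A class named by a property like .dpnHandlerClass is typically instantiated by reflection. The sketch below shows that mechanism in isolation; the DpnHandler interface and StubHandler class are illustrative stand-ins, not the actual DHS plug-in base types.

```java
// Sketch of loading a plug-in handler class by its configured name via
// reflection. DpnHandler and StubHandler are illustrative only.
public class HandlerLoader {
    public interface DpnHandler { /* processing callbacks omitted */ }

    /** Example handler, used only to demonstrate loading by name. */
    public static class StubHandler implements DpnHandler { }

    /** Instantiate the handler class named by the .dpnHandlerClass property. */
    public static DpnHandler load(String className) throws Exception {
        Class<?> cls = Class.forName(className);
        return (DpnHandler) cls.getDeclaredConstructor().newInstance();
    }
}
```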
7.8.4.1.1.2 Topic Interface: atst.dhs.vbiBlue.darkIn

The primary interface to the Dark DPN is through data passed on the atst.dhs.vbiBlue.darkIn DHS topic.
The Dark DPN is a subscriber to this topic and will act on all data delivered under this topic name. The
next few sections will provide details on the valid inputs and expected outputs for this interface.
7.8.4.1.1.2.1 Input Data Organization

For the Dark DPN to work correctly, input data must be organized in a particular fashion. These
requirements apply to the information included with each raw frame delivered on the topic, as well as the
relationship between contiguous raw frames delivered on the topic that comprise a frame set.
The Dark DPN expects a series of one or more contiguous raw frames to be delivered on the
atst.dhs.vbiBlue.darkIn topic. This series of raw frames constitutes a frame set, which is the required
input before generation of a dark calibration output frame can be performed. Each frame delivered on the
atst.dhs.vbiBlue.darkIn topic must contain raw frame information (meta-data and pixel data) as described
in section 7.8.2.2.1.1.
7.8.4.1.1.2.1.1 Required Meta Data

The Dark DPN uses the frame set and frame level meta-data to track and verify that the sequence of inputs received is valid. Frame set level meta-data is required on the first frame of the frame set, in addition to the frame level meta-data for that frame. Subsequent frames should only include their frame level meta-data. The table below provides details on the meta-data elements that are required by the interface, as well as the source for each.

Name            Type   Value        Comment                                                      Level     Source
BITPIX          int    16 or 32     Number of bits per pixel                                     FrameSet  CSS
NAXIS           int    1, 2, or 3   BDT value indicates number of axes in stream                 FrameSet  CSS
NAXIS1          int    n            Number of pixels in axis 1                                   FrameSet  CSS
NAXIS2          int    n            Number of pixels in axis 2                                   FrameSet  CSS
NAXIS3          int    n            Number of pixels in axis 3                                   FrameSet  CSS
DATE-OBS        str    time format  date/time of this image                                      Frame     CSS
OBSTASK         str    text         observation task: observing, calibration, focus, alignment   FrameSet  IC
DATATYPE        str    text         dark, flat, target type, pinhole, science                    FrameSet  IC
PIXELSIZX       float  x            pixel size for each filter in x [arcsec]                     FrameSet  IC
PIXELSIZY       float  y            pixel size for each filter in y [arcsec]                     FrameSet  IC
Frame Number    int    n            Frame number within a frame set                              Frame     CSS
Frames Per Set  int    n            Number of frames in this frame set                           FrameSet  CSS

7.8.4.1.1.2.1.2 Required Pixel Data

The pixel data provided in the IBdtBuffer object's byte[] buffer must be in 16-bit quantization format and must contain a number of bytes equal to the value of the .maxData property.

7.8.4.1.1.2.2 Expected Output

When valid input is provided to the Dark DPN as described above, the following output response will occur:

- Raw frames for the entire frame set are processed into a single calibration frame output
- The dark calibration frame is output to the atst.dhs.vbiBlue.darkOut topic
- The dark calibration frame is written to the ATST calibration store
- An atst.dhs.vbiBlue.dark.status event is generated (see 7.8.4.1.5.7.1)
- An atst.dhs.vbiBlue.dark.batch event is generated (see 7.8.4.1.5.7.2)

7.8.4.1.1.2.3 Invalid Input Scenarios

Input that does not follow the data organization explained in the previous sections will be rejected by the Dark DPN. The following is a list of invalid input scenarios:

- First frame in frame set does not include frame set meta-data
- Frame set meta-data is invalid
- Frame does not include frame level meta-data
- Frame level meta-data is invalid
- Frame pixel data buffer is invalid
- Frame number not valid for current frame set

When an invalid input scenario is detected, the following will occur:

- The system will log the error and alert the operations support team.
- An atst.dhs.vbiBlue.dark.status event will be generated with a status of "bad". This status will remain until the start of a new frame set is detected.
- An atst.dhs.vbiBlue.dark.batch event associated with the current frame set being processed will be generated with an eventType equal to "error".

7.8.4.1.1.3 Events Subscribed To

The following sections describe the events that the Dark DPN will subscribe to. These events provide information needed by the module to perform as expected.

None defined at this time.

7.8.4.1.1.4 Events Published

The following sections describe the events that the Dark DPN will publish. These events provide status information to other systems that are interested.
7.8.4.1.1.4.1 atst.dhs.vbiBlue.dark.status

This event provides current status information for the Dark DPN. It will be generated after processing of a batch, or every 3 seconds, whichever occurs first. The following attributes will be provided for this event:

Attribute Name  Type    Values            Description
status          string  good | bad | ill  Indicator of DPN health status
7.8.4.1.1.4.2 atst.dhs.vbiBlue.dark.batch

This event is generated by the Dark DPN at the beginning and end of each batch processed. The following attributes will be provided in this event:

Attribute Name  Type      Values              Description
eventType       string    start, stop, error  Type of event that occurred
timestamp       AtstDate                      Timestamp when event occurred
id              string                        Unique identifier for the batch
NOTE: The “start” event type occurs as soon as the first input frame in the burst is received. The “stop”
event type occurs as soon as the last frame is processed. The “error” event type may occur at any point
after the “start” event type should a problem occur during processing.
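The attribute set carried by a batch event can be sketched as below. The AtstDate type is replaced by a plain ISO-8601 string so the sketch is self-contained; the attribute names follow the table above, but this is not the actual event API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative construction of the batch event attribute set
// (eventType/timestamp/id), validating the allowed event types.
public class BatchEvent {
    public static Map<String, String> attributes(String eventType,
                                                 String timestamp,
                                                 String batchId) {
        if (!eventType.equals("start") && !eventType.equals("stop")
                && !eventType.equals("error")) {
            throw new IllegalArgumentException("unknown eventType: " + eventType);
        }
        Map<String, String> attrs = new LinkedHashMap<>();
        attrs.put("eventType", eventType);
        attrs.put("timestamp", timestamp);
        attrs.put("id", batchId);
        return attrs;
    }
}
```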
7.8.4.1.2 Gain Data Processing Node
7.8.4.1.2.1 Properties

The Gain DPN will be configurable using a set of properties. The following table lists the properties and is followed by a detailed explanation of each:

Name             Type     Units   Value                              Comment
topicName        string   N/A     atst.dhs.vbiBlue.gainIn            Topic to subscribe to
cameraLine       string   N/A     atst.dhs.vbiBlue                   Camera line where plug-in resides
maxData          integer  bytes   33554432 (4096x4096x2)             Max bytes of data as input to this plug-in
maxPluginData    integer  bytes   33554432 (4096x4096x2)             Max bytes of data produced by this plug-in
dpnHandlerClass  string   N/A     atst.dhs.vbiBlue.gain.GainHandler  Name of class that implements this DPN
7.8.4.1.2.1.1 .topicName
Data Type: string
Units: N/A
Valid Values: gainIn
Default Value: N/A
The topicName property represents the DHS BDT topic name that the DPN will subscribe to.
7.8.4.1.2.1.2 .cameraLine

Data Type: string
Units: N/A
Valid Values: atst.dhs.vbiBlue
Default Value: N/A
The cameraLine property represents the camera line of the instrument in which the topicName will be
available.
7.8.4.1.2.1.3 .maxData

Data Type: integer
Units: N/A
Valid Values: .maxData >= 0
Default Value: 33554432 (4096 X 4096 X 2 bytes)
The maxData property represents the maximum bytes of data that the DPN will accept as input.
7.8.4.1.2.1.4 .maxPluginData

Data Type: integer
Units: N/A
Valid Values: .maxPlugInData >= 0
Default Value: 33554432 (4096 X 4096 X 2 bytes)
The maxPluginData property represents the maximum bytes of data that the DPN will produce as output.
7.8.4.1.2.1.5 .dpnHandlerClass

Data Type: string
Units: N/A
Valid Values: N/A
Default Value: atst.dhs.vbiBlue.gain.GainHandler
The dpnHandlerClass property specifies the name of the class that provides the plug-in functionality.
7.8.4.1.2.2 Topic Interface: atst.dhs.vbiBlue.gainIn

The primary interface to the Gain DPN is through data passed on the atst.dhs.vbiBlue.gainIn DHS topic.
The Gain DPN is a subscriber to this topic and will act on all data delivered under this topic name. The
next few sections will provide details on the valid inputs and expected outputs for this interface.
7.8.4.1.2.2.1 Input Data Organization

For the Gain DPN to work correctly, input data must be organized in a particular fashion. These
requirements apply to the information included with each raw frame delivered on the topic, as well as the
relationship between contiguous raw frames delivered on the topic that comprise a frame set.
The Gain DPN expects a series of one or more contiguous raw frames to be delivered on the atst.dhs.vbiBlue.gainIn topic. This series of raw frames constitutes a frame set, which is the required input before generation of a gain calibration output frame can be performed. Each frame delivered on the atst.dhs.vbiBlue.gainIn topic must contain raw frame information (meta-data and pixel data) as described in section 7.8.2.2.1.1.
7.8.4.1.2.2.1.1 Required Meta Data

The Gain DPN uses the frame set and frame level meta-data to track and verify that the sequence of inputs received is valid. Frame set level meta-data is required on the first frame of the frame set, in addition to the frame level meta-data for that frame. Subsequent frames should only include their frame level meta-data. The table below provides details on the meta-data elements that are required by the interface, as well as the source for each.

Name            Type   Value        Comment                                                      Level     Source
BITPIX          int    16 or 32     Number of bits per pixel                                     FrameSet  CSS
NAXIS           int    1, 2, or 3   BDT value indicates number of axes in stream                 FrameSet  CSS
NAXIS1          int    n            Number of pixels in axis 1                                   FrameSet  CSS
NAXIS2          int    n            Number of pixels in axis 2                                   FrameSet  CSS
NAXIS3          int    n            Number of pixels in axis 3                                   FrameSet  CSS
DATE-OBS        str    time format  date/time of this image                                      Frame     CSS
OBSTASK         str    text         observation task: observing, calibration, focus, alignment   FrameSet  IC
DATATYPE        str    text         dark, flat, target type, pinhole, science                    FrameSet  IC
PIXELSIZX       float  x            pixel size for each filter in x [arcsec]                     FrameSet  IC
PIXELSIZY       float  y            pixel size for each filter in y [arcsec]                     FrameSet  IC
Frame Number    int    n            Frame number within a frame set                              Frame     CSS
Frames Per Set  int    n            Number of frames in this frame set                           FrameSet  CSS

7.8.4.1.2.2.1.2 Required Pixel Data

The pixel data provided in the IBdtBuffer object's byte[] buffer must be in 16-bit quantization format and must contain a number of bytes equal to the value of the .maxData property.

7.8.4.1.2.2.2 Expected Output

When valid input is provided to the Gain DPN as described above, the following output response will occur:

- Raw frames for the entire frame set are processed into a single calibration frame output
- The gain calibration frame is output to the atst.dhs.vbiBlue.gainOut topic
- The gain calibration frame is written to the ATST calibration store
- An atst.dhs.vbiBlue.gain.status event is generated (see 7.8.4.1.5.7.1)
- An atst.dhs.vbiBlue.gain.batch event is generated (see 7.8.4.1.5.7.2)

7.8.4.1.2.2.3 Invalid Input Scenarios

Input that does not follow the data organization explained in the previous sections will be rejected by the Gain DPN. The following is a list of invalid input scenarios:

- First frame in frame set does not include frame set meta-data
- Frame set meta-data is invalid
- Frame does not include frame level meta-data
- Frame level meta-data is invalid
- Frame pixel data buffer is invalid
- Frame number not valid for current frame set
When an invalid input scenario is detected, the following will occur:

- The system will log the error and alert the operations support team.
- An atst.dhs.vbiBlue.gain.status event will be generated with a status of "bad". This status will remain until the start of a new frame set is detected.
- An atst.dhs.vbiBlue.gain.batch event associated with the current frame set being processed will be generated with an eventType equal to "error".
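The meta-data checks behind the invalid-input scenarios above can be sketched as follows. Only a few of the required keywords are checked, the method names are illustrative, and zero-based frame numbering is an assumption.

```java
import java.util.Map;

// Sketch of frame-set meta-data validation implied by the invalid-input
// scenarios; illustrative, not the actual DPN implementation.
public class FrameSetValidator {
    /** True if the first frame of a frame set carries a minimum subset of
     *  the frame-set level meta-data listed in the tables above. */
    public static boolean hasFrameSetMetaData(Map<String, Object> meta) {
        return meta.containsKey("BITPIX")
            && meta.containsKey("NAXIS")
            && meta.containsKey("OBSTASK")
            && meta.containsKey("DATATYPE");
    }

    /** A frame number must fall inside the declared frame set
     *  (zero-based numbering assumed here). */
    public static boolean frameNumberValid(int frameNumber, int framesPerSet) {
        return frameNumber >= 0 && frameNumber < framesPerSet;
    }
}
```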
7.8.4.1.2.3 Events Subscribed To

The following sections describe the events that the Gain DPN will subscribe to. These events provide
information needed by the module to perform as expected.
None defined at this time.
7.8.4.1.2.4 Events Published

The following sections describe the events that the Gain DPN will publish. These events provide status
information to other systems that are interested.
7.8.4.1.2.4.1 atst.dhs.vbiBlue.gain.status

This event provides current status information for the Gain DPN. It will be generated after processing of a batch, or every 3 seconds, whichever occurs first. The following attributes will be provided for this event:

Attribute Name  Type    Values            Description
status          string  good | bad | ill  Indicator of DPN health status
7.8.4.1.2.4.2 atst.dhs.vbiBlue.gain.batch

This event is generated by the Gain DPN at the beginning and end of each batch processed. The following attributes will be provided in this event:

Attribute Name  Type      Values              Description
eventType       string    start, stop, error  Type of event that occurred
timestamp       AtstDate                      Timestamp when event occurred
id              string                        Unique identifier for the batch
NOTE: The “start” event type occurs as soon as the first input frame in the burst is received. The “stop”
event type occurs as soon as the last frame is processed. The “error” event type may occur at any point
after the “start” event type should a problem occur during processing.
7.8.4.1.3 Frame Selection Data Processing Node
7.8.4.1.3.1 Properties

The Frame Selection DPN will be configurable using a set of properties. The following table lists the properties and is followed by a detailed explanation of each:

Name             Type     Units   Value                                  Comment
topicName        string   N/A     atst.dhs.vbiBlue.selectIn              Topic to subscribe to
cameraLine       string   N/A     atst.dhs.vbiBlue                       Camera line where plug-in resides
maxData          integer  bytes   33554432 (4096x4096x2)                 Max bytes of data as input to this plug-in
maxPluginData    integer  bytes   33554432 (4096x4096x2)                 Max bytes of data produced by this plug-in
dpnHandlerClass  string   N/A     atst.dhs.vbiBlue.select.SelectHandler  Name of class that implements this DPN
histogramBinNum  integer  bins    128                                    Number of bins to use in histogram analysis
roiStartX        integer  N/A     0..4096                                Lower left X pixel coordinate of ROI
roiStartY        integer  N/A     0..4096                                Lower left Y pixel coordinate of ROI
roiEndX          integer  N/A     0..4096                                Upper right X pixel coordinate of ROI
roiEndY          integer  N/A     0..4096                                Upper right Y pixel coordinate of ROI
deltaX           integer  pixels  128                                    Max delta allowed for ROI in X
deltaY           integer  pixels  128                                    Max delta allowed for ROI in Y
7.8.4.1.3.1.1 .topicName
Data Type: string
Units: N/A
Valid Values: selectIn
Default Value: N/A
The topicName property represents the DHS BDT topic name that the DPN will subscribe to.
7.8.4.1.3.1.2 .cameraLine

Data Type: string
Units: N/A
Valid Values: atst.dhs.vbiBlue
Default Value: N/A
The cameraLine property represents the camera line of the instrument in which the topicName will be
available.
7.8.4.1.3.1.3 .maxData

Data Type: integer
Units: N/A
Valid Values: .maxData >= 0
Default Value: 33554432 (4096 X 4096 X 2 bytes)
The maxData property represents the maximum bytes of data that the DPN will accept as input.
7.8.4.1.3.1.4 .maxPluginData

Data Type: integer
Units: N/A
Valid Values: .maxPlugInData >= 0
Default Value: 33554432 (4096 X 4096 X 2 bytes)
The maxPluginData property represents the maximum bytes of data that the DPN will produce as output.
7.8.4.1.3.1.5 .dpnHandlerClass

Data Type: string
Units: N/A
Valid Values: N/A
Default Value: atst.dhs.vbiBlue.select.SelectHandler
The dpnHandlerClass property specifies the name of the class that provides the plug-in functionality.
7.8.4.1.3.1.6 .histogramBinNum

Data Type: integer
Units: N/A
Valid Values: .histogramBinNum > 0
Default Value: 128
The histogramBinNum property represents the number of bins to use when performing histogram based
metric analysis on the input data.
7.8.4.1.3.1.7 .roiStartX

Data Type: Integer
Units: N/A
Valid Values: 0 <= roiStartX <= 4096
Default Value: 0
The roiStartX property represents the lower left start X coordinate for the ROI.
7.8.4.1.3.1.8 .roiStartY

Data Type: Integer
Units: N/A
Valid Values: 0 <= roiStartY <= 4096
Default Value: 0
The roiStartY property represents the lower left start Y coordinate for the ROI.
7.8.4.1.3.1.9 .roiEndX

Data Type: Integer
Units: N/A
Valid Values: 0 <= roiEndX <= 4096
Default Value: 128
The roiEndX property represents the upper right end X coordinate for the ROI.
7.8.4.1.3.1.10 .roiEndY

Data Type: Integer
Units: N/A
Valid Values: 0 <= roiEndY <= 4096
Default Value: 128
The roiEndY property represents the upper right end Y coordinate for the ROI.
7.8.4.1.3.1.11 .deltaX

Data Type: Integer
Units: pixels
Valid Values: 0 < deltaX <= 128
Default Value: 128
The deltaX property represents the maximum delta allowed for the ROI in X.
7.8.4.1.3.1.12 .deltaY

Data Type: Integer
Units: pixels
Valid Values: 0 < deltaY <= 128
Default Value: 128
The deltaY property represents the maximum delta allowed for the ROI in Y.
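Taken together, the ROI properties above imply a consistency check on the configured region. The sketch below assumes deltaX/deltaY bound the ROI extent in each axis — an interpretation of the property descriptions, not a statement of the actual implementation.

```java
// Sketch of ROI consistency checks implied by roiStartX/Y, roiEndX/Y,
// and deltaX/Y; illustrative only.
public class RoiCheck {
    public static boolean valid(int startX, int startY, int endX, int endY,
                                int deltaX, int deltaY) {
        // coordinates must lie on the 4096x4096 detector
        boolean inRange = startX >= 0 && endX <= 4096
                       && startY >= 0 && endY <= 4096;
        // upper right corner must not precede lower left corner
        boolean ordered = endX >= startX && endY >= startY;
        // assumed reading: ROI extent may not exceed the configured delta
        boolean withinDelta = (endX - startX) <= deltaX
                           && (endY - startY) <= deltaY;
        return inRange && ordered && withinDelta;
    }
}
```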
7.8.4.1.3.2 Topic Interface: atst.dhs.vbiBlue.selectIn

The primary interface to the Frame Selection DPN is through data passed on the atst.dhs.vbiBlue.selectIn
DHS topic. The Frame Selection DPN is a subscriber to this topic and will act on all data delivered under
this topic name. The next few sections will provide details on the valid inputs and expected outputs for
this interface.
7.8.4.1.3.2.1 Input Data Organization

For the Frame Selection DPN to work correctly, input data must be organized in a particular fashion.
These requirements apply to the information included with each raw frame delivered on the topic, as well
as the relationship between contiguous raw frames delivered on the topic that comprise a frame set.
The Frame Selection DPN expects a series of one or more contiguous raw frames to be delivered on the
atst.dhs.vbiBlue.selectIn topic. This series of raw frames constitutes a frame set, which is the required
input before selection and output of the best frames can be performed. Each frame delivered on the
atst.dhs.vbiBlue.selectIn topic must contain raw frame information (meta-data and pixel data) as
described in section 7.8.2.2.1.1.
7.8.4.1.3.2.1.1 Required Meta Data

The Frame Selection DPN uses the frame set and frame level meta-data to track and verify that the sequence of inputs received is valid. Frame set level meta-data is required on the first frame of the frame set, in addition to the frame level meta-data for that frame. Subsequent frames should only include their frame level meta-data. The following table provides details on the meta-data elements that are required by the interface, as well as the source for each.

Name                  Type   Value                             Comment                                                      Level     Source
BITPIX                int    16 or 32                          Number of bits per pixel                                     FrameSet  CSS
NAXIS                 int    1, 2, or 3                        BDT value indicates number of axes in stream                 FrameSet  CSS
NAXIS1                int    n                                 Number of pixels in axis 1                                   FrameSet  CSS
NAXIS2                int    n                                 Number of pixels in axis 2                                   FrameSet  CSS
NAXIS3                int    n                                 Number of pixels in axis 3                                   FrameSet  CSS
DATE-OBS              str    time format                       date/time of this image                                      Frame     CSS
OBSTASK               str    text                              observation task: observing, calibration, focus, alignment   FrameSet  IC
DATATYPE              str    text                              dark, flat, target type, pinhole, science                    FrameSet  IC
PIXELSIZX             float  x                                 pixel size for each filter in x [arcsec]                     FrameSet  IC
PIXELSIZY             float  y                                 pixel size for each filter in y [arcsec]                     FrameSet  IC
Frame Number          int    n                                 Frame number within a frame set                              Frame     CSS
Frames Per Set        int    n                                 Number of frames in this frame set                           FrameSet  CSS
Images to Select      int    0..frames per set                 Number of best images to select                              FrameSet  IC
Calibrate Image       bool   Yes or No                         Flag indicating whether to calibrate ROI before selection    FrameSet  IC
Image quality metric  str    NormContract, ModulusOfGradient   Type of image quality metric to apply                        FrameSet  IC

7.8.4.1.3.2.1.2 Required Pixel Data

The pixel data provided in the IBdtBuffer object's byte[] buffer must be in 16-bit quantization format and must contain a number of bytes equal to the value of the .maxData property.

7.8.4.1.3.2.2 Expected Output

When valid input is provided to the Frame Selection DPN as described above, the following output response will occur:

- Raw frames for the entire frame set are processed based on the user parameters
- The best n of m frames from the input are selected; the rest are discarded
- Meta-data for the selected frames is updated
- The selected frames are output to the atst.dhs.vbiBlue.selectOut topic
- An atst.dhs.vbiBlue.select.status event is generated (see 7.8.4.1.5.7.1)
- An atst.dhs.vbiBlue.select.batch event is generated (see 7.8.4.1.5.7.2)

7.8.4.1.3.2.3 Invalid Input Scenarios

Input that does not follow the data organization explained in the previous sections will be rejected by the Frame Selection DPN. The following is a list of invalid input scenarios:

- First frame in frame set does not include frame set meta-data
- Frame set meta-data is invalid
- Frame does not include frame level meta-data
- Frame level meta-data is invalid
- Frame pixel data buffer is invalid
- Frame number not valid for current frame set

When an invalid input scenario is detected, the following will occur:

- The system will log the error and alert the operations support team.
- An atst.dhs.vbiBlue.select.status event will be generated with a status of "bad". This status will remain until the start of a new frame set is detected.
- An atst.dhs.vbiBlue.select.batch event associated with the current frame set being processed will be generated with an eventType equal to "error".
7.8.4.1.3.3 Events Subscribed To

The following sections describe the events that the Frame Selection DPN will subscribe to. These events
provide information needed by the module to perform as expected.
None defined at this time.
7.8.4.1.3.4 Events Published

The following sections describe the events that the Frame Selection DPN will publish. These events
provide status information to other systems that are interested.
7.8.4.1.3.4.1 atst.dhs.vbiBlue.select.status

This event provides current status information for the Frame Selection DPN. It will be generated after processing of a batch, or every 3 seconds, whichever occurs first. The following attributes will be provided for this event:

Attribute Name  Type    Values            Description
status          string  good | bad | ill  Indicator of DPN health status
7.8.4.1.3.4.2 atst.dhs.vbiBlue.select.batch

This event is generated by the Frame Selection DPN at the beginning and end of each batch processed. The following attributes will be provided in this event:

Attribute Name  Type      Values              Description
eventType       string    start, stop, error  Type of event that occurred
timestamp       AtstDate                      Timestamp when event occurred
id              string                        Unique identifier for the batch
NOTE: The “start” event type occurs as soon as the first input frame in the burst is received. The “stop”
event type occurs as soon as the last frame is processed successfully. The “error” event type may occur at
any point after the “start” event type should a problem occur during processing.
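The "best n of m" behavior of the Frame Selection DPN can be sketched as a ranking step over per-frame quality scores. Computation of the scores themselves (e.g., by the configured image-quality metric) is omitted; the class and method names are illustrative.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch of best-n-of-m frame selection: keep the indices of the n
// highest-scoring frames in a frame set. Illustrative only.
public class FrameSelector {
    /** Return the indices of the n highest-scoring frames,
     *  best first. */
    public static List<Integer> selectBest(double[] scores, int n) {
        List<Integer> idx = new ArrayList<>();
        for (int i = 0; i < scores.length; i++) idx.add(i);
        // rank frame indices by descending quality score
        idx.sort(Comparator.comparingDouble((Integer i) -> scores[i]).reversed());
        return new ArrayList<>(idx.subList(0, Math.min(n, idx.size())));
    }
}
```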
7.8.4.1.4 Detailed Display Data Processing Node
7.8.4.1.4.1 Properties

The Detailed Display DPN will be configurable using a set of properties. The following table lists the properties and is followed by a detailed explanation of each:

Name             Type     Units   Value                                         Comment
topicName        string   N/A     atst.dhs.vbiBlue.detailIn                     Topic to subscribe to
cameraLine       string   N/A     atst.dhs.vbiBlue                              Camera line where plug-in resides
maxData          integer  bytes   33554432 (4096x4096x2)                        Max bytes of data as input to this plug-in
maxPluginData    integer  bytes   33554432 (4096x4096x2)                        Max bytes of data produced by this plug-in
dpnHandlerClass  string   N/A     atst.dhs.vbiBlue.detail.DetailDisplayHandler  Name of class that implements this DPN
7.8.4.1.4.1.1 .topicName
Data Type: string
Units: N/A
Valid Values: detailIn
Default Value: N/A
The topicName property represents the DHS BDT topic name that the DPN will subscribe to.
7.8.4.1.4.1.2 .cameraLine

Data Type: string
Units: N/A
Valid Values: atst.dhs.vbiBlue
Default Value: N/A
The cameraLine property represents the camera line of the instrument in which the topicName will be
available.
7.8.4.1.4.1.3 .maxData

Data Type: integer
Units: N/A
Valid Values: .maxData >= 0
Default Value: 33554432 (4096 X 4096 X 2 bytes)
The maxData property represents the maximum bytes of data that the DPN will accept as input.
7.8.4.1.4.1.4 .maxPluginData

Data Type: integer
Units: N/A
Valid Values: .maxPlugInData >= 0
Default Value: 33554432 (4096 X 4096 X 2 bytes)
The maxPluginData property represents the maximum bytes of data that the DPN will produce as output.
7.8.4.1.4.1.5 .dpnHandlerClass

Data Type: string
Units: N/A
Valid Values: N/A
Default Value: atst.dhs.vbiBlue.detail.DetailDisplayHandler
The dpnHandlerClass property specifies the name of the class that provides the plug-in functionality.
7.8.4.1.4.2 Topic Interface: atst.dhs.vbiBlue.detailIn

The primary interface to the Detailed Display DPN is through data passed on the atst.dhs.vbiBlue.detailIn DHS topic. The Detailed Display DPN is a subscriber to this topic and will act on all data delivered under
this topic name. The next few sections will provide details on the valid inputs and expected outputs for
this interface.
7.8.4.1.4.2.1 Input Data Organization

For the Detailed Display DPN to work correctly, input data must be organized in a particular fashion.
These requirements apply to the information included with each raw frame delivered on the topic.
The Detailed Display DPN expects raw frames to be delivered on the atst.dhs.vbiBlue.detailIn topic.
Each frame delivered on the atst.dhs.vbiBlue.detailIn topic must contain raw frame information (meta-
data and pixel data) as described in section 7.8.2.2.1.1.
7.8.4.1.4.2.1.1 Required Meta Data

The Detailed Display DPN uses the frame level meta-data for display to the user in the Detailed Display. Frame set level meta-data is required on the first frame of the frame set, in addition to the frame level meta-data for that frame. Subsequent frames should only include their frame level meta-data. The following table provides details on the meta-data elements that are required by the interface, as well as the source for each.

Name       Type    Value                      Comment                                                          Level     Source
BITPIX     int     16 or 32                   Number of bits per pixel                                         FrameSet  CSS
NAXIS      int     0, 2, or 3                 BDT value indicates number of axes in stream                     FrameSet  CSS
NAXIS1     int     n                          Number of pixels in axis 1                                       FrameSet  CSS
NAXIS2     int     n                          Number of pixels in axis 2                                       FrameSet  CSS
NAXIS3     int     n                          Number of pixels in axis 3                                       FrameSet  CSS
DATE-BGN   str     time format                Date/time of first image in data set                             FrameSet  CSS
DATE-OBS   str     time format                date/time of this image                                          Frame     CSS
FRATE      float                              frame rate camera operated in                                    FrameSet  CSS
OBSTASK    str     text                       observation task: observing, calibration, focus, alignment       FrameSet  IC
OPMODE     str     text                       operation mode: sequence, field sample, center-to-limb, full sun ObsBlock  IC
FOVID      int     n                          identification number of this field sampling subfield            FrameSet  IC
FOVN       int     n                          number of field sampling subfields                               FrameSet  IC
FOVPAT     string  text                       field sampling pattern                                           FrameSet  IC
CAOPMODE   str     text                       camera operation mode: single frame, multi frame, burst          FrameSet  CSS
DATATYPE   str     text                       dark, flat, target type, pinhole, science                        FrameSet  IC
FILTER     int     1-5                        filter wheel position                                            FrameSet  IC
WAVELGTH   float   393.3-486.1 / 656.3-854.2  for blue branch / red branch of VBI                              FrameSet  IC
EXPTIME    float   n                          exposure time of this image [ms]                                 FrameSet  CSS
DELTA_T    float   n                          delta t of this image to previous image [ms]                     ObsBlock  CSS
PIXELSIZX  float   x                          pixel size for each filter in x [arcsec]                         FrameSet  IC
PIXELSIZY  float   y                          pixel size for each filter in y [arcsec]                         FrameSet  IC
BIN_ENB    bool    T or F                     binning enabled: true or false                                   FrameSet  CSS
BINSIZX    int     x                          (BIN_ENB) ? binning area in pixels x : -1                        FrameSet  CSS
BINSIZY    int     y                          (BIN_ENB) ? binning area in pixels y : -1                        FrameSet  CSS
ROI_ENB    int     n                          Number of enabled regions of interest on chip                    FrameSet  CSS
ROIOXn     int     x                          (n<ROI_ENB) ? ROIn origin pixel x : -1                           FrameSet  CSS
ROIOYn     int     y                          (n<ROI_ENB) ? ROIn origin pixel y : -1                           FrameSet  CSS
TAZIMUTH   float   n                          azimuth angle of telescope [deg]                                 FrameSet  IC
TELEVATN   float   n                          elevation of telescope [deg]                                     FrameSet  IC
TTBLANGL   float   n                          Coude table angle [deg]                                          FrameSet  IC
PAC_ENB    bool    T or F                     GOS PAC enabled: true or false                                   FrameSet  IC
PAC_ID     int     n                          identification number of modulator in beam                       FrameSet  IC
PAC_STAT   int     n                          modulation state encoded in image                                Frame     IC
7.8.4.1.4.2.1.2 Required Pixel Data

The pixel data provided in the IBdtBuffer object's byte[] buffer must be in 16-bit quantization format and must have a number of bytes equal to the value of the .maxData property.
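As a sketch, the 16-bit buffer constraint above can be enforced before processing begins. The helper below is illustrative only; the class and method names are assumptions, not part of the published DPN interface:

```java
import java.nio.ByteBuffer;
import java.nio.ShortBuffer;

public class PixelBufferCheck {

    // Rejects a buffer whose length does not match the configured
    // .maxData property, then unpacks it as 16-bit pixel values.
    static short[] unpack16Bit(byte[] buffer, int maxData) {
        if (buffer == null || buffer.length != maxData) {
            throw new IllegalArgumentException(
                "pixel buffer must be exactly " + maxData + " bytes");
        }
        ShortBuffer pixels = ByteBuffer.wrap(buffer).asShortBuffer();
        short[] out = new short[pixels.remaining()];
        pixels.get(out);
        return out;
    }

    public static void main(String[] args) {
        // A 2x2 frame of 16-bit pixels occupies 8 bytes.
        short[] px = unpack16Bit(new byte[8], 8);
        System.out.println(px.length); // 4
    }
}
```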
7.8.4.1.4.2.2 Expected Output

When valid input is provided to the Detailed Display DPN as described above, the following output response will occur:

- Input raw frame calibrated for full frame
- Output selected frames to atst.dhs.vbiBlue.detailOut topic
- Generate atst.dhs.vbiBlue.detail.status event (see 7.8.4.1.4.4.1)
- Generate atst.dhs.vbiBlue.detail.batch event (see 7.8.4.1.4.4.2)
7.8.4.1.4.2.3 Invalid Input Scenarios

Input that does not follow the data organization explained in the previous sections will be rejected by the Detailed Display DPN. The following is a list of invalid input scenarios:

- First frame in frame set does not include frame set meta-data
- Frame set meta-data is invalid
- Frame does not include frame level meta-data
- Frame level meta-data is invalid
- Frame pixel data buffer is invalid
- Frame number not valid for current frame set

When an invalid input scenario is detected, the following will occur:

- System will log the error and alert operations support team.
- atst.dhs.vbiBlue.detail.status event will be generated with a status of "bad". This status will remain until the start of a new frame set is detected.
- atst.dhs.vbiBlue.detail.batch event associated with the current frame set being processed will be generated with an eventType equal to "error".
7.8.4.1.4.3 Events Subscribed To

The following sections describe the events that the Detailed Display DPN will subscribe to. These events provide information needed by the module to perform as expected.

None defined at this time.
7.8.4.1.4.4 Events Published

The following sections describe the events that the Detailed Display DPN will publish. These events provide status information to other systems that are interested.
7.8.4.1.4.4.1 atst.dhs.vbiBlue.detail.status

This event provides current status information for the Detailed Display DPN. It will be generated after processing of a batch, or every 3 seconds, whichever occurs first. The following attributes will be provided for this event:

| Attribute Name | Type | Values | Description |
|---|---|---|---|
| status | string | good, bad, ill | Indicator of DPN health status |
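The publish cadence ("after processing of a batch, or every 3 seconds, whichever occurs first") reduces to a simple deadline check. A minimal sketch with hypothetical names, not the actual DPN implementation:

```java
public class StatusHeartbeat {

    static final long INTERVAL_MS = 3000; // publish at least every 3 s
    private long lastPublishMs;

    StatusHeartbeat(long nowMs) {
        this.lastPublishMs = nowMs;
    }

    // True when a batch just completed or 3 s have elapsed since the
    // last status event, whichever comes first.
    boolean duePublish(long nowMs, boolean batchCompleted) {
        return batchCompleted || (nowMs - lastPublishMs) >= INTERVAL_MS;
    }

    void markPublished(long nowMs) {
        lastPublishMs = nowMs;
    }

    public static void main(String[] args) {
        StatusHeartbeat hb = new StatusHeartbeat(0);
        System.out.println(hb.duePublish(1000, false)); // false: no batch, < 3 s
        System.out.println(hb.duePublish(1000, true));  // true: batch completed
        System.out.println(hb.duePublish(3000, false)); // true: 3 s elapsed
    }
}
```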
7.8.4.1.4.4.2 atst.dhs.vbiBlue.detail.batch

This event is generated by the Detailed Display DPN at the beginning and end of each frame processed. The following attributes will be provided in this event:

| Attribute Name | Type | Values | Description |
|---|---|---|---|
| eventType | string | start, stop, error | Type of event that occurred |
| timestamp | AtstDate | | Timestamp when event occurred |
| id | string | | Unique identifier for the batch |
7.8.4.1.5 Speckle Input Data Processing Node
7.8.4.1.5.1 Properties

The Speckle Input DPN will be configurable using a set of properties. The following table lists the properties and is followed by a detailed explanation of each:

| Name | Type | Units | Value | Comment |
|---|---|---|---|---|
| topicName | string | N/A | atst.dhs.vbiBlue.speckle | Topic to subscribe to |
| cameraLine | string | N/A | atst.dhs.vbiBlue | Camera line where plug-in resides |
| maxData | integer | bytes | 33554432 (4096x4096x2) | Max bytes of data as input to this plug-in |
| maxPluginData | integer | bytes | 2097152 (1024x1024x2) | Max bytes of data produced from this plug-in |
| dpnHandlerClass | string | N/A | atst.dhs.vbiBlue.speckle.MasterInputHandler | Name of class that implements this DPN |
| subtopic.name.1 | string | N/A | atst.dhs.vbiBlue.gain | Sub-topic name to subscribe to |
| subtopic.cameraLine.1 | string | N/A | atst.dhs.vbiBlue | Camera line where sub-topic exists |
| subtopic.maxData.1 | integer | bytes | 33554432 | Max bytes of data as input from this sub-topic |
| subtopic.key.1 | string | N/A | gain | Key used to identify sub-topic to plug-in interface |
| subtopic.name.2 | string | N/A | atst.dhs.vbiBlue.dark | Sub-topic name to subscribe to |
| subtopic.cameraLine.2 | string | N/A | atst.dhs.vbiBlue | Camera line where sub-topic exists |
| subtopic.maxData.2 | integer | bytes | 33554432 | Max bytes of data as input from this sub-topic |
| subtopic.key.2 | string | N/A | dark | Key used to identify sub-topic to plug-in interface |
| subtopic.name.3 | string | N/A | atst.dhs.wccs.aoReconMatrix | Sub-topic name to subscribe to |
| subtopic.cameraLine.3 | string | N/A | atst.dhs.wccs | Camera line where sub-topic exists |
| subtopic.maxData.3 | integer | bytes | TBD | Max bytes of data as input from this sub-topic |
| subtopic.key.3 | string | N/A | ao | Key used to identify sub-topic to plug-in interface |
| macroTileSizeX | integer | pixels | 1024 | Size of each macro-tile in X pixels |
| macroTileSizeY | integer | pixels | 1024 | Size of each macro-tile in Y pixels |
7.8.4.1.5.1.1 .topicName
Data Type: string
Units: N/A
Valid Values: speckle
Default Value: N/A
The topicName property represents the DHS BDT topic name that the Speckle plug-in will subscribe to.
7.8.4.1.5.1.2 .cameraLine

Data Type: string
Units: N/A
Valid Values: atst.dhs.vbiBlue
Default Value: N/A
The cameraLine property represents the camera line of the instrument in which the topicName will be
available.
7.8.4.1.5.1.3 .maxData

Data Type: integer
Units: N/A
Valid Values: .maxData >= 0
Default Value: 33554432 (4096 X 4096 X 2 bytes)
The maxData property represents the maximum bytes of data that the Speckle plug-in will accept as
input.
7.8.4.1.5.1.4 .maxPluginData

Data Type: integer
Units: N/A
Valid Values: .maxPlugInData >= 0
Default Value: 2097152 (1024 X 1024 X 2 bytes)
The maxPluginData property represents the maximum bytes of data that the Speckle plug-in will produce
as output.
7.8.4.1.5.1.5 .dpnHandlerClass

Data Type: string
Units: N/A
Valid Values: N/A
Default Value: atst.dhs.vbiBlue.speckle.MasterInputHandler
The dpnHandlerClass property specifies the name of the class that provides the plug-in functionality.
7.8.4.1.5.1.6 .subtopic.name.1

Data Type: string
Units: N/A
Valid Values: N/A
Default Value: atst.dhs.vbiBlue.gainOut

The subtopic.name.1 property will be used to configure the Speckle plug-in to subscribe to the DHS BDT event containing the Gain plug-in output. This will allow the Speckle plug-in to automatically receive new Gain calibration files as they are produced from the Gain plug-in.
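The subtopic.key.N properties let a single handler route gain, dark, and AO deliveries to separate local stores. A possible dispatch sketch; the class and its methods are hypothetical, not part of the DHS API:

```java
import java.util.HashMap;
import java.util.Map;

public class SubtopicRouter {

    // Latest data received, keyed by the configured subtopic.key.N value.
    private final Map<String, byte[]> latest = new HashMap<>();

    // Stores the delivered buffer under its sub-topic key so the plug-in
    // always calibrates with the most recent gain/dark/ao data received.
    void deliver(String key, byte[] data) {
        if (!key.equals("gain") && !key.equals("dark") && !key.equals("ao")) {
            throw new IllegalArgumentException("unknown sub-topic key: " + key);
        }
        latest.put(key, data);
    }

    boolean has(String key) {
        return latest.containsKey(key);
    }

    public static void main(String[] args) {
        SubtopicRouter router = new SubtopicRouter();
        router.deliver("gain", new byte[4]);
        System.out.println(router.has("gain")); // true
        System.out.println(router.has("dark")); // false
    }
}
```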
7.8.4.1.5.1.7 .subtopic.cameraLine.1

Data Type: string
Units: N/A
Valid Values: N/A
Default Value: atst.dhs.vbiBlue
The subtopic.cameraLine.1 property will be used to configure the Speckle plug-in as to which camera line
the Gain plug-in data output event is available on.
7.8.4.1.5.1.8 .subtopic.maxData.1

Data Type: integer
Units: N/A
Valid Values: N/A
Default Value: 33554432 (4096 X 4096 X 2 bytes)
The subtopic.maxData.1 property specifies the maximum bytes of data the Speckle plug-in will accept as
input from the Gain plug-in.
7.8.4.1.5.1.9 .subtopic.key.1

Data Type: string
Units: N/A
Valid Values: N/A
Default Value: gain
The subtopic.key.1 property specifies the key name used by the Speckle plug-in to distinguish delivery of
a Gain sub-topic versus other subtopics.
7.8.4.1.5.1.10 .subtopic.name.2

Data Type: string
Units: N/A
Valid Values: N/A
Default Value: atst.dhs.vbiBlue.darkOut
The subtopic.name.2 property will be used to configure the Speckle plug-in to subscribe to the DHS BDT
event containing the Dark plug-in output. This will allow the Speckle plug-in to automatically receive
new Dark calibration files as they are produced from the Dark plug-in.
7.8.4.1.5.1.11 .subtopic.cameraLine.2

Data Type: string
Units: N/A
Valid Values: N/A
Default Value: atst.dhs.vbiBlue
The subtopic.cameraLine.2 property will be used to configure the Speckle plug-in as to which camera line
the Dark plug-in data output event is available on.
7.8.4.1.5.1.12 .subtopic.maxData.2

Data Type: integer
Units: N/A
Valid Values: N/A
Default Value: 33554432 (4096 X 4096 X 2 bytes)
The subtopic.maxData.2 property specifies the maximum bytes of data the Speckle plug-in will accept as
input from the Dark plug-in.
7.8.4.1.5.1.13 .subtopic.key.2

Data Type: string
Units: N/A
Valid Values: N/A
Default Value: dark
The subtopic.key.2 property specifies the key name used by the Speckle plug-in to distinguish delivery of
a Dark sub-topic versus other subtopics.
7.8.4.1.5.1.14 .subtopic.name.3

Data Type: string
Units: N/A
Valid Values: N/A
Default Value: atst.dhs.wccs.aoReconMatrix
The subtopic.name.3 property will be used to configure the Speckle plug-in to subscribe to the DHS BDT
event containing the AO reconstruction matrix. This will allow the Speckle plug-in to automatically
receive new AO reconstruction matrix data as it becomes available from the WCCS.
7.8.4.1.5.1.15 .subtopic.cameraLine.3

Data Type: string
Units: N/A
Valid Values: N/A
Default Value: atst.dhs.wccs
The subtopic.cameraLine.3 property will be used to configure the Speckle plug-in as to which camera line
the WCCS AO reconstruction matrix topic event is available on.
7.8.4.1.5.1.16 .subtopic.maxData.3

Data Type: integer
Units: N/A
Valid Values: N/A
Default Value: 33554432 (4096 X 4096 X 2 bytes)
The subtopic.maxData.3 property specifies the maximum bytes of data the Speckle plug-in will accept as
input from the WCCS AO reconstruction matrix topic.
7.8.4.1.5.1.17 .subtopic.key.3

Data Type: string
Units: N/A
Valid Values: N/A
Default Value: ao

The subtopic.key.3 property specifies the key name used by the Speckle plug-in to distinguish delivery of an AO reconstruction matrix sub-topic versus other subtopics.
7.8.4.1.5.1.18 .macroTileSizeX

Data Type: integer
Units: N/A
Valid Values: 512, 1024, 2048
Default Value: 1024
The macroTileSizeX property represents the number of pixels in X used for the macro-tiles created by the
Speckle Input DPN for distribution to the slave nodes.
7.8.4.1.5.1.19 .macroTileSizeY

Data Type: integer
Units: N/A
Valid Values: 512, 1024, 2048
Default Value: 1024
The macroTileSizeY property represents the number of pixels in Y used for the macro-tiles created by the
Speckle Input DPN for distribution to the slave nodes.
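With the defaults above, a 4096x4096 frame splits into sixteen 1024x1024 macro-tiles. The tiling arithmetic can be sketched as follows; this is an illustration under the stated defaults, not the actual MasterInputHandler implementation:

```java
public class MacroTiler {

    // Number of macro-tiles per frame; assumes the frame dimensions are
    // exact multiples of the macro-tile size, as with the defaults.
    static int tileCount(int frameX, int frameY, int tileX, int tileY) {
        return (frameX / tileX) * (frameY / tileY);
    }

    // Copies macro-tile (tx, ty) out of a row-major frame buffer.
    static short[] extractTile(short[] frame, int frameX,
                               int tileX, int tileY, int tx, int ty) {
        short[] tile = new short[tileX * tileY];
        for (int row = 0; row < tileY; row++) {
            int src = (ty * tileY + row) * frameX + tx * tileX;
            System.arraycopy(frame, src, tile, row * tileX, tileX);
        }
        return tile;
    }

    public static void main(String[] args) {
        System.out.println(tileCount(4096, 4096, 1024, 1024)); // 16

        // Small demonstration: top-right 2x2 tile of a 4x4 frame.
        short[] frame = new short[16];
        for (int i = 0; i < 16; i++) frame[i] = (short) i;
        short[] tile = extractTile(frame, 4, 2, 2, 1, 0);
        System.out.println(tile[0] + " " + tile[3]); // 2 7
    }
}
```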
7.8.4.1.5.2 Topic Interface: atst.dhs.vbiBlue.speckle

The primary interface to the Speckle Input DPN is through data passed on the atst.dhs.vbiBlue.speckle DHS topic. The Speckle Input DPN is a subscriber to this topic and will act on all data delivered under this topic name. The next few sections will provide details on the valid inputs and expected outputs for this interface.
7.8.4.1.5.2.1 Input Data Organization

For the Speckle Input Handler to work correctly, input data must be organized in a particular fashion.
These requirements apply to the information included with each raw frame delivered on the topic, as well
as the relationship between contiguous raw frames delivered on the topic that comprise a frame set.
The Speckle Input DPN expects a series of two or more contiguous raw frames to be delivered on the
atst.dhs.vbiBlue.speckle topic. This series of raw frames constitutes a frame set, which is the required
input before image reconstruction can be performed. Each frame delivered on the
atst.dhs.vbiBlue.speckle topic must contain raw frame information (meta-data and pixel data) as described
in section 7.8.2.2.1.1.
7.8.4.1.5.2.1.1 Required Meta Data

The Speckle Input DPN uses the frame set and frame level meta-data to keep track of and verify that the sequence of inputs received is valid. Frame set level meta-data is required on the first frame of the frame set in addition to the frame level meta-data for that frame. Subsequent frames should only include their frame level meta-data. The following table provides details on the meta-data elements that are required by the interface as well as the source for each.

| Name | Type | Value | Comment | Level | Source |
|---|---|---|---|---|---|
| BITPIX | int | 16 or 32 | Number of bits per pixel | FrameSet | CSS |
| NAXIS | int | 1, 2, or 3 | BDT value indicates number of axes in stream | FrameSet | CSS |
| NAXIS1 | int | n | Number of pixels in axis 1 | FrameSet | CSS |
| NAXIS2 | int | n | Number of pixels in axis 2 | FrameSet | CSS |
| NAXIS3 | int | n | Number of pixels in axis 3 | FrameSet | CSS |
| DATE-OBS | str | time format | Date/time of this image | Frame | CSS |
| OBSTASK | str | text | Observation task: observing, calibration, focus, alignment | FrameSet | IC |
| FOVID | int | n | Identification number of this field sampling subfield | FrameSet | IC |
| FOVN | int | n | Number of field sampling subfields | FrameSet | IC |
| FOVPAT | string | text | Field sampling pattern | FrameSet | IC |
| DATATYPE | str | text | dark, flat, target type, pinhole, science | FrameSet | IC |
| WAVELGTH | float | 393.3-486.1 | For blue branch of VBI | FrameSet | IC |
| PIXELSIZX | float | x | Pixel size for each filter in x [arcsec] | FrameSet | IC |
| PIXELSIZY | float | y | Pixel size for each filter in y [arcsec] | FrameSet | IC |
| AOCMAT | str | text | AO control matrix (file name) | FrameSet | TBD |
| Frame Number | int | n | Frame number within a frame set | Frame | CSS |
| Frames Per Set | int | n | Number of frames in this frame set | FrameSet | CSS |

7.8.4.1.5.2.1.2 Required Pixel Data

The pixel data provided in the IBdtBuffer object's byte[] buffer must be in 16-bit quantization format and must have a number of bytes equal to the value of the .maxData property.
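The ordering rules above (frame set meta-data on the first frame only, at least two contiguous frames, valid frame numbers) can be tracked with a small state machine. A sketch, assuming the Frame Number meta-data element is 0-based within a set; the class is hypothetical:

```java
public class FrameSetTracker {

    private int expectedFrame = 0;
    private int framesPerSet = -1;

    // Returns false when a frame violates the frame-set ordering rules.
    boolean accept(int frameNumber, boolean hasFrameSetMeta, int framesPerSetMeta) {
        if (frameNumber == 0) {
            // First frame must carry the frame set level meta-data,
            // and a frame set must contain two or more frames.
            if (!hasFrameSetMeta || framesPerSetMeta < 2) {
                return false;
            }
            framesPerSet = framesPerSetMeta;
            expectedFrame = 0;
        }
        if (framesPerSet < 2 || frameNumber != expectedFrame
                || frameNumber >= framesPerSet) {
            return false;
        }
        expectedFrame++;
        return true;
    }

    public static void main(String[] args) {
        FrameSetTracker t = new FrameSetTracker();
        System.out.println(t.accept(0, true, 3));   // true: valid first frame
        System.out.println(t.accept(2, false, -1)); // false: frame 1 was skipped
    }
}
```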
7.8.4.1.5.2.2 Expected Output

When valid input is provided to the Speckle Input DPN as described above, the following output response will occur:

- Raw frames split into macro-tiles and grouped into macro-tile cubes across the frame set
- AO covariance data associated with each macro-tile cube
- Macro-tile cubes and associated AO covariance data delivered to slave nodes
- Generate atst.dhs.vbiBlue.speckle.status event (see 7.8.4.1.5.7.1)
- Generate atst.dhs.vbiBlue.speckle.batch event (see 7.8.4.1.5.7.2)
7.8.4.1.5.2.3 Invalid Input Scenarios

Input that does not follow the data organization explained in the previous sections will be rejected by the Speckle Input DPN. The following is a list of invalid input scenarios:

- First frame in frame set does not include frame set meta-data
- Frame set meta-data is invalid
- Frame does not include frame level meta-data
- Frame level meta-data is invalid
- Frame pixel data buffer is invalid
- Frame number not valid for current frame set

When an invalid input scenario is detected, the following will occur:

- System will log the error and alert operations support team.
- atst.dhs.vbiBlue.speckle.status event will be generated with a status of "bad". This status will remain until the start of a new frame set is detected.
- atst.dhs.vbiBlue.speckle.batch event associated with the current frame set being processed will be generated with an eventType equal to "error".
7.8.4.1.5.3 Subtopic Interface: atst.dhs.vbiBlue.gain

The Speckle Input DPN provides an interface for updating its local gain calibration data through the use
of the sub-topic atst.dhs.vbiBlue.gain. The Speckle Input DPN is a subscriber of this topic and will act on
all data delivered under this topic name.
7.8.4.1.5.3.1 Valid Input

Data delivered on the atst.dhs.vbiBlue.gain topic must contain the calibration frame information as
described in section 7.8.2.2.1.2.
7.8.4.1.5.3.1.1 Required Meta Data

The gain calibration frame includes meta-data that is used by the Speckle Input DPN for validation purposes. The following table lists the meta-data that is required:

| Name | Type | Value | Comment |
|---|---|---|---|
| timestamp | AtstDate | | Time data was collected |
| pixelsX | int | | Number of pixels in X |
| pixelsY | int | | Number of pixels in Y |
7.8.4.1.5.3.1.2 Required Pixel Data

The pixel data provided in the IBdtBuffer object's byte[] buffer must be in 32-bit floating point format
and must have a number of bytes equal to the value of the .subtopic.maxData.1 property.
7.8.4.1.5.3.2 Expected Output

When valid input is provided to this interface as described above, the following output response will occur:

- Gain calibration frame will be split into macro-tiles
- Calibration macro-tiles delivered to slave nodes
7.8.4.1.5.3.3 Invalid Input Scenarios

Input that does not meet the valid input requirements explained in the previous sections will be rejected by the Speckle Input DPN. The following is a list of invalid input scenarios:

- Gain calibration meta-data not present
- Gain calibration pixel data buffer is not valid

When an invalid input scenario is detected, the following will occur:

- System will log the error and alert operations support team.
- atst.dhs.vbiBlue.speckle.status event will be generated with a status of "ill". This status will remain and the previous gain calibration will continue to be used until either 1) the operator clears the "ill" status or 2) a valid gain calibration is processed.
7.8.4.1.5.4 Subtopic Interface: atst.dhs.vbiBlue.dark

The Speckle Input DPN provides an interface for updating its local dark calibration data through the use
of the sub-topic atst.dhs.vbiBlue.dark. The Speckle Input DPN is a subscriber of this topic and will act on
all data delivered under this topic name.
7.8.4.1.5.4.1 Valid Input

Data delivered on the atst.dhs.vbiBlue.dark topic must contain the calibration frame information as
described in section 7.8.2.2.1.2.
7.8.4.1.5.4.1.1 Required Meta Data

The dark calibration frame includes meta-data that is used by the Speckle Input DPN for validation purposes. The following table lists the meta-data that is required:

| Name | Type | Value | Comment |
|---|---|---|---|
| timestamp | AtstDate | | Time data was collected |
| pixelsX | int | | Number of pixels in X |
| pixelsY | int | | Number of pixels in Y |
7.8.4.1.5.4.1.2 Required Pixel Data

The pixel data provided in the IBdtBuffer object's byte[] buffer must be in 32-bit floating point format and must have a number of bytes equal to the value of the .subtopic.maxData.2 property.
7.8.4.1.5.4.2 Expected Output

When valid input is provided to this interface as described above, the following output response will occur:

- Dark calibration frame will be split into macro-tiles
- Calibration macro-tiles delivered to slave nodes
7.8.4.1.5.4.3 Invalid Input Scenarios

Input that does not meet the valid input requirements explained in the previous sections will be rejected by the Speckle Input DPN. The following is a list of invalid input scenarios:

- Dark calibration meta-data not present
- Dark calibration pixel data buffer is not valid

When an invalid input scenario is detected, the following will occur:

- System will log the error and alert operations support team.
- atst.dhs.vbiBlue.speckle.status event will be generated with a status of "ill". This status will remain and the previous dark calibration will continue to be used until either 1) the operator clears the "ill" status or 2) a valid dark calibration is processed.
7.8.4.1.5.5 Subtopic Interface: atst.dhs.wccs.aoReconMatrix

The Speckle Input DPN provides an interface for receiving a stream of AO covariance data through the
use of the sub-topic atst.dhs.wccs.aoReconMatrix. The Speckle Input DPN is a subscriber of this topic
and will act on all data delivered under this topic name.
7.8.4.1.5.5.1 Valid Input

Data delivered on the atst.dhs.wccs.aoReconMatrix topic must contain the AO covariance information as
described in section 7.8.2.2.1.3.
7.8.4.1.5.5.1.1 Required Meta Data

The AO covariance data includes meta-data that is used by the Speckle Input DPN for validation purposes. The following table lists the meta-data that is required:

| Name | Type | Value | Comment |
|---|---|---|---|
| timestamp | AtstDate | | Time data was collected |
| nRows | int | | Number of rows in the data |
| nCols | int | | Number of columns in the data |
| nSets | int | | Number of sets in the data |
| dataElementType | string | byte, short, int, long, float, double | Type of data element |
| dataType | string | covariance | Type of data |
7.8.4.1.5.5.1.2 Required Covariance Data

The covariance data provided in the IBdtBuffer object's byte[] buffer must be in the format specified by the meta-data (nRows, nCols, nSets, etc.) and must have a number of bytes equal to the value of the .subtopic.maxData.3 property.
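The expected buffer size follows directly from the meta-data: nRows x nCols x nSets elements, each of the byte width implied by dataElementType. A validation sketch (hypothetical helper, not the DHS API):

```java
public class CovarianceSize {

    // Byte width of one element for each supported dataElementType value.
    static int elementBytes(String dataElementType) {
        switch (dataElementType) {
            case "byte":
                return 1;
            case "short":
                return 2;
            case "int":
            case "float":
                return 4;
            case "long":
            case "double":
                return 8;
            default:
                throw new IllegalArgumentException(
                    "unknown element type: " + dataElementType);
        }
    }

    // Total bytes the covariance buffer must contain per the meta-data.
    static int expectedBytes(int nRows, int nCols, int nSets, String type) {
        return nRows * nCols * nSets * elementBytes(type);
    }

    public static void main(String[] args) {
        System.out.println(expectedBytes(10, 10, 4, "float")); // 1600
    }
}
```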
7.8.4.1.5.5.2 Expected Output

When valid input is provided to this interface as described above, the following output response will occur:

- AO covariance data appended to local buffer
- AO covariance data associated with macro-tile cubes and delivered to slave nodes
7.8.4.1.5.5.3 Invalid Input Scenarios

Input that does not meet the valid input requirements explained in the previous sections will be rejected by the Speckle Input DPN. The following is a list of invalid input scenarios:

- AO covariance meta-data not present
- AO covariance data buffer is not valid per meta-data

When an invalid input scenario is detected, the following will occur:

- System will log the error and alert operations support team.
- atst.dhs.vbiBlue.speckle.status event will be generated with a status of "ill". This status will remain until cleared by the operator.
7.8.4.1.5.6 Events Subscribed To

The following sections describe the events that the Speckle Input DPN will subscribe to. These events
provide information needed by the module to perform as expected.
None defined at this time.
7.8.4.1.5.7 Events Published

The following sections describe the events that the Speckle Input DPN will publish. These events provide
status information to other systems that are interested.
7.8.4.1.5.7.1 atst.dhs.vbiBlue.speckle.status

This event provides current status information for the Speckle Input DPN. It will be generated after processing of a batch, or every 3 seconds, whichever occurs first. The following attributes will be provided for this event:

| Attribute Name | Type | Values | Description |
|---|---|---|---|
| status | string | good, bad, ill | Indicator of DPN health status |
7.8.4.1.5.7.2 atst.dhs.vbiBlue.speckle.batch

This event is generated by the Speckle Input DPN at the beginning and end of each batch processed. The following attributes will be provided in this event:

| Attribute Name | Type | Values | Description |
|---|---|---|---|
| eventType | string | start, stop, error | Type of event that occurred |
| timestamp | AtstDate | | Timestamp when event occurred |
| id | string | | Unique identifier for the batch |
NOTE: The “start” event type occurs as soon as the first input frame in the burst is received. The “stop”
event type occurs as soon as the last frame is sent to a slave node. The “error” event type may occur at
any point after the “start” event type should a problem occur during processing.
7.8.4.1.6 Speckle Slave Data Processing Node
7.8.4.1.6.1 Properties

The Speckle Slave DPNs will be configurable using a set of properties. The following table lists the set of Speckle Slave DPN properties and is followed by a detailed explanation of each:

| Name | Type | Units | Value | Comment |
|---|---|---|---|---|
| topicName | string | N/A | atst.dhs.vbiBlue.speckleSlave | Topic to subscribe to |
| cameraLine | string | N/A | atst.dhs.vbiBlue | Camera line where plug-in resides |
| maxData | integer | bytes | 2097152 (1024x1024x2) | Max bytes of data as input to this plug-in |
| maxPluginData | integer | bytes | 2097152 (1024x1024x2) | Max bytes of data produced from this plug-in |
| dpnHandlerClass | string | N/A | atst.dhs.vbiBlue.speckle.SlaveInputHandler | Name of class that implements this DPN |
| subImageSizeX | integer | pixels | 128 | Size of each sub-image in X pixels |
| subImageSizeY | integer | pixels | 128 | Size of each sub-image in Y pixels |
| depth:param1 | float | N/A | 1 | First depth parameter used for bispectrum initialization |
| depth:param2 | float | N/A | 0.1 | Second depth parameter used for bispectrum initialization |
| numPhaseIter | integer | N/A | 10 | Number of phase iterations used in the iterative phase reconstruction |
| thresholdSNR | N/A | N/A | | N/A - not implemented |
| weighting | N/A | N/A | | N/A - not implemented |
| apodisation | float | percent | | N/A - not implemented |
| noiseFilter:param1 | boolean | N/A | | N/A - not implemented |
| noiseFilter:param2 | boolean | N/A | | N/A - not implemented |
7.8.4.1.6.1.1 .topicName

Data Type: string
Units: N/A
Valid Values: speckle
Default Value: N/A

The topicName property represents the DHS BDT topic name that the Speckle plug-in will subscribe to.
7.8.4.1.6.1.2 .cameraLine

Data Type: string
Units: N/A
Valid Values: atst.dhs.vbiBlue, atst.dhs.vbiRed
Default Value: N/A
The cameraLine property represents the camera line of the instrument in which the topicName will be
available.
7.8.4.1.6.1.3 .maxData

Data Type: integer
Units: N/A
Valid Values: .maxData >= 0
Default Value: 2097152 (1024 x 1024 x 2)
The maxData property represents the maximum bytes of data that the Speckle plug-in will accept as
input.
7.8.4.1.6.1.4 .maxPluginData

Data Type: integer
Units: N/A
Valid Values: .maxPluginData >= 0
Default Value: 2097152 (1024 x 1024 x 2)

The maxPluginData property represents the maximum bytes of data that the Speckle plug-in will produce as output.
7.8.4.1.6.1.5 .dpnHandlerClass

Data Type: string
Units: N/A
Valid Values: N/A
Default Value: atst.dhs.vbiBlue.speckle.SlaveInputHandler
The dpnHandlerClass property specifies the name of the class that provides the plug-in functionality.
7.8.4.1.6.1.6 .subImageSizeX

Data Type: integer
Units: N/A
Valid Values: 64, 128, 256, 512
Default Value: 128
The subImageSizeX property represents the number of pixels in X used for the sub-images (tiles)
processed by the Speckle algorithm.
7.8.4.1.6.1.7 .subImageSizeY

Data Type: integer
Units: N/A
Valid Values: 64, 128, 256, 512
Default Value: 128
The subImageSizeY property represents the number of pixels in Y used for the sub-images (tiles)
processed by the Speckle algorithm.
7.8.4.1.6.1.8 .depthParam1

Data Type: float
Units: N/A
Valid Values: N/A
Default Value: 1.0
The depthParam1 property specifies the first depth parameter value used during initialization of the
bispectrum position values.
7.8.4.1.6.1.9 .depthParam2

Data Type: float
Units: N/A
Valid Values: N/A
Default Value: 0.1
The depthParam2 property specifies the second depth parameter value used during initialization of the
bispectrum position values.
7.8.4.1.6.1.10 .numPhaseIter

Data Type: integer
Units: N/A
Valid Values: numPhaseIter > 0
Default Value: 10
The numPhaseIter property specifies the number of phase reconstruction iterations to perform.
7.8.4.1.6.1.11 .thresholdSNR

Data Type: string
Units: N/A
Valid Values: N/A
Default Value: N/A
TBD
7.8.4.1.6.1.12 .weighting

Data Type: string
Units: N/A
Valid Values: N/A
Default Value: N/A
TBD
7.8.4.1.6.1.13 .apodisation

Data Type:
Units: N/A
Valid Values: N/A
Default Value:
TBD
7.8.4.1.6.1.14 .noiseFilter:param1

Data Type:
Units: N/A
Valid Values: N/A
Default Value:
TBD
7.8.4.1.6.1.15 .noiseFilter:param2

Data Type:
Units: N/A
Valid Values: N/A
Default Value:
TBD
7.8.4.1.6.2 Topic Interface: atst.dhs.vbiBlue.speckleSlave

The primary interface to the Speckle Slave DPN is through macro-tile data passed on the
atst.dhs.vbiBlue.speckleSlave DHS topic. The Speckle Slave DPN is a subscriber to this topic and will
act on all data delivered under this topic name as long as the macro-tile identified by the meta-data
matches that assigned to the slave node. All other data received will be ignored. The next few sections
will provide details on the valid inputs and expected outputs for this interface.
7.8.4.1.6.2.1 Input Data Organization

For the Speckle Slave DPN to work correctly, input data must be organized in a particular fashion. These requirements apply to the information included with each macro-tile delivered on the topic, as well as the relationship between contiguous macro-tiles delivered on the topic that comprise a macro-tile cube.

The Speckle Slave DPN expects a series of two or more contiguous macro-tiles to be delivered on the atst.dhs.vbiBlue.speckleSlave topic. This series of macro-tiles constitutes a macro-tile cube, which is the required input before image reconstruction can be performed. Each macro-tile delivered on the atst.dhs.vbiBlue.speckleSlave topic must contain macro-tile information (meta-data and pixel data) as described in section 7.8.2.2.1.4.
7.8.4.1.6.2.1.1 Required Meta Data

The Speckle Slave DPN uses frame set, frame, macro-tile cube, and macro-tile meta-data to keep track of and verify that the sequence of inputs received is valid. The frame and frame set meta-data described in section 7.8.4.1.5.2.1 are required and should simply be inherited from the source frame. Therefore, if the macro-tile is from the first frame in the frame set it will include the frame set meta-data.

In addition to the required frame set and frame level meta-data, the Speckle Slave DPN requires macro-tile information to be provided as well. The following table provides details on the macro-tile meta-data elements that are required by the interface:
| Name | Type | Value | Comment | Level |
|---|---|---|---|---|
| xxspeckle_macrox | int | n | Number of pixels in X axis | Macro-tile |
| xxspeckle_macroy | int | n | Number of pixels in Y axis | Macro-tile |
| xxspeckle_nmacro | int | n | Total number of macro-tiles per frame | Macro-tile |
| xxspeckle_macron | int | n | Macro-tile number within frame | Macro-tile |
| xxspeckle_ncube | int | n | Number of macro-tiles per cube | Macro-tile |
| xxspeckle_cuben | int | n | Macro-tile number within cube | Macro-tile |
7.8.4.1.6.2.1.2 Required Pixel Data

The pixel data provided in the IBdtBuffer object's byte[] buffer must be in 16-bit quantization format and
must have a number of bytes equal to the value of the .maxData property.
7.8.4.1.6.2.2 Expected Output

When valid input is provided to the Speckle Slave DPN as described above, the following output response will occur:

- Macro-tile cube pixel data converted to 32-bit floating point
- Macro-tile cube data calibrated
- Macro-tile cube data processed using Speckle image reconstruction
- Single reconstructed macro-tile produced as output
- Generate atst.dhs.vbiBlue.speckleN.status event (see 7.8.4.1.6.4.1)
- Generate atst.dhs.vbiBlue.speckleN.batch event (see 7.8.4.1.6.4.2)
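The first two steps (16-bit to 32-bit float conversion, then calibration) can be sketched as below. The per-pixel (raw - dark) * gain formula is a conventional CCD calibration and is an assumption here, as is treating the 16-bit pixels as unsigned; the actual pipeline arithmetic is defined by the Speckle code itself:

```java
public class TileCalibration {

    // Converts 16-bit pixels to 32-bit floating point and applies dark
    // and gain calibration per pixel: out = (raw - dark) * gain.
    static float[] calibrate(short[] raw, float[] dark, float[] gain) {
        float[] out = new float[raw.length];
        for (int i = 0; i < raw.length; i++) {
            int unsigned = raw[i] & 0xFFFF; // treat pixel as unsigned 16-bit
            out[i] = (unsigned - dark[i]) * gain[i];
        }
        return out;
    }

    public static void main(String[] args) {
        float[] cal = calibrate(new short[] {100},
                                new float[] {10.0f},
                                new float[] {2.0f});
        System.out.println(cal[0]); // 180.0
    }
}
```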
7.8.4.1.6.2.3 Invalid Input Scenarios

Input that does not follow the data organization explained in the previous sections will be rejected by the Speckle Slave DPN. The following is a list of invalid input scenarios:

- First macro-tile in macro-tile cube does not include frame set meta-data
- Frame set meta-data is invalid
- Macro-tile does not include frame level meta-data
- Frame level meta-data is invalid
- Macro-tile does not include macro-tile level meta-data
- Macro-tile level meta-data is invalid
- Macro-tile pixel data buffer is invalid
- Macro-tile number not valid for current macro-tile cube

When an invalid input scenario is detected, the following will occur:

- System will log the error and alert operations support team.
- atst.dhs.vbiBlue.speckleN.status event will be generated with a status of "bad". This status will remain until the start of a new macro-tile cube is detected.
- atst.dhs.vbiBlue.speckleN.batch event associated with the current macro-tile cube being processed will be generated with an eventType equal to "error".
7.8.4.1.6.3 Events Subscribed To

The following sections describe the events that the Speckle Slave DPN will subscribe to. These events
provide information needed by the module to perform as expected.
7.8.4.1.6.3.1 atst.ics.vbiBlue.pixelscale

This event is published by the VBI Blue IC each time the pixel scale for the camera is calculated in a maintenance mode. The Speckle Slave DPN must obtain this information and update its local data so that the pixel scale can be used in the reconstruction process.
7.8.4.1.6.4 Events Published

The following sections describe the events that the Speckle Slave DPN will publish. These events
provide status information to interested systems.
7.8.4.1.6.4.1 atst.dhs.vbiBlue.speckleN.status

This event provides current status information for the Speckle Slave DPN. It will be generated after processing of a batch, or every 3 seconds, whichever occurs first. The following attributes will be provided for this event:

| Attribute Name | Type | Values | Description |
|---|---|---|---|
| status | string | good, bad, ill | Indicator of DPN health status |
7.8.4.1.6.4.2 atst.dhs.vbiBlue.speckleN.batch

This event is generated by the Speckle Slave DPN each time a new batch of slave inputs is processed. The following attributes will be provided in this event:

Attribute Name   Type       Values        Description
eventType        string     start, stop   Type of event that occurred
timestamp        AtstDate                 Timestamp when event occurred
id               string                   Unique identifier for the batch

The "start" event type will be generated when the first input frame of a batch is received. The "stop" event type will be generated when the last processed frame is transferred to the Speckle Output DPN.
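The start/stop semantics above can be sketched as follows. This is an illustrative stand-in, not the DHS event service: the class name, the Event record, and the publish mechanism are our assumptions; only the topic name, attribute names, and start/stop rules come from the specification.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the batch-event semantics described above.
// Event and the in-memory "published" list stand in for the real DHS event service.
public class BatchEventTracker {
    public static final class Event {
        public final String topic, eventType, id;
        public final long timestamp;
        Event(String topic, String eventType, String id, long timestamp) {
            this.topic = topic; this.eventType = eventType;
            this.id = id; this.timestamp = timestamp;
        }
    }

    private final String topic;  // e.g. "atst.dhs.vbiBlue.speckleN.batch"
    private final List<Event> published = new ArrayList<>();
    private String currentBatchId = null;

    public BatchEventTracker(String topic) { this.topic = topic; }

    // Called when the first input frame of a batch is received: publish "start".
    public void onFirstFrame(String batchId, long now) {
        currentBatchId = batchId;
        published.add(new Event(topic, "start", batchId, now));
    }

    // Called when the last processed frame has been transferred downstream: publish "stop".
    public void onLastFrameTransferred(long now) {
        published.add(new Event(topic, "stop", currentBatchId, now));
        currentBatchId = null;
    }

    public List<Event> publishedEvents() { return published; }
}
```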
7.8.4.1.7 Speckle Output Data Processing Node

7.8.4.1.7.1 Properties

The Speckle Output DPN will be configurable using a set of properties. These properties can be divided into two groups. The first group consists of properties that are fixed and remain unchanged from one observation to the next. The second group consists of properties that are dynamic and may change on a per-observation basis.

The following table lists the fixed set of Speckle Output DPN properties and is followed by a detailed explanation of each:
Name             Type     Units  Value                                         Comment
topicName        string   N/A    atst.dhs.vbiBlue.speckleOut                   Topic to subscribe to
cameraLine       string   N/A    atst.dhs.vbiBlue                              Camera line where plug-in resides
maxData          integer  bytes  4194304 (1024x1024x4)                         Max bytes of data as input to this plug-in
maxPluginData    integer  bytes  67108864 (4096x4096x4)                        Max bytes of data produced from this plug-in
dpnHandlerClass  string   N/A    atst.dhs.vbiBlue.speckle.MasterOutputHandler  Name of class that implements this DPN
7.8.4.1.7.1.1 .topicName

Data Type: string
Units: N/A
Valid Values: atst.dhs.vbiBlue.speckleOut
Default Value: N/A

The topicName property represents the DHS BDT topic name that the Speckle Output DPN will subscribe to.

7.8.4.1.7.1.2 .cameraLine

Data Type: string
Units: N/A
Valid Values: atst.dhs.vbiBlue
Default Value: N/A

The cameraLine property represents the camera line of the instrument in which the topicName will be available.

7.8.4.1.7.1.3 .maxData

Data Type: integer
Units: bytes
Valid Values: .maxData >= 0
Default Value: 4194304 (1024 x 1024 x 4 bytes)

The maxData property represents the maximum bytes of data that the Speckle plug-in will accept as input.

7.8.4.1.7.1.4 .maxPluginData

Data Type: integer
Units: bytes
Valid Values: .maxPluginData >= 0
Default Value: 67108864 (4096 x 4096 x 4 bytes)

The maxPluginData property represents the maximum bytes of data that the Speckle plug-in will produce as output.

7.8.4.1.7.1.5 .dpnHandlerClass

Data Type: string
Units: N/A
Valid Values: N/A
Default Value: atst.dhs.vbiBlue.speckle.MasterOutputHandler

The dpnHandlerClass property specifies the name of the class that provides the plug-in functionality.
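The fixed-property constraints above (non-empty names, .maxData >= 0, .maxPluginData >= 0) can be captured in a small holder class. This is an illustrative sketch with our own class name, not the DHS property service; the field names and defaults mirror the table.

```java
// Illustrative holder for the fixed Speckle Output DPN properties described above.
// Defaults mirror the property table; isValid() enforces the stated constraints.
public class SpeckleOutputDpnProperties {
    public String topicName = "atst.dhs.vbiBlue.speckleOut";
    public String cameraLine = "atst.dhs.vbiBlue";
    public int maxData = 4194304;        // 1024 x 1024 x 4 bytes
    public int maxPluginData = 67108864; // 4096 x 4096 x 4 bytes
    public String dpnHandlerClass = "atst.dhs.vbiBlue.speckle.MasterOutputHandler";

    // True when every property constraint from the specification holds.
    public boolean isValid() {
        return topicName != null && !topicName.isEmpty()
            && cameraLine != null && !cameraLine.isEmpty()
            && maxData >= 0
            && maxPluginData >= 0
            && dpnHandlerClass != null && !dpnHandlerClass.isEmpty();
    }
}
```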
7.8.4.1.7.2 Topic Interface: atst.dhs.vbiBlue.speckleOut

The primary interface to the Speckle Output DPN is through macro-tile data passed on the atst.dhs.vbiBlue.speckleOut DHS topic. The Speckle Output DPN is a subscriber to this topic and will act on all data delivered under this topic name. The next few sections provide details on the valid inputs and expected outputs for this interface.
SPEC-0107 VBI CDD
SPEC-0107, Rev A Page 182 of 267
7.8.4.1.7.2.1 Input Data Organization

For the Speckle Output DPN to work correctly, input data must be organized in a particular fashion. These requirements apply to the information included with each macro-tile delivered on the topic, as well as the relationship between contiguous macro-tiles delivered on the topic that comprise a macro-tile set.

The Speckle Output DPN expects a series of two or more contiguous macro-tiles to be delivered on the atst.dhs.vbiBlue.speckleOut topic. This series of macro-tiles constitutes a macro-tile set, which is the required input before re-assembly to full frame can be performed. Each macro-tile delivered on the atst.dhs.vbiBlue.speckleOut topic must contain macro-tile information (meta-data and pixel data) as described in section 7.8.2.2.1.4.
7.8.4.1.7.2.1.1 Required Meta Data

The Speckle Output DPN uses frame set, frame, and macro-tile meta-data to track and verify that the sequence of inputs received is valid. The frame and frame set meta-data described in section 7.8.4.1.5.2.1 are required and should simply be inherited from the source frame. Note that this implies that all macro-tiles will contain the frame set meta-data.

In addition to the required frame set and frame level meta-data, the Speckle Output DPN requires macro-tile information to be provided as well. The following table provides details on the macro-tile meta-data elements that are required by the interface:

Name              Type  Value  Comment                         Level
xxspeckle_macrox  int   n      Number of pixels in X axis      Macro-tile
xxspeckle_macroy  int   n      Number of pixels in Y axis      Macro-tile
xxspeckle_macron  int   n      Macro-tile number within frame  Macro-tile
7.8.4.1.7.2.1.2 Required Pixel Data

The pixel data provided in the IBdtBuffer object's byte[] buffer must be in 16-bit quantization format and must have a number of bytes equal to the value of the .maxData property.
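A minimal check of this pixel-data requirement (an exact buffer length of .maxData bytes, holding whole 16-bit samples) might look like the sketch below. The class and method names are illustrative, not part of the DHS interface.

```java
// Sketch of the input pixel-data check described above: the byte buffer must
// hold 16-bit samples and its length must equal the .maxData property value.
public class PixelDataValidator {
    public static boolean isValidBuffer(byte[] buffer, int maxData) {
        return buffer != null
            && buffer.length == maxData   // exact size required by the interface
            && buffer.length % 2 == 0;    // whole number of 16-bit samples
    }
}
```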
7.8.4.1.7.2.2 Expected Output

When valid input is provided to the Speckle Output DPN as described above, the following output response will occur:

- Macro-tiles re-assembled to full frame
- Full frame meta-data updated
- Full frame output transferred to data store
- Generate atst.dhs.vbiBlue.speckleOut.status event (see 7.8.4.1.7.4.1)
- Generate atst.dhs.vbiBlue.speckleOut.batch event (see 7.8.4.1.7.4.2)
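The re-assembly step can be sketched as follows. The layout assumptions here are ours, not stated by the interface: tiles of fixed width and height (per xxspeckle_macrox/xxspeckle_macroy), numbered row-major from 0 within the frame (per xxspeckle_macron); the class name is illustrative.

```java
// Sketch of macro-tile re-assembly into a full frame. Assumes a regular tiling,
// row-major tile numbering starting at 0, and tiles of tileW x tileH pixels.
public class FrameAssembler {
    private final int[] fullFrame;
    private final int frameW, tileW, tileH, tilesPerRow;

    public FrameAssembler(int frameW, int frameH, int tileW, int tileH) {
        this.frameW = frameW;
        this.tileW = tileW;
        this.tileH = tileH;
        this.tilesPerRow = frameW / tileW;
        this.fullFrame = new int[frameW * frameH];
    }

    // Copy one macro-tile (pixel count tileW * tileH) into its frame position.
    public void placeTile(int[] tilePixels, int macroTileNumber) {
        int tileRow = macroTileNumber / tilesPerRow;
        int tileCol = macroTileNumber % tilesPerRow;
        for (int y = 0; y < tileH; y++) {
            int dst = (tileRow * tileH + y) * frameW + tileCol * tileW;
            System.arraycopy(tilePixels, y * tileW, fullFrame, dst, tileW);
        }
    }

    public int[] frame() { return fullFrame; }
}
```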
7.8.4.1.7.2.3 Invalid Input Scenarios

Input that does not follow the data organization described in the previous sections will be rejected by the Speckle Output DPN. The following is a list of invalid input scenarios:

- Macro-tile does not include frame set meta-data
- Frame set meta-data is invalid
- Macro-tile does not include frame level meta-data
- Frame level meta-data is invalid
- Macro-tile does not include macro-tile level meta-data
- Macro-tile level meta-data is invalid
- Macro-tile pixel data buffer is invalid
- Macro-tile number not valid for current full frame assembly
When an invalid input scenario is detected, the following will occur:

- The system will log the error and alert the operations support team.
- The atst.dhs.vbiBlue.speckleOut.status event will be generated with a status of "bad". This status will remain until the start of a new macro-tile set is detected.
- The atst.dhs.vbiBlue.speckleOut.batch event associated with the current macro-tile set being processed will be generated with an eventType equal to "error".
7.8.4.1.7.3 Events Subscribed To

No subscribed events are defined for the Speckle Output DPN at this time.
7.8.4.1.7.4 Events Published

The following sections describe the events that the Speckle Output DPN will publish. These events provide status information to interested systems.

7.8.4.1.7.4.1 atst.dhs.vbiBlue.speckleOut.status

This event provides current status information for the Speckle Output DPN. It will be generated after processing of a batch, or every 3 seconds, whichever occurs first. The following attributes will be provided for this event:

Attribute Name   Type     Values             Description
status           string   good | bad | ill   Indicator of DPN health status
7.8.4.1.7.4.2 atst.dhs.vbiBlue.speckleOut.batch

This event is generated each time a new batch of slave outputs is processed. The following attributes will be provided in this event:

Attribute Name   Type       Values        Description
eventType        string     start, stop   Type of event that occurred
timestamp        AtstDate                 Timestamp when event occurred
id               string                   Unique identifier for the batch

The "start" event occurs as soon as the first slave output is received. The "stop" event occurs as soon as the full frame output is sent to the transfer store.
7.8.5 Detailed Design
7.8.5.1 Module detailed design
The hierarchical decomposition for each module introduced in section 7.8.2.1 provides a block level view of the components that comprise each module. The block level view intentionally leaves out implementation level details such as the programming language and framework elements used. The next few sections provide this and other detailed design information for each of the three main software components of the VBI Blue DPP solution.
7.8.5.1.1 Dark DPN

The Dark DPN is implemented as a DHS data processing component in the VBI Blue camera line. As such, it follows an object oriented design methodology in which the technical architecture is provided by a base class in the DHS, and custom functional behavior is added through extensions to that class.

7.8.5.1.1.1 Class Diagram

The class diagram shown in Figure 84 illustrates the relationships between the DHS framework software elements and the software written to provide the specific functional behavior required of the Dark DPN.

Figure 84: Dark DPN Class Diagram

The Dark DPN is implemented using the BaseProcessingComponent class of the DHS framework. This class provides all the technical architecture functionality of the DHS such as subscribing to a topic, sub-topic, or event.
At startup the BaseProcessingComponent instantiates a data handling object that provides an interface as
defined by IDataHandler. When data is received on the main topic, sub-topic, or event, the
BaseProcessingComponent passes it to the data handling object through the methods defined in the
IDataHandler interface. The class it uses to create this data handling object is determined by the value of
the .dpnHandlerClass property.
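The class-by-property loading described above corresponds to standard Java reflection. The sketch below uses a stand-in DataHandler interface and an ExampleHandler class of our own, since the real IDataHandler and handler classes live in the DHS framework; only the pattern (instantiate the class named by .dpnHandlerClass) comes from the specification.

```java
// Sketch of how a framework component can instantiate its data handler from a
// class-name property, as the BaseProcessingComponent does with .dpnHandlerClass.
// DataHandler is a stand-in for the DHS IDataHandler interface.
public class HandlerLoader {
    public interface DataHandler {
        void onInit();
    }

    // Example handler class that would be named by the .dpnHandlerClass property.
    public static class ExampleHandler implements DataHandler {
        public boolean initialized = false;
        @Override public void onInit() { initialized = true; }
    }

    // Create the handler from its fully qualified class name via reflection.
    public static DataHandler load(String dpnHandlerClass) {
        try {
            Class<?> cls = Class.forName(dpnHandlerClass);
            return (DataHandler) cls.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("cannot load handler: " + dpnHandlerClass, e);
        }
    }
}
```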
The data handling functional behavior required by the Dark DPN is provided by the DarkHandler class, classes from the jCUDA library, the CUDA interface to the GPU, and the custom Dark GPU kernel software itself. Please refer to the specific section for each class for more details.
7.8.5.1.1.2 Deployment Diagram

Figure 85 below shows the deployment diagram for the components of the Dark DPN.

Figure 85: Dark DPN Deployment Diagram

The Dark DPN components utilize both CPU and GPU resources and will therefore be deployed on a CPU/GPU enabled server. This server will have both 10Gb and 1Gb Ethernet connections to allow for communications with the data and command channel networks respectively. The server will also have 10Gb Ethernet connections to other servers hosting other DPNs in the VBI Blue DPP. The DarkHandler, DarkKernels binary, jCUDA libraries, and CUDA libraries will be deployed to the server along with the DHS/BDT framework library.
7.8.5.1.1.3 DarkHandler

The DarkHandler class is a subclass of the DHS BaseProcessingComponent class. It also provides an implementation of the methods defined by the IDataHandler interface. Therefore, it can be used by the BaseProcessingComponent as a data handling object. The DarkHandler must also interface with the code on the GPU that implements the dark calibration frame generation algorithm. Therefore, at initialization it uses the jCUDA library to create a GPU context, load references to the kernels from the DarkKernels binary, and initialize the GPU memories.
7.8.5.1.1.3.1 Lifecycle

The DarkHandler object is created when the DHS loads the BaseProcessingComponent representing the Dark DPN. The BaseProcessingComponent knows which class to load by using the value of the .dpnHandlerClass property. It is destroyed when the DHS unloads the BaseProcessingComponent.

7.8.5.1.1.3.2 Member Variables

7.8.5.1.1.3.2.1 Context

The DarkHandler will contain a private member variable of type cuContext (from jcuda.driver) which is used to configure the GPU context. The context consists of settings for host/device synchronization and blocking.

7.8.5.1.1.3.2.2 Device

The DarkHandler will contain a private member variable of type cuDevice (from jcuda.driver) which is used to identify which system GPU should be used when making calls to other jCUDA object methods.

7.8.5.1.1.3.2.3 Module

The DarkHandler will contain a private member variable of type cuModule (from jcuda.driver) which is used as a reference to the loaded DarkKernels binary. This object allows access to references for the kernel functions contained in the DarkKernels binary.

7.8.5.1.1.3.2.4 Functions

The DarkHandler will contain several private member variables of type cuFunction (from jcuda.driver), each acting as a reference to a kernel function in the loaded DarkKernels binary. These objects are then used as parameters to other jCUDA object methods when invoking kernel functions.

7.8.5.1.1.3.2.5 Function Parameters

The DarkHandler will contain several private member variables of type Pointer (from jcuda), each acting as a pointer to a set of parameters for a different kernel function. These function parameters include pointers to device memory and constant values. Therefore, they can be established during initialization and only modified to perform double buffering switches.

7.8.5.1.1.3.3 Methods

In addition to the methods inherited from its parent classes, the following methods are realized by, overridden by, or private to the DarkHandler class.

7.8.5.1.1.3.3.1 onInit

The onInit method is part of the IDataHandler interface and is a hook provided by the DHS to allow the DPN to allocate resources during the initialization process.

7.8.5.1.1.3.3.2 onUnInit

The onUnInit method is part of the IDataHandler interface and is a hook provided by the DHS to allow the DPN to de-allocate resources during the shutdown process.

7.8.5.1.1.3.3.3 process

The process method of the IDataHandler interface will be realized by the DarkHandler and provides the functional behavior required to perform dark calibration frame generation.
7.8.5.1.1.3.4 Load and Processing Channels

Many NVIDIA GPUs provide the capability to perform memory load/unload and kernel execution in parallel. This capability allows memory load/unload latencies to be hidden and therefore improves the overall performance of the application.

Due to the tight performance requirements of the Speckle image reconstruction process, using CUDA streams (referred to here as channels) to hide memory load/unload latency is a good choice. The DarkHandler therefore uses three channels: Load, Unload, and Processing.

The Load Channel is used for all memory operations related to loading the GPU with data. Calls to CUDA API routines such as cudaMalloc, cudaMemcpy (host to device), and cudaMemset will be assigned to this channel. The Unload Channel is used for all memory operations related to unloading data from the GPU. Calls to CUDA API routines such as cudaMemcpy (device to host) will be assigned to this channel. The Processing Channel is used for all GPU kernel executions and will therefore be used for kernel launch calls.
7.8.5.1.1.3.5 Flowcharts

The following flowchart captures the functional steps performed by the process method:

Method: process
1. Receive frame
2. Extract metadata
3. Convert 16 bit to 32 bit
4. Load to GPU
5. Execute GPU kernel
6. Last frame in frame set? If no, return.
7. If yes, unload from GPU and write to output topic and calibration store.
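The accumulate-then-average logic in the flowchart can be sketched on the CPU side as below. The real implementation runs as a GPU kernel; the class name and the final divide-by-N mean are our assumptions about how a mean dark frame is formed.

```java
// CPU-side sketch of dark calibration frame generation as drawn in the flowchart:
// each 16-bit frame is widened to 32 bits, accumulated, and the mean is produced
// after the last frame of the frame set ("Unload from GPU" step).
public class DarkAccumulator {
    private final long[] sum;
    private int framesSeen = 0;

    public DarkAccumulator(int pixels) { sum = new long[pixels]; }

    // "Convert 16 bit to 32 bit" and accumulate one frame.
    public void addFrame(short[] frame16) {
        for (int i = 0; i < frame16.length; i++) {
            sum[i] += frame16[i] & 0xFFFF;  // treat camera samples as unsigned 16-bit
        }
        framesSeen++;
    }

    // After the last frame in the frame set: the mean dark frame.
    public float[] meanDarkFrame() {
        float[] dark = new float[sum.length];
        for (int i = 0; i < sum.length; i++) dark[i] = (float) sum[i] / framesSeen;
        return dark;
    }
}
```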
7.8.5.1.1.4 jCUDA Driver Library

The jCUDA driver library (jcuda.driver) contains classes that provide the bindings to the NVIDIA CUDA driver C library. The DarkHandler uses these classes to create objects that it can then use to configure the GPU hardware, load data to and from it, and launch kernels on it.

7.8.5.1.1.5 NVIDIA CUDA C Libraries

The NVIDIA CUDA C libraries provide an API that allows programs to interact with NVIDIA GPU hardware.

7.8.5.1.1.6 DarkKernels

The DarkKernels component will be a CUDA PTX file that houses all the C kernel functions used in the dark calibration frame processing. PTX is a textual, assembly-like intermediate format that allows the kernel code to be platform/GPU independent. The PTX file can be loaded and compiled into a binary by the CUDA driver at runtime.
7.8.5.1.2 Gain DPN

The Gain DPN is implemented as a DHS data processing component in the VBI Blue camera line. As such, it follows an object oriented design methodology in which the technical architecture is provided by a base class in the DHS, and custom functional behavior is added through extensions to that class.

7.8.5.1.2.1 Class Diagram

The class diagram shown in Figure 86 illustrates the relationships between the DHS framework software elements and the software written to provide the specific functional behavior required of the Gain DPN.

Figure 86: Gain DPN Class Diagram
The Gain DPN is implemented using the BaseProcessingComponent class of the DHS framework. This
class provides all the technical architecture functionality of the DHS such as subscribing to a topic, sub-
topic, or event.
At startup the BaseProcessingComponent instantiates a data handling object that provides an interface as
defined by IDataHandler. When data is received on the main topic, sub-topic, or event, the
BaseProcessingComponent passes it to the data handling object through the methods defined in the
IDataHandler interface. The class it uses to create this data handling object is determined by the value of
the .dpnHandlerClass property.
The data handling functional behavior required by the Gain DPN is provided by the GainHandler class, classes from the jCUDA library, the CUDA interface to the GPU, and the custom Gain GPU kernel software itself. Please refer to the specific section for each class for more details.
7.8.5.1.2.1.1 Deployment Diagram

Figure 87 below shows the deployment diagram for the components of the Gain DPN.

Figure 87: Gain DPN Deployment Diagram

The Gain DPN components utilize both CPU and GPU resources and will therefore be deployed on a CPU/GPU enabled server. This server will have both 10Gb and 1Gb Ethernet connections to allow for communications with the data and command channel networks respectively. The server will also have 10Gb Ethernet connections to other servers hosting other DPNs in the VBI Blue DPP. The GainHandler, GainKernels binary, jCUDA libraries, and CUDA libraries will be deployed to the server along with the DHS/BDT framework library.
7.8.5.1.2.2 GainHandler

The GainHandler class is a subclass of the DHS BaseProcessingComponent class. It also provides an implementation of the methods defined by the IDataHandler interface. Therefore, it can be used by the BaseProcessingComponent as a data handling object. The GainHandler must also interface with the code on the GPU that implements the gain calibration frame generation algorithm. Therefore, at initialization it uses the jCUDA library to create a GPU context, load references to the kernels from the GainKernels binary, and initialize the GPU memories.
7.8.5.1.2.2.1 Lifecycle

The GainHandler object is created when the DHS loads the BaseProcessingComponent representing the Gain DPN. The BaseProcessingComponent knows which class to load by using the value of the .dpnHandlerClass property. It is destroyed when the DHS unloads the BaseProcessingComponent.

7.8.5.1.2.2.2 Member Variables

7.8.5.1.2.2.2.1 Context

The GainHandler will contain a private member variable of type cuContext (from jcuda.driver) which is used to configure the GPU context. The context consists of settings for host/device synchronization and blocking.

7.8.5.1.2.2.2.2 Device

The GainHandler will contain a private member variable of type cuDevice (from jcuda.driver) which is used to identify which system GPU should be used when making calls to other jCUDA object methods.

7.8.5.1.2.2.2.3 Module

The GainHandler will contain a private member variable of type cuModule (from jcuda.driver) which is used as a reference to the loaded GainKernels binary. This object allows access to references for the kernel functions contained in the GainKernels binary.

7.8.5.1.2.2.2.4 Functions

The GainHandler will contain several private member variables of type cuFunction (from jcuda.driver), each acting as a reference to a kernel function in the loaded GainKernels binary. These objects are then used as parameters to other jCUDA object methods when invoking kernel functions.

7.8.5.1.2.2.2.5 Function Parameters

The GainHandler will contain several private member variables of type Pointer (from jcuda), each acting as a pointer to a set of parameters for a different kernel function. These function parameters include pointers to device memory and constant values. Therefore, they can be established during initialization and only modified to perform double buffering switches.

7.8.5.1.2.2.3 Methods

In addition to the methods inherited from its parent classes, the following methods are realized by, overridden by, or private to the GainHandler class.

7.8.5.1.2.2.3.1 onInit

The onInit method is part of the IDataHandler interface and is a hook provided by the DHS to allow the DPN to allocate resources during the initialization process.

7.8.5.1.2.2.3.2 onUnInit

The onUnInit method is part of the IDataHandler interface and is a hook provided by the DHS to allow the DPN to de-allocate resources during the shutdown process.
7.8.5.1.2.2.3.3 process

The process method of the IDataHandler interface will be realized by the GainHandler and provides the functional behavior required to perform gain calibration frame generation.

7.8.5.1.2.2.3.4 subTopicReceive

The subTopicReceive method of the IDataHandler interface will be realized by the GainHandler and provides the functional behavior required to handle receipt of new dark calibration frames.
7.8.5.1.2.2.4 Load and Processing Channels

Many NVIDIA GPUs provide the capability to perform memory load/unload and kernel execution in parallel. This capability allows memory load/unload latencies to be hidden and therefore improves the overall performance of the application.

Due to the tight performance requirements of the Speckle image reconstruction process, using CUDA streams (referred to here as channels) to hide memory load/unload latency is a good choice. The GainHandler therefore uses three channels: Load, Unload, and Processing.

The Load Channel is used for all memory operations related to loading the GPU with data. Calls to CUDA API routines such as cudaMalloc, cudaMemcpy (host to device), and cudaMemset will be assigned to this channel. The Unload Channel is used for all memory operations related to unloading data from the GPU. Calls to CUDA API routines such as cudaMemcpy (device to host) will be assigned to this channel. The Processing Channel is used for all GPU kernel executions and will therefore be used for kernel launch calls.
7.8.5.1.2.2.5 Flowcharts

The following flowchart captures the functional steps performed by the process method:

Method: process
1. Receive frame
2. Extract metadata
3. Convert 16 bit to 32 bit
4. Load to GPU
5. Execute GPU kernel
6. Last frame in frame set? If no, return.
7. If yes, unload from GPU and write to output topic and calibration store.
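One common formulation of gain-table generation — average the flat-field frames, subtract the dark frame, then normalize to unit mean — can be sketched as below. This is illustrative only; the exact algorithm the Gain DPN runs on the GPU is defined in its own section of this document, and the class name here is ours.

```java
// Sketch of a common gain calibration formulation: mean flat minus dark,
// normalized to unit mean. Illustrative only; the VBI algorithm is specified
// elsewhere in this document.
public class GainBuilder {
    public static float[] gainFrame(float[] meanFlat, float[] dark) {
        float[] gain = new float[meanFlat.length];
        double total = 0.0;
        for (int i = 0; i < gain.length; i++) {
            gain[i] = meanFlat[i] - dark[i];   // remove the dark signal
            total += gain[i];
        }
        float mean = (float) (total / gain.length);
        for (int i = 0; i < gain.length; i++) {
            gain[i] /= mean;                   // normalize to unit mean
        }
        return gain;
    }
}
```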
7.8.5.1.2.3 jCUDA Driver Library

The jCUDA driver library (jcuda.driver) contains classes that provide the bindings to the NVIDIA CUDA driver C library. The GainHandler uses these classes to create objects that it can then use to configure the GPU hardware, load data to and from it, and launch kernels on it.

7.8.5.1.2.4 NVIDIA CUDA C Libraries

The NVIDIA CUDA C libraries provide an API that allows programs to interact with NVIDIA GPU hardware.

7.8.5.1.2.5 GainKernels

The GainKernels component will be a CUDA PTX file that houses all the C kernel functions used in the gain calibration frame processing. PTX is a textual, assembly-like intermediate format that allows the kernel code to be platform/GPU independent. The PTX file can be loaded and compiled into a binary by the CUDA driver at runtime.
7.8.5.1.3 Frame Selection DPN

The Frame Selection DPN is implemented as a DHS data processing component in the VBI Blue camera line. As such, it follows an object oriented design methodology in which the technical architecture is provided by a base class in the DHS, and custom functional behavior is added through extensions to that class.

7.8.5.1.3.1 Class Diagram

The class diagram shown in Figure 88 illustrates the relationships between the DHS framework software elements and the software written to provide the specific functional behavior required of the Frame Selection DPN.

Figure 88: Frame Selection DPN Class Diagram
The Frame Selection DPN is implemented using the BaseProcessingComponent class of the DHS
framework. This class provides all the technical architecture functionality of the DHS such as subscribing
to a topic, sub-topic, or event.
At startup the BaseProcessingComponent instantiates a data handling object that provides an interface as
defined by IDataHandler. When data is received on the main topic, sub-topic, or event, the
BaseProcessingComponent passes it to the data handling object through the methods defined in the
IDataHandler interface. The class it uses to create this data handling object is determined by the value of
the .dpnHandlerClass property.
The data handling functional behavior required by the Frame Selection DPN is provided by the FrameSelectionHandler class, classes from the jCUDA library, the CUDA interface to the GPU, and the custom Frame Selection GPU kernel software itself. Please refer to the specific section for each class for more details.
7.8.5.1.3.1.1 Deployment Diagram

Figure 89 below shows the deployment diagram for the components of the Frame Selection DPN.

Figure 89: Frame Selection DPN Deployment Diagram

The Frame Selection DPN components utilize both CPU and GPU resources and will therefore be deployed on a CPU/GPU enabled server. This server will have both 10Gb and 1Gb Ethernet connections to allow for communications with the data and command channel networks respectively. The server will also have 10Gb Ethernet connections to other servers hosting other DPNs in the VBI Blue DPP. The FrameSelectionHandler, FrameSelectionKernels binary, jCUDA libraries, and CUDA libraries will be deployed to the server along with the DHS/BDT framework library.
7.8.5.1.3.2 FrameSelectionHandler

The FrameSelectionHandler class is a subclass of the DHS BaseProcessingComponent class. It also provides an implementation of the methods defined by the IDataHandler interface. Therefore, it can be used by the BaseProcessingComponent as a data handling object. The FrameSelectionHandler must also interface with the code on the GPU that implements the frame selection algorithms. Therefore, at initialization it uses the jCUDA library to create a GPU context, load references to the kernels from the FrameSelectionKernels binary, and initialize the GPU memories.

7.8.5.1.3.2.1 Lifecycle

The FrameSelectionHandler object is created when the DHS loads the BaseProcessingComponent representing the Frame Selection DPN. The BaseProcessingComponent knows which class to load by using the value of the .dpnHandlerClass property. It is destroyed when the DHS unloads the BaseProcessingComponent.
7.8.5.1.3.2.2 Member Variables

7.8.5.1.3.2.2.1 Context

The FrameSelectionHandler will contain a private member variable of type cuContext (from jcuda.driver) which is used to configure the GPU context. The context consists of settings for host/device synchronization and blocking.

7.8.5.1.3.2.2.2 Device

The FrameSelectionHandler will contain a private member variable of type cuDevice (from jcuda.driver) which is used to identify which system GPU should be used when making calls to other jCUDA object methods.

7.8.5.1.3.2.2.3 Module

The FrameSelectionHandler will contain a private member variable of type cuModule (from jcuda.driver) which is used as a reference to the loaded FrameSelectionKernels binary. This object allows access to references for the kernel functions contained in the FrameSelectionKernels binary.

7.8.5.1.3.2.2.4 Functions

The FrameSelectionHandler will contain several private member variables of type cuFunction (from jcuda.driver), each acting as a reference to a kernel function in the loaded FrameSelectionKernels binary. These objects are then used as parameters to other jCUDA object methods when invoking kernel functions.

7.8.5.1.3.2.2.5 Function Parameters

The FrameSelectionHandler will contain several private member variables of type Pointer (from jcuda), each acting as a pointer to a set of parameters for a different kernel function. These function parameters include pointers to device memory and constant values. Therefore, they can be established during initialization and only modified to perform double buffering switches.

7.8.5.1.3.2.3 Methods

In addition to the methods inherited from its parent classes, the following methods are realized by, overridden by, or private to the FrameSelectionHandler class.

7.8.5.1.3.2.3.1 onInit

The onInit method is part of the IDataHandler interface and is a hook provided by the DHS to allow the DPN to allocate resources during the initialization process.
7.8.5.1.3.2.3.2 onUnInit

The onUnInit method is part of the IDataHandler interface and is a hook provided by the DHS to allow the DPN to de-allocate resources during the shutdown process.

7.8.5.1.3.2.3.3 process

The process method of the IDataHandler interface will be realized by the FrameSelectionHandler and provides the functional behavior required to perform frame selection processing.

7.8.5.1.3.2.3.4 subTopicReceive

The subTopicReceive method of the IDataHandler interface will be realized by the FrameSelectionHandler and provides the functional behavior required to handle receipt of new dark and gain calibration frames.
7.8.5.1.3.2.4 Load and Processing Channels

Many NVIDIA GPUs provide the capability to perform memory load/unload and kernel execution in parallel. This capability allows memory load/unload latencies to be hidden and therefore improves the overall performance of the application.

Due to the tight performance requirements of the Speckle image reconstruction process, using CUDA streams (referred to here as channels) to hide memory load/unload latency is a good choice. The FrameSelectionHandler therefore uses three channels: Load, Unload, and Processing.

The Load Channel is used for all memory operations related to loading the GPU with data. Calls to CUDA API routines such as cudaMalloc, cudaMemcpy (host to device), and cudaMemset will be assigned to this channel. The Unload Channel is used for all memory operations related to unloading data from the GPU. Calls to CUDA API routines such as cudaMemcpy (device to host) will be assigned to this channel. The Processing Channel is used for all GPU kernel executions and will therefore be used for kernel launch calls.
7.8.5.1.3.2.5 Flowcharts

The following flowchart captures the functional steps performed by the process method:

Method: process
1. Receive frame
2. Extract metadata
3. Extract ROI
4. Convert 16 to 32 bit
5. Load to GPU
6. Execute GPU kernels
7. Last frame in frame set? If no, return.
8. If yes, unload from GPU, update metadata, and publish to output topic.

7.8.5.1.3.3 jCUDA Driver Library

The jCUDA driver library (jcuda.driver) contains classes that provide the bindings to the NVIDIA CUDA driver C library. The FrameSelectionHandler uses these classes to create objects that it can then use to configure the GPU hardware, load data to and from it, and launch kernels on it.
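Frame selection typically ranks frames of a set by a sharpness metric computed over the ROI; RMS contrast is one common choice. The sketch below is an illustrative stand-in with our own class and method names; the VBI's actual selection criterion is defined in the algorithm sections of this document.

```java
// Sketch of a frame-selection metric: RMS contrast over the region of interest.
// A higher value indicates a sharper frame. Illustrative only; the VBI's actual
// selection criterion is specified elsewhere in this document.
public class FrameSelector {
    // RMS contrast: standard deviation of the ROI divided by its mean.
    public static double rmsContrast(float[] roi) {
        double mean = 0.0;
        for (float v : roi) mean += v;
        mean /= roi.length;
        double var = 0.0;
        for (float v : roi) var += (v - mean) * (v - mean);
        return Math.sqrt(var / roi.length) / mean;
    }

    // Index of the best frame in a set, per the metric above.
    public static int selectBest(float[][] roiPerFrame) {
        int best = 0;
        for (int i = 1; i < roiPerFrame.length; i++) {
            if (rmsContrast(roiPerFrame[i]) > rmsContrast(roiPerFrame[best])) best = i;
        }
        return best;
    }
}
```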
7.8.5.1.3.4 NVIDIA CUDA C Libraries The NVIDIA CUDA C libraries provide an API that allows programs to interact with NVIDIA GPU
hardware.
7.8.5.1.3.5 FrameSelectionKernels The FrameSelectionKernels component will be a CUDA PTX file that houses all the C kernel functions that are used in the frame selection processing. PTX is a human-readable intermediate representation that allows the kernel code to be platform/GPU independent. The PTX file can be loaded and compiled into a binary by the CUDA driver at runtime.
7.8.5.1.4 Detailed Display DPN The Detailed Display DPN is implemented as a DHS data processing component in the VBI Blue camera
line. As such, it follows an object oriented design methodology in which the technical architecture is
provided by a base class in the DHS, and custom functional behavior is added through extensions to that
class.
7.8.5.1.4.1 Class Diagram The class diagram shown in Figure 90 illustrates the relationships between the DHS framework software
elements and the software written to provide specific functional behavior required of the Detailed Display
DPN.
Figure 90: Detailed Display DPN Class Diagram
The Detailed Display DPN is implemented using the BaseProcessingComponent class of the DHS
framework. This class provides all the technical architecture functionality of the DHS such as subscribing
to a topic, sub-topic, or event.
At startup the BaseProcessingComponent instantiates a data handling object that provides an interface as
defined by IDataHandler. When data is received on the main topic, sub-topic, or event, the
BaseProcessingComponent passes it to the data handling object through the methods defined in the
IDataHandler interface. The class it uses to create this data handling object is determined by the value of
the .dpnHandlerClass property.
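The handler-selection mechanism can be sketched as follows (a Python stand-in for the Java implementation; the `.dpnHandlerClass` property name comes from the text, while the registry and the placeholder behavior are hypothetical):

```python
class IDataHandler:
    """Stand-in for the DHS IDataHandler interface."""
    def process(self, data):
        raise NotImplementedError

class FrameSelectionHandler(IDataHandler):
    def process(self, data):
        return sorted(data, reverse=True)  # placeholder behavior for illustration

# hypothetical registry standing in for the Java class loader
HANDLER_CLASSES = {"FrameSelectionHandler": FrameSelectionHandler}

def create_handler(properties):
    # the BaseProcessingComponent resolves the handler class by name from
    # the .dpnHandlerClass property, then uses it through IDataHandler only
    cls = HANDLER_CLASSES[properties[".dpnHandlerClass"]]
    handler = cls()
    if not isinstance(handler, IDataHandler):
        raise TypeError("handler must implement IDataHandler")
    return handler
```

This keeps the technical architecture (BaseProcessingComponent) completely decoupled from the functional behavior, which is supplied purely by configuration.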
The data handling functional behavior required by the Detailed Display DPN is provided by the
DetailedDisplayHandler class, classes from the jCUDA library, CUDA interface to the GPU, and the
custom Detailed Display GPU kernel software itself. Please refer to the specific section for each class for
more details.
7.8.5.1.4.1.1 Deployment Diagram Figure 91 below shows the deployment diagram for the components of the Detailed Display DPN.
Figure 91: Detailed Display DPN Deployment Diagram
The Detailed Display DPN components utilize both CPU and GPU resources and will therefore be
deployed on a CPU/GPU enabled server. This server will have both 10Gb and 1Gb Ethernet connections
to allow for communications with the data and command channel networks respectively. The server will
also have 10Gb Ethernet connections to other servers hosting other DPNs in the VBI Blue DPP. The
DetailedDisplayHandler, DetailedDisplayKernels binary, jCUDA libraries, and CUDA libraries will be
deployed to the server along with the DHS/BDT framework library.
7.8.5.1.4.2 DetailedDisplayHandler The DetailedDisplayHandler class is a subclass of the DHS BaseProcessingComponent class. It also provides an implementation of the methods defined by the IDataHandler interface. Therefore, it can be used by the BaseProcessingComponent as a data handling object. The DetailedDisplayHandler must also
interface with the code on the GPU that implements the detailed display algorithms. Therefore, at
initialization it uses the jCUDA library to create a GPU context, load references to the kernels from the
DetailedDisplayKernels binary, and initialize the GPU memories.
7.8.5.1.4.2.1 Lifecycle The DetailedDisplayHandler object is created when the DHS loads the BaseProcessingComponent representing the Detailed Display DPN. The BaseProcessingComponent knows which class to load by using the
value of the .dpnHandlerClass property. It is destroyed when the DHS unloads the
BaseProcessingComponent.
7.8.5.1.4.2.2 Member Variables
7.8.5.1.4.2.2.1 Context The DetailedDisplayHandler will contain a private member variable of type cuContext (from
jcuda.driver) which is used to configure the GPU context. The context consists of settings for host/device
synchronization and blocking.
7.8.5.1.4.2.2.2 Device The DetailedDisplayHandler will contain a private member variable of type cuDevice (from jcuda.driver)
which is used to identify which system GPU should be used when making calls to other jCUDA object
methods.
7.8.5.1.4.2.2.3 Module The DetailedDisplayHandler will contain a private member variable of type cuModule (from jcuda.driver)
which is used as a reference to the loaded DetailedDisplayKernels binary. This object allows access to
references for the kernel functions contained in the DetailedDisplayKernels binary.
7.8.5.1.4.2.2.4 Functions The DetailedDisplayHandler will contain several private member variables of type cuFunction (from
jcuda.driver), each acting as a reference to a kernel function in the DetailedDisplayKernels loaded binary.
These objects are then used as parameters to other jCUDA object methods when invoking kernel
functions.
7.8.5.1.4.2.2.5 Function Parameters The DetailedDisplayHandler will contain several private member variables of type Pointer (from jcuda),
each acting as a pointer to a set of parameters for a different kernel function. These function parameters
include pointers to device memory and constant values. Therefore, they can be established during
initialization, and only modified to perform double buffering switches.
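The double-buffering switch described above can be sketched like this (illustrative only; the real code swaps jCUDA Pointer parameters referencing device memory rather than Python lists):

```python
class DoubleBuffer:
    """Two fixed buffers; kernel parameters alternate between them."""
    def __init__(self, size):
        self.buffers = ([0.0] * size, [0.0] * size)
        self.active = 0                       # buffer currently being loaded

    def load_target(self):
        return self.buffers[self.active]      # where new frame data lands

    def process_source(self):
        return self.buffers[1 - self.active]  # what the kernels read

    def swap(self):
        # the only mutation needed per frame: flip which pointer is which
        self.active = 1 - self.active
```

Only the `active` index changes at runtime; the buffers themselves, like the Pointer parameters in the text, are established once at initialization.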
7.8.5.1.4.2.3 Methods In addition to the methods inherited from its parent classes, the following methods are realized,
overwritten by, or private to the DetailedDisplayHandler class.
7.8.5.1.4.2.3.1 onInit The onInit method is part of the IDataHandler interface and is a hook provided by the DHS to allow the
DPN to allocate resources during the initialization process.
7.8.5.1.4.2.3.2 onUnInit The onUnInit method is part of the IDataHandler interface and is a hook provided by the DHS to allow
the DPN to de-allocate resources during the shutdown process.
7.8.5.1.4.2.3.3 process The process method of the IDataHandler interface will be realized by the DetailedDisplayHandler and
provide the functional behavior required to perform frame calibration.
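The calibration formula itself is not spelled out here; a common dark/gain correction, sketched in Python under that assumption (the real work happens per pixel in the GPU kernels), is:

```python
def calibrate_frame(raw, dark, gain):
    # per-pixel: subtract the dark frame, apply the gain (flat-field) frame,
    # promoting 16-bit integer counts to float on the way
    return [float(r - d) * g for r, d, g in zip(raw, dark, gain)]
```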
7.8.5.1.4.2.3.4 subTopicReceive The subTopicReceive method of the IDataHandler interface will be realized by the DetailedDisplayHandler and provide the functional behavior required to handle receipt of new dark and gain calibration frames.
7.8.5.1.4.2.4 Load and Processing Channels Many NVIDIA GPUs provide the capability to perform memory load/unload and kernel execution in
parallel. This capability allows hiding of memory load/unload latencies and therefore improves the
overall performance of the application.
Due to the tight performance requirements of the detailed display process, the use of CUDA streams (referred to in this document as channels) to hide memory load/unload latency is a good choice. Therefore, the DetailedDisplayHandler uses three channels: Load, Unload, and Processing.
The Load Channel is used for all memory operations related to loading the GPU with data. Calls to
CUDA API routines such as cudaMalloc, cudaMemcpy (host to device), and cudaMemset will be
assigned to this channel. The Unload Channel is used for all memory operations related to unloading data
from the GPU. Calls to CUDA API routines such as cudaMemcpy (device to host) will be assigned to
this channel. The Processing Channel is used for all GPU kernel executions and will therefore be used for
CUDA API calls to cudaLaunchKernel.
7.8.5.1.4.2.5 Flowcharts The following flowcharts capture the functional steps performed by specific methods and processes.
Method: process
7.8.5.1.4.3 jCUDA Driver Library The jCUDA driver library (jcuda.driver) contains classes that provide the bindings to the NVIDIA CUDA
driver C library. The DetailedDisplayHandler uses these classes to create objects that it can then use to
configure, load data to/from, and launch kernels on the GPU hardware.
7.8.5.1.4.4 NVIDIA CUDA C Libraries The NVIDIA CUDA C libraries provide an API that allows programs to interact with NVIDIA GPU
hardware.
7.8.5.1.4.5 DetailedDisplayKernels The DetailedDisplayKernels component will be a CUDA PTX file that houses all the C kernel functions that are used in the frame calibration processing. PTX is a human-readable intermediate representation that allows the kernel code to be platform/GPU independent. The PTX file can be loaded and compiled into a binary by the CUDA driver at runtime.
7.8.5.1.5 Speckle Input DPN
The Speckle Input DPN is implemented as a DHS data processing component in the VBI Blue camera
line. As such, it follows an object oriented design methodology in which the technical architecture is
provided by a base class in the DHS, and custom functional behavior is added through extensions to that
class.
7.8.5.1.5.1.1 Class Diagram The class diagram shown in Figure 92 illustrates the relationships between the DHS framework software
elements and the software written to provide specific functional behavior required of the Speckle Input
DPN.
[Flowchart for the Detailed Display process method: Receive Frame, Extract Metadata, Convert 16 bit to 32 bit, Load to GPU, Execute GPU kernels, Unload from GPU, Write to output topic, Return.]
Figure 92: Speckle Input DPN Class Diagram
The Speckle Input DPN is implemented using the BaseProcessingComponent class of the DHS
framework. This class provides all the technical architecture functionality of the DHS such as subscribing
to a topic, sub-topic, or event. The BaseProcessingComponent class contains a data handling object that
implements the IDataHandler interface. When data is received on the main topic, sub-topic, or event, the
BaseProcessingComponent passes it to the data handling object through the methods defined in the
IDataHandler interface. Functionality specific to the Speckle Input DPN is therefore provided by the SpeckleInputHandler class, which implements the methods defined in the IDataHandler interface. Thus when data is received it is handed from the BaseProcessingComponent to the SpeckleInputHandler.
7.8.5.1.5.1.2 Deployment Diagram Figure 93 below shows the deployment diagram for the components of the Speckle Input DPN.
Figure 93: Speckle Input DPN Deployment Diagram
The Speckle Input DPN components do not utilize any GPU hardware and will therefore be deployed on a
CPU only blade of the CRAY CX-1000 server. This “input blade” will have both 10Gb and 1Gb Ethernet
connections to allow for communications with the data and command channel networks respectively. The
input blade will also have 10Gb Ethernet connections to each of the Speckle slave node blades. The
SpeckleInputHandler and Utilities classes will be deployed to the input blade along with the DHS/BDT framework library.
7.8.5.1.5.1.3 SpeckleInputHandler The SpeckleInputHandler is a Java class that implements the IDataHandler interface. Therefore, through
polymorphism an instance of SpeckleInputHandler can be used as an IDataHandler object by the
BaseProcessingComponent.
7.8.5.1.5.1.3.1 Lifecycle The SpeckleInputHandler object is instantiated by the BaseProcessingComponent when loaded by the
DHS. The BaseProcessingComponent knows the name of the class to load by using the value of the
.dpnHandlerClass property.
7.8.5.1.5.1.3.2 Member Variables None defined at this time.
7.8.5.1.5.1.3.3 Methods The SpeckleInputHandler provides implementations of all the methods defined in the IDataHandler
interface.
7.8.5.1.5.1.3.4 Flowcharts The following flowcharts capture the major functional steps performed by the specified method or
processes.
Method: process
7.8.5.1.5.2 Speckle Slave DPN
The Speckle Slave DPN is implemented as a DHS data processing component in the VBI Blue camera
line. As such, it follows an object oriented design methodology in which the technical architecture is
provided by a base class in the DHS, and custom functional behavior is added through extensions to that
class.
7.8.5.1.5.2.1 Class Diagram The class diagram shown in Figure 94 illustrates the relationships between the DHS framework software
elements and the software written to provide specific functional behavior required of the Speckle Slave
DPN.
[Flowchart for the Speckle Input process method: Receive Frame, Extract Metadata; then, performed on each frame: Split into macro-tiles, Add macro-tile meta-data, Publish to slave nodes; Return.]
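The macro-tile split can be sketched minimally as below (assuming non-overlapping square tiles for illustration; the actual tile geometry and any overlap are defined elsewhere in the design):

```python
def split_into_macro_tiles(frame, width, height, tile_size):
    # frame is a flat, row-major list of width*height pixels; each macro-tile
    # carries (x0, y0) origin metadata so slave nodes know where it came from
    tiles = []
    for y0 in range(0, height, tile_size):
        for x0 in range(0, width, tile_size):
            pixels = [frame[(y0 + dy) * width + (x0 + dx)]
                      for dy in range(tile_size) for dx in range(tile_size)]
            tiles.append({"x0": x0, "y0": y0, "pixels": pixels})
    return tiles
```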
Figure 94: Speckle Slave DPN Class Diagram
The Speckle Slave DPN is implemented using the BaseProcessingComponent class of the DHS
framework. This class provides all the technical architecture functionality of the DHS such as subscribing
to a topic, sub-topic, or event.
At startup the BaseProcessingComponent instantiates a data handling object that provides an interface as
defined by IDataHandler. When data is received on the main topic, sub-topic, or event, the
BaseProcessingComponent passes it to the data handling object through the methods defined in the
IDataHandler interface. The class it uses to create this data handling object is determined by the value of
the .dpnHandlerClass property.
The data handling functional behavior required by the Speckle Slave DPN is provided by the
SlaveInputHandler class, classes from the jCUDA library, CUDA interface to the GPU, and the custom
Speckle GPU kernel software itself. Please refer to the specific section for each class for more details.
7.8.5.1.5.2.2 Deployment Diagram Figure 95 below shows the deployment diagram for the components of the Speckle Slave DPN.
Figure 95: Speckle Slave DPN Deployment Diagram
The Speckle Slave DPN components utilize both CPU and GPU resources and will therefore be deployed
on a CPU/GPU blade of the CRAY CX-1000 server. This “slave blade” will have both 10Gb and 1Gb
Ethernet connections to allow for communications with the data and command channel networks
respectively. The slave blade will also have 10Gb Ethernet connections to the blades hosting the Speckle
Input DPN and Speckle Output DPN. The SlaveInputHandler, SpeckleKernels binary, jCUDA libraries,
and CUDA libraries will be deployed to the slave blade along with the DHS/BDT framework library.
7.8.5.1.5.3 SlaveInputHandler The SlaveInputHandler class is a subclass of the DHS BaseProcessingComponent class. It also provides an implementation of the methods defined by the IDataHandler interface. Therefore, it can be used by the
BaseProcessingComponent as a data handling object. The SlaveInputHandler must also interface with the
Speckle code running on the GPU. Therefore, at initialization it uses the jCUDA library to create a GPU
context, load references to the kernels from the SpeckleKernels binary, and initialize the GPU memories.
7.8.5.1.5.3.1 Lifecycle The SlaveInputHandler object is created when the DHS loads the BaseProcessingComponent representing
the Speckle Slave DPN. The BaseProcessingComponent knows which class to load by using the value of
the .dpnHandlerClass property. It is destroyed when the DHS unloads the BaseProcessingComponent.
7.8.5.1.5.3.2 Member Variables
7.8.5.1.5.3.2.1 Context The SlaveInputHandler will contain a private member variable of type cuContext (from jcuda.driver)
which is used to configure the GPU context. The context consists of settings for host/device
synchronization and blocking.
7.8.5.1.5.3.2.2 Device The SlaveInputHandler will contain a private member variable of type cuDevice (from jcuda.driver)
which is used to identify which system GPU should be used when making calls to other jCUDA object
methods.
7.8.5.1.5.3.2.3 Module The SlaveInputHandler will contain a private member variable of type cuModule (from jcuda.driver)
which is used as a reference to the loaded SpeckleKernels binary. This object allows access to references
for the kernel functions contained in the SpeckleKernels binary.
7.8.5.1.5.3.2.4 Functions The SlaveInputHandler will contain several private member variables of type cuFunction (from
jcuda.driver), each acting as a reference to a kernel function in the SpeckleKernels loaded binary. These
objects are then used as parameters to other jCUDA object methods when invoking kernel functions.
7.8.5.1.5.3.2.5 Function Parameters The SlaveInputHandler will contain several private member variables of type Pointer (from jcuda), each
acting as a pointer to a set of parameters for a different kernel function. These function parameters
include pointers to device memory and constant values. Therefore, they can be established during
initialization, and only modified to perform double buffering switches.
7.8.5.1.5.3.3 Methods In addition to the methods inherited from its parent classes, the following methods are realized,
overwritten by, or private to the SlaveInputHandler class.
7.8.5.1.5.3.3.1 onInit The onInit method is part of the IDataHandler interface and is a hook provided by the DHS to allow the
DPN to allocate resources during the initialization process.
7.8.5.1.5.3.3.2 onUnInit The onUnInit method is part of the IDataHandler interface and is a hook provided by the DHS to allow
the DPN to de-allocate resources during the shutdown process.
7.8.5.1.5.3.3.3 process The process method of the IDataHandler interface will be realized by the SlaveInputHandler and provide
the functional behavior required to perform Speckle image reconstruction.
7.8.5.1.5.3.3.4 subTopicReceive The subTopicReceive method of the IDataHandler interface will be realized by the SlaveInputHandler
and provide the functional behavior required to handle gain, dark, and AO matrix subtopic data handling.
7.8.5.1.5.3.3.5 eventNotify The eventNotify method of the IDataHandler interface will be realized by the SlaveInputHandler and
provide the functional behavior required to handle all subscribed events.
7.8.5.1.5.3.3.6 Initialize Bi-spectrum Positions During initialization, the SlaveInputHandler will use information from the properties database to calculate
the bispectrum coordinate positions to use during Speckle image reconstruction. These bispectrum
coordinate positions remain fixed unless the SlaveInputHandler is re-initialized.
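In one dimension the position calculation can be sketched as follows (illustrative only: the real positions are 2-D spatial-frequency vectors, and the cutoff values come from the properties database):

```python
def bispectrum_positions(u_max, freq_limit):
    # each entry is a triple (u, v, u + v); only combinations whose summed
    # frequency stays inside the usable range are kept, and v <= u avoids
    # counting the same pair twice
    positions = []
    for u in range(1, u_max + 1):
        for v in range(1, u + 1):
            if u + v <= freq_limit:
                positions.append((u, v, u + v))
    return positions
```

Because the inputs are fixed configuration values, the resulting triples can be computed once at initialization and loaded to the GPU, exactly as the text describes.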
7.8.5.1.5.3.4 Load and Processing Channels Many NVIDIA GPUs provide the capability to perform memory load/unload and kernel execution in
parallel. This capability allows hiding of memory load/unload latencies and therefore improves the
overall performance of the application.
Due to the tight performance requirements of the Speckle image reconstruction process, the use of CUDA streams (referred to in this document as channels) to hide memory load/unload latency is a good choice. Therefore, the SlaveInputHandler uses three channels: Load, Unload, and Processing.
The Load Channel is used for all memory operations related to loading the GPU with data. Calls to
CUDA API routines such as cudaMalloc, cudaMemcpy (host to device), and cudaMemset will be
assigned to this channel. The Unload Channel is used for all memory operations related to unloading data
from the GPU. Calls to CUDA API routines such as cudaMemcpy (device to host) will be assigned to
this channel. The Processing Channel is used for all GPU kernel executions and will therefore be used for
CUDA API calls to cudaLaunchKernel.
7.8.5.1.5.3.5 Flowcharts The following flowcharts capture the functional steps performed by specific methods and processes.
Method: process
[Flowchart for the process method. Load Channel: Load macro-tile to GPU input buffer. Processing Channel, performed on each tile: Convert int16 to float32, Calibrate, Calculate light level, Frequency Decomposition; calculated for each tile and averaged over all tiles in the macro-tile cube: Average Bispectrum Values, Average Fourier Amplitudes, Average Phase Initialization Values. After the last macro-tile in the cube, performed on a single reduction tile: Normalize Average Phase Initialization Values, Bispectrum Normalization, Recursive Phase Reconstruction, Phase Normalization, Iterative Phase Reconstruction, Phase Normalization, Phase-Amplitude Combine, Reconstruct Spatial Image from Frequency Domain; then Unload from GPU to host output buffer and Transfer macro-tile. Task boxes that align horizontally occur in the same kernel; solid lines indicate flow; dotted lines indicate when data is used.]

Method: subTopicReceive
7.8.5.1.5.4 jCUDA Driver Library The jCUDA driver library (jcuda.driver) contains classes that provide the bindings to the NVIDIA CUDA
driver C library. The SlaveInputHandler uses these classes to create objects that it can then use to
configure, load data to/from, and launch kernels on the GPU hardware.
7.8.5.1.5.5 jCUDA jCufft Library The jCUDA JCufft library (jcuda.jcufft) contains classes that provide the bindings to the NVIDIA
CUDA CUFFT C library. The SlaveInputHandler uses these classes to create objects that it can then use
to configure and launch Fast Fourier Transform (FFT) algorithms on the GPU.
7.8.5.1.5.6 NVIDIA CUDA C Libraries The NVIDIA CUDA C libraries provide an API that allows programs to interact with NVIDIA GPU
hardware.
7.8.5.1.5.7 SpeckleKernels The SpeckleKernels component will be a CUDA PTX file that houses all the C kernel functions that are used in the Speckle image processing. PTX is a human-readable intermediate representation that allows the kernel code to be platform/GPU independent. The PTX file can be loaded and compiled into a binary by the CUDA driver at runtime. The following sections will provide details on GPU data structures required by these kernel
functions as well as inputs, outputs, execution strategy, and processing steps of each kernel function.
7.8.5.1.5.7.1 GPU Data Structures
7.8.5.1.5.7.1.1 Image Fourier Phases The Fourier phase data for the image will be stored in a GPU memory buffer so it can be accessed quickly
by the GPU kernel functions. This buffer is of the type cufftComplex in which each element consists of
two 4-byte float values referred to by .x and .y respectively. The cufftComplex type is used because it is
required for the CUDA FFT library functions. Figure 96 below illustrates how the macro-tile cube (frame
set) image data is represented by the GPU data buffer.
[Flowchart for the subTopicReceive method: Receive Sub-topic, Extract Metadata; if the data is a gain/dark calibration image, Load to Gain/Dark calibration GPU buffer; otherwise, if it is AO matrix data, Load to AO matrix GPU buffer; otherwise the subtopic is unknown; Return.]
Figure 96: Fourier Phase GPU Data Structure
7.8.5.1.5.7.1.2 Image Fourier Amplitudes Fourier amplitudes of each image pixel are averaged across the entire macro-tile cube data set. This
averaging is performed by one of the GPU kernel functions and the results are stored to a buffer in GPU
global memory. Figure 97 below illustrates how Fourier amplitude data is represented by the GPU data
buffer.
Figure 97: Fourier Amplitude GPU Data Structure
7.8.5.1.5.7.1.3 Bi-spectrum positions The bi-spectrum positions are a calculated series of triple indices into the image pixel data. Each set of 3
indices represents the u, v, and uv components of the image that will be used together in the Speckle
image reconstruction algorithms. The positions are calculated during initialization of the
SlaveInputHandler, loaded to the GPU, and remain fixed unless the SlaveInputHandler is re-initialized.
The bi-spectrum positions are stored in an array of unsigned short integers. An unsigned short integer can
be used as an index into the image array because a 16-bit unsigned value is sufficient for indexing into an array of data for a 1024x1024 image. Figure 98 below illustrates how the bi-spectrum position coordinate data
is represented by the GPU data structure.
Figure 98: Bi-spectrum Position Coordinates GPU Data Structure
[Diagram content for Figures 96-98: the Fourier phase buffer is organized as frame set, frame, tile, and pixels, with NX*(NY/2 + 1) cufftComplex elements (two floats each) per tile, NZ tiles per frame, and BATCH frames; the Fourier amplitude buffer holds NX*(NY/2 + 1) float tile pixels per frame across BATCH frames; the bi-spectrum position buffer holds one ushort triple (u, v, uv) per bispectrum position.]
7.8.5.1.5.7.1.4 Bi-spectrum averages The bi-spectrum averaging GPU kernel function calculates the bi-spectrum averages using the bi-
spectrum positions of the image data. The resulting bi-spectrum averages are stored in a buffer in GPU
global memory so they can be used during subsequent steps of the Speckle image reconstruction process.
The GPU buffer is of the type cufftComplex and its size is based on the number of tiles and bi-spectrum
positions. Figure 99 below illustrates how the bi-spectrum average values are represented by the GPU
data buffer.
Figure 99: Bi-spectrum Averages GPU Data Structure
7.8.5.1.5.7.1.5 Bi-spectrum init values The phase reconstruction process requires that initial guesses be made for three Fourier phase values of
each tile. These initial guesses are calculated using strategic spatial frequency positions, stored in a
buffer, and then used later on during phase reconstruction. The buffer is therefore of the type
cufftComplex and has a size equal to 3 times the number of tiles. Figure 100 below illustrates how the bi-
spectrum value initial guesses are represented by the GPU data buffer.
Figure 100: Bi-spectrum Initialization Values GPU Data Structure
7.8.5.1.5.7.1.6 Phase Consistency In order to evaluate the uncertainty of the phase estimation, we need to keep track of the number of
updates performed to the phase of an image pixel. The array can then be used later in the process for
weighting and noise filter purposes. This array is of type float and its size depends on the number of
pixels and tiles. Figure 101 below illustrates how the phase consistency array is represented by the GPU
data array.
[Diagram content for Figures 99 and 100: the bi-spectrum averages buffer holds one cufftComplex value (two floats) per bispectrum position, repeated for the number of tiles per frame; the bi-spectrum initial guess buffer holds 3 cufftComplex values per tile, again repeated for the number of tiles per frame.]
Figure 101: Phase Consistency GPU Data Structure
7.8.5.1.5.7.1.7 Image Fourier Reconstructed Phases The Fourier phase data for the reconstructed image will be stored in a GPU memory buffer so it can be
accessed quickly by the GPU kernel functions. This buffer is of the type cufftComplex in which each
element consists of two 4-byte float values referred to by .x and .y respectively. The cufftComplex type is
used because it is required for the CUDA FFT library functions. Figure 102 below illustrates how the
macro-tile cube (frame set) image data is represented by the GPU data buffer.
Figure 102: Reconstructed Fourier Phase GPU Data Structure
7.8.5.1.5.7.2 GPU (Kernel) Functions The SpeckleKernels includes CUDA kernel functions that implement the algorithms for each processing
step of the Speckle image reconstruction.
7.8.5.1.5.7.2.1 Bi-spectrum Averaging The bi-spectrum averaging kernel performs three computational tasks:
Average Phase Initialization Values
Average Fourier Amplitudes
Average Bi-spectrum Values
These three tasks are grouped together into one kernel because the structure of the kernel’s parallel thread
execution grid is conducive to re-use of pixel data read from GPU global memory.
Inputs
Image Fourier Phase Buffer
Image Fourier Amplitude Buffer
Bi-spectrum Position Buffer
Bi-spectrum Initial Guess Buffer
Bi-spectrum Averages Buffer
Outputs
Image Fourier Amplitude Buffer – Updated with running averages
Bi-spectrum Initial Guess Buffer – Updated with running mean
Bi-spectrum Averages Buffer – Updated with running averages
Execution
The kernel execution grid is structured to achieve pixel level parallelism. Thus each thread works
on one bi-spectrum position set (u, v, uv) and calculates the resulting bi-spectrum value for those
vectors. The running average of the values is calculated as the kernel is launched during
processing of subsequent images in the macro-tile cube. Figure 103 below illustrates how the
kernel execution grid will be structured.
Figure 103: Bi-spectrum Averaging Kernel Execution Grid
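Two pieces of this step can be sketched independently of the GPU: the grid-sizing rule read off Figure 103, and the running-average update each thread applies as successive frames of the macro-tile cube arrive (Python stand-ins for the CUDA code; TPB stands for threads per block):

```python
def grid_dims(num_positions, num_tiles, threads_per_block=1024):
    # grid X covers the bi-spectrum positions in blocks of threads_per_block
    # (rounded up); grid Y gives one row of blocks per tile
    grid_x = (num_positions + threads_per_block - 1) // threads_per_block
    return grid_x, num_tiles

def running_average(avg, count, new_value):
    # incremental mean: no per-frame history needs to be kept in GPU memory
    count += 1
    return avg + (new_value - avg) / count, count
```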
7.8.5.1.5.7.2.2 Bi-spectrum Normalization The bi-spectrum normalization kernel is launched after the bi-spectrum averaging has completed. This
kernel normalizes all the bi-spectrum values using a pixel level parallel approach.
Inputs
Bi-spectrum Initial Guess Buffer
Bi-spectrum Averages Buffer
Outputs
Bi-spectrum Initial Guess Buffer – Normalized
Bi-spectrum Averages Buffer – Normalized
Execution
Since pixel level parallelism is used, each thread in the kernel execution grid works on a single
bi-spectrum value. The structure of the kernel execution grid is the same as that for the bi-
spectrum averaging, which is shown in Figure 103.
7.8.5.1.5.7.2.3 Bi-spectrum Recursive Phase Reconstruction The bi-spectrum recursive phase reconstruction performs phase reconstruction by first seeding the
frequency values using the bi-spectrum initial guesses, and then calculating the complex conjugate (uv)
for all bi-spectrum positions of a tile in order.
Inputs
Image Fourier Reconstructed Phase Buffer
Bi-spectrum Position Buffer
Bi-spectrum Initial Guess Buffer
Bi-spectrum Averages Buffer
Phase Consistency Buffer
Outputs
Image Fourier Reconstructed Phase Buffer – Updated with reconstructed phase data
Phase Consistency Buffer – Updated with visit counts for pixels
Execution
This algorithm requires the kernel to be launched such that each thread works on a single tile, and
performs the calculations for all bi-spectrum positions. Therefore, each block is configured to
contain a number of threads equal to the max warp size (32 for C2050). Figure 104 below
illustrates the execution grid for this kernel.
Figure 104: Bi-spectrum Recursive Phase Reconstruction Kernel Execution Grid
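The recursion rests on the bispectrum identity B(u, v) = F(u) F(v) F*(u+v), so each new phase follows from two already-known phases: arg F(u+v) = arg F(u) + arg F(v) - arg B(u, v). A Python sketch under that assumption (the CUDA kernel operates on cufftComplex buffers rather than dictionaries):

```python
def recursive_phase(bispectrum, positions, seed_phases):
    # seed_phases: unit phasors for the lowest frequencies (the initial
    # guesses); positions must be ordered so the phases at u and v are
    # already known when the phase at u + v is computed
    phases = dict(seed_phases)
    for u, v, uv in positions:
        b = bispectrum[(u, v)]
        p = phases[u] * phases[v] * (b / abs(b)).conjugate()
        phases[uv] = p / abs(p)          # renormalize to unit magnitude
    return phases
```

Feeding it a bispectrum built from known phases recovers those phases, which makes for a convenient sanity check.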
7.8.5.1.5.7.2.4 Bi-spectrum Iterative Phase Reconstruction The bi-spectrum iterative kernel performs iterative phase reconstruction for u, v, and uv using the initial
bi-spectrum guesses and uv outputs from the recursive phase reconstruction. The output from each
iteration is reconstructed phase data that is then used as the reference data for the next iteration.
Inputs
Image Fourier Reconstructed Phase Buffer
Bi-spectrum Position Buffer
Bi-spectrum Initial Guess Buffer
Bi-spectrum Averages Buffer
Phase Consistency Buffer
Outputs
Image Fourier Reconstructed Phase Buffer – Updated with reconstructed phase data
Phase Consistency Buffer – Updated with visit counts for pixels
Execution
The iterative phase reconstruction is one of the most computationally intensive kernels in the
Speckle process. We must therefore utilize as much of the GPU parallel computing power as
possible. To accomplish this we divide the problem into sub-problems, execute the sub-problems
in parallel, and combine the results of each sub-problem at the end.
In this case the sub-problem is a subset of bi-spectrum positions. The output of the sub-problem
will be an entire Image Fourier Reconstructed Phase Buffer. Only those phase values updated
during solving of the sub-problem are non-zero. We then take the output buffers from each sub-
problem and combine them using another parallel addition kernel.
When determining the size of each sub-problem the limiting factor is memory. Because each
sub-problem must output an entire Fourier reconstructed phase buffer, only so many of these
output buffers can exist in GPU memory at one time. On the C2050, the 3GB of global memory
allow only 192 of these buffers to be allocated for each tile. Therefore only 8 threads per block and 24 blocks (8 × 24 = 192) can be used. Figure 105 below illustrates the structure of the kernel
execution grid.
Figure 105: Bi-spectrum Iterative Phase Reconstruction Kernel Execution Grid
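The memory-driven sizing and the final combination step described above can be sketched as follows. This is an illustrative sketch, not the CUDA implementation: the helper names are assumptions, and only the 192-buffer budget and the 8-thread / 24-block split come from the text.

```python
# Sketch of the sub-problem layout and the parallel-addition combine step.
# Names are illustrative; only 192 buffers and the 8 x 24 split are from the text.

def subproblem_layout(max_buffers=192, blocks=24):
    """Split the bi-spectrum positions into one sub-problem per buffer,
    arranged as `blocks` blocks of `max_buffers // blocks` threads each."""
    threads_per_block = max_buffers // blocks
    return threads_per_block, blocks

def combine_partial_buffers(partials):
    """Partial phase buffers are zero except where a sub-problem updated
    them, so an element-wise sum merges all sub-problem results."""
    out = [0.0] * len(partials[0])
    for buf in partials:
        for i, v in enumerate(buf):
            out[i] += v
    return out

threads, blocks = subproblem_layout()   # (8, 24)
merged = combine_partial_buffers([[1.0, 0.0], [0.0, 2.0]])   # [1.0, 2.0]
```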
7.8.5.1.5.7.2.5 Phase Normalization The phase normalization kernel is launched after the recursive phase reconstruction and after each
iteration of the iterative phase reconstruction. This kernel normalizes all the reconstructed Fourier phase
values using a pixel level parallel approach.
Inputs
Image Fourier Reconstructed Phase Buffer
Phase Consistency Buffer
Outputs
Image Fourier Reconstructed Phase Buffer - Normalized
Execution
Since pixel level parallelism is used, each thread in the kernel execution grid works on a single
pixel. Figure 106 below illustrates the structure of the kernel execution grid.
Figure 106: Phase Normalization Kernel Execution Grid
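The per-pixel normalization can be sketched as below. This is an assumption-laden illustration, not the actual kernel: it assumes each pixel holds an accumulated complex phasor and that the phase consistency buffer holds that pixel's visit count.

```python
# Illustrative pixel-level normalization. Assumes (as a sketch) that each
# element is an accumulated complex phasor and visit_counts comes from the
# Phase Consistency Buffer.

def normalize_phase(phase_buf, visit_counts):
    """Each 'thread' handles one pixel: average the accumulated phasor by
    its visit count, leaving unvisited pixels unchanged."""
    out = []
    for phasor, visits in zip(phase_buf, visit_counts):
        out.append(phasor / visits if visits > 0 else phasor)
    return out
```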
7.8.5.1.5.7.2.6 Phase and Amplitude Combination The phase and amplitude combination kernel function is the final step of the Speckle image
reconstruction process. It adds the Fourier amplitude averages to the reconstructed Fourier phase values.
The resulting output is what will be returned to the host.
Inputs
Image Fourier Reconstructed Phase Buffer
Image Fourier Amplitude Averages
Outputs
Image Fourier Reconstructed Phase Buffer – Phase and Amplitude combined
Execution
Since pixel level parallelism is used, each thread in the kernel execution grid works on a single
pixel. The structure of the kernel execution grid is the same as that for the phase normalization,
which is shown in Figure 106.
7.8.5.1.5.8 Speckle Output DPN
The Speckle Output DPN is implemented as a DHS data processing component in the VBI Blue camera
line. As such, it follows an object oriented design methodology in which the technical architecture is
provided by a base class in the DHS, and custom functional behavior is added through extensions to that
class.
7.8.5.1.5.8.1 Class Diagram The class diagram shown in Figure 107 illustrates the relationships between the DHS framework software
elements and the software written to provide specific functional behavior required of the Speckle Output
DPN.
Figure 107: Speckle Output DPN Class Diagram
The Speckle Output DPN is implemented using the BaseProcessingComponent class of the DHS
framework. This class provides all the technical architecture functionality of the DHS such as subscribing
to a topic, sub-topic, or event. The BaseProcessingComponent class contains a data handling object that
implements the IDataHandler interface. When data is received on the main topic, sub-topic, or event, the
BaseProcessingComponent passes it to the data handling object through the methods defined in the
IDataHandler interface. Functionality specific to the Speckle Output DPN is therefore provided by the
MasterOutputHandler class, which implements the methods defined in the IDataHandler interface. Thus, when data is received it is handed from the BaseProcessingComponent to the MasterOutputHandler.
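The delivered classes are Java, but the delegation pattern can be sketched in a few lines of Python. Apart from the class names taken from the text, the method names here are illustrative assumptions.

```python
# Sketch of the DHS delegation pattern: the framework component owns an
# IDataHandler and forwards every received item to it polymorphically.

class IDataHandler:
    def process(self, data):
        raise NotImplementedError

class MasterOutputHandler(IDataHandler):
    def __init__(self):
        self.received = []
    def process(self, data):
        self.received.append(data)   # stand-in for merging/publishing

class BaseProcessingComponent:
    def __init__(self, handler: IDataHandler):
        self.handler = handler       # loaded by name via .dpnHandlerClass
    def on_data(self, data):
        self.handler.process(data)   # polymorphic hand-off
```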
7.8.5.1.5.8.2 Deployment Diagram Figure 108 below shows the deployment diagram for the components of the Speckle Output DPN.
Figure 108: Speckle Output DPN Deployment Diagram
The Speckle Output DPN components utilize only a CPU resource and will therefore be deployed on a
CPU only blade of the CRAY CX-1000 server. This “output blade” will have both 10Gb and 1Gb
Ethernet connections to allow for communications with the data and command channel networks
respectively. The output blade will also have 10Gb Ethernet connections to the blades hosting the
Speckle Slave DPNs. The SpeckleOutputHandler and Utilities components will be deployed to the output
blade along with the DHS/BDT framework library.
7.8.5.1.5.8.3 SpeckleOutputHandler The SpeckleOutputHandler is a Java class that implements the IDataHandler interface. Therefore,
through polymorphism an instance of SpeckleOutputHandler can be used as an IDataHandler object by
the BaseProcessingComponent.
7.8.5.1.5.8.3.1 Lifecycle The SpeckleOutputHandler object is instantiated by the BaseProcessingComponent when loaded by the
DHS. The BaseProcessingComponent knows the name of the class to load by using the value of the
.dpnHandlerClass property.
7.8.5.1.5.8.3.2 Member Variables None defined at this time.
7.8.5.1.5.8.3.3 Methods The SpeckleOutputHandler provides implementations of all the methods defined in the IDataHandler
interface. These implementations provide the functional behavior required of the SpeckleOutputHandler.
7.8.5.1.5.8.3.4 Flowchart The following flowcharts capture the major functional steps performed during certain functions of the
SpeckleOutputHandler.
Method: process
1. Receive frame
2. Extract macro-tile metadata
3. Merge macro-tile into the full frame output buffer
4. Finalize full frame metadata
5. Publish to the distribution node topic
6. Return
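The flow above can be sketched as follows. The buffer layout and the publish hook are illustrative assumptions; only the merge-then-publish sequence comes from the flowchart.

```python
# Sketch of process(): merge each incoming macro-tile into a full-frame
# buffer and publish once all expected tiles have arrived.

def make_frame(w, h):
    return [[0] * w for _ in range(h)]

def process(frame, tile, x0, y0, expected, seen, publish):
    """Merge one macro-tile at (x0, y0); publish when the frame is complete."""
    for dy, row in enumerate(tile):
        for dx, v in enumerate(row):
            frame[y0 + dy][x0 + dx] = v
    seen.add((x0, y0))
    if len(seen) == expected:
        publish(frame)   # deliver to the distribution node topic
```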
7.9 OTHER DELIVERABLES
7.9.1 Documentation
The VBI software systems will be delivered to the ATST community per the defined project schedule. The final design is described in this CDD document and will be presented as part of the CDR. All public interfaces to the VBI software systems have been identified in this CDD and specified through the referenced ICDs; the final design of these public interfaces will also be provided at the CDR. An operations manual will be provided in time for the IT&C project activities.
All source code and software packages will be delivered to the ATST project. Each source code file will
be fully documented and follow a style consistent with the defined ATST best practices. Source code will
be maintained in a CVS repository accessible to all ATST software personnel.
7.9.2 Security
All communications between the VBI software systems, both internally and with other ATST systems,
shall be secure. Security of these communications shall be provided by the ATST network and AURA IT
services.
7.10 SOFTWARE ANALYSIS
7.10.1 Real-time Performance for Time Critical Actions
During preliminary design for the VBI some concerns were identified surrounding the ability of the
Command-Action-Response model of CSF to perform time critical actions. Of particular concern were
scenarios where movement of a mechanism at a precise time and under a tight completion deadline was
required. One such scenario exists for the VBI in which the filter wheel must complete a 90 degree move
in a 200ms window. Performing this move by using the standard Command-Action-Response method
from the Instrument Sequencer script to the Motion Controller at the moment the move must be
performed was found to be unsatisfactory. The primary reason was jitter of up to 200ms observed randomly during communications over the CSF distributed network. It was determined that
Command-Action-Response and CSF were not designed to meet real-time performance requirements, and
thus another method would be required for such scenarios.
In response to our concerns, the ATST software group developed and tested a real-time motion control solution. The gist of the solution was to push the sequencing of moves down into the Delta Tau motion
controller since it runs a real-time OS and thus can provide deterministic behavior. The new solution
requires an array of positions and a digital IO input address to be passed to a motion program on the Delta
Tau. The motion program executing on the Delta Tau will then execute a move to the next position in the
array when a trigger pulse is received at the given digital IO input address. Therefore, it is also the
responsibility of the caller to configure the TRADS system to generate pulses at the desired times, and
connect the output pins of the TRADS board to the digital IO input of the Delta Tau. Figure 109 provides
a block level diagram of the components involved in the real-time solution.
Figure 109: Real-time Move Block Diagram
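The caller's responsibilities described above can be sketched as follows. All function and field names here are illustrative assumptions, not the actual Delta Tau or TRADS interfaces.

```python
# Sketch of real-time move setup: hand the Delta Tau motion program a
# position array plus a digital IO input to watch, then program the TRADS
# card to pulse that input at the desired times. (The TRADS output pin is
# physically wired to the Delta Tau's digital IO input.)

def setup_realtime_move(motion_ctrl, trads, positions, io_input, pulse_times):
    # 1) load the move sequence into the Delta Tau motion program
    motion_ctrl.load_move_sequence(positions=positions, trigger_input=io_input)
    # 2) configure TRADS to generate a pulse at each desired move time
    for t in pulse_times:
        trads.schedule_pulse(at=t)
```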
The ATST software group performed extensive testing on the real-time move solution. Testing was
performed for control of a single motor as well as for control of four motors simultaneously. In summary,
it was found that the latency from when the trigger was generated to when the move started was 1.12 milliseconds, with a very small standard deviation. For more details on the tests performed and the results, please refer to TN-154 Motion Controller Performance.
7.10.2 Synchronization and Timing for VBI Observing Use Cases
The VBI Blue operates in a manner that requires the actions of its camera and mechanisms to be
synchronized. The types of synchronization required are as follows:
Start Time – Ensure mechanisms and camera are synchronized at the initial start time of an observation sequence
Mechanism Triggering – Ensure mechanisms are triggered to move precisely at the end of an exposure
The VBI OCD provides several use cases for observing. In the next few sections we will look at each of
these use cases and explain how the VBI software system will perform them.
7.10.2.1.1.1 Use Case 1: Image bursts at single wavelength continuously In the first use case the VBI is configured to continuously take bursts of 80 frames @ 30fps with 20ms
exposure time at a single wavelength. In this use case, the filter wheel mechanism is moved to the
desired wavelength before the exposures begin, and no other mechanism movement occurs after that
point. Therefore, only start time synchronization is required.
Upon receiving the observing configuration from the ICS, the IC will begin by moving the filter wheel
and focus stages to the position corresponding to desired wavelength. Although a real-time move is not
required in this use case, the IC will use the real-time move method of the motion controller to remain
consistent since other use cases require it. The parameters passed to the real-time move method of the
filter and focus motion controllers will include the demand position and a flag indicating that the move
should be done immediately without a trigger.
Once the filter wheel move has been completed the IC will send a configuration to the camera. The
camera configuration will contain a calculated absolute start time (vcc.global:startMode=atStartTime,
vcc.global:scheduling:initialStartTime=<calculated time>) that is set far enough in the future to allow
for the camera to finish configuring itself. The IC will use the TimeBaseController to obtain the reference
time from the TRADS card and calculate a camera absolute start time that is aligned with the camera
configuration exposure rate (i.e. 30Hz) as well as the camera execution schedule for the configuration
(3s). Again, although an absolute start time is not truly needed in this use case, it will be used to remain
consistent with other use cases. An example configuration that would be sent to the camera is as follows:
.global:referenceT0=2011/01/30:14:00:00.000
.global:startMode=atStartTime
.global:scheduling:initialStartTime=2011/01/30:14:15:00.000
.global:config:executeCount=0
.global:config:table:configID[0]=config1
.global:config:table.count[0]=1
.global:scheduling:table:offset[0]=3.0
.config:ID=config1
.config:exposure:rate:value=30.000
.config:exposure:table:time[0]=20.000
.config:exposure:table:rawFrames[0]=80
.config:frameset:numberOfSets=1
.config.frame.framesPerSet=80
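The start-time calculation described above can be sketched as follows: take the TRADS reference time, add a buffer for camera configuration, then round up to the next instant aligned with the 3 s execution schedule. Because 3 s is an exact multiple of the 1/30 s exposure period (90 frames), schedule alignment also gives exposure-rate alignment. The 5 s configuration buffer is an assumed value for illustration.

```python
import math

# Sketch of the absolute start time calculation (buffer value is assumed).
def aligned_start_time(reference_t, config_buffer_s=5.0, schedule_s=3.0):
    earliest = reference_t + config_buffer_s
    return math.ceil(earliest / schedule_s) * schedule_s

start = aligned_start_time(101.0)   # next 3 s boundary after 106.0 is 108.0
```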
The camera system will use its TRADS board to generate the pulses necessary to drive the camera to take
exposures at the desired rate. Detail on how the VCC utilizes the TRADS interface can be found in the
Camera Systems Software design document. Figure 110 below illustrates the timing of the camera bursts
of 80 frames @ 30Hz with 20ms exposure time occurring every 3 seconds.
Figure 110: Observing Use Case 1 Timing Diagram
7.10.2.1.1.2 Use Case 2: Image bursts at 3 different wavelengths continuously In the second use case the VBI is configured to continuously take bursts of 80 frames @ 30 fps for three
different wavelengths using a different exposure time at each wavelength. After each burst, 333ms is
allotted for moving mechanisms for the next wavelength and re-configuring the camera. Due to the
limited time between bursts, the mechanism moves need to occur precisely at the end of camera exposure.
Therefore, this use case requires both start time and mechanism triggering synchronization.
Upon receiving the observing configuration from the ICS, the IC will begin by moving the filter wheel
and focus stages to their position corresponding to the first wavelength. This is done using the motion
controller’s real-time move interface. The call to the interface will include the sequence of positions
corresponding to all three wavelengths, a flag to indicate a move to the first position should be done
immediately, and the IO address to watch for triggering of subsequent moves.
Once the filter wheel and focus stage moves have completed, the IC will calculate the absolute start time
for the observation. This absolute start time will be used by the camera and TimeBaseController to signal
the start of exposures and mechanism triggering respectively. To calculate an absolute start time the IC
first uses the TimeBaseController to obtain the reference time from TRADS. The IC adds a buffer to this
time to allow for configuration of the camera and TimeBaseController. It will also align the start time
with the camera configuration exposure rate (i.e. 30Hz) as well as the camera execution schedule for the
configurations (3s).
The IC will now build and send a configuration to the camera. This configuration will specify the rate
and exposure time for each wavelength. It will also provide the absolute start time and offset between
each burst (3s). The camera system will use its TRADS board to execute the bursts at the desired offset
and generate the pulses necessary to drive the camera to take exposures at the desired rate. Details on
how the VCC utilizes the TRADS interface can be found in the Camera Systems Software design
document. An example configuration that would be sent to the camera is as follows:
.global:referenceT0=2011/01/30:14:00:00.000
.global:startMode=atStartTime
.global:scheduling:initialStartTime=2011/01/30:14:15:00.000
.global:config:executeCount=0
.global:config:table:configID[0]=config1
.global:config:table.count[0]=1
.global:config:table:configID[1]=config2
.global:config:table.count[1]=1
.global:config:table:configID[2]=config3
.global:config:table.count[2]=1
.global:scheduling:table:offset[0]=3.0
.global:scheduling:table:offset[1]=3.0
.global:scheduling:table:offset[2]=3.0
.config:ID=config1
.config:exposure:rate:value=30.000
.config:exposure:table:time[0]=10.000
.config:exposure:table:rawFrames[0]=80
.config:frameset:numberOfSets=1
.config.frame.framesPerSet=80
.config:ID=config2
.config:exposure:rate:value=30.000
.config:exposure:table:time[0]=20.000
.config:exposure:table:rawFrames[0]=80
.config:frameset:numberOfSets=1
.config.frame.framesPerSet=80
.config:ID=config3
.config:exposure:rate:value=30.000
.config:exposure:table:time[0]=5.000
.config:exposure:table:rawFrames[0]=80
.config:frameset:numberOfSets=1
.config.frame.framesPerSet=80
Finally, the IC will use the TimeBaseController to configure the TSync card to generate trigger pulses for
the mechanism moves. The absolute start time will be used to configure when the trigger pulses should
start being generated. There are four attributes used to configure the trigger pulses generated by the
TSync card. These are the reference time epoch (sync:t0), the pulse rate (sync:rate), the pulse width
(sync:width), and the phase offset (sync:offset). These four attributes control the pulse from one of the
TSync board’s output pins. Since the filter and focus stages move together after each burst, a single
output pin of the TSync card can be used to trigger both stages. This output pin would be configured with
the attributes: sync:t0=<t0>, sync:rate=3s, sync:width=333ms, sync:offset=2.667s.
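The trigger attribute values above follow from the burst timing: the burst occupies 80 frames / 30 Hz ≈ 2.667 s of each 3 s period, so the pulse fires at an offset equal to the burst duration and the mechanism move window is the remaining ~333 ms. A minimal derivation:

```python
# Derivation of the use-case-2 trigger attributes from the burst timing.
frames, rate_hz, period_s = 80, 30.0, 3.0
burst_s = frames / rate_hz        # ~2.667 s of exposures per period
offset_s = burst_s                # sync:offset - pulse at end of burst
width_s = period_s - burst_s      # sync:width  - ~333 ms move window
```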
Figure 111 below illustrates the timing of camera and mechanism triggers for this use case. The camera
performs bursts of 80 frames @ 30Hz, with a different exposure time at each wavelength, occurring every 3 seconds. The triggered
move for the filter/focus stages occurs exactly at the end of each burst.
Figure 111: Observing Use Case 2 Timing Diagram
7.10.2.1.1.3 Use Case 3: Varying image framesets at four wavelengths continuously In the third use case the VBI is configured to continuously take frame sets of varying lengths at four
different wavelengths. Settings for each frame set vary in exposure time and binning. After each frame
set, 333ms is allotted for changing the wavelength and re-configuring the camera. Since the number of
frames in each frame set varies, the cadence of the observation sequence is not fixed. However, the
duration between bursts of a given wavelength from one cycle to the next must be kept constant. Therefore, this use case requires both start time and mechanism triggering synchronization.
In this use case, the time duration will vary between each burst within a cycle. As a result, the triggering
of mechanisms cannot be done using a single fixed period pulse waveform. Instead, four different fixed
pulse waveforms are needed. Rather than using four TSync output pins, a single pin will be configured to
generate a pulse at the start of each cycle. The motion controller will use the receipt of this pulse as a
reference, and use its real-time clock to generate the pulses needed for the mechanisms moves. This
model will be a special case of the motion controller real-time move interface.
Upon receiving the observing configuration from the ICS, the IC will begin by moving the filter wheel
and focus stages to their position corresponding to the first wavelength. This is done using the motion
controller’s real-time move interface. The call to the interface will include the sequence of positions for all four wavelengths, a flag to indicate that the move to the first position should be done immediately, and the IO address to watch for triggering of subsequent moves.
Once the filter wheel and focus stage moves have completed, the IC will calculate the absolute start time
for the observation. This absolute start time will be used by the camera and TimeBaseController to signal
the start of exposures and mechanism triggering respectively. To calculate an absolute start time the IC
first uses the TimeBaseController to obtain the reference time from TRADS. The IC adds a buffer to this
time to allow for configuration of the camera and TimeBaseController. It will also align the start time
with the camera configuration exposure rate (i.e. 30Hz) as well as the camera execution schedule for the
configurations (3s).
The IC will now build and send a configuration to the camera. This configuration will specify the rate
and exposure time for each wavelength. The scheduling information will include the absolute start time,
but will not specify an offset in order to allow the bursts to be taken as fast as possible. The camera
system will use its TRADS board to execute the bursts and generate the pulses necessary to drive the
camera to take exposures at the desired rate. Details on how the VCC utilizes the TRADS interface can
be found in the Camera Systems Software design document. An example configuration that would be
sent to the camera is as follows:
.global:referenceT0=2011/01/30:14:00:00.000
.global:startMode=atStartTime
.global:scheduling:initialStartTime=2011/01/30:14:15:00.000
.global:config:executeCount=0
.global:config:table:configID[0]=config1
.global:config:table.count[0]=1
.global:config:table:configID[1]=config2
.global:config:table.count[1]=1
.global:config:table:configID[2]=config3
.global:config:table.count[2]=1
.global:config:table:configID[3]=config4
.global:config:table.count[3]=1
.config:ID=config1
.config:exposure:rate:value=30.000
.config:exposure:table:time[0]=20.000
.config:exposure:table:rawFrames[0]=10
.config:frameset:numberOfSets=1
.config.frame.framesPerSet=10
.config:ID=config2
.config:exposure:rate:value=30.000
.config:exposure:table:time[0]=10.000
.config:exposure:table:rawFrames[0]=80
.config:frameset:numberOfSets=1
.config.frame.framesPerSet=80
.config:ID=config3
.config:exposure:rate:value=30.000
.config:exposure:table:time[0]=20.000
.config:exposure:table:rawFrames[0]=1
.config:frameset:numberOfSets=1
.config.frame.framesPerSet=1
.config:ID=config4
.config:exposure:rate:value=30.000
.config:exposure:table:time[0]=5.000
.config:exposure:table:rawFrames[0]=80
.config:frameset:numberOfSets=1
.config.frame.framesPerSet=80
Finally, the IC will use the TimeBaseController to configure the TSync card to generate trigger pulses for
the mechanism moves. The absolute start time will be used to configure when the trigger pulses should
start being generated. There are four attributes used to configure the trigger pulses generated by the
TSync card. These are the reference time epoch (sync:t0), the pulse rate (sync:rate), the pulse width
(sync:width), and the phase offset (sync:offset). These four attributes control the pulse from one of the
TSync board’s output pins. Since we wish to generate a pulse at the start of each cycle, a single output
pin of the TSync card can be used. This output pin would be configured with the attributes:
sync:t0=<t0>, sync:rate=7.033s, sync:width=33ms, sync:offset=0s.
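The 7.033 s pulse rate follows from the cycle timing: one cycle is the four frame sets (10, 80, 1, and 80 frames at 30 Hz) plus a ~1/3 s mechanism move after each. A minimal derivation:

```python
# Derivation of the use-case-3 cycle period (sync:rate).
frame_sets = [10, 80, 1, 80]
rate_hz, gap_s = 30.0, 1.0 / 3.0
cycle_s = sum(n / rate_hz for n in frame_sets) + len(frame_sets) * gap_s
# cycle_s = 171/30 + 4/3 = 7.033 s
```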
Figure 112 below illustrates the timing of camera and mechanism triggers for this use case.
Figure 112: Observing Use Case 3 Timing Diagram
7.10.2.1.1.4 Use Case 4: Image bursts at different wavelengths in field sampling mode In the fourth use case the VBI is configured to continuously take bursts of 80 frames @ 30 fps for two
different wavelengths while sampling the entire field. After each burst, 333ms is allotted for moving the
camera x and y stages so that the next field can be imaged. After all the fields have been imaged, the
camera x and y will return to the initial field position while the filter and focus stages are moved to
positions corresponding to the second wavelength. In this use case, all stages are moved to their initial
positions before the exposures begin, and triggered mechanism moves are used after each burst
completes. Therefore, both start time and mechanism triggering synchronization is required.
Upon receiving the observing configuration from the ICS, the IC will begin by moving the filter wheel
and focus stages to their position corresponding to the first wavelength. It will also move the camera x
and y stages to their positions corresponding to the first sub-field. This is done using each stage’s motion
controller real-time move interface. For the filter and focus stages, the call to the interface will include
the sequence of positions corresponding to the wavelengths, a flag to indicate a move to the first position
should be done immediately, and the IO address to watch for triggering of subsequent moves. For the
camera x and y stages, the call to the interface will include the sequence of field sampling positions, a flag
to indicate a move to the first positions should be done immediately, and the IO address to watch for
triggering of subsequent moves.
The TimeBaseController will be used to setup the TSync card of the IC to generate the trigger signals for
the real-time mechanism moves. There are four attributes used to configure the trigger pulses generated
by the TSync card. These are the reference time epoch (sync:t0), the pulse rate (sync:rate), the pulse
width (sync:width), and the phase offset (sync:offset). These four attributes control the pulse from one of
the TSync board’s output pins. Since the camera x and y stages change at the same frequency, a single
output pin of the TSync card can be used to trigger both stages. This output pin would be configured with
the attributes: sync:t0=<t0>, sync:rate=3s, sync:width=333ms, sync:offset=2.667s. Since the filter and
focus stages move together after all sub-fields have been imaged (after 4 bursts), a single output pin of the
TSync card can be used to trigger both stages. This output pin would be configured with the attributes:
sync:t0=<t0>, sync:rate=12s, sync:width=333ms, sync:offset=2.667s.
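The two different pulse rates above follow directly from the field-sampling cadence: the camera x/y stages step after every 3 s burst period, while the filter/focus stages step only after all four sub-fields of a wavelength have been imaged. A minimal derivation:

```python
# Derivation of the two use-case-4 trigger rates.
burst_period_s, sub_fields = 3.0, 4
camera_xy_rate_s = burst_period_s                   # sync:rate=3s
filter_focus_rate_s = burst_period_s * sub_fields   # sync:rate=12s
```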
Once all stages have finished moving to their initial positions the IC will send a configuration to the
camera. The camera configuration will contain a calculated absolute start time
(vcc.global:startMode=atStartTime, vcc.global:scheduling:initialStartTime=<calculated time>) that is
set far enough in the future to allow for the camera to finish configuring itself. The IC will use the
TimeBaseController to obtain the reference time from TRADS. It will then calculate a camera absolute start
time that is aligned with the camera configuration exposure rate (i.e. 30Hz) as well as the camera
execution schedule for the configuration (3s). An example configuration that would be sent to the camera
is as follows:
.global:referenceT0=2011/01/30:14:00:00.000
.global:startMode=atStartTime
.global:scheduling:initialStartTime=2011/01/30:14:15:00.000
.global:config:executeCount=0
.global:config:table:configID[0]=config1
.global:config:table.count[0]=1
.global:config:table:configID[1]=config2
.global:config:table.count[1]=1
.global:scheduling:table:offset[0]=3.0
.global:scheduling:table:offset[1]=3.0
.config:ID=config1
.config:exposure:rate:value=30.000
.config:exposure:table:time[0]=10.000
.config:exposure:table:rawFrames[0]=80
.config:frameset:numberOfSets=1
.config.frame.framesPerSet=80
.config:ID=config2
.config:exposure:rate:value=30.000
.config:exposure:table:time[0]=20.000
.config:exposure:table:rawFrames[0]=80
.config:frameset:numberOfSets=1
.config.frame.framesPerSet=80
The camera system will use its TRADS board to generate the pulses necessary to drive the camera to take
exposures at the desired rate. Details on how the VCC utilizes the TRADS interface can be found in the
Camera Systems Software design document. Figure 113 below illustrates the timing of camera and
mechanism triggers for this use case.
Figure 113: Observing Use Case 4 Timing Diagram
7.10.3 Speckle Image Reconstruction
7.10.3.1 Overview The performance requirement for real-time Speckle image reconstruction is based on the use case of an 80
frame burst at 30 frames per second being taken every 3 seconds. Thus we have a frame being taken every 33ms, and the duration of the 80 frame burst is therefore 2.667s. The remaining 333ms between
the end of a burst and the start of the next is used to switch wavelengths by moving the mechanisms of the
instrument. Therefore, from the perspective of the Speckle image reconstruction plug-in the timeline of
events looks like that shown in Figure 114.
Figure 114: High Level Timeline of Events for Speckle Plug-in
When considering how real-time performance can be achieved it helps to look at the system as a
producer-consumer problem. For our case, the producer would be the camera systems and the consumer
is the Speckle image reconstruction plug-in. Thus it becomes evident that in order for the system to work
correctly, the consumer (Speckle plug-in) must be able to process data at a rate equal to or faster than that
at which the producer (camera systems) is providing data. Thus in order to perform Speckle image
reconstruction in true real time, all processing of burst n must be done before burst n+1 arrives. If this
cannot be accomplished, then the consumer will fall behind and time to delivery (duration from when data
is received to when output is produced) will degrade linearly. Figure 115 illustrates this problem:
Figure 115: Consumer falls behind Producer
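The lag illustrated in the figure can be modeled with one line: if each reconstruction takes a fixed amount longer than the 3 s inter-burst period allows, the backlog grows by that excess on every burst. The 4 s processing time below is the illustrative value from the figure, not a measured number.

```python
# Model of consumer lag: delivery latency for burst n (1-based) when
# processing time exceeds the inter-burst period.
def time_to_delivery(n, burst_period_s=3.0, processing_s=4.0):
    excess = processing_s - burst_period_s
    return processing_s + (n - 1) * excess
```

With a 4 s processing time, burst 1 is delivered 4 s after it arrives and burst 10 is delivered 13 s after, matching the linear degradation described above.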
To better understand what processing has to be done by the consumer in the 3 second time frame, we can
break the Speckle image reconstruction process into three major steps:
Acquisition
Processing (Speckle Image Reconstruction)
o Pre-processing of each frame
Break frame into 128x128 tiles
Convert 16 bit quantization to 32 bit floating point
Calibrate
Fourier Transform real to complex
o Gather statistics over all frames in burst
Light level
Bi-spectrum averaging / Normalization
Averaging of Fourier amplitudes
Averaging of phase initialization values / Normalization
o Reconstruction using all frames in burst
Recursive phase reconstruction / Normalization
Iterative phase reconstruction / Normalization
Phase and Amplitude combination
Fourier Transform complex to real
Distribution
The acquisition step involves receipt of the frames from the Bulk Data Transport (BDT) and delivering
them to the processing step. The processing step will perform some tasks on each individual frame, while
other tasks will require data from all the frames in the burst. The distribution step will deliver the
reconstructed output back to the BDT.
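The first pre-processing steps can be sketched with plain Python lists (the real pipeline runs on the GPU with 4k x 4k frames and 128x128 tiles; the small sizes and the gain/offset calibration here are illustrative assumptions, and the Fourier transform step is omitted).

```python
# Sketch of per-frame pre-processing: tile the frame, convert to float,
# and apply a placeholder calibration.

def tile_frame(frame, tile=2):
    """Break a square frame into tile x tile sub-arrays (128x128 in the
    actual design; 2x2 here to keep the example small)."""
    n = len(frame)
    return [[[row[x:x + tile] for row in frame[y:y + tile]]
             for x in range(0, n, tile)]
            for y in range(0, n, tile)]

def calibrate(tile_data, gain=1.0, offset=0.0):
    """Convert integer counts to float and apply a simple gain/offset."""
    return [[float(v) * gain - offset for v in row] for row in tile_data]
```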
Since the size of each input frame will be 4k x 4k, it is not feasible for a single multi-core CPU to perform
individual frame processing tasks within 33ms, let alone whole burst processing tasks in the remaining
333ms. It has also been shown that processing of a 2k x 2k frame by divide and conquer using MPI and a
cluster of multi-core CPUs cannot be done in 3.0s [Woeger, 2007]. Therefore, based on feasibility studies
performed by an outside consultant (EM Photonics) it was determined that Speckle image reconstruction
could be performed in near-real time using Graphical Processing Unit (GPU) clusters.
7.10.3.2 Acquisition and Delivery to GPU Enabled Processing Nodes GPUs provide the ability to execute computational algorithms in a massively parallel manner. All of the
Speckle image reconstruction processing steps listed above can be written to utilize the parallel execution
model of the GPU. However, even with the processing power of the GPU, it is still necessary to use
multiple GPU processing nodes to achieve near real-time performance. We must therefore look at the
most efficient way to distribute data to many GPU processing nodes to accomplish the processing steps
required for Speckle image reconstruction.
As mentioned before, there are some tasks in the processing step that can be done using data from each
frame individually, and others that must be done using data from all frames in the burst. However, it is
also possible to divide the 4k x 4k input frame into smaller frames, called macro-tiles, so that many GPU
enabled nodes can be used to perform both individual and whole burst processing tasks in parallel. A
macro-tile represents an area of the 4k x 4k input frame much larger than the 128x128 tile size used
during processing. For example, if a macro-tile of size 1k x 1k is used, we can divide a 4k x 4k input
frame into 16 1k x 1k macro-tiles and distribute them for processing on 16 different GPU nodes. Figure
116 illustrates the use of macro-tiles to distribute input images to many GPU enabled processing nodes.
Figure 116: Using Macro-tiles for Speckle Data Distribution
Using the 10Gbit Ethernet network behind the BDT, the master node can transfer macro-tiles to the slave
nodes very quickly. For example, a macro-tile of size 1k x 1k can be transferred from master to slave in
about 27ms (1024x1024x16bit / 10e9 bits/s). Macro-tile sizes will most likely be smaller than 1k x 1k;
however, even at this size, acquisition and distribution to slave nodes can easily be done in less than 33ms.
7.10.3.3 Pipelined Approach to Processing
As we stated before, in order for the system to work correctly, the consumer must be able to process the
inputs at a rate equal to or faster than that at which the producer provides them. Thus with bursts being
produced every 3.0s, the processing budget for completion of the Speckle image reconstruction
processing steps is 3.0 seconds.
Both the “pre-processing” and “gather statistics” steps can be done on individual macro-tiles with some
results being averaged over the entire macro-tile cube. Based on prototype tests we found that it is
possible to complete this processing in the 33ms timeframe allowed for each macro-tile. However, the
“reconstruction” step requires all of the macro-tile cube data to be present. In a true real-time solution
the “reconstruction” step would have to be done in 333ms or less. Based on prototype tests it is clear that
completing reconstruction in 333ms or less is not feasible. Therefore a near real-time approach must be
sought out.
Assuming the GPU is fully utilized for execution of all Speckle image reconstruction processing steps
during the 3.0s, a pipelined approach can be utilized as illustrated in Figure 117.
Figure 117: Pipelined Approach for Speckle Image Reconstruction
In order for the pipelined approach to work effectively, double buffering on the GPU is essential. One
buffer can be used to store incoming macro-tiles from burst n during “acquisition” while the other buffer
is used to process macro-tiles from burst n+1 during “speckle processing”. A key to the success of this
approach is that NVIDIA GPU hardware supports concurrent execution of memory copy and kernel tasks.
This feature allows the input/output buffer to be loaded/unloaded while the GPU is executing a processing
kernel.
With the pipelined model in place, we now must simply ensure that the processing budgets for each stage
of the pipeline are always met. For example, if the acquisition stage were to run longer than 3 seconds, it
would not be able to keep up with the rate at which the camera systems are producing images. However,
it is important to note that each stage should be optimized to take as little time as possible in order to
reduce the overall time to delivery. Figure 118 below illustrates how the pipelined approach will provide
a fixed time to delivery of 9 seconds in the worst case.
[Figure: the three pipeline stages - Acquisition, Speckle Processing, and Re-assembly and Distribution - with input bursts n+1, n, and n-1 in flight simultaneously across a 0-3 s window.]
Figure 118: Pipelined Solution Performance
The pipelined model provides the benefit of allowing a full 3 second processing budget for the speckle
processing to take place while maintaining an overall fixed time to delivery for the Speckle solution. The
main drawback to the pipelined approach is that the output is no longer available in true real-time (time to
delivery = 0), but instead in near real-time at the fixed time to delivery.
7.10.3.4 Prototype Development
As part of the research for the VBI preliminary design document, an outside consultant (EM Photonics)
was employed to do a 2 week study to help determine if near real-time Speckle image reconstruction
could be achieved. The focus of the study was on how this might be achieved using Graphical Processing
Unit (GPU) hardware. GPU hardware enables high performance data processing through the use of a
massively parallel execution model. The results of the study indicated that near-real time performance
was achievable on such hardware, but at a very high cost. The study itself did not include development of
any prototype codes (outside the scope of the contract), but instead relied on statistics from existing
algorithms and the expertise of the consultant to draw its conclusions. It was therefore of interest to the
VBI team to develop a functional prototype as a proof of concept that could strengthen our direction as
part of the final design.
The goal of the prototype was to implement several of the critical algorithms of the Speckle image
reconstruction process and test their performance on GPU hardware. The processing steps that were
implemented are as follows:
Pre-processing of each macro-tile
o Fourier Transform real to complex
Gather statistics over macro-tile cube
o Bi-spectrum averaging / Normalization
o Averaging of Fourier amplitudes
o Averaging of phase initialization values / Normalization
Reconstruction of macro-tile cube
o Recursive phase reconstruction / Normalization
o Iterative phase reconstruction / Normalization
o Phase and Amplitude combination
o Fourier Transform complex to real
Acquisition of data was done by simply reading 32-bit floating point image data from a file. The
reconstructed output image was written to a file for inspection; no re-assembly or distribution
was performed.
The prototype code was written in C, using the CUDA API to interface with NVIDIA GPU hardware.
The GPU hardware that was used for the prototype was the C2050. The specification for the C2050 is as
follows:
Tesla C2050
CUDA Driver Version: 3.20
CUDA Runtime Version: 3.20
CUDA Capability Major/Minor version number: 2.0
Total amount of global memory: 3220897792 bytes
Multiprocessors x Cores/MP = Cores: 14 (MP) x 32 (Cores/MP) = 448 (Cores)
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 32768
Warp size: 32
Maximum number of threads per block: 1024
Maximum sizes of each dimension of a block: 1024 x 1024 x 64
Maximum sizes of each dimension of a grid: 65535 x 65535 x 1
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Clock rate: 1.15 GHz
Concurrent copy and execution: Yes
Run time limit on kernels: No
Integrated: No
Support host page-locked memory mapping: Yes
Compute mode: Default (multiple host threads can use this
device simultaneously)
Concurrent kernel execution: Yes
Device has ECC support enabled: No
Device is using TCC driver mode: No
The results of the prototype performance testing are shown in Table 3 below. All times are given in
milliseconds.
Macro-tile Size   FFT    Bi-spectrum Averaging   Reconstruction   Total     GPUs needed for 4k x 4k
1024x1024         99.1   2266.0                  10715.1          13080.1   16
512x512           17.3    385.9                   5078.1           5481.3   64
256x256            3.2     70.4                   1752.9           1826.5   256

Table 3: Speckle Prototype Performance
Based on these figures, we can derive that the number of GPUs needed to stay within the 3.0 second
processing budget would be about 155. However, the cost of a 155 GPU system is outside the budget
allocated for this project. Therefore we still need to obtain an improvement factor of 3-5 times (30-50
GPUs) through optimization and use of better hardware during implementation.
The primary bottleneck found in the prototype was the iterative phase reconstruction. Our original
prototype version of this kernel required reading and writing of the same Fourier phase values during a
single iteration. Therefore, race conditions are possible as multiple threads can be using the data
concurrently. One solution we looked at involved using atomic operations for these reads and writes.
However use of these atomic operations resulted in very poor performance. Our latest prototype version
avoids the atomic operations by breaking the problem into sub-problems, solving each sub-problem to
produce a separate Fourier phase output vector, and then combining the outputs into a single output
vector. This approach yielded the best performance so far, but use of the many output vectors resulted in
reaching the 3GB memory limit of our prototype system’s GPU and therefore limited the number of sub-
problems we could execute in parallel. We expect that a GPU with 6GB memory will enable a 2 times
speed up of this version. Finally, we are continuing to develop alternate algorithms for this step, and as
we learn more about the capabilities of CUDA and the GPU hardware we expect to find additional
opportunities for optimization.
7.10.3.5 Risk Mitigation
As shown in the previous section, results from our Speckle prototype indicate that a 3 to 5 times
improvement over our prototype is needed. Although we are confident this can be achieved, we can only
know through implementation. Therefore it is important to have a risk mitigation plan should we not be
able to achieve the near real-time solution on hardware that fits our budget.
The risk mitigation plan for the Speckle image reconstruction solution is to achieve real-time
reconstruction for every other burst. This plan would increase the processing budget to 6.0s. Based on
our prototype test results we could achieve this now using a 59 GPU system. Thus we are well within
reach of our goal of 30-50 GPUs.
7.10.4 Using jCUDA to bridge Java to CUDA C libraries
The Data Handling System (DHS) provides a Java-based framework upon which data processing plug-ins
are built. The DHS provides the technical architecture that handles delivery of data and events to
and from the plug-ins. The plug-ins themselves provide the functional behavior specific to meeting their
data processing requirements.
Since the plug-ins are built upon the DHS framework, at the top level they are implemented using the
Java programming language. It is therefore the responsibility of the plug-in developer to interface from
the top level Java code to other libraries that perform desired data processing services. In particular, for
plug-ins that require the use of GPU hardware to perform computationally intensive tasks, an interface to
the lower level C drivers for the GPU is required.
The VBI data processing pipeline includes 5 plug-ins (gain, dark, frame selection, speckle, detailed
display) that require the use of GPU hardware to meet near-real time computational requirements. The
GPUs selected for the project are those from the manufacturer NVIDIA. As part of NVIDIA's Compute
Unified Device Architecture (CUDA) engine, several C libraries are provided that simplify how programs
interface with the GPU. The VBI plug-ins must therefore be able to call these C library functions in order
to perform tasks such as moving data to/from the GPU and executing massively parallel data processing
algorithms. Therefore, special software is needed to bridge the gap between the high level Java code of
the plug-in, and the low level C code of the CUDA libraries.
When researching options to bridge the Java to C gap we considered three options. First, we could ask
the DHS to provide a C/C++ version of the DHS framework. The problem with this approach is that it
would require the DHS to maintain two code bases. Maintenance of two code bases can be time
consuming and problematic, therefore it was decided that this approach would only be considered if no
other acceptable option was available. The second option we considered was to write a separate C++
class for interfacing with the CUDA libraries and have the plug-in make calls to it through the Java
Native Interface (JNI). JNI is a programming framework that allows Java code to call and be called by
native applications (OS/hardware specific programs) and libraries written in other languages such as
C/C++. Although this approach is feasible, it would require significantly more development effort. The
third approach considered was to use jCUDA, which is a set of libraries that provide the binding to the
NVIDIA CUDA C libraries. Therefore, jCUDA provides the JNI interface directly to the CUDA C
libraries, packaging it all into a convenient set of Java classes. With this approach, all of the plug-in
code can be written in Java except for the kernel code, which will still be written in C. The plug-in will
then use the jCUDA API to perform tasks such as moving data to/from the GPU and launching kernels on
the GPU. This was clearly the most attractive choice from the standpoint of development effort.
However, we first had to determine if the performance of the jCUDA library was acceptable compared to
using only C code.
To analyze the use of the jCUDA library versus only C code, we implemented a test program in both Java
and C. Both versions of the code perform the same three tasks:
Copy 15MB of memory from host to device
Perform Fast Fourier Transform on 225 image tiles of size 128x128 each
Execute a kernel
The Java version of the test program utilizes the jCUDA library to execute all three tasks. It is therefore
compiled and linked with the jCUDA libraries using the Java compiler (javac). However, the kernel code
itself is written in C as is required for execution on the GPU. The Java version therefore uses the jCUDA
library to load a binary version of the kernel (.cubin file generated by the nvcc CUDA compiler). The C
version of the test program contains all the code, including the kernel, in one source file. It uses the
CUDA C libraries to execute all three tasks. It is therefore compiled and linked with the CUDA C
libraries using the CUDA compiler (nvcc).
The two versions of the test program were executed on the same hardware/software configuration. The
configuration used was as follows:
AMD Phenom 9950 Quad-Core Processor
Ubuntu 64-bit Linux – kernel version 2.6.35-28
8GB RAM
NVIDIA CUDA library version 4.0
The results of the memory copy test can be seen in Figure 119. This graph clearly shows that the C
version of the test program performs the memory copy about 1ms faster than the Java version. However,
in the worst possible use case a plug-in would have 33ms to handle and load an image to the GPU
memory. Therefore the 1ms difference is not a concern.
Figure 119: jCUDA Memory Copy Performance
The results of the FFT test are shown in Figure 120 below. Again, this graph shows that the C version
of the test program performs the FFT slightly faster than the Java version of the program. However, in
the case of the FFT the difference is only a matter of tens of microseconds, and is therefore not of concern
for the VBI plug-in applications.
Figure 120: jCUDA FFT Performance
The results of the kernel execution are shown in Figure 121 below. Again, this graph shows that the C
version of the test program performs the kernel execution slightly faster than the Java version of the
program. However, in the case of the kernel execution the difference is only a matter of a couple hundred
microseconds, and is therefore not of concern for the VBI plug-in applications.
Figure 121: jCUDA Kernel Execution Performance
8 HAZARD ANALYSIS
Figure 122: Hazard Analysis Chart
9 COST AND SCHEDULE ESTIMATES
9.1 COST ESTIMATE
The ATST budget for the VBI is $1,682,351 – this was the original estimate for both the blue and red
channels of the instrument. Also included was a 39% contingency of $656,117 that is now held by the
project. At the PDR, a budget was shown for the blue channel only of $1,479,579.
The original budget was allocated early in the project, and a 39% contingency was applied because the
bottom-up estimate, particularly the cost estimate, was considered high risk. Further design effort showed
the need for near-real-time image reconstruction, which was not part of the original budget. In addition,
the estimates were not based on a design effort with the rigorous detail, design review and approval
processes, and associated detailed documentation required of an effort that will be successful in the complex
environment of the ATST. All the instrument work packages are struggling with underestimated budgets,
but the project is working to find solutions.
The red channel of the VBI was originally designed with interference filters. These filters either cannot
be manufactured or do not meet the science requirements of the VBI. Alternatives were investigated, but
the only solution identified that enables the VBI to meet all of the science requirements is the addition of
a Fabry-Perot filter. Due to the high cost of the Fabry-Perot filter, the VBI team is awaiting the decision
of the project to either eliminate the red channel, allocate contingency funds to cover the additional cost,
or secure additional funding.
The top level budget presented at PDR was as follows:
VBI-Blue Master Budget as presented at PDR

Final design labor                                   $214,447
Construction phase labor                             $620,146
Speckle code development                             $200,000
Lenses/Mirrors                                        $97,842
Filters                                              $115,748
Mechanical - optics mounts, filter wheel, stages      $26,966
Mechanical - spares                                   $14,960
Benches                                               $12,480
Controls and misc.                                   $176,990
total                                              $1,479,579
After PDR, a more detailed schedule was developed (per PDR committee recommendations) and a
change request was submitted to the project to move the updated budgets and schedule into the project
accounting and earned value systems. The change request is still tied up in discussions involving the red
channel and has not been implemented, so the project reports, including the earned-value-analysis reports
for the VBI, are not tracking properly. We expect this problem to be rectified soon.
The following budgets are compared to the PDR budgets, not the budgets currently held in the project.
For this reason, the actuals had to be extracted from the NSO accounting system and compared to the
PDR budgets - the following reports are based on this exercise and show the changes from the budget
presented at the PDR.
9.1.1 Final Design
The final design effort (in the time between the PDR and the CDR) is shown in the table below.
                     PDR Estimates    Final Design    Final Design    Final Design     Difference
                     as presented     VBI Team        Contractor      Total
Final Design Labor   $214,447         $159,248        $1,950          $161,197.72      -$53,249.29

The number of hours estimated for the final design effort was very close to the number of actual hours
worked, but the project estimates for labor are high.
9.1.2 Construction Phase Labor
                           PDR Estimates    Final Design     Construction     Total           Difference
                           as presented     (cost to date)   Phase            (since PDR)
Construction Phase Labor   $581,437                          $1,079,894       $1,079,894      $498,457
Fabrication Labor          $38,709          $11,776          $22,201          $33,978         -$4,731
Speckle code development   $200,000         $0               $0               $0              -$200,000
total                      $820,146         $11,776          $1,102,095       $1,113,872      $293,726

Fabrication labor was added into this table because it was budgeted for the construction phase. Some of
this fabrication effort was finished during final design for risk mitigation, which directly decreases
construction phase effort.
Construction phase labor:
- Added 0.95 FTE ($154,344) of software development, bringing speckle development in-house.
- Added escalation (as recommended at PDR): $230,852.
- A more detailed schedule was developed, increasing the number of FTE by about 0.85 ($117,991).
- Speckle code development was brought in-house.
Considered in total, the labor estimates for the construction phase increased by $293,726. Most of this
increase ($230,852) can be attributed to the escalation of labor which the PDR committee pointed out as
missing at the PDR.
9.1.3 Construction Phase Non-labor

                                            PDR Estimates   Final Design      Construction      Total          Difference
                                            as presented    (encumbrances     Phase (planned)   (since PDR)
                                                            to date)
Optics - lenses / mirrors                   $97,842         $0                $97,842           $97,842        $0
Filters                                     $115,748        $82,958           $20,000           $102,958       -$12,790
Mechanical - mounts, filter wheel, stages   $26,996         $14,115           $12,851           $26,966        -$30
Controls and misc.                          $166,990        $57,735           $120,524          $178,259       $11,269
Mechanical - spares                         $14,960                           $18,300           $18,300        $3,340
Benches                                     $12,480                           $12,480           $12,480        $0
Thermal                                     $10,000                           $1,102            $1,102         -$8,898
total                                       $445,016        $154,808          $283,099          $437,907       -$7,109
Filters - filters were ordered - the cost was $12,790 lower than original estimates. By working with the
filter vendor, we were able to modify the fabrication specifications to lower the cost. $20,000 remains as
risk mitigation.
Mechanical - The filter wheel has been built. Optical mounts will be ordered or fabricated during
construction.
Controls - The Delta Tau equipment and motion stages were ordered for risk mitigation testing. Test
equipment has also been ordered.
Spares - Spares will be ordered during construction.
Benches - Benches will be ordered during construction.
Thermal - We originally planned for a secondary coolant loop, but we were able to tap into the electrical
rack cooling system at considerable savings.
9.1.4 Construction Phase Totals
         PDR Budget    CDR Budget    Difference
totals   $1,265,162    $1,551,779    $286,617

The current budget for the VBI blue channel has increased by $286,617, mostly due to the addition of
escalation to the labor budgets, but also due to a more detailed labor estimate.
9.1.5 Detailed Budget Items
Details of the labor allocation can be found in the Drawings Appendix.
Blue Filters

Filter            wavelength (nm)    CDR cost
Hβ                486.1              $40,985
Ca II K           393.34             $23,394
G-band            430.5               $9,238
Blue Continuum    450.0               $9,341
total                                $82,958
Optics

Optic                price      tooling    total cost
Field Lens           $3,078     $608       $3,686
Collimator Lens      $6,750     $5,265     $12,015
Image Lens           $12,623    $608       $13,231
Objective Lens       $39,881    $720       $40,601
400mm Flat Mirror    $13,984               $13,984
100mm Flat Mirror    $1,094                $1,094
2nd objective lens   $12,623    $608       $13,231
total                                      $97,842
Controls and Misc.

Camera                                 project provided
Electrical (from electrical sheet)     $7,195
Final Instrument Computer              $3,329
Tools / test equipment                 $100,000
Software / licenses                    $10,000
total                                  $120,524
Spare Parts

Delta Tau Power PMAC                        $4,375
Delta Tau ACC-24E3 PWM amp                  $1,300
Delta Tau ACC-84E serial interface            $657
Delta Tau ACC-36E A/D board                   $740
Delta Tau ACC-65E I/O board                   $525
Delta Tau ACC-R2 UMAC rack                  $1,175
Delta Tau 3U081 single axis 8A amplifier    $1,430
Delta Tau 3U042 dual axis 4A amplifier      $2,630
Parker 404100XRMP linear stage              $2,816
Parker SM232AL-NPSN servo motor               $965
Aerotech S76-149-A slotless motor           $1,085
Copely JSP-090-10                             $296
Asco SD8202G051V                              $306
Total                                      $18,300

Electronics (from Detailed Electrical Design)

Electronics (from BOM - Drawings Appendix)   $3,462
PCB fab                                        $400
Power dist. chassis                          $1,200
Filter wheel cables                            $338
Encoder cables                                 $120
Power cabling                                  $346
Power connectors                               $180
Ethernet cabling                                $46
Switch enclosure                                $63
Terminals                                      $120
Wiring                                         $825
Dist. status cables                             $95
Total                                        $7,195

Thermal Control

                                  part number         cost
Proportioning valve               Asco SD8202G051V    $306
Electronic valve control          Copely JSP-090-10   $296
Coolant lines, valves, fittings   misc.               $500
Camera thermal control system     project provided      $0
total                                               $1,102
9.1.6 Contingency
Budget contingency was calculated based on the techniques in ATST SPEC-0045 and shown in the table
below.
                                                   Technical      Cost           Schedule       Contingency   Baseline       Contingency
                                                   factor   wt    factor   wt    factor   wt        %
Construction phase labor                             4       2      6       1      4       1        18         $1,079,894     $194,381
Speckle code development                             6       4      6       1      2       1        32           $154,334      $49,387
Lenses/Mirrors                                       4       4      4       2      2       1        26            $97,842      $25,439
Filters                                              8       4      3       2      2       1        40            $82,958      $33,183
Mechanical - optics mounts, filter wheel, stages     4       4      3       2      2       1        24            $12,851       $3,084
Benches                                              1       2      1       1      2       1         5            $12,480         $624
Control electronics / spares                         2       2      1       2      2       1         8            $25,495       $2,040
Thermal control                                      2       2      2       2      2       1        10             $1,102         $110
Computers                                            1       2      1       1      2       1         5             $3,329         $166
Rack, electrical distribution                        1       2      1       1      2       1         5             $7,195         $360
Tools, test equipment                                1       2      4       1      2       1         8           $100,000       $8,000
Software / licenses                                  1       2      4       1      2       1         8            $10,000         $800
total                                                                                                          $1,587,480     $317,574
The VBI-blue workpackage recommends that the project set aside $317,574 in contingency funds.
9.2 PROJECT SCHEDULE
The project schedule is too large to fit into this document and remain readable - please refer to the
Drawings Appendix for the VBI project schedule.
The project schedule dependencies (those not shown on the VBI project schedule itself) are listed below.
VBI Fab - Software Development
o OCS - Other Operator UIs
o Mini DHS - DHS - testing
o ICS - base components - test and release
o ICS - standard instrument framework - test and release
VBI Fab - System Test / Verification
o Mini DHS - Data Processing Pipeline - testing
o ICS - Final Construction - OMS Updates
o Camera HW instrument development - Development camera design/software available
VBI Fab - Integrate final camera into instrument
o Camera HW Visible Light Camera Procurement - Visible light cameras received
VBI - Install & Self-Test
o DHS - Contract mgmt complete
o Coude’ Env. Clean room finishes complete
o Camera HW - Design and fabricate camera auxiliary hardware and cabling
o Vent gate automation algorithm
o Telescope available for WFC installation
o Coude lab available for installation
VBI Engineering (alignment & calibration)
o Facility OCS IT complete
o Facility ICS IT complete
o Telescope Engineering #2
VBI Engineering (speckle plug-in, performance test)
o NIRSP engineering phase 1
VBI System Demonstration (project acceptance)
o Telescope engineering #3
o WFC acceptance testing - with VBI
VBI & VISP Testing
o NIRSP acceptance testing
9.3 RISK ASSESSMENT
ATST Spec 37 defines the project approach to risk analysis and mitigation. A brief summary of the
approach is presented below.
Each risk is assessed to determine both the likelihood of occurrence and its seriousness or impact to the
project according to the following table.
Table 4.1. Risk Assessment Table - Rating for Likelihood and Seriousness for each Risk

Likelihood               Seriousness
   Negligible               Negligible
L  Rated as Low          L  Rated as Low
M  Rated as Medium       M  Rated as Medium
H  Rated as High         H  Rated as High
VH Very High             VH Very High
Each risk is then analyzed to assess the degree of effect the risk may have on the project.
Table 4.2 Risk Grade Table - Grade: Combined effect of Likelihood/Seriousness

                          Seriousness
Likelihood     negligible   low   medium   high
negligible         -         -      1       2
low                -         1      2       3
medium             1         2      3       4
high               2         3      4       5
The table does not include the Very High category. When a Very High risk was encountered, a 1 was
added.
The risk grade change column uses the following table to indicate quickly the effect of the last risk
assessment.
Table 4.3. Risk Grade Change Table - Change to Grade Since Last Assessment

New    New risk
—      No change to Grade
↓      Grading decreased
↑      Grading increased
The following guidelines were developed specifically for VBI project risks.
The Cost, Schedule, and Requirement columns are checked to identify the primary type of risk.
The Severity column is quantified according to the following table:
schedule delay < 1 mo. L
schedule delay < 2 mo. M
schedule delay < 4 mo. H
schedule delay < 6 mo. VH
cost overage < $3K
cost overage < $10K L
cost overage < $20K M
cost overage < $50K H
cost overage > $50K VH
Threat to requirements VH
9.3.1 VBI Risk Register

Each row lists the pre-mitigation assessment (Severity, Likelihood, Threat), the risk type (an X in the
cost, schedule, or requirement column), the grade change since the last assessment (per Table 4.3), and
the post-mitigation assessment (Severity, Likelihood, Threat).

#   Risk Item                                                 Pre (Sev Lik Thr)   Type   Chg   Post (Sev Lik Thr)
1   Filter Manufacturability                                  VH  H   6           X X    ↓     VH  L   4
2   Filter cost overrun - slight (20%)                        M   M   3           X      ↓     M   L   2
3   Filter cost overrun - severe (50%)                        H   L   3           X      ↓     H
4   Filter failure                                            VH                  X      −     VH
5   Delivered filter does not meet specs.                     VH  L   1           X      −     VH  L   1
6   Optics Manufacturability                                  VH                  X      −     VH
7   Optics cost overrun                                       L   L   1           X      ↓     L
8   Delivered optics do not meet specs.                       VH                  X      −     VH
9   Parts fabrication - schedule overrun                      L   L   1           X      −     L   L   1
10  Parts fabrication - parts failure                         M                   X      −     M
11  Parts fabrication - filter wheel design is flawed         M   L   2           X      ↓     M
12  Software design - ASI model does not meet spec.           H   VH  6           X      ↓     H
13  Software design - timing problems                         H                   X      −     H
14  Software design - bugs or performance issues              H                   X      −     H
15  Control system - critical problems with new technology    M   L   2           X      −     M   L   2
16  Control system - blown motor                              L   M   2           X      ↓     L   L   1
17  Control system - component failure                        M   M   3           X      ↓     L   M   2
18  Control system - damaged mechanisms                       L   M   2           X      ↓     L   L   1
19  Speckle design - feasibility                              VH  H   6           X      ↓     VH  L   4
20  Speckle design - cost                                     VH  H   6           X      ↓     M   H   4
21  Speckle design - hardware obsolescence                    L   L   1           X      −     L   L   1
22  Speckle design - camera format increase                   M   H   4           X      −     L   VH  1
23  Excessive delivered wavefront error                       VH                  X      −     VH
24  Camera format increase                                    H                   X      −     H
25  Camera pixel size change                                  M   H   4           X      ↓     L   H   3
26  Camera mass increase                                      L                   X      −     L
27  Camera cooling system not compatible with Coudé
    requirements                                              L   L   1           X      −     L   L   1
28  Alignment problems                                        M   L   2           X      ↓     L   L   1
29  Filter wheel speed causes problems with vibration         M   L   2           X      ↓     M
30  Key Person Loss                                           H                   X      ↑     H   M   4
31  Damage during instrument assembly - motor                 L   L   1           X      −     L   L   1
32  Damage during instrument assembly - drive                 L   L   1           X      −     L   L   1
33  Damage during instrument assembly - electronics           L   L   1           X      −     L   L   1
34  Damage during instrument assembly - optics                H   L   3           X      −     H   L   3
35  Damage during instrument assembly - opto-mechanical       L                   X      −     L
36  System test - performance does not match model.           VH  L   1           X      −     VH  L   1
37  Filter coating deterioration                              H                   X      −     H
38  Shipping damage                                           VH  L   1           X      −     VH  L   1
39  Loss during transport                                     VH                  X      −     VH
40  Plane crash or boat sinkage                               VH                  X      −     VH
41  Customs problems                                          VH                  X      −     VH
42  Weather or salt water damage                              VH  L   1           X      −     VH  L   1

Table 4: Risk Register
9.3.2 Risk Description and Mitigation Plan
1-5 The filters are the highest-risk items in the design: they are extremely narrow interference
filters and difficult to manufacture. Requests for quotes were sent to many filter manufacturers, and only BARR
Associates (now Materion Precision) returned a quote on all filters. The VBI team met with Materion and
discussed the challenges and trade-offs in the filter specifications. The meetings were productive and
resulted in a decrease of risk and cost without sacrificing performance.
The VBI team also met with Andover Corporation, who quoted all but the Ca II K filter.
Andover's filters also have the disadvantage of less temperature stability than the Materion filters:
the Materion filters' stability is <0.005 nm/°C, whereas the Andover filters have a temperature stability of
0.015 nm/°C, which would present temperature-control problems for the instrument.
6-8 The optics can be manufactured and have been quoted. There are two conic surfaces in the optics
chain but they are considered low risk because they are not highly aspherical.
9-10 Every attempt has been made to use off-the-shelf mechanical components. Components that must
be custom made have been designed in detail in Solidworks by Scott Gregory who has extensive
experience with these types of designs at the Dunn Solar Telescope. All parts will be fabricated in the
Sunspot machine shop by Ron Long who also has extensive experience fabricating optical mounts and
instrumentation components for the Dunn Solar Telescope. The only mechanical component that will be
contracted outside the project is the filter wheel motor frame which will be furnace brazed.
11 The PDR committee judged the filter wheel to be a higher risk than the VBI team had estimated at PDR. The
wheel is a direct drive, high inertia design, well outside of standard servo design norms. For this reason,
the filter wheel was constructed and tested. The design meets requirements and is well behaved. This
exercise resulted in the design risk dropping to a negligible level.
12 The software design for the VBI at PDR was based on the ATST ASI model. As early as possible
in the development of the ASI model, the VBI team ran tests to validate the latency and
jitter of the model. At that time, the model's performance was not capable of meeting VBI requirements.
The ATST software team developed a new model which has been adopted by the VBI. This model has
been tested and easily meets the VBI timing requirements.
15 The Delta Tau control system is built on new technology that was only recently released.
Delta Tau is still debugging their software but it is maturing rapidly. By the time the VBI is ready for
commissioning, the Delta Tau system will be mature. The VBI team will work closely with Delta Tau to
ensure that the system is stable and bug-free. Testing to date has been successful and the Delta Tau
system has been performing well. Technical support from Delta Tau has been good.
16-18 During operation, failures may occur. The VBI team has mitigated this risk by choosing a
simple and elegant design with low complexity, and will further mitigate it by maintaining a full set of
spares for all motion control and mechanical components that are prone to failure.
19-21 The feasibility of near real-time speckle image reconstruction was a high risk early in the project.
The VBI team commissioned an independent study with an organization specializing in image
reconstruction to determine the feasibility and scope of image reconstruction. The results of this study
were encouraging. Since PDR, the core speckle reconstruction algorithm has been implemented on GPU
architecture and is functional, although further work is needed to improve its speed. Near real-time
speckle reconstruction development is considered low risk.
22 It is possible that the camera format of the VBI will change before commissioning. If this is the
case, the camera will be operated in a 4K X 4K region-of-interest mode so that the speckle reconstruction
and data handling system can accommodate the data stream.
23 The VBI will only be able to achieve diffraction limited images if the wavefront error delivered
by the wavefront correction system meets specifications.
24 If the camera format size increases, the VBI will operate in a 4K X 4K region-of-interest mode so
that the data handling system can accommodate the data stream.
25 If the pixel size of the camera changes, the image doublet lens will need to be replaced. The
remainder of the design will accommodate a pixel size change. The cost of this lens is approximately
$13,000 and is included in the budget. In addition, if the camera pixel size increases to more than
10 µm, the camera stages will need to be replaced with longer stages (the motors and encoders will be
retained) at a cost of $2,800 for each of the two stages. The length of the optical path will also become
longer. This will be taken into consideration when the bench layout of the Coudé Lab is worked out with
the project.
26 It is possible that the final camera will be larger than the current baseline camera. The camera
stages have excess capacity and can accommodate any reasonable camera size.
27 It is possible that the final camera may have a cooling system incompatible with the Coude Lab
cooling systems. If this is the case, a secondary cooling system may be required. It was thought that this
cost would be the responsibility of the VBI workpackage, but the camera workpackage will absorb any
increased cost for cooling issues.
28 The VBI team has ensured that a wavefront sensor and an interferometer will be available during
the alignment phase of the project. A laser tracker will also be available if needed.
29 The vibration of the filter wheel has been tested and is not severe.
30 If any of the VBI team members is lost, the project will incur a schedule delay of several months
until the key person can be replaced. At the PDR, this risk was estimated to be slight, but the PDR
committee disagreed; the risk has been elevated on the risk register for this reason. There is a period of
non-activity in the VBI schedule that could be used to absorb the hiring and training of a replacement for
a key person, but the budget will be impacted due to the extensive training that will be required. The
schedule was also developed considering the present personnel, who are very motivated, knowledgeable,
and highly productive. There is no guarantee that a skilled key person could be replaced with an equally
productive one.
31-33 Configuring brushless motors can be hazardous because they are externally commutated; if
the commutation is configured incorrectly, they will overheat and fail. There is also the danger of burning
out motor drives or electronics due to overloads, surges, or ESD. The motors have been successfully
commutated and all of the control systems are working well.
34 Damage to optics and optical mounts is possible during assembly. Due to budget constraints, it is
not feasible to maintain a set of spare optics for the instrument. Therefore every effort will be made to
ensure that the optics will be handled by careful and experienced personnel.
35 Spare opto-mechanical assemblies will be kept on hand for the more delicate components.
37 Filter coating deterioration is a risk that will be mitigated by choosing filter manufacturers with a
history of stable filter coatings. Barr uses hard coatings that have proven stability.
38 The risk of shipping damage to Maui is significant. To mitigate this risk, a shipping plan has
been detailed.
39-40 There is a danger that critical components will be lost during transport. The optical elements will
be hand carried, and a loss of any of these will result in project delay until new components can be
manufactured. With the exception of the custom opto-mechanical components, all other parts can be
repurchased. The critical opto-mechanical components will be shipped separately from their spares, and the
spares will only be shipped once the originals have been received in Maui in good condition. All components
will be shipped well ahead of time to ensure adequate time to deal with any delays.
41 Customs problems are not considered high risk, but every effort will be made to ensure that all
shipping documents are in order. Components will be shipped well ahead of time to ensure adequate time
to deal with any delays.
42 The only components shipped by sea are the optics benches. These will be insured and inspected
upon arrival. If any damage is evident, a claim will be filed and the manufacturer will be responsible for
supplying replacement components or replacing the benches themselves. The benches will be shipped well
ahead of time to allow for this eventuality.
10 CONSTRUCTION PHASE PLANNING
10.1 FABRICATION PLAN
Fabrication of the VBI will take place in the instrumentation development facilities at NSO/SP in
Sunspot, NM. These facilities include a machine shop, anodizing room, electronics shop, assembly and
staging areas, and access to telescopes, light feeds and spectrographs for testing and verification, as
required.
The COTS mechanical and electrical parts used in the VBI design are standard, readily available
components. Purchase orders for these components, as well as the optical components, will originate
from Sunspot.
Parts requiring fabrication will be manufactured in the machine shop and all aluminum parts subsequently
anodized in Sunspot. The only assembly that will require some outside fabrication is the filter wheel
motor housing, which will require furnace brazing of its three individual pieces into one hermetically
sealed unit and then have a black chrome plating applied. Solar Furnaces, Inc. has already reviewed the
design and is one vendor willing to perform the furnace brazing of the housing. Any of several
commercial companies would be able to apply the black chrome plating to the copper housing. All other
mechanical fabrication will take place in Sunspot.
All wiring, assembly and initial testing will take place in Sunspot. Fixed optical element mounts will be
assembled and inspected to assure compliance with the allotted opto-mechanical error budget. Filter
alignment within the filter wheel will be tested and adjusted on a solar-telescope-fed spectrograph.
10.2 QUALITY CONTROL AND QUALITY ASSURANCE PLAN
10.2.1 Definitions
10.2.1.1 Quality Control / Verification
Quality control, also known as verification, is a process used to evaluate whether or not a product,
service, or system complies with the regulations, specifications, or conditions imposed at the start of the development
phase. Another way to think of this process is “Are you building it right?”
10.2.1.2 Quality Assurance / Validation
Quality assurance, also known as validation, is a process used to establish evidence that provides a
high level of confidence that the product, service, or system accomplishes its intended requirements.
Another way to think of this process is “Are you building the right thing?”
10.2.1.3 CCB – Change Control Board
10.2.1.4 CDD – Critical Design Document
10.2.1.5 ICD – Interface Control Document
10.2.2 Quality Control Tasks
The following tasks will be performed as part of the QC process for the VBI. Auditing of these tasks will
be performed by the work package manager and the ATST QC/QA personnel.
10.2.2.1 Requirements Verification
All new and changed requirements of the system will be verified to ensure the following:
- They are directly related to an approved item from the CCB
- They are consistent, feasible, and testable
- They have been appropriately allocated to the correct mechanical, hardware, software, and operational processes
- Those that are related to safety, security, and criticality have been verified by the rigorous processes governing those areas
The requirements verification process will be conducted by formal review. It will require
participation and sign-off by the following team members:
- Work package primary investigator
- Work package manager
- Work package engineer responsible for the change
10.2.2.2 Design Verification
All new and changed design elements will be verified to ensure the following:
- Design is traceable to requirements
- Design provides details describing how requirements will be met
- Design implements safety, security, and other critical requirements correctly, as shown by suitably rigorous methods
The design verification process will be conducted by formal review and requires participation of the
following team members:
- Work package manager
- Work package engineer responsible for the change
- At least one engineering representative from a different ATST work package
10.2.2.3 As-Built Verification
All new and changed components will be verified once construction is complete to ensure the following:
- Applicable standards are being followed
- Applicable best practices are being followed
The as-built verification process will be conducted by an informal review process, such as email. The
review process must be performed and signed off by at least one engineer from a different ATST
work package.
10.2.2.3.1 Software Specific Tasks
The following tasks apply specifically to software source code:
- ATST software coding and commenting standards are met
- ATST software best practices are met per SPEC-0005
10.2.2.4 Documentation Verification
In conjunction with a new or changed component being released, the following documentation must be
provided or updated:
- Detailed design documentation (i.e., CDD), ICDs, etc.
- Performance benchmarks (for performance-critical modules)
- Test documentation (unit, component, integration, and user acceptance)
- Operations document
The documentation verification process will be conducted by informal review, such as email. The
review process must be performed and signed off by the following team members:
- ATST QA/QC representative
- ATST release manager
10.2.3 Quality Assurance Tasks
The following tasks shall be performed as part of the quality assurance process for the VBI. The results
of these tasks will be documented and reviewed as part of the “Document Verification” task of the QC
process.
10.2.3.1 Unit Testing
Unit testing involves the testing of individual units of work to ensure they are fit for use. A new or
changed VBI component shall be unit tested. This testing shall be performed before proceeding to
component-level testing. The tasks that must be completed as part of unit testing are as follows:
- Prepare new and/or changed unit tests and related documentation
- Ensure traceability of new and/or changed tests to requirements
- Execute new, changed, and existing unit tests upon build of the component
- Document unit test results
Unit testing will be performed by the work package engineer responsible for the change or his/her
designee.
10.2.3.2 Component Testing
Component testing involves the testing of the VBI system as a whole to ensure correctness. All new
or changed VBI components that have 1) passed unit testing and 2) will be part of a release shall
undergo VBI component testing. Component testing shall be performed before proceeding to
integration-level testing. The tasks that must be completed as part of component testing are as follows:
- Prepare new and/or changed component tests and related documentation
- Ensure traceability of new and/or changed tests to requirements
- Execute new, changed, and existing component tests
- Document component test results
Component testing will be performed by the work package engineer responsible for the change or
his/her designee.
10.2.3.3 Integration Testing
Integration testing involves testing an intended VBI release to ensure it integrates correctly with all
other ATST systems with which it interfaces. Integration testing shall be performed in a qualified
ATST test environment that uses mechanical, hardware, and software systems equivalent to the
production systems. Integration testing shall be performed before proceeding to user acceptance-level
testing. The tasks that must be completed as part of integration testing are as follows:
- Prepare new and/or changed integration tests and related documentation
- Ensure traceability of new and/or changed tests to requirements
- Coordinate the integration test schedule with the test engineers of interfacing systems
- Execute new, changed, and existing integration tests
- Document integration test results
Integration testing will be performed by the work package engineer responsible for the change and the
designated test engineer for each interfacing ATST system. Results shall be reviewed and signed off
by the following team members before proceeding to user acceptance testing:
- Work package manager
- Work package manager(s) for all systems that interface with the released component
- Work package engineer responsible for the release
- Test engineer for all systems that interface with the released component
10.2.3.3.1 Software Specific Tasks
The following tasks are specific to integration testing for software releases:
- Any software defects (bugs) identified in testing will be logged in the JIRA tracking system
- All test cases impacted by a defect must be re-tested once the defect is resolved
- Any unresolved software defects must be approved by the review team before proceeding to user acceptance testing
- Upon successful completion of integration testing, the software source code will be tagged in CVS to indicate it is part of a release. Test documentation will include a reference to this release number.
10.2.3.4 User Acceptance (Verification) Testing
User acceptance testing involves performing tests for which the user will validate the output of the
system to determine pass/fail status. User acceptance testing shall be performed in a production
environment, or in a qualified test environment that is approved by the user. User acceptance testing
shall be performed before a release can be made operational for production use. The tasks that must
be completed as part of user acceptance testing are as follows:
- User to prepare new and/or changed acceptance tests and related documentation
- Ensure traceability of new and/or changed tests to requirements
- Coordinate the user acceptance test schedule with the production or test environments and systems
- Execute new, changed, and existing user acceptance tests
- Document test results
User acceptance testing will be performed by the user, with the support of the work package engineer
responsible for the release and test engineers from other interfacing systems. Before proceeding to
production, the release must be approved by the following team members:
- Work package user (i.e., owner or primary investigator)
- Work package manager(s) for all systems that interface with the released component
- Work package engineer responsible for the release
- Test engineer for all systems that interface with the released component
- ATST release manager
10.2.3.4.1 Software Specific Tasks
The following tasks are specific to user acceptance testing for software releases:
- Only software source code from the CVS tag that matches the release number identified in the integration test documentation may be used for user acceptance testing
- Any software defects identified during testing will be logged in the JIRA tracking system
- All test cases impacted by a defect must be re-tested once the defect is resolved
- Any unresolved software defects must be approved by the review team before proceeding to production release
- Test documentation should include a reference to the CVS tag for this release
10.3 VERIFICATION TEST PLAN
10.3.1 Unit Tests
10.3.1.1 Hardware
Optics
- Mirrors: interferometer setup
- Lenses: interferometer setup
- Filters: spectrograph at DST
Mechanical
- Filterwheel: repositioning accuracy/repeatability with pinhole of DST
- Camera stage: movement accuracy/repeatability with interferometer setup
- Mirror mounts: position accuracy/repeatability with interferometer setup
Camera
- Test and verification performed by the ATST Camera Project
10.3.1.2 Software
10.3.1.2.1 Engineering/OCS Graphical User Interface
Test procedure: Inspection
Engineering Interface:
- All motorized mechanical elements must be adjustable and functional/operational through both high- and low-level inputs.
- All relevant camera settings must be adjustable and functional/operational through both high- and low-level inputs.
- All Plug-In parameters must be adjustable and functional/operational.
The OCS interface consists of a subset of the engineering interface and will expose only the high-level inputs.
10.3.1.2.2 ICS script functionality
10.3.1.2.2.1 ‘Dark’
Test procedure:
Operational task ‘dark’ is invoked by operator, the TCS moves a dark target slide into the GOS, and
reports ‘ready’. OCS commands VBI via ICS to acquire data.
The VBI ICS Dark script commands the CSS to acquire the commanded amount of data with the
commanded parameters and directs it via BDT through the DHS. Once the operation has finished the
script returns ‘done’.
Tested functionality:
- parameters (camera, mechanical stages, etc.) are set properly by script
- detection of proper target slide ‘dark’
- performance of BDT
10.3.1.2.2.2 ‘Gain’
Test procedure:
Operational task ‘gain’ is invoked by operator, the TCS moves the field stop slide into the GOS, starts
moving randomly over the solar surface near Sun center avoiding active regions (AO DM ‘unflat’?), and
reports ‘ready’. OCS commands VBI via ICS to acquire data. VBI starts exposures and sends data via
BDT (with obsTask header) through the DHS.
The VBI ICS Flat script commands the CSS to acquire the commanded amount of data with the
commanded parameters and directs it via BDT through the DHS. Once the operation has finished the
script returns ‘done’.
Tested functionality:
- parameters (camera, mechanical stages, etc.) are set properly by script
- performance of BDT
10.3.1.2.2.3 ‘Observe’
Test procedure:
Operational mode ‘observe’ is invoked by operator, the TCS moves a field stop slide into the GOS, and
reports ‘ready’. OCS commands VBI via ICS to acquire data.
The VBI ICS Observe script commands the CSS to acquire the commanded amount of data with the
commanded parameters and directs it via BDT through the DHS. Once the operation has finished the
script returns ‘done’.
Tested functionality:
- parameters (camera, mechanical stages, etc.) are set properly by script
- performance of BDT
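The Dark, Gain, and Observe scripts above share a common flow: set parameters, command the CSS to acquire, route frames via the BDT through the DHS, and return ‘done’. A minimal sketch of that flow in Python; the CSS and BDT objects here are simplified stand-ins, not the actual ATST CSF interfaces:

```python
# Simplified stand-ins for the CSS and BDT interfaces (illustration only).

class StubCSS:
    """Minimal stand-in for the Camera Software System interface."""
    def configure(self, params):
        self.params = params
    def acquire(self, n_frames):
        # Yields placeholder frames; the real CSS delivers camera images.
        return iter(range(n_frames))

class StubBDT:
    """Minimal stand-in for the Bulk Data Transport interface."""
    def __init__(self):
        self.sent = []
    def open_stream(self, header):
        self.header = header
    def send(self, frame):
        self.sent.append(frame)
    def close_stream(self):
        self.closed = True

def run_acquisition_script(css, bdt, obs_task, params, n_frames):
    """Common Dark/Gain/Observe script flow sketched above."""
    css.configure(params)                          # camera, mechanical stages, etc.
    bdt.open_stream(header={"obsTask": obs_task})  # tag the stream for the plug-ins
    for frame in css.acquire(n_frames):
        bdt.send(frame)                            # via BDT through the DHS
    bdt.close_stream()
    return "done"
```

The per-script differences (target slide, pointing behavior) live in the TCS/GOS setup, not in this acquisition loop.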
10.3.1.2.2.4 ‘Focus’
Test procedure:
Operational mode ‘focus’ is invoked by operator, the TCS moves a line grid target slide into the GOS,
and reports ‘ready’. OCS commands VBI via ICS to acquire data.
VBI commands the CSS to start exposures and the CSS sends data via the ICS command channel back to
the ICS. The VBI ICS Focus script evaluates the image quality metric, and moves the camera stage to a
new focus position. These steps are performed about 3-4 times, after which the optimal focus position is
determined, then the script returns ‘done’.
Tested functionality:
- parameters (camera, mechanical stages, etc.) are set properly by script
- detection of proper target slide ‘line grid’
- performance/functionality of sending images to the ICS via command channel
- focus positioning algorithm
- performance: find focus in < 1 min
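The focus-positioning step above (evaluate an image-quality metric, move the camera stage, repeat 3-4 times, pick the optimum) can be sketched as a small parabolic search. The metric and the sample positions below are placeholders, not the VBI definitions:

```python
import numpy as np

def find_best_focus(measure_metric, positions):
    """Sample an image-quality metric at a few camera-stage positions and
    estimate the optimum from a parabola fitted through the samples."""
    metrics = [measure_metric(p) for p in positions]
    a, b, _c = np.polyfit(positions, metrics, 2)  # quadratic fit to the samples
    if a >= 0:                                    # degenerate fit: no interior peak
        return positions[int(np.argmax(metrics))]
    return -b / (2.0 * a)                         # vertex = estimated best focus
```

With an idealized metric peaking at stage position 1.2, `find_best_focus(metric, [0.0, 1.0, 2.0, 3.0])` recovers a position near 1.2.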
10.3.1.2.2.5 ‘Target’
Test procedure:
Operational mode ‘target’ is invoked by operator, the TCS moves a line grid target slide into the GOS,
and reports ‘ready’. OCS commands VBI via ICS to acquire data.
VBI commands the CSS to start exposures and the CSS sends data via the ICS command channel back to
the ICS. The VBI ICS Target script evaluates the image pixelscale, then returns ‘done’.
Tested functionality:
- parameters (camera, mechanical stages, etc.) are set properly by script
- detection of proper target slide ‘line grid’
- performance/functionality of sending images to the ICS via command channel
- image pixelscale determination algorithm
- performance: find image pixelscale in < 1 min
10.3.1.2.2.6 ‘Alignment’
Test procedure:
Operational mode ‘alignment’ is invoked by operator, the TCS moves a pinhole target slide into the GOS,
and reports ‘ready’. OCS commands VBI via ICS to acquire data.
VBI commands the CSS to start exposures and the CSS sends data via the ICS command channel back to
the ICS. The VBI ICS Alignment script determines the centered camera position, then returns ‘done’.
Tested functionality:
- parameters (camera, mechanical stages, etc.) are set properly by script
- detection of proper target slide ‘pinhole’
- performance/functionality of sending images to the ICS via command channel
- camera stage centering algorithm
- performance: find center in < 1 min
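One common way a camera-stage centering algorithm of this kind works is an intensity centroid of the pinhole image; a minimal sketch, with the sign convention and background handling as assumptions for illustration only:

```python
import numpy as np

def pinhole_offset(image):
    """Estimate the pinhole location by intensity centroid and return its
    (dy, dx) offset in pixels from the image centre, i.e. the correction
    a centering step would apply to the camera stage."""
    img = np.asarray(image, dtype=float)
    img = img - img.min()                 # crude background removal
    ys, xs = np.indices(img.shape)
    total = img.sum()
    cy = (ys * img).sum() / total         # intensity-weighted centroid row
    cx = (xs * img).sum() / total         # intensity-weighted centroid column
    return cy - (img.shape[0] - 1) / 2.0, cx - (img.shape[1] - 1) / 2.0
```

The script would iterate: measure the offset, move the stage, and re-measure until the offset is within tolerance.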
10.3.1.2.2.7 ‘WaveCal’
Test procedure:
Operational mode ‘wavecal’ is invoked by operator, the TCS moves a pinhole target slide into the GOS,
and reports ‘ready’. OCS commands VBI via ICS to acquire data.
The VBI performs the same actions as if the Operational mode ‘gain’ was active.
Tested functionality:
See ‘Gain’.
10.3.1.2.2.8 ‘PolCal’
Test procedure:
Operational mode ‘polcal’ is invoked by operator, the TCS moves polarization calibration optics into
the GOS, and reports ‘ready’. OCS commands VBI via ICS to acquire data.
Tested functionality:
ensure NOOP
10.3.1.2.2.9 ‘TelCal’
Test procedure:
Operational mode ‘telcal’ is invoked by operator, the TCS moves telescope polarization calibration optics
into the GOS, and reports ‘ready’. OCS commands VBI via ICS to acquire data.
Tested functionality:
ensure NOOP
10.3.1.2.3 Processing/Detailed Display Plug-Ins
10.3.1.2.3.1 Dark Calibration Image Plug-In
Test procedure:
Operational mode ‘dark’ is invoked by operator, the TCS moves the dark slide into the GOS and reports
‘ready’ when done. OCS commands VBI via ICS to acquire data. VBI starts exposures and sends data via
BDT (with obsMode header) through the DHS.
The Dark Calibration Image Plug-In detects the obsMode header and begins to average images. When the
final image of the BDT frame set has been seen by the Plug-In, the Plug-In finalizes its computation and
stores the result of the averaging process in the Calibration Data Store. In addition, the raw data is
simultaneously transferred to the Data Store.
Tested functionality:
- detection of obsTask
- averaging algorithm/number of images required
- real-time performance
- access/bandwidth to Calibration Data Store
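The averaging step being tested can be sketched as a streaming accumulation over one BDT frame set; this is a simplified illustration, not the plug-in implementation, and the number of images required is a test item above, not fixed here:

```python
import numpy as np

def average_dark_frames(frames):
    """Accumulate each incoming frame of a BDT frame set and finalize the
    average once the last frame has been seen, as the Dark Calibration
    Image Plug-In does before storing the result in the Calibration
    Data Store."""
    acc, count = None, 0
    for frame in frames:                          # frames arrive one at a time
        f = np.asarray(frame, dtype=np.float64)
        acc = f.copy() if acc is None else acc + f
        count += 1
    return acc / count                            # averaged dark calibration image
```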
10.3.1.2.3.2 Gain Calibration Image Plug-In
Test procedure:
Operational mode ‘gain’ is invoked by operator, the TCS moves the field stop slide into the GOS, starts
moving randomly over the solar surface near Sun center avoiding active regions (AO DM ‘unflat’?), and
reports ‘ready’. OCS commands VBI via ICS to acquire data. VBI starts exposures and sends data via
BDT (with obsTask header) through the DHS.
The Gain Calibration Image Plug-In detects the obsTask header, retrieves the latest dark calibration image
from the Calibration Data Store, and begins to average images. When the final image of the BDT frame
set has been seen by the Plug-In, the Plug-In finalizes its computation and stores the result of the
averaging process in the Calibration Data Store. In addition, the raw data is simultaneously transferred to
the Data Store.
Tested functionality:
- detection of obsTask
- averaging algorithm/number of images required
- real-time performance
- access/bandwidth to/from Calibration Data Store
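The gain computation being tested — average the randomly-pointed flat frames, subtract the latest dark calibration image — can be sketched as below. The unit-mean normalization is an assumed convention for illustration; the plug-in's actual definition governs:

```python
import numpy as np

def gain_calibration_image(flat_frames, dark_image):
    """Average the flat frames from the random-pointing sequence, subtract
    the latest dark calibration image, and normalize to unit mean, giving
    the gain image stored in the Calibration Data Store."""
    flats = np.mean([np.asarray(f, dtype=np.float64) for f in flat_frames], axis=0)
    gain = flats - np.asarray(dark_image, dtype=np.float64)
    return gain / gain.mean()                     # unit-mean gain image (assumption)
```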
10.3.1.2.3.3 Frame Selection Plug-In
Test procedure:
Operational mode ‘observe’ is invoked by operator, the TCS moves the field stop slide into the GOS and
the telescope to the requested coordinates on the Sun, and reports ‘ready’. OCS commands VBI via ICS to
acquire data. VBI starts exposures and sends data via BDT (with obsTask header) through the DHS.
If activated, Frame Selection Plug-In detects the obsTask header, retrieves the latest dark and gain
calibration image from the Calibration Data Store (if user requested), and begins to compute the image
metric (user input) within a region of interest (user input). When the final image of the BDT stream has
been seen by the Plug-In, the Plug-In finalizes its computation, finds the index of the best N of M (user
input) images and transmits only those raw images as output through the BDT modifying the BDT header
to reflect the new stream properties.
Tested functionality:
- detection of obsTask
- image metric algorithms
- real-time performance
- access/bandwidth from Calibration Data Store
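The best-N-of-M selection described above can be sketched as follows. RMS contrast is used here as a stand-in metric; the actual metric and region of interest are user inputs to the plug-in, as stated above:

```python
import numpy as np

def select_best_frames(frames, n_best, roi):
    """Score each of the M incoming frames with an image metric evaluated
    inside a region of interest (y0, y1, x0, x1) and keep only the best N,
    preserving acquisition order, as the Frame Selection Plug-In does
    before re-emitting the stream through the BDT."""
    y0, y1, x0, x1 = roi

    def metric(frame):
        sub = np.asarray(frame, dtype=np.float64)[y0:y1, x0:x1]
        return sub.std() / sub.mean()              # RMS contrast in the ROI

    scores = [metric(f) for f in frames]
    best = np.argsort(scores)[::-1][:n_best]       # indices of the N best frames
    return [frames[i] for i in sorted(best)]       # preserve acquisition order
```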
10.3.1.2.3.4 Speckle Image Reconstruction Plug-In
Test procedure:
Operational mode ‘observe’ is invoked by operator, the TCS moves the field stop slide into the GOS and
the telescope to the requested coordinates on the Sun, and reports ‘ready’. OCS commands VBI via ICS to
acquire data. VBI starts exposures and sends data via BDT (with obsTask header) through the DHS.
If activated, Speckle Image Reconstruction Plug-In detects the obsTask header, retrieves the latest dark
and gain calibration image from the Calibration Data Store, and begins computation of the reconstruction.
When the final image of the BDT frame set has been seen by the Plug-In, the Plug-In finalizes its
computation and transmits only the reconstructed image as output through the BDT, modifying the BDT
header to reflect the new stream properties. The reconstruction has to be completed before the next burst
is acquired for reconstruction.
Tested functionality:
- detection of obsTask
- image reconstruction algorithm
- AO data ingestion from AO data pipeline, and its correct evaluation
- (real-time) performance
- access/bandwidth from Calibration Data Store
10.3.1.2.3.5 QAS Detailed Display
Test procedure: Inspection
The Detailed Display Plug-In should be capable of displaying both unprocessed and processed data. This
means it has to be capable of interpreting the modified BDT header information.
10.3.2 General Verification of VBI ISRD requirements
10.3.2.1 Spectral Range
Pointing ATST to the center of the Sun, images of the solar atmosphere are to be acquired at all initial
VBI wavelengths, with the adaptive optic system operational. In addition, dark and flat calibration images
are to be acquired.
Images are to be calibrated using the dark and flat calibration images. Spatially down-scaled, calibrated
images must show the characteristic features of the observed wavelengths; this requires comparison to
data previously observed with other telescopes using similar filter bandwidths.
10.3.2.2 Field of View
Pointing ATST to the center of the Sun, images of the solar atmosphere are to be acquired at all initial
VBI wavelengths, with the adaptive optic system operational. In addition, dark and flat calibration and
line grid target images are to be acquired.
Using the line grid images and the known distance of the grid lines, the pixel scale is to be computed.
This implies that the image scale in the prime focus (arcseconds/mm) is known at this point. The pixel
scale may be verified for photospheric wavelengths using an azimuthally integrated spatial power
spectrum of the calibrated images at Sun center (showing solar granulation) and comparing its peak
position to values from the literature.
The observed Field of View can now be computed by multiplying the pixel scale value with the number
of illuminated pixels.
This procedure is to be repeated for all initial VBI wavelengths.
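The pixel-scale and field-of-view arithmetic described above, with illustrative numbers (not VBI measurements):

```python
def field_of_view_arcsec(grid_spacing_arcsec, grid_spacing_pix, illuminated_pixels):
    """Pixel scale from the known line-grid spacing (the prime-focus image
    scale in arcsec/mm converts the physical grid spacing to arcsec), then
    FOV = pixel scale times the number of illuminated pixels."""
    pixel_scale = grid_spacing_arcsec / grid_spacing_pix   # arcsec per pixel
    return pixel_scale, pixel_scale * illuminated_pixels   # (scale, FOV) in arcsec

# Illustrative numbers only: grid lines 1.0 arcsec apart imaged 62.5 pixels
# apart, with 4096 illuminated pixels across the detector.
scale, fov = field_of_view_arcsec(1.0, 62.5, 4096)
```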
10.3.2.3 Static Aberrations
10.3.2.3.1 Option 1
A wavefront sensor replaces the VBI science camera and measures the static aberrations of the beam path
starting at M3.
This measurement should be performed with the adaptive optics locked on a structure at Sun center.
10.3.2.3.2 Option 2
The light beam from the Sun is split using a beam splitter. While one beam is used as reference beam and
fed through an optical fiber onto the VBI Coude table, the test beam propagates through all optical
elements of the light feed to the VBI. The reference beam will be brought to interference with the test
beam just before the VBI camera using another beam splitter, and the interferogram is measured with the
VBI camera. It will deliver a measurement of the summed static optical aberrations of all elements of the
VBI light feed, with the exception of M1 and M2. Internal seeing effects may have to be mitigated by
temporal averages of the interferograms.
This measurement should be performed with the adaptive optics locked on the light source.
10.3.2.3.3 Option 3
A phase diversity sensor cube in front of the aligned VBI camera and appropriate phase retrieval
algorithms are used to estimate the static aberrations that contribute to the wavefront error in the VBI
optical setup.
Pointing ATST to the center of the Sun, images of the solar atmosphere are to be acquired at all initial
VBI wavelengths. High signal-to-noise ratio images (increased exposure times/summing) should serve as
input to mitigate possible inaccuracies of the phase retrieval algorithms due to noise. Temporal averages
of the results should be computed to reduce the effect of seeing.
This measurement should be performed with the adaptive optics locked on a structure at Sun center.
Note: The PD sensor should ideally be in a telecentric optical setup.
10.3.2.3.4 Option 4
(Multi-frame) blind deconvolution (MFBD) algorithms are used to retrieve the summed wavefront errors
in the VBI optical feed.
Pointing ATST to the center of the Sun, images of the solar atmosphere are to be acquired at all initial
VBI wavelengths. High signal-to-noise ratio images (increased exposure times/summing) should serve as
input to mitigate possible inaccuracies of the phase retrieval algorithms due to noise. Temporal averages
of the results should be computed to reduce the effect of seeing.
This measurement should be performed with the adaptive optics locked on a structure at Sun center.
10.3.2.4 Spatial Sampling
A procedure similar to that described in Section 6.2.2.2 is used.
Relative Photometry
Pointing ATST to the solar limb, images of the solar atmosphere are to be acquired at all initial
photospheric VBI wavelengths with the adaptive optics system operational. In addition, dark and flat
calibration images and line grid target images are to be acquired.
After post-facto processing of the dark- and gain-calibrated images, the scattered/stray-light contribution
can be estimated from the limb edge included in the images: the integrated deviation of the intensity
profile from a Θ-function (an ideal step) serves as an estimate of the scattered/stray light.
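The Θ-function comparison can be sketched numerically as follows. This is a minimal sketch under illustrative assumptions: the profile is normalized so the on-disk level is near 1, and the mean residual off-limb intensity is taken as the stray-light metric:

```python
import numpy as np

def stray_light_fraction(limb_profile):
    """Estimate the scattered/stray-light level from a limb intensity cut.

    limb_profile: 1-D intensity cut across the solar limb, normalized so the
    on-disk level is ~1. For an ideal Θ-function edge, the off-limb level
    would be exactly 0, so the mean residual off-limb intensity serves as a
    simple stray-light metric.
    """
    p = np.asarray(limb_profile, dtype=float)
    edge = int(np.argmax(np.abs(np.diff(p))))  # steepest gradient = limb
    # Take the side of the edge that is off the disk
    off_limb = p[edge + 1:] if p[0] > p[-1] else p[:edge]
    return float(off_limb.mean())
```

A refined analysis would instead integrate the full deviation of the measured profile from the ideal step, but the residual off-limb level already captures the dominant stray-light term.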
10.3.2.5 Synchronization between Channels
Verification of simultaneous camera hardware triggers using an oscilloscope.
10.3.2.6 Multi-Wavelength Cadence
Pointing ATST to the center of the Sun, images of the solar atmosphere are to be acquired at all initial
VBI wavelengths with the adaptive optics system operational. In addition, dark and flat calibration images
are to be acquired.
The VBI is to observe the solar atmosphere by cycling through all wavelengths, acquiring 80 frames at
one wavelength and then moving its filter wheel to the next wavelength position, continuing in this way
for 4 hours to ensure that the required 3.2 s cadence between image bursts can be sustained for one
complete observing day.
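The cadence check can be sketched as a simple timestamp analysis. The function name and the list-of-timestamps interface are assumptions for illustration; in practice the timestamps would come from the frame headers:

```python
def verify_cadence(burst_start_times, required_cadence_s=3.2):
    """Check that consecutive image bursts meet the required cadence.

    burst_start_times: timestamps in seconds of the start of each 80-frame
    burst, in acquisition order. Returns (worst_gap, meets_requirement).
    """
    gaps = [t1 - t0 for t0, t1 in zip(burst_start_times, burst_start_times[1:])]
    worst_gap = max(gaps)
    return worst_gap, worst_gap <= required_cadence_s
```

Run over the full 4-hour data set, this reports the single worst burst-to-burst gap, which is the quantity that must stay within the 3.2 s requirement.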
10.3.2.7 Signal-to-Noise Ratio
Pointing ATST to the center of the Sun, images of the solar atmosphere are to be acquired at all initial
VBI wavelengths with the adaptive optics system operational. In addition, dark and flat calibration images
are to be acquired.
For each wavelength, the dark calibration image and the gain calibration image are computed. The flat
images are subsequently gain corrected. From the gain-corrected flat images, a mean spatial power
spectrum is computed and azimuthally averaged. The high-spatial-frequency tail of this spectrum is a
good estimate of the noise in the images.
A comparison of this value to the azimuthally averaged spatial power spectrum of a single calibrated
image serves as a metric for the signal-to-noise ratio in the image.
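The azimuthally averaged power spectrum and the resulting noise metric can be sketched in NumPy as follows. This is a minimal sketch: the function names and the choice of the top 10% of radial-frequency bins as the "high-frequency tail" are illustrative assumptions:

```python
import numpy as np

def azimuthal_power_spectrum(img):
    """Azimuthally averaged spatial power spectrum of a 2-D image."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    ny, nx = img.shape
    y, x = np.indices(img.shape)
    r = np.hypot(y - ny // 2, x - nx // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)  # mean power per radial-frequency bin

def snr_estimate(img, noise_tail_fraction=0.1):
    """Ratio of mean spectral power to the high-frequency noise floor."""
    spec = azimuthal_power_spectrum(img)
    n_tail = max(1, int(len(spec) * noise_tail_fraction))
    noise_floor = spec[-n_tail:].mean()
    return spec[1:].mean() / noise_floor  # bin 0 (DC) excluded
```

For pure white noise the spectrum is flat and the metric is near 1; a well-exposed solar image has strong low-frequency granulation power and yields a much larger value.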
10.3.3 Science Verification Plan
The science verification plan addresses instrument verification beyond simple unit tests and beyond
verifying that instrument performance matches its requirements. It tests the functionality of the full
system, including all telescope and instrument mechanisms (software and hardware) and their interplay,
from end to end. It does so by running what could be a regular experiment at ATST with the instrument.
As such, the science verification plan should involve the following:

What should be tested:
- Usage performance (requirements should flow from the VBI OCD)
  - operability / usability
- Quality assurance
  - instrument status feedback
  - repeatability / stability
  - delivered algorithms for removal of the instrument signature
- Assurance of the scientific value of the instrument data

How it should be tested (a detailed plan of):
- who performs the test (test driver: RA or instrument scientist, *not* the instrument builders)
- what action is performed
- when it is performed
- using a readily available solar target (quiet Sun, limb, …)
- several days' worth of observational data
- the instrument scientist tests for trends etc. off-mountain
  - looking for spurious trends in data sets
  - distributions of properties (granular sizes, motions, etc.)
- possibly cross-calibration against other data
For the VBI instrument, the following procedure is envisioned. The VBI will observe, at all of its
wavelengths, the solar disc center showing ‘quiet Sun’ granulation, a target that is readily available.
In intervals of increasing duration (1 h, 2 h, 4 h, 8 h), data are acquired using an operational WCS and
seeing conditions that correspond to Fried parameters of ≈ 10 cm.
10.3.3.1.1 Step 1
The operator or ATST instrument scientist uses the VBI Observatory Control System (OCS) user
interface (both graphical and command line) to import/enter the experiment parameters into the ATST
database at least one day prior to execution of the science verification plan. The experiment parameters
include, but are not limited to:
- wavelengths used [all 4]
- number of images to be taken at each wavelength [80 images]
- exposure time at each wavelength
- usage of the speckle imaging plug-in at each wavelength [‘on’ for each wavelength]
- acquisition rate [maximum]
The user interfaces should allow easy and efficient input.
If possible, data with similar settings should be acquired co-spatially and co-temporally at a second
facility. This may require coordinated observations, a separate proposal to the other facility, or at least
a request for pointing information from other facilities.
10.3.3.1.2 Step 2
On the day of the planned execution of the science verification plan, prior to sunrise, the experiment
parameters are loaded and verified for accuracy.
Before the ATST operational mode is changed to ‘Dark’ by the operator, the parameters for dark data
acquisition are verified by the ATST instrument scientist. Once the ATST ‘Dark’ operational mode is
activated by the operator, the VBI should begin data acquisition within a fraction of a second after the
TCS has reported that the PA&C dark target slide is in place, using the previously verified, active ‘Dark’
operational mode parameters. Data acquisition should not last longer than 20 seconds. Raw data should be
visible on the DHS QuickLook Display. There should be a notification about instrument status. The
averaged dark calibration image should be available in the ATST Calibration Store. The raw frames
should be available on the ATST Data Storage System.
Before the ATST operational mode is changed to ‘Align’ by the operator, the parameters for automatic
camera alignment are verified by the ATST instrument scientist. Once the ATST ‘Align’ operational
mode is activated by the operator, the VBI should begin data acquisition within a fraction of a second
after the TCS has reported that the PA&C pinhole target slide is in place, using the previously verified,
active ‘Align’ operational mode parameters. The control loop should perform its alignment algorithms on
the image data fed back to it, and finish with a camera position that aligns the pinhole on the sensor center
within one minute. Raw data should be visible on the DHS QuickLook Display. There should be a
notification about instrument status.
Prior to invocation of the ATST operational mode ‘Focus’ by the operator, the parameters for automatic
camera focusing are verified by the ATST instrument scientist. Once the ATST ‘Focus’ operational mode
is activated by the operator, the VBI should begin data acquisition within a fraction of a second after the
TCS has reported that the PA&C grid or focus target slide is in place, using the previously verified, active
‘Focus’ operational mode parameters. The control loop should perform its focus algorithms on the image
data fed back to it, and finish with a camera position that focuses the focus target on the sensor within one
minute. Raw data should be visible on the DHS QuickLook Display. There should be a notification about
instrument status. As this procedure likely invalidates the gain table, the successful execution of the
‘Focus’ procedure requires a subsequent ‘Gain’ operational mode.
Before the ATST operational mode is changed to ‘Gain’ by the operator, the parameters for gain data
acquisition are verified by the ATST instrument scientist. Once the ATST ‘Gain’ operational mode is
activated by the operator, the VBI should begin data acquisition within a fraction of a second after the
TCS has reported that the PA&C field slide is in place, the field is moving randomly, and the WCS
deformable mirror (DM) is set to a non-flat shape, using the previously verified, active ‘Gain’
operational mode parameters. Data acquisition should
not last longer than 20 seconds. Raw data should be visible on the DHS QuickLook Display. There should
be a notification about instrument status. The average gain calibration image should be available in the
ATST Calibration Store. The raw frames should be available on the ATST Data Storage System.
At this point, to verify the previously entered parameters, the ATST operational mode ‘Setup’ is invoked.
Once the ATST ‘Setup’ operational mode is activated by the operator, the VBI should begin data
acquisition within a fraction of a second after the TCS has reported that the PA&C field slide is in place,
the telescope is in position, and WCS is operational, using the previously verified, active ‘Setup’
operational mode parameters to be tested. There should be a notification about instrument status.
Acquired data should be accessible via the ATST DHS QuickLook Display. The Detailed Display should
display the result of the DHS speckle reconstruction plug-in; no artifacts should appear in the images.
Quick analysis tools (histogram functions, line cuts, etc.) should be available for a first quality
assurance.
This concludes the second step of science verification, which, depending on the ATST optical stability,
deals with steps regularly performed prior to science data acquisition.
10.3.3.1.3 Step 3
Science data acquisition is performed using the ATST ‘Observe’ operational mode.
Prior to invocation of the ATST operational mode ‘Observe’ by the operator, the experiment parameters
are verified by the ATST instrument scientist. Once the ATST ‘Observe’ operational mode is activated by
the operator, the VBI should begin data acquisition within a fraction of a second after the TCS has
reported that the PA&C field slide is in place, the telescope is in position, and the WCS is operational,
using the previously verified, active ‘Observe’ operational mode parameters.
There should be a notification about instrument status.
Acquired data should be accessible via the ATST DHS QuickLook Display. The Detailed Display should
display the result of the DHS speckle reconstruction plug-in; no artifacts should appear in the images.
Quick analysis tools (histogram functions, line cuts, etc.) should be available for a first quality
assurance.
The raw frames and reconstructed images should be available on the ATST Data Storage System.
The ‘Quiet Sun’ near disk center should be observed under moderate seeing corresponding to Fried
parameters of ≈ 10 cm, in intervals of increasing duration of 1 h, 2 h, 4 h, and 8 h. Between data
acquisition intervals, the VBI Control System should be shut down to power-off and restarted, to further
test usability and reliability.
10.3.3.1.4 Step 4
Acquired data will appear in the ATST Data Storage System; these data can be transferred to a removable
device. The ATST instrument scientist will have immediate access to the acquired, unprocessed data and
to the calibration data.
The ATST instrument scientist should have access to the VBI calibration software package. Using this
package, the instrument scientist should calibrate the acquired raw data if necessary; otherwise, the
reconstructed, fully calibrated VBI images produced in real time by the ATST DHS are available. The
software should be well documented, intuitive, and easy to apply to the output raw data. The resulting
images should be properly calibrated for dark current and corrected for gain artifacts.
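The dark and gain correction applied at this stage can be sketched as a single NumPy operation. This is a minimal sketch of the standard calibration step, not the delivered VBI calibration package; the function name and the dead-pixel masking threshold are illustrative assumptions:

```python
import numpy as np

def calibrate_frame(raw, dark, gain, eps=1e-6):
    """Dark-subtract and gain-correct a single raw frame.

    raw:  raw science frame
    dark: averaged dark calibration image (matching exposure settings)
    gain: normalized gain table derived from dark-subtracted flats
          (mean ~1 over the illuminated field)
    """
    gain = np.asarray(gain, dtype=float)
    # Mask dead or unilluminated pixels (near-zero gain) as NaN
    gain = np.where(np.abs(gain) < eps, np.nan, gain)
    return (np.asarray(raw, dtype=float) - dark) / gain
```

Because the correction is the exact inverse of the assumed detector model raw = dark + gain × signal, applying it to a frame built from that model recovers the underlying signal.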
10.3.3.1.5 Step 5
Following successful calibration of the raw VBI data, and/or using the reconstructed VBI images, the
ATST instrument scientist should perform a scientific analysis of the data.
Within the data cubes of different durations, are there systematic trends in the distribution of granular
sizes, the image contrast, or the signal-to-noise ratio (spatial power spectrum analysis, computation of
the rms image contrast)? Such trends could occur due to thermal instabilities. Are the results comparable
to literature values?
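The rms-contrast and trend checks mentioned above can be sketched as follows. The function names, the linear-fit drift criterion, and its threshold are illustrative assumptions; the actual analysis would also use power spectra and granular-size distributions:

```python
import numpy as np

def rms_contrast(img):
    """Granulation rms contrast: std(I) / mean(I)."""
    img = np.asarray(img, dtype=float)
    return float(img.std() / img.mean())

def has_systematic_trend(contrasts, threshold=0.05):
    """Flag a linear drift in a contrast time series.

    Fits a straight line to the values and reports whether the fitted change
    over the whole series exceeds `threshold` (same units as the contrasts).
    """
    contrasts = np.asarray(contrasts, dtype=float)
    t = np.arange(len(contrasts))
    slope = np.polyfit(t, contrasts, 1)[0]
    return bool(abs(slope * (len(contrasts) - 1)) > threshold)
```

Applied to the contrast of each reconstructed image in a 1 h, 2 h, 4 h, or 8 h series, a flagged drift would point at thermal or optical instabilities rather than solar evolution.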
Is there a systematic trend in the time series of granular flow patterns (local correlation tracking
algorithms)? Are the results comparable to literature values?
Do gain-correction errors appear in the images of the time series of different lengths? Such errors would
indicate problems with optical stability.
If available, the data should be compared to the co-spatial and co-temporal data acquired using a
well-known secondary telescope.
The above steps address all of the points mentioned at the beginning of this section and provide an end-to-
end test of the most-used elements of the VBI.
10.4 TRANSPORTATION PLAN
The VBI will be assembled, tested, and verified in Sunspot, NM. A reason for building the VBI in
Sunspot is the availability of the Dunn Solar Telescope, which will be used to test and verify components
of the VBI, and the availability of optical benches.
The items to be transported from Sunspot to Maui are the following:
- Optics bench – the optics benches already in Sunspot will be used for test and development, and the
final optics bench will be shipped from the manufacturer directly to Maui.
- Mounts, stages, electronics – the mounts, stages, and electronics will be shipped via air freight to
prevent possible damage or corrosion during transport. The electronics will be packaged in ESD-safe
packaging.
- Optical elements – the lenses, mirrors, and filters will be carefully packed and hand carried to the
ATST site to prevent the possibility of mishandling by freight carriers.
10.5 IT&C SUPPORT PLAN
The Integration, Test, and Commissioning (IT&C) support plan schedule can be seen in the last section
of the integrated project schedule in Figure 50. The VBI will be fully verified and tested prior to
shipment to Hawaii. The VBI will then be transported, re-assembled in the instrument lab at the ATST
site, and fully verified again to ensure that no damage occurred during shipment. The VBI will then be
moved into the Coudé Lab, where it will be assembled and tested in place. When the telescope and
adaptive optics systems are aligned and able to send a light feed to the VBI, the VBI will be aligned to
the telescope and made operational. Other instruments will be installed and verified during this period,
and the facility will provide exclusive use of the telescope to individual instruments according to an
allocation plan that has yet to be developed but that will divide telescope time fairly, in time blocks
coordinated with instrument-partner travel schedules.
The final step in integration and test is to test the operation of multiple simultaneous instruments.