
Traffic Aware Planner (TAP) Flight Evaluation


John M. Maris1 and Mark A. Haynes2
Advanced Aerospace Solutions, LLC, Raleigh, NC, 27614-7212

David J. Wing3 and Dr. Kelly A. Burke4
NASA Langley Research Center, Hampton, VA, 23681

and

Dr. Jeff Henderson5 and Sharon E. Woods6
Engility Corporation, Billerica, MA, 01821
NASA’s Traffic Aware Planner (TAP) is a cockpit decision support tool that has the
potential to achieve significant fuel and time savings when it is embedded in the data-rich
Next Generation Air Transportation System (NextGen) airspace. To address a key step
towards the operational deployment of TAP and the NASA concept of Traffic Aware
Strategic Aircrew Requests (TASAR), a system evaluation was conducted in a representative
flight environment in November, 2013. Numerous challenges were overcome to achieve this
goal, including the porting of the foundational Autonomous Operations Planner (AOP)
software from its original simulation-based, avionics-embedded environment to an
Electronic Flight Bag (EFB) platform. A flight-test aircraft was modified to host the EFB,
the TAP application, an Automatic Dependent Surveillance Broadcast (ADS-B) processor,
and a satellite broadband datalink. Nine Evaluation Pilots conducted 26 hours of TAP
assessments using four route profiles in the complex eastern and north-eastern United States
airspace. Extensive avionics and video data were collected, supplemented by comprehensive
inflight and post-flight questionnaires. TAP was verified to function properly in the live
avionics and ADS-B environment, characterized by recorded data dropouts, latency, and
ADS-B message fluctuations. Twelve TAP-generated optimization requests were submitted
to ATC, of which nine were approved, and all of which resulted in fuel and/or time savings.
Analysis of subjective workload data indicated that pilot interaction with TAP during flight
operations did not induce additional cognitive loading. Additionally, analyses of post-flight
questionnaire data showed that the pilots perceived TAP to be useful, understandable,
intuitive, and easy to use. All program objectives were met, and the next phase of TAP
development and evaluations with partner airlines is in planning for 2015.
Nomenclature
ADC = Air Data Computer
ADS-B = Automatic Dependent Surveillance – Broadcast
ADIF = Aircraft Data Interface Function
AFS = FAA Flight Standards Service
1 Flight Test Principal Investigator, 1425 Norwood Crest Ct., Raleigh NC. AIAA Associate Fellow.
2 Flight Test Director, 1425 Norwood Crest Ct., Raleigh NC, AIAA Member.
3 ATM Research Engineer, Mail Stop 152, [email protected], AIAA Member.
4 Human Factors Specialist, Mail Stop 152, [email protected].
5 Senior Research Engineer, 900 Technology Park Drive, Suite 201, AIAA Member.
6 Principal Software Engineer, 900 Technology Park Drive, Suite 201, AIAA Member.
American Institute of Aeronautics and Astronautics
ATC = Air Traffic Control
CCU = Communications Convergence Unit
DTIF = Display Traffic Information File
EDS = External Data Server
EFB = Electronic Flight Bag
FDR = Flight Data Recorder
FMS = Flight Management System
FPM = Feet Per Minute
FTOSR = Flight Test Operations and Safety Report
FTRR = Flight Test Readiness Review
GAMA = General Aviation Manufacturers Association
GPS = Global Positioning System
IFR = Instrument Flight Rules
INS = Inertial Navigation System
IRB = Institutional Review Board
MFD = Multi Function Display
MSL = Mean Sea Level
NAS = National Airspace System
OS = Operating System
PF = Pilot Flying
STC = Supplemental Type Certificate
SUA = Special Use Airspace
SUS = System Usability Scale
TAP = Traffic Aware Planner
TCAS = Traffic Alert and Collision Avoidance System
TRL = Technology Readiness Level
UAT = Universal Access Transceiver
WAAS = Wide Area Augmentation System
I. Introduction
Automatic Dependent Surveillance – Broadcast (ADS-B) is a cornerstone of the Federal Aviation
Administration’s Next Generation Air Transportation System (NextGen),1 and ADS-B Out (transmission)
capability has been mandated in the United States (U.S.) for installation in aircraft by 2020. In order to help aircraft
operators achieve near-term operational benefits from ADS-B, NASA developed the Traffic Aware Strategic
Aircrew Requests (TASAR) concept,2 which uses ADS-B In (reception) data from other aircraft and ground stations
to provide pilot advisories of traffic-compatible lateral and vertical trajectory changes that would be beneficial to the
flight. To achieve operational deployment of TASAR, NASA developed the Traffic Aware Planner (TAP),3 a
cockpit-based software application that enables TASAR operations. A flight trial program was conducted to
evaluate TAP in a representative airborne environment. This paper addresses the findings from that flight trial.
A. Flight Trial Rationale
The primary purpose for flight-testing TAP was to bridge the critical divide between Technology Readiness
Level (TRL) 4 (testing in a laboratory environment) and TRL 6 (testing in a relevant environment), as a stepping-
stone to full operational deployment (TRL 9). Since the TASAR concept is intended to provide near-term
operational benefits, NASA has undertaken a number of activities to accelerate its adoption by the operator
community: NASA has analyzed the potential user benefits,4 the operational hazards,5 and the Federal Aviation
Administration (FAA) certification and operational approval requirements5 for TASAR. NASA has developed the
TAP software application and has evaluated the human-machine interface (HMI) in a high-fidelity fixed-base flight
simulator during a human-in-the-loop experiment with active airline pilots.6 To further prepare TAP for operational
readiness, a flight trial was conducted to assess the software in a representative airborne environment.
TAP’s architecture and algorithms were derived from the NASA Autonomous Operations Planner (AOP),7 a
self-separation research prototype system developed for simulation-based experiments in NASA Langley’s Air
Traffic Operations Laboratory (ATOL). The flight-evaluation program entailed the migration of TAP from a
simulation environment to a live-avionics airborne operational environment, so that factors unique to the flight
environment could be assessed. This paper addresses the culminating phase of the 18-month development program,
the TAP flight evaluations.
B. TASAR Benefits Analysis
NASA developed TASAR as a cockpit-based automation capability that enables aircraft operators to benefit
from real-time advisories of traffic-compatible lateral and vertical trajectory changes that save flight time and/or
fuel. A preliminary benefits analysis4 indicated that TAP-equipped aircraft could achieve average time savings of
approximately one to four minutes per operation and fuel savings of 50 to 550 lbs. per operation, depending
on the objective of the aircrew, the class of airspace user, and the aircraft type. Benefits should increase with longer stage
lengths, since beneficial trajectory changes can be applied over a longer distance. The onboard automation leverages
ADS-B surveillance information to increase the likelihood of ATC approval of pilot-initiated trajectory change
requests, thereby increasing the portion of the flight flown on or near a desired trajectory. All automation and pilot
procedures are dedicated to a single aircraft, which allows the optimization criteria to be tailored to the specific
objectives of each flight and provides for timely responses to changing situations. TAP can use traffic and weather
information derived from multiple sources, including ADS-B and internet-based data available through broadband
Internet connectivity, to de-conflict its optimized flight path recommendations from other traffic, Special Use
Airspace (SUA), and significant weather systems. This capability should maximize the chances of an aircrew-
initiated rerouting request being accepted by Air Traffic Control (ATC), increasing the flight time spent on the
desired trajectory, and reducing radio transmissions and the associated frequency congestion.
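To give a feel for how the per-operation figures above scale, the following back-of-the-envelope sketch multiplies the reported savings ranges (one to four minutes and 50 to 550 lbs. of fuel per operation) across a hypothetical fleet. The fleet size, utilization, fuel price, and crew-cost figures are illustrative assumptions, not values from the benefits analysis.

```python
# Illustrative fleet-level projection of the per-operation TASAR savings ranges.
# Fleet size, utilization, fuel price, and crew cost are hypothetical inputs.

def annual_savings(ops_per_year, fuel_lb_per_op, minutes_per_op,
                   fuel_price_per_lb=0.75, crew_cost_per_min=25.0):
    """Return (fuel_lb, minutes, dollars) saved per year for the given rates."""
    fuel_lb = ops_per_year * fuel_lb_per_op
    minutes = ops_per_year * minutes_per_op
    dollars = fuel_lb * fuel_price_per_lb + minutes * crew_cost_per_min
    return fuel_lb, minutes, dollars

# A notional 50-aircraft fleet flying ~1,200 operations per aircraft per year:
ops = 50 * 1200
low = annual_savings(ops, fuel_lb_per_op=50, minutes_per_op=1)
high = annual_savings(ops, fuel_lb_per_op=550, minutes_per_op=4)
print(f"low estimate:  {low[0]:,} lb fuel, {low[1]:,} min saved per year")
print(f"high estimate: {high[0]:,} lb fuel, {high[1]:,} min saved per year")
```

Even the low end of the reported range compounds into substantial annual fuel totals at airline utilization rates, which is the rationale for the stage-length observation above.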
C. TAP Functionality
The target user group for TAP comprises Flight Management System (FMS)-equipped civil aircraft operating in
the continental U.S. under Instrument Flight Rules (IFR). The TAP software application is designed to run on a
Class 2 EFB computing platform installed on the aircraft's flight deck, with data connections to aircraft avionics and
the inflight Internet. In its automatic mode, TAP monitors the aircraft's trajectory and operating environment (e.g.,
weather information) and periodically scans for lateral and vertical changes that produce time or fuel savings (Figure
1). The solutions are displayed in textual form to the flight crew and in a “selected optimization” graphical form
(Figure 2), for the crew’s optional use in requesting in-flight trajectory changes from ATC.
TAP leverages ADS-B In information to check trajectory changes for potential traffic conflicts, filtering these
out to improve likelihood of ATC approval and thereby increasing user benefits. TAP also enables pilots to
manually enter potential trajectory changes for evaluation by the TAP software (Figures 3 and 4).
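The Auto-mode behavior described above can be summarized as a scan-filter-rank cycle: generate candidate lateral and vertical changes, discard any that conflict with ADS-B In traffic or SUA, and surface the best remaining time or fuel saver. The sketch below is a minimal rendering of that cycle; the class and function names, the candidate fix, and the selection criteria are illustrative assumptions, not the actual TAP algorithms.

```python
# Minimal sketch of TAP's Auto-mode scan cycle: rank conflict-free candidate
# trajectory changes by their time or fuel benefit. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class Candidate:
    description: str
    fuel_savings_lb: float
    time_savings_min: float
    conflicts: bool  # result of probing against ADS-B In traffic and SUA

def scan(candidates, prefer="fuel"):
    """Return the best conflict-free, net-beneficial candidate, or None."""
    key = (lambda c: c.fuel_savings_lb) if prefer == "fuel" else \
          (lambda c: c.time_savings_min)
    viable = [c for c in candidates if not c.conflicts
              and (c.fuel_savings_lb > 0 or c.time_savings_min > 0)]
    return max(viable, key=key, default=None)

best = scan([
    Candidate("direct-to JAXSN", 180.0, 2.1, conflicts=False),
    Candidate("climb FL370",     260.0, 0.5, conflicts=True),   # filtered out
    Candidate("offset 8 nm",      90.0, 1.2, conflicts=False),
])
print(best.description)  # -> direct-to JAXSN
```

The conflict pre-filter is the step that leverages ADS-B In: by never surfacing a solution that would be denied for traffic, the tool raises the likelihood of ATC approval for whatever it does display.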
D. TASAR Certification and Operational Approval Assessment
One of the primary objectives of the TASAR project was to identify and address the certification and operational
approval considerations for the eventual operational deployment of TAP. As part of a detailed regulatory analysis,5
preliminary discussions were held with representatives from the FAA Aircraft Certification Service, Systems &
Equipment Standards Branch (AIR-130) and the Flight Standards Service, Flight Technology Requirements Branch
(AFS-430) to determine the likely design assurance levels for the TAP application. TAP was categorized as an
ADS-B enabled EFB Type B8 performance/planning software application, rather than a dedicated ADS-B In
application. On this basis, a Minor failure effect classification was established for the flight trial, which will also
apply to an operationally deployed system, subject to a prohibition of the display of ownship position in flight. From
these preliminary discussions, it appears that the TAP software will not require RTCA DO-178B software
certification, and the only necessary airworthiness approvals will be for the EFB mounting and data interface
provisions. The TASAR safety and hazard evaluations were based on an analysis of the TASAR Concept of
Operations9 and associated use cases, and a determination of the attendant operational hazards.10 These analyses
confirmed that the use of TAP represents a Minor risk, as defined by FAA AC 23.1309-1E,11 both for the flight trial
and for subsequent deployments to potential end-users.

Figure 1. TAP Auto mode – main screen.
Figure 2. TAP Auto mode: selected optimization.
Customer operational approvals will be determined by the Carrier’s Principal Operations Inspector (POI), using
the established methodology for the issuance of Operational Specification A061 Use of Electronic Flight Bag.
Collectively, these requirements will impose minimal additional burden on prospective TAP users, if they already
have airworthiness and operational approvals for the use of EFB equipment in their fleets.
In order to accelerate the operational deployment of TAP, an early decision was made for the flight trial to avoid
the use of an Experimental Category flight permit, and to progress directly to a flight evaluation on an aircraft with a
Normal category Certificate of Airworthiness operating under IFR in the National Airspace System (NAS). With
this approach, all of the technical, operational, safety, and certification considerations related to the flight trial will
be reusable for subsequent deployments of the system. Airworthiness approvals for the hardware12 and software13
were obtained through the issuance of Supplemental Type Certificates (STCs) for all TAP-related equipment. This
achieved the dual goals of hosting TAP in a representative flight environment, while avoiding the geographic,
weather, and essential-crew-only restrictions associated with experimental flight operations.
II. Flight Trial Design
A. Flight Trial Objectives
One of NASA’s primary objectives for the TASAR program is to reduce obstacles for users to achieve near-term
direct benefits of ADS-B.14 A crucial step towards achieving this outcome is the successful in-flight validation of
TASAR functionality in the National Airspace System (NAS), using the TAP software platform and the ambient
live-avionics data infrastructure. This was the primary goal of the flight trial, with a secondary goal of reducing the
risks for subsequent operational implementations of TAP. In an effort to achieve these goals, the following
incremental objectives were defined:
3) Pilot assessment of TAP usability in an operational environment.
4) Opportunity-based TAP optimization requests to ATC.
B. Test Plan
A 10-flight 30-hour flight-evaluation program was planned, including four hours of shakedown testing and 26
hours of dedicated TAP evaluations. An incremental flight test approach was used to minimize the program risks
associated with the porting of the complex TAP software to a new hardware platform in the highly regulated
environment of an aircraft with a Normal Category Certificate of Airworthiness. Avoiding the use of an
Experimental Flight Permit will help accelerate TAP’s transition from the simulation laboratory to the NAS by
forcing a substantially higher level of maturity on the system and its integration with the aircraft. Key steps towards
achieving this goal included rapid-prototyping usability studies with interactive HMI mockups, followed by
comprehensive evaluations in a high-fidelity flight simulator. These evaluations, conducted in the University of
Iowa’s Operator Performance Lab (OPL), were used to refine the TAP HMI, test the pilot procedures developed
from the TAP use cases, and assess pilot workload and situation awareness impacts under nominal and off-nominal
conditions while using TAP. The flight trial test plan was developed in parallel with that for the OPL evaluations,
and the flight trial followed the OPL data collection within three months. The test plan detailed these activities, as
well as the flight profiles, test matrix, Evaluation Pilot preparations, and flight procedures for the flight evaluations
that followed the OPL trials, as discussed below.
1. Flight Profiles
The trials were conducted in the NAS along the U.S. eastern seaboard, with all flights originating and
terminating at the Newport News / Williamsburg International Airport (KPHF). A number of “round-robin” (out and
back) candidate routes were developed in the test plan, based on published high-altitude IFR navigation aids and
fixes. Four flight profiles were selected to exercise TAP’s route optimization functionality in the presence of
potential SUA and traffic conflicts (Figure 5 – Figure 7, screen shots from Jeppesen Mobile FliteDeck). Wind-field
data derived from live broadband connectivity to the National Oceanic and Atmospheric Administration
(NOAA) Rapid Refresh system were incorporated in the TAP optimizations. The evaluations proceeded in four
increasingly challenging steps, starting with the establishment of basic data connectivity, proceeding through system
functionality verifications, and culminating in the operational use of TAP to generate trajectory optimizations.
2. Test Matrix
Table 1 shows the test matrix for the planned evaluation flights. Initial cruising altitudes were as shown in the
table, but the TAP optimizations led to final altitudes ranging from 17,000 ft. MSL to FL370. Weather avoidance
was not evaluated in the initial flight trials, and SUA boundaries were hard-coded into a database available to TAP
and treated as “permanently hot” airspace. As shown in the table, the routes were designed to control the impact of
traffic and SUA on the route optimization process. Due to ATC workload, the final three flights initially planned for
Route 6 (low altitude) were replaced with Route 5 (high altitude). The final two flights also used a slightly modified
version of Route 5 to eliminate the overlap of outbound and inbound legs.
Table 1. Test matrix for the planned evaluation flights.

Flight #   Route #   Direction   Initial Altitude   Traffic   SUA
1          1         CCW a
5          5         CCW         FL340              High      No
6          5         CW          FL340              High      No
7          6         CCW         FL210              High      Yes
8          5 d       CW          FL310              High      No
9          5 d,e

a Clockwise.
b Flight aborted due to an aircraft equipment malfunction unrelated to TAP.
c Counterclockwise.
e Included minor revision to route to eliminate overlap.
3. Flight Crew
The flight crew consisted of a Pilot in Command (PIC) / Safety Pilot and an Evaluation Pilot. Nine senior IFR-
rated Evaluation Pilots were employed in the study, consisting of six senior airline captains, one senior first officer,
and two highly experienced aviators. Operator communities represented included major, regional, and low-cost
airlines, and high-end General Aviation. The NASA TASAR team Evaluation Pilots conducted the end-to-end
systems checkouts and procedure rehearsals on flights 1 & 2, and the remaining flights were performed with the
eight independent external Evaluation Pilots. The Evaluation Pilots were mailed comprehensive briefing packages
one week before their flight trial. They received approximately two hours of dedicated TAP training on the day of
their flight, followed by comprehensive pre-flight mission and aircraft safety briefings. Each Evaluation Pilot was
thoroughly debriefed post-flight.
The sole function of the Safety Pilot was to assure safety of flight. The Safety Pilot played no role in the
Evaluation Pilot’s use of TAP, except for providing assistance in the configuration of the aircraft’s displays and
interacting with the FMS as requested. The Safety Pilot performed all normal aircraft operations during the
outbound legs of each flight, but assigned radio communications duties to the Evaluation Pilot during the inbound
legs in order to provide representative Pilot Monitoring (PM) workload for this phase.
The cabin crew typically comprised five personnel: a Test Director, a Flight Test Engineer, one or two TAP
Software Engineers, and one or two NASA Researchers. The Test Director orchestrated and performed all aspects of
evaluation-pilot data collection during the flights and was the conduit for all TAP-related internal aircraft
communications with the Evaluation Pilot and Safety Pilot. The Test Director was also responsible for the optimized
sequencing and conduct of all test points. The aircraft Flight Test Engineer was responsible for the aircraft’s data
systems and TAP interfaces. The TAP Software Engineers exercised the software independently from the Evaluation
Pilot, and performed any necessary real-time troubleshooting. The NASA Researchers also operated independent
instances of the TAP application for real-time monitoring and testing.
American Institute of Aeronautics and Astronautics
9
C. Data Collection
1. Flight Procedure
Table 2 shows the planned sequence of events for each test flight. The Safety Pilot performed the takeoff and
climb-out to 10,000 ft., at which time the Evaluation Pilot unstowed the EFB and mounted it on the windshield. The
Test Director then orchestrated a highly scripted scenario that guided the Evaluation Pilot to methodically evaluate
TAP’s features and controls. Six phases of TAP usage were evaluated: two involving the startup and shutdown of
the TAP EFB, two using the TAP Auto mode, and two using the Manual mode. As shown in the table, the
Evaluation Pilot concentrated solely on TAP operations on the outbound legs and transitioned to additional PM
duties on the inbound segments. The latter entailed responsibility for ATC communications, weather avoidance, and
the planning and execution of a TAP optimization maneuver. The Evaluation Pilot interacted with both the TAP
Auto and Manual modes during the outbound and inbound legs, and each Auto/Manual mode segment was followed
by the administration of an in-flight questionnaire.
Table 2. Planned sequence of events for each test flight.

Flight Leg   Activity                                  Duties     ATC Comms   Time (min.)   Tot. Elapsed Time (hh:mm)
On ground    Power up EFB, initialize TAP              TAP only   –           5             0:05
Outbound     Departure and climb through 10,000 ft.    –          –           10            0:15
Outbound     Phase 1: EFB unstow, mount                TAP only   –           5             0:20
Outbound     Phase 2: Auto mode scenarios              TAP only   No          20            0:40
Outbound     Phase 3: Manual mode scenarios            TAP only   No          20            1:00
Transition   Reconfigure for inbound leg               –          –           10            1:10
Inbound      Phase 4: Auto mode scenarios              PM a       Yes         20            1:30
Inbound      Phase 5: Manual mode scenarios            PM         Yes         20            1:50
Inbound      Phase 6: TAP shutdown                     TAP only   –           10            2:00
Inbound      EFB stowage (approx. 10,000 ft.)          PM         –           5             2:05
Inbound      Descent and landing                       –          –           15            2:20

a Pilot Monitoring duties.
D. Flight Test Platform
This section describes the test aircraft, the ADS-B installation, and the broadband system installed in the aircraft.
1. Aircraft
A Piaggio P180 Avanti, operated by Advanced Aerospace Solutions (AdvAero), served as the test aircraft for
the flight trial. Four characteristics of this test platform directly supported the objective to accelerate the operational
readiness of TAP: the aircraft holds a Normal Category Certificate of Airworthiness; it is certified for single-pilot
operations; it has a large cabin; and it has a broad flight envelope. Retention of a Normal Category Certificate of
Airworthiness will significantly streamline the reuse of the TAP software for subsequent deployments of the system.
Avoidance of an Experimental Category classification was also important for the reasons listed in the flight trial
rationale section above. The single-pilot certification eliminates restrictions regarding the occupancy of the copilot’s
seat, and it also enables the use of uncertified software on the non-handling side of the cockpit. The Avanti’s large
cabin accommodated a seven-person test-crew comprising: a Safety Pilot, an Evaluation Pilot, the Test Director, a
Flight Test Engineer, a Data Engineer, a TAP Software Engineer, and a NASA Researcher. A broad flight envelope
was deemed essential to achieving the objective of hosting the TAP prototype in a representative operational
environment. The Avanti is certified for all-weather single-pilot operations. It has a cruise speed of approximately
375 knots (0.65M) at 28,000 ft. and a ceiling of 41,000 ft. The aircraft is very fuel-efficient, allowing for longer test
missions, and the cabin is large enough to accommodate the full test crew with their laptops and recording
equipment.
2. Traffic Processor
To ensure the transferability of the TAP installation to future operational platforms, a field-proven hardware
design was adopted for the flight trials, comprising a third-generation ACSS TCAS 3000SP ADS-B In/Out capable
Traffic Alert and Collision Avoidance System (TCAS), paired with a United Technologies Corporation Aerospace
Systems (UTAS – formerly Goodrich) SmartDisplay™ Class 2 EFB.16 The high cruising altitude of the Avanti excluded the
use of Universal Access Transceiver (UAT) technology for the ADS-B installation, because UAT is constrained to
altitudes below 18,000 feet. Accordingly, a 1090 MHz extended squitter (1090ES) solution was employed. The
latter is also the standard adopted outside the U.S. and is therefore more suitable for downstream installations for
operators engaged in international operations.
3. Aircraft Interface Device
A prerequisite for the successful TAP deployment to the user community will be its ease of installation. TAP
must interface to a number of aircraft data sources, which could vary widely from aircraft to aircraft. A decision was
made to standardize all TAP-to-aircraft interfaces using a certified Aircraft Interface Device (AID) that embodies all
of the required connectivity. Future TAP installations based on the industry standard Simple Text Avionics Protocol
(STAP) feed, as provided by the AID, will substantially reduce the technical and certification risks, while facilitating
the configuration management task across fleets. A compact UTAS AID was chosen for the flight trial. The unit
incorporates ten ARINC 429 input channels, six 10/100 Base-T ports, and eight discrete inputs. It connects to one or
two EFBs using an ARINC 828 EFB standard interface.17 The AID’s avionics bus interfaces include: primary
Global Positioning System (GPS) with Wide Area Augmentation System (WAAS), Inertial Navigation System
(INS), Flight Management System (FMS), Air Data Computer (ADC), and the Flight Data Recorder (FDR) bus. The
AID packages the incoming ARINC 429 data using STAP, as defined by the ARINC 834 Aircraft Data Interface
Function (ADIF) standard,18 for onward transmission to the EFBs and other research processors connected via the
local area network. Use of these industry-standard protocols will minimize the technical risk for future TAP
installations because the TAP software remains agnostic to the aircraft interfaces upstream of the AID. In addition,
this approach will allow for low-cost, one-time, data-concentrator installation that will remain permanently installed
in the target aircraft, minimizing recurring expenses and downtime associated with continual installation and
removal of the system.
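The value of the AID approach is that downstream software only ever sees simple text sentences, never raw ARINC 429 words. The sketch below illustrates that decoupling with a hypothetical comma-delimited sentence (label, source bus, value); it does not reproduce the actual ARINC 834 STAP encoding, which is defined by the standard.

```python
# Sketch of the AID decoupling principle: clients parse a simple text sentence
# and remain agnostic to the ARINC 429 wiring upstream. The sentence format
# 'PARAM,SOURCE,VALUE' is a hypothetical stand-in for the real STAP encoding.
def parse_stap_line(line):
    """Parse 'PARAM,SOURCE,VALUE' into a dict; raises ValueError if malformed."""
    name, source, value = line.strip().split(",")
    return {"name": name, "source": source, "value": float(value)}

msg = parse_stap_line("PRESSURE_ALT,ADC1,34012")
print(msg["value"])  # -> 34012.0
```

Because the parser depends only on the text protocol, porting TAP to a differently wired aircraft requires reconfiguring the AID, not the application.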
4. Electronic Flight Bag
The UTAS SmartDisplay™ Class 2 EFB was selected for the flight trials (Figure 8). For risk mitigation purposes, a
requirement for the chosen system was for it to support a certified operating system (OS), should one eventually be
required in future testing. For example, if a Cockpit Display of Traffic Information (CDTI) with ownship position
were found necessary for TAP operation, this would entail the use of a certified OS, because CDTI requires a
certified platform. Physical size and processor performance requirements were the primary considerations leading
to the selection of the SmartDisplay™, which also supports an alternate-boot option for the DEOS™ operating
system and therefore meets the certification requirement. The SmartDisplay™ is a tablet EFB with a 1024 x 768
pixel resistive touch screen. The unit incorporates an Intel Core Duo 1.66 GHz processor with 2 GB RAM and
16 GB of removable memory on a solid-state compact flash card. The TAP application was hosted locally on the

Figure 8. EFB installation in the Avanti cockpit.
SmartDisplay, which communicated with all the aircraft data systems via the AID. The Evaluation Pilot’s EFB was
mounted on the copilot’s right-side windshield using dual suction cups, at a location of the pilot’s choosing.
Additional instances of TAP were hosted on a second SmartDisplay™ EFB and two engineering laptops
available to crewmembers in the aft cabin, for independent TAP evaluations and debugging purposes. A final
instance of TAP was installed on each of the test-bed’s two flight-test computers, primarily for gathering data for
post-flight analysis pertaining to aircraft integration issues.
5. Broadband Airborne Internet
One of the objectives of the flight trials was to integrate broadband Internet-sourced wind-field data into TAP’s
trajectory processing and optimization functions. A certified, single-channel, Inmarsat broadband link was
incorporated to allow TAP to access real-time, hourly updates of such information. Principal system components
included a High Speed Data Unit (HDU) Transceiver, a Communications Convergence Unit (CCU), and a power
amplifier. The system bandwidth was limited to 200 kbps, due to size constraints of the installed low profile low-
gain blade antenna. The CCU acted as a full-service router for the aircraft, with eight LAN-10/100 ports and Wi-Fi ®
connectivity for a maximum of 54 concurrent clients. The CCU enabled the engineers in the cabin to run TAP on an
additional EFB and several laptops, independent of the Evaluation Pilot’s EFB, using wireless access to the AID
data. This feature proved important for independent data logging and real-time software troubleshooting during the
trials. An External Data Server (EDS) application was developed to allow TAP clients to access the wind data via an
Internet connection to the NOAA website. The EDS will support possible expanded data sources for future trials,
such as convective weather hazards, dynamically updated SUA boundaries, and supplemental traffic state and/or
intent data not currently available via the ADS-B In channel.
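The hourly-update behavior described for the EDS amounts to a refresh-on-demand cache in front of the NOAA source: clients request wind data whenever they need it, and the server re-fetches at most once per update cycle. The sketch below models that behavior; the class, the injected fetch callable, and the clock interface are illustrative assumptions rather than the actual EDS design.

```python
# Sketch of the EDS hourly wind-update behavior: serve cached wind-field data,
# refreshing from the upstream (NOAA Rapid Refresh) source at most hourly.
# The fetch callable stands in for the real network interface.
import time

class ExternalDataServer:
    REFRESH_S = 3600  # Rapid Refresh products update roughly hourly

    def __init__(self, fetch, clock=time.monotonic):
        self._fetch = fetch          # callable returning the latest wind grid
        self._clock = clock
        self._cache = None
        self._stamp = -float("inf")

    def wind_field(self):
        """Return the current wind grid, refreshing the cache when stale."""
        now = self._clock()
        if now - self._stamp >= self.REFRESH_S:
            self._cache = self._fetch()
            self._stamp = now
        return self._cache

fetches = []
eds = ExternalDataServer(fetch=lambda: fetches.append(1) or "grid")
eds.wind_field(); eds.wind_field()   # second call is served from cache
print(len(fetches))  # -> 1
```

The same caching front end would accommodate the expanded data sources mentioned above (convective weather, dynamic SUA, supplemental traffic data) simply by registering additional fetchers with appropriate refresh intervals.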
E. Flight Test Data
Flight test data were collected using a combination of in-flight hand-recorded questionnaires, digital and video
data, and post-flight questionnaires and debriefings.
1. In-Flight Questionnaires
The Test Director administered a questionnaire to the Evaluation Pilots during each of the two outbound and two
inbound flight legs, following each scripted evaluation of TAP operating modes. This questionnaire addressed items
such as TAP trajectory-optimization solutions, time/fuel outcomes, traffic/SUA impacts, anomalous behaviors, and
included scenario-specific probes. Additionally, each Evaluation Pilot completed the Bedford Workload Scale, a
uni-dimensional rating scale designed to assess the pilot’s spare mental capacity while completing a task. The single
dimension is assessed using a hierarchical decision tree that guides the pilot through a ten-point rating scale, each
point of which is accompanied by a descriptor of the associated level of workload. All questionnaire data were hand-
recorded in flight by the Evaluation Pilot and entered into a database after landing for post-processing and analysis.
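The Bedford scale’s hierarchical decision tree can be sketched as follows; the yes/no question wording is paraphrased here, and the published scale should be consulted for the exact per-point descriptors:

```python
def bedford_band(task_completable, workload_tolerable, workload_satisfactory):
    """Walk the Bedford Workload Scale decision tree and return the band
    of ratings it reaches. The pilot answers up to three yes/no questions
    in sequence; each "no" terminates in a band, and the final rating is
    then chosen within the band using the per-point workload descriptors.
    (Question wording is paraphrased, not the published text.)
    """
    if not task_completable:
        return (10, 10)  # task had to be abandoned
    if not workload_tolerable:
        return (7, 9)    # workload not tolerable for the task
    if not workload_satisfactory:
        return (4, 6)    # workload tolerable but not satisfactory
    return (1, 3)        # spare capacity for additional tasks

# A pilot with ample spare capacity rates in the 1-3 band.
print(bedford_band(True, True, True))  # (1, 3)
```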
2. Digital & Video Data
The ten ARINC 429 data-busses interfaced to the AID were automatically recorded during each flight. These
busses included the FMS flight plan data and the ADS-B In information used by TAP. Each TAP client also
generated a complete log file that was downloaded after each flight. The recorded aircraft data had sufficient fidelity
and extent to support real-time playback of the entire flight using AdvAero’s engineering simulator. In addition,
digital (de-identified) cockpit video was captured showing the Evaluation Pilot’s interactions with TAP for post-
flight analysis and ambiguity resolution.
3. Post-Flight Questionnaire and Debriefings
Following their flight, the Evaluation Pilots completed a comprehensive online post-flight debriefing
questionnaire, including the System Usability Scale (SUS), which took approximately an hour to complete. The SUS
is a short, 10-item Likert scale designed to measure a user’s perceptions of system usability. SUS yields a single
number representing a composite measure of the overall usability of the system being studied. Note that, as
prescribed by Brooke,19 scores for individual statements are not meant to be interpreted individually. The SUS score
is calculated by first summing the score contributions of the ten statements, each contribution ranging from 0 to 4.
For statements 1, 3, 5, 7, and 9, the contribution is the scale position minus 1; for statements 2, 4, 6, 8, and 10, the
contribution is 5 minus the scale position. The sum of the contributions is then multiplied by 2.5 to obtain the
overall value of system usability. SUS scores have a range of 0 to 100.
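The scoring procedure above can be expressed directly in code; a minimal sketch (the function name is illustrative):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten Likert responses.

    responses: list of ten integers, each 1..5, in statement order 1..10.
    Odd-numbered statements contribute (scale position - 1); even-numbered
    statements contribute (5 - scale position). The summed contributions
    (each 0..4) are multiplied by 2.5, yielding a score from 0 to 100.
    """
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("expected ten Likert responses in the range 1-5")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A neutral response (all 3s) yields the midpoint score of 50.
print(sus_score([3] * 10))  # 50.0
```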
After each questionnaire had been scored, the Evaluation Pilot, Test Director, TAP Engineer, NASA
Researcher, and Safety Pilot participated in a group debriefing to review all of the in-flight and post-flight
questionnaires and to capture any lessons learned for the next flight. The questions addressed five evaluation
categories: TAP Operational Usage, TAP Performance, TAP HMI, TAP Benefits, and an overall assessment of the
test program. This final debriefing typically lasted two hours and was used to consolidate and clarify all pertinent
information gained from each flight-test, including the questionnaire findings.
III. Flight Trial Results
This section presents the results of the data collected during the nine flights. One flight was
curtailed just after takeoff because of an aircraft unserviceability unrelated to TAP. The results are presented in four
sections, corresponding to the four flight trial objectives listed in Section II. A: 1) TAP Data Interface Verification;
2) TAP Functionality Verification; 3) Subjective Measures; and 4) TAP Operational Use.
A. TAP Data Interface Verification
The initial phases of the TAP integration into the test platform posed a number of challenges related to the
aircraft-side data sources. Chief among these was the General Aviation Manufacturers Association (GAMA)
General Aviation A429 subset output by the Avanti FMS, which differed from the enhanced A429 structure used in
the NASA ATOL. GAMA data are usually used for the display of route and waypoint symbology on Electronic
Flight Instrumentation System (EFIS), Multi Function Display (MFD), or weather radar displays. These data had to
be supplemented by other sources, such as the FDR bus, to meet TAP data requirements.
The flight-trial version of the TAP software was developed in iteration with the aircraft integration activity.
Three data-related problems hampered the pre-test integration efforts on the ground. TAP required the aircraft to
provide valid air data in order to function, which entailed the use of an unwieldy ADC test set for all ground tests to
stimulate an in-flight condition. Similarly, a complex procedure was required to force the Inertial Navigation
Systems into a simulation mode that would allow the aircraft to generate usable groundspeed data for TAP. In
addition, limited satellite signal reception in the airport environment compromised EDS connectivity during ground
testing. Due to these issues, the integration logistics were more challenging than originally anticipated. To
compensate, additional data-collection tools in the TAP software, FMS, and aircraft computer servers were used to
collect TAP data unattended while the aircraft was engaged in flights unrelated to TAP. These data could
then be fed through a playback capability of the TAP software to conduct integration testing. Almost 40 hours of
these in-flight opportunity data were collected, reducing the dedicated TAP-integration flight time requirement to
3.2 hours. After the troubleshooting was completed, a successful end-to-end system test was performed immediately
prior to the shakedown flight. The TAP data interfaces functioned satisfactorily thereafter, allowing TAP to perform
as designed and to successfully download the updated wind data to the EDS. Nevertheless, a number of data
problems persisted during the flight trials, including data dropouts, latency, and vertical speed noise. Such
anomalies are inherent to a live avionics and ADS-B environment, although some were artifacts of the flight-test
configuration, as described in the following sections.
1. Data Dropouts and Latency
The AID was designed to serve only two clients: a pilot EFB and a copilot EFB. Accordingly, the AID may have
been overloaded in the flight-test aircraft, where up to five EFBs and laptops subscribed to AID data. AID loading is a
function of the number of ARINC data labels (items) per message, the message rate, and the number of clients. The
limiting case was found to be the transfer of a full flight plan, which required multiple messages from the FMS to
fully define the route. This was the only data label whose inter-arrival times exceeded one second for a significant
percentage (96 percent) of the time. When thus overloaded, the AID exhibited an increase in dropped messages. Figure 9
contrasts the differing data dropouts for the receipt of a full flight plan on the Evaluation Pilot’s EFB, the TAP
engineer’s laptop, and the second EFB in the aft cabin (dropouts are not shown for up to two additional laptops in
the aft cabin). Longer times between receipt of a full flight plan are due to data dropouts. The approximately 27-
second dropout during Flight 4 coincided with an additional laptop (not shown in Figure 9) subscribing to the AID.
This resulted in dropped ARINC data labels from the FMS and GPS, but not from the INS, ADC, flight control
computer, or TCAS. The engineering laptop experienced more dropouts than the EFBs across all flights, even
though the longest individual dropout occurred on the Evaluation Pilot’s EFB during Flight 4. The
manufacturer of the AID indicated that AID data dropout behavior is unpredictable when overloaded, so laptops will
not always experience more dropouts than EFBs.
None of the flight plan dropouts impacted the flight-test TAP advisories, since these dropouts occurred at times
when the flight plan was not changing. Furthermore, the AID loading of up to five devices is not representative of
typical operations, where one or two EFBs subscribe to AID data. Nevertheless, flight plan dropouts have been
identified as a potential area of concern, and future work may be needed to determine how often these dropouts
coincide with flight plan changes that could result in TAP optimizing against an out-of-date flight plan.
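The post-flight screening described above, flagging inter-arrival times that exceed a threshold for a given ARINC label, can be sketched as follows; the function and sample data are illustrative, with the one-second threshold taken from the analysis:

```python
def dropout_gaps(receipt_times, threshold_s=1.0):
    """Return (gap_start_time, gap_length) for every inter-arrival gap
    exceeding threshold_s, given message receipt timestamps in seconds."""
    gaps = []
    for prev, curr in zip(receipt_times, receipt_times[1:]):
        dt = curr - prev
        if dt > threshold_s:
            gaps.append((prev, dt))
    return gaps

# Example: nominal 1 Hz updates with one 27-second dropout.
times = [0, 1, 2, 29, 30, 31]
print(dropout_gaps(times))  # [(2, 27)]
```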
2. Vertical Speed Noise
Examination of vertical speed data for both the ownship and ADS-B traffic indicated vertical speed variations of
up to ±4000 feet per minute (FPM). This phenomenon was particularly noticeable on Flight 4, which experienced
the largest altitude rate change for the ownship during a 5-second window of clear air turbulence beginning at
approximately 4200 seconds, as shown in Figure 10. The 27-second data dropout experienced during Flight 4
occurred about 50 minutes before this turbulence, and did not impact the vertical speed coming from the air data
system.
Figure 9. Maximum observed time between TAP receiving full flight plan.
TAP does not use vertical speed for ownship predictions, but there is a concern about the stability of traffic
ADS-B vertical speed values and the impact they may have on traffic trajectory prediction. Vertical speed is used by
TAP to predict the vertical component of the ADS-B traffic trajectory since the target altitude is not part of the
ADS-B message set received by TAP. In a simulation environment, the vertical speed value is generally stable and
accurate. In flight, turbulence may result in brief accelerations that, in turn, can cause temporarily large vertical
speed values to be broadcast.
The vertical speed noise presented in this subsection could potentially cause TAP to generate advisories that are
traffic-incompatible and/or less beneficial, if not otherwise mitigated. More work is needed to quantify how often
vertical speed noise impacts TAP advisories and to determine if methods, such as filtering algorithms, could
improve TAP performance. No significant outliers or noise were observed for any other ARINC data labels across the
nine flights.
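The filtering methods mentioned above are left open by this work; one simple candidate is a sliding-window median on the broadcast vertical speed, which suppresses brief turbulence-induced spikes while passing sustained climbs and descents. A sketch under that assumption, not the TAP implementation:

```python
from statistics import median

def median_filter(samples, window=5):
    """Sliding-window median over a sequence of vertical-speed samples (FPM).
    Spikes shorter than about half the window are suppressed; sustained rate
    changes pass through with a small delay."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        out.append(median(samples[lo:hi]))
    return out

# A one-sample +4000 FPM turbulence spike in level flight is suppressed.
vs = [0, 0, 4000, 0, 0]
print(max(median_filter(vs)))  # 0
```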
B. TAP Functionality
In almost every case, the TAP software passed the full set of initialization-sequence connection tests, entered the
online mode at 10,000 ft. as designed, and generally operated stably throughout the flight. Two aspects of the
software caused occasional problems during the trials: the aircraft performance model and the ADS-B vertical speed
fluctuations discussed above. (The TAP optimization outcomes are discussed separately in Section D below).
TAP trajectory generation and optimizations rely on an accurate digital aircraft performance model, which was
unavailable for the Avanti test-bed aircraft. Development of such digital models from aircraft performance
handbooks is time-consuming and expensive, and the magnitude of the task was underestimated. As a compromise,
a performance model of a generic twin-engine, medium-sized jet was scaled to approximate the en-route
performance of the Avanti (a high-performance turboprop). This approximation impacted TAP’s predictive
capability and curtailed all route optimization computations while the aircraft was climbing or descending. Use of
this model also introduced errors in the fuel and time calculations that made precise quantification of the
optimizations difficult. An accurate Avanti model is in development for the next phase of flight trials, and a new
trajectory generator is being developed that will be more tolerant of variations in the performance data available for
other aircraft types used in future trials or TAP operational deployments.
C. Subjective Measures
To assess pilot workload and system usability, and to better understand how the Evaluation Pilots perceived
their interaction with the TAP software application in terms of intuitiveness, understandability, and usefulness,
several subjective measures were administered both during and after each flight.
Generally, the Evaluation Pilots rated usability of TAP as high, and their perceived workload was low. In the
post-flight questionnaire and debrief, the Evaluation Pilots reported that operating the TAP software application was
relatively easy and that the formal training provided pre-flight was adequate. The following are examples of
comments from the Evaluation Pilots during the post-flight debriefing:
"Given the time we had to prepare - I felt comfortable using TAP in the airplane with no prior real-life
experience,” “Very well thought out implementation,” and “Intuitive and easy to understand.”
The results of the analyses of the subjective measures administered during the flight trial are presented below
under the following categories: Bedford Workload Scale, System Usability Scale (SUS), Post-Flight Questionnaire,
and TAP enhancement requests.
1. Bedford Workload Scale
The average Bedford Workload Scale ratings for each segment of the flight are depicted in Figure 11. Generally,
across all flight segments including all TAP Mode Conditions (Auto Mode, Auto Mode Pilot Monitoring, Manual
Mode, Manual Mode Pilot Monitoring), the Evaluation Pilots’ reported ratings of perceived workload were very
low.
There was a small increase in average workload between Auto and Manual Modes, and also between no-PM and
PM conditions. Because TAP Manual Mode inherently involves more interaction from the pilots as they are entering
their own trajectories, this increase in perceived workload was not unexpected. When the pilots assumed the
additional PM duties, the overall workload in the cockpit increased. Consequently, higher ratings of perceived
workload were expected in that condition. However, it is important to note that, as depicted in Figure 11, the
Evaluation Pilots’ ratings of perceived workload were almost all below three. According to the descriptors on the
Bedford Scale, shown in Figure 12, ratings below three indicate that the pilots typically believed that workload was
tolerable for the task, and they had enough spare mental capacity for all desirable additional tasks during flight
operations.
Figure 11. Average Bedford Workload Scale ratings for TAP Mode Conditions.
Figure 12. Bedford Workload Scale.
2. System Usability Scale
The Evaluation Pilots were administered the System Usability Scale (SUS)19 post-flight and, as can be seen in
Figure 13, the pilots rated the usability of TAP as high (M = 81.3, SD = 14.8). This result is consistent with the
results from the SUS data collected during the HITL-I experiment conducted in the Operator Performance
Laboratory (OPL) at the University of Iowa.6 In that experiment, the participant pilots were administered the
SUS after each use case, and the data were collapsed across use cases for Manual (M = 87.36, SD = 12.23) and Auto
Modes (M = 87.08, SD = 12.46). As Figure 13 illustrates, the mean SUS scores from the flight trial and the simulation
experiment were consistent, indicating that Evaluation Pilots perceived TAP to be easy to use in both environments.
Figure 13. System Usability Scale (SUS) calculated scores.
3. Post-Flight Questionnaire
In addition to the SUS, an online post-flight questionnaire was administered to all Evaluation Pilots after
completion of their flight. The questions can be grouped into three categories: (a) overall understandability/usability
findings, (b) specific HMI design changes, and (c) general feelings about TASAR’s utility and effect. The results for
each category of questions are presented in this section.
a) Overall Understandability/Usability Findings
Figure 14 depicts the results of selected questions in the post-flight questionnaire that attempted to gain an
understanding of TAP usability and the extent to which the pilots were able to understand the interface. For
example, “How difficult or easy was it to use the overall Auto Mode interface?” Specifically, these questions
attempted to assess how well the pilots were able to use and understand the overall Auto and Manual Modes. The
pilots’ responses to these questions were collected via a 7-point Likert scale with the anchors 7 = Very Easy and 1 =
Very Difficult. Generally, the Evaluation Pilots rated the TAP interface, in both Auto and Manual Modes, as very
easy to use and understand, with average scores on the Likert scale all above 5.
b) Specific HMI Design Changes
Figure 15 depicts the results of selected questions that assessed the extent to which it was easy or difficult to
use specific HMI design characteristics. For example, “How difficult or easy was it to use the Auto Mode route
visualization screen, including the ATC buttons?” The pilots’ responses were collected as described in (a) above.
Generally, the Evaluation Pilots found the features of the HMI display design relatively easy to use with average
scores on the Likert scale all above 4.
Figure 14. Post-flight questionnaire items related to TAP understandability and usability.
Figure 15. Post-flight questionnaire items related to specific TAP HMI design changes.
c) General Feelings about TAP’s Utility and Effect
The Evaluation Pilots were asked to select from a list of 28 items those they considered to be the main benefits
of TASAR and the TAP application. No limit was placed on the quantity they could select. Figure 16 presents the
set of benefits selected by at least 50 percent of the eight independent Evaluation Pilots (excluding the TASAR team
pilots), with the percentage of pilots selecting each benefit indicated. Most prevalent in this set were benefits directly
related to the flight itself, including fuel and time savings, optimal routing, and airspace hazard avoidance. TASAR
was also viewed by most Evaluation Pilots as providing equipage justification for ADS-B IN, EFB, and cockpit
Internet access. Completing the set was workload reduction for pilots in assessing route-change impacts and
workload reduction for air traffic controllers in flight optimization. Not selected by the majority were workload
reductions for pilots and controllers in other areas (e.g., communication, coordination, maintaining situation
awareness). Interestingly, no Evaluation Pilot identified dispatcher workload reduction in post-departure flight
optimization as a main benefit, possibly indicating a gap in current operations that TASAR may be well-positioned
to fill.
Figure 16. Main benefits of TAP reported by at least 50% of the pilots.
4. TAP Enhancement Recommendations
The Evaluation Pilots emphasized the importance of the consistency of TAP’s findings, both between the two
modes, and over time: “Solutions presented by TAP in Auto mode should be consistent between updates. Otherwise
the pilots perceive it as inconsistent.” These comments were elicited when TAP presented fluctuating optimizations
over relatively short timespans. Based on detailed post-flight analysis, these apparent fluctuations were generally
valid outcomes resulting from changing trajectory dynamics or new intruder traffic, making optimal routes
temporarily unavailable. Nevertheless, the Evaluation Pilots were clearly more comfortable when TAP settled on a
single solution for an extended period. Methods are being investigated in which new optimization searches are
seeded with current solutions to minimize the occurrence of inconsequential solution fluctuations.
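The seeding approach under investigation is not specified in this work. A related damping technique, sketched below under that caveat, is display hysteresis: retain the currently displayed solution unless a new candidate improves on it by more than a small tolerance (the tolerance value, data structure, and function name are all illustrative):

```python
def damped_choice(candidates, current, tolerance=0.02):
    """Retain the currently displayed solution unless a new candidate beats
    its benefit by more than `tolerance` (fractional), suppressing
    inconsequential solution fluctuations between optimization updates.

    candidates: list of (name, benefit) tuples; current: tuple or None.
    """
    if not candidates:
        return None
    best = max(candidates, key=lambda s: s[1])
    if current is not None and current in candidates:
        if best[1] <= current[1] * (1 + tolerance):
            return current  # improvement too small to justify a switch
    return best

# A 1% better alternative does not displace the displayed solution...
print(damped_choice([("A", 100.0), ("B", 101.0)], ("A", 100.0)))  # ('A', 100.0)
# ...but a 10% better one does.
print(damped_choice([("A", 100.0), ("B", 110.0)], ("A", 100.0)))  # ('B', 110.0)
```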
The Evaluation Pilots also provided a wealth of additional suggestions for TAP HMI and functionality
enhancements. The most common feedback concerned the Auto Mode “Selected Optimization” Screen. This screen
displays a graphical representation of the lateral portion of the route change selected by the pilot, in addition to
regularly updated predictions of the route change and its fuel and time outcomes. Many of the Evaluation Pilots
desired a more detailed depiction of the current route to assist in correlating TAP recommendations with other
aircraft displays (e.g., navigation and weather radar displays). Some pilots requested greater use of the
EFB’s touch-screen interface (e.g., for entering waypoints). Suggestions were also made for enhancing TAP
functionality, including the incorporation of new airspace hazard classes into the optimization (e.g., volcanic ash,
icing) and the accommodation of preferred routings and ATC sector boundaries.
D. TAP Operational Use Outcomes
The TAP operational use outcomes are addressed under the following headings: ADS-B traffic counts;
optimization outcomes; computed time and fuel savings validation; trajectory geometry effects; and ATC handling.
1. ADS-B Traffic Counts
TAP processed 710 unique ADS-B targets (i.e., traffic aircraft) during 19.5 hours of data collection (which
excluded the departure and arrival flight phases). Analysis of ground-recorded traffic data indicates this was
approximately 12 percent of the total traffic (non-ADS-B and ADS-B) within ADS-B detection range. More ADS-B
traffic was detected on northbound Flights 5 through 10, which traversed New York’s airspace, than on Flight 1 to
the west, and Flights 3 and 4 to the south. Figure 17 illustrates traffic counts by flight (left) and the distribution of
ADS-B detection range over the 19.5 hours (right). Flight 1 had a higher traffic count (67) than was expected due to
the ownship detecting ADS-B traffic at Washington Dulles (IAD) and significant ADS-B traffic on north-south jet
routes through Washington center (ZDC). About 85% of ADS-B traffic was first detected between 50 and 100 NM
of the ownship. The ADS-B traffic aircraft generally cruised at higher altitudes than the ownship (FL320+) with a
significant percentage of these ADS-B traffic aircraft first detected when either climbing or descending through the
ownship altitude. These climbing and descending aircraft accounted for 12 of the 15 ADS-B traffic aircraft first
detected less than 50 NM from the ownship.
Figure 17. Total detected ADS-B targets by flight (left) and ADS-B detection range measured at first detection (right).
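The range distribution on the right of Figure 17 amounts to a fixed-width binning of first-detection ranges; a sketch with illustrative data, where the 10 NM bin width matches the figure’s axis:

```python
def range_histogram(first_detect_nm, bin_width=10, max_range=150):
    """Bin ADS-B first-detection ranges (NM) into fixed-width bins,
    returning a list of (bin_start_nm, count) pairs."""
    n_bins = max_range // bin_width
    counts = [0] * n_bins
    for r in first_detect_nm:
        counts[min(int(r // bin_width), n_bins - 1)] += 1
    return [(i * bin_width, c) for i, c in enumerate(counts)]

# Illustrative sample: four of six targets first detected at 50-100 NM.
ranges = [62, 75, 88, 93, 41, 110]
hist = dict(range_histogram(ranges))
print(sum(hist[b] for b in range(50, 100, 10)))  # 4
```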
2. Optimization Outcomes
Figure 18 illustrates a sample optimization outcome from Flight 3 in relation to the original route and the SUA
areas to be avoided, shown in Google Earth™. The red polygons represent SUA that TAP was designed to avoid.
The green route indicates the original flight plan, and the yellow route indicates the “as flown” trajectory. The route
was flown counter-clockwise. During the outbound leg to CRE and the initial inbound leg just past ILM, the
Evaluation Pilot performed extensive testing of the TAP functions and HMI while the aircraft purposely remained
on the planned route. Approximately halfway between ILM and FAY, the Evaluation Pilot consulted TAP for an
optimization solution and made a route change request to ATC, receiving approval shortly before reaching FAY.
The solution consisted of Direct LANHO (waypoint chosen by TAP from its navigation database), Direct CVI (the
“rejoin” waypoint), bypassing FAY, RDU, and LVL from the original route. This solution passed just barely north
of the Seymour Johnson Echo Military Operations Area, illustrating TAP’s function to avoid restricted airspace.
Figure 18. Sample operational outcome from Flight 3.
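The Flight 3 route change above (direct to an off-route fix, then direct to a downstream rejoin waypoint, dropping the bypassed waypoints) amounts to splicing the remaining flight-plan waypoint list. A schematic sketch, where the trailing waypoint "DEST" is an illustrative placeholder, not part of the actual route:

```python
def apply_route_change(remaining_route, off_route_wpts, rejoin_wpt):
    """Splice a lateral route change into the remaining waypoint list:
    fly direct to the off-route waypoint(s), then direct to the rejoin
    waypoint, bypassing every original waypoint upstream of it."""
    if rejoin_wpt not in remaining_route:
        raise ValueError("rejoin waypoint must be on the existing route")
    i = remaining_route.index(rejoin_wpt)
    return list(off_route_wpts) + remaining_route[i:]

# Flight 3 example: Direct LANHO, Direct CVI, bypassing FAY, RDU, and LVL.
remaining = ["FAY", "RDU", "LVL", "CVI", "DEST"]
print(apply_route_change(remaining, ["LANHO"], "CVI"))
# ['LANHO', 'CVI', 'DEST']
```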
Figure 19 shows a second example of a TAP Auto-mode trajectory optimization, from Flight 9, which was also
proximate to an SUA (R-4001A). The orange route in the figure indicates TAP’s original solution, which was
subsequently modified by ATC (yellow route) to provide extra separation from the airspace. An interesting aspect of
both flight paths was that TAP’s avoidance solutions did not incorporate any buffers around the SUA. The TAP
solutions of Flights 3 and 9 passed approximately 0.35 nm and 0.80 nm outside their respective SUAs, illustrating
the accuracy of TAP’s algorithms. ATC’s modification of Flight 9’s requested route to reduce SUA proximity
indicates a probable need to include SUA buffers in generating route-change requests. Additional buffering can be
easily addressed in TAP calculations, though accounting for local ATC variations such as those encountered here
would be more challenging. This unexpected observation serves as a good example of the value of flight-trial
findings that might not have surfaced from a pure simulation experiment.
Figure 19. Flight 9 Auto mode optimization.
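Adding the buffer suggested above amounts to testing the candidate trajectory’s closest approach to each SUA boundary against a margin rather than zero. A flat-earth geometric sketch, where the 1 NM buffer value and function names are illustrative, not TAP parameters:

```python
import math

def point_segment_dist(p, a, b):
    """Distance from point p to segment a-b (coordinates in NM on a
    local flat-earth projection)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def clears_sua(route_pts, sua_vertices, buffer_nm=1.0):
    """True if every route point stays at least buffer_nm from every edge
    of the SUA polygon. (A necessary, not sufficient, check: a full
    implementation would also test route segments and polygon containment.)"""
    n = len(sua_vertices)
    edges = [(sua_vertices[i], sua_vertices[(i + 1) % n]) for i in range(n)]
    return all(point_segment_dist(p, a, b) >= buffer_nm
               for p in route_pts for a, b in edges)

# A point 0.35 NM from an SUA edge fails a 1 NM buffer check.
sua = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(clears_sua([(5, 10.35)], sua))  # False
```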
Table 3 summarizes key parameters for all the TAP-inspired route optimizations requested of ATC during the
flight trials. The Combo optimization comprised a combined lateral and vertical optimization.
Table 3. Route optimization requests made to ATC and the outcome in time and fuel savings.
Flt a    Type     ATC Response   Time Savings (min:sec)    Fuel Savings (lb)
                                 TAP        FMS            TAP        FMS
4 (2)    Lateral  n/a c          Invalid d  n/a            Invalid d  n/a
4 (1)    Combo    Denied         10:50      -              156        -
4 (2)    Lateral  Deferred       7:14       -              104        -
4 (1)    Lateral  Approved       n/a e      12:00          n/a e      120
7 (1)    Combo    Approved       13:31      n/a b          175        n/a b
7 (1)    Lateral  Approved       4:13       4:00           50         70
8        ATC unable to address requests due to workload and/or frequency congestion
9 (2)    Combo    Denied         -          1:00           -          14
9 (1)    Combo    Approved       n/a f      4:00           n/a f      32
10 (1)   Combo    Approved       1:26       4:00           14         32
a Numbers in parentheses indicate the number of off-route waypoints in the selected optimization. Zero indicates a
direct routing to a downstream waypoint already on the flight-planned route. b FMS values not recorded. c ATC
unable to respond due to workload. d Outlier TAP values (71:25 minutes, 1032 lbs. fuel savings). e No TAP
solution (hybrid of previous request). f Unable to compare TAP and FMS computed savings due to
difference in request and ATC clearance.
In total, 12 TAP-inspired ATC requests were made: two were denied, nine were approved, and ATC was unable
to directly respond to the request in one case due to workload and/or frequency congestion. Some factors unique to
the test design may have affected the ATC responses. These included the unusual round-robin nature of the flight
paths, and the special treatment of the test aircraft by ATC discussed in subsections 4 and 5, respectively, below.
3. Computed Time and Fuel Savings Validation
Verification of the TAP time/fuel outcome predictions was not a specific flight test objective, but there was good
agreement on three of the flights between the time and fuel savings computed by TAP and by the FMS. The
discrepancies on the other flights are attributed to the lack of a high-fidelity Avanti performance model available to
TAP for predicting ownship trajectories. It should be noted that the observed fuel and time savings recorded from
the FMS were not dependent on TAP’s Avanti surrogate performance model, and represented real savings achieved
by TAP.
4. Trajectory Geometry Effects
The nature of round-robin flights presented an optimization dilemma: how to optimize a flight whose destination
is also the departure airport, albeit with intermediate trajectories of almost a thousand miles before returning to the
start point? In the case of flight plans five and six, the outbound and inbound legs also crossed each other and shared
a common waypoint. While expedient from a planning perspective, these attributes caused some difficulties in
TAP’s real-time modeling of the aircraft’s intended trajectory. This was compounded by the FMS design limitation
of not outputting full route data on its GAMA A429 bus. The problem was partially mitigated by careful use of
TAP’s optimization limit function, which defined the termination point for all optimization activity. Properly used,
this allowed TAP to calculate sensible optimizations, even for flights returning to their start point. This problem was
an artifact of the flight test design, and normal operational flights with differing origins and destinations would not
encounter this issue.
5. ATC Handling
Although not a TAP issue, the nature of the ATC interactions with the flight-test aircraft on some of the flights
limited the ability to fully test TAP capabilities. Prior to the test flights, the TASAR flight-trial test plan was
coordinated with each ATC facility affected by the four intended routes. The purpose of the test was explained, as
was the need to file non-standard and not-optimal routing and altitudes (i.e., to provide routes which TAP could
optimize). Adjustments in some routings and altitudes were also made to accommodate local ATC restrictions, as
requested. In most of the ATC facilities, bulletins were issued to the area supervisors each day that a test flight was
planned through their airspace. Each evaluation flight included a task to request ATC approval of at least one TAP-
derived optimization of the route. Given the unusual nature of the flight plan (i.e., round-robin and using non-
standard routing), two unintended situations were occasionally experienced. The first was an eagerness by the
controllers to accommodate the special needs of this pre-briefed NASA flight. In one case, ATC was very willing to
accommodate a TAP request, and relayed it across several sector boundaries, causing a significant delay in request
approval. The downstream controller eventually cleared the flight to the optimization fix, but by this time the chosen
waypoint was passing abeam the aircraft, and it was impractical to complete the original optimization. It is unlikely
that routine flights would have been treated in this privileged and somewhat unconventional manner. The second,
less frequent, eventuality was an inability to accommodate any “special” requests due to ATC workload and
frequency congestion. These issues will be addressed in future flights by avoiding round-robin flight plans and
eliminating the need to coordinate the special routing in advance.
IV. TAP Future Plans
This first TAP flight trial provided an invaluable stepping-stone towards an operational deployment of the
system using target customer equipment. A follow-on flight trial is in the planning stage for the late 2014 / early
2015 timeframe, in preparation for operational-use testing of TASAR with one or two Part 121 partner air carriers
later that year. The flight trial will endeavor to match the equipment and operations of the partner airlines as closely
as possible, following the model established in the 2013 trials. This will involve some additional test-bed aircraft
modifications, an updated TAP HMI, and porting of the TAP software into the new target environments. With the
basic operation and avionics communications already established, there will be an increased focus on new user-
driven TAP functionalities and further HMI enhancements. At the successful conclusion of these upcoming trials,
the TAP system should have reached TRL 8 (mission qualified), and be ready for operational deployment to
industry.
V. Conclusion
The TAP flight evaluations met or exceeded all of the objectives set for the program. TAP was found to operate
successfully using the data available from commercially available ADS-B and broadband equipment. The software
loaded and executed normally, and pilot usability and workload feedback was positive. The software generated
usable lateral and vertical optimizations that were approved by ATC 75 percent of the time, and resulted in actual
fuel and flight-time savings. These accomplishments were demonstrated in a TRL 6 relevant, semi-operational
environment with software that was successfully ported from a TRL 4 simulation platform. In the process, a number
of valuable lessons were learned about the software, and the majority of the practical challenges to TAP’s
deployment were overcome. Future research will build on this important foundation and is expected to lead to
operational deployment of TAP on the revenue flights of one or two partner airlines.
Acknowledgments
This project was funded by NASA under contract NNL12AA06C. Development of the TAP software system
was performed by Engility Corporation.
References
DC, 2013.
2Ballin, M., & Wing, D. (2012). Traffic Aware Strategic Aircrew Requests (TASAR). American Institute of Aeronautics and
Astronautics, AIAA-2012-5623. doi:10.2514/6.2012-5623.
3Woods, S.E., Vivona, R.A., Roscoe, D.A., LeFebvre, B.C., Wing, D.J., & Ballin, M.G. A Cockpit-based Application for
Traffic Aware Trajectory Optimization. AIAA Guidance, Navigation, and Control Conference, Boston, MA, 2013.
4Henderson, J., Idris, H., & Wing, D. (2012). Preliminary Benefits Assessment of Traffic Aware Strategic Aircrew Requests
(TASAR). American Institute of Aeronautics and Astronautics, AIAA-2012-5684. doi:10.2514/6.2012-5684.
5Koczo, S., & Wing, D. (2013). An Operational Safety and Certification Assessment of a TASAR EFB Application. Paper
presented at the Digital Avionics Systems Conference (DASC), 2013 IEEE/AIAA 32nd, 2A1-1-2A1-18.
doi:10.1109/DASC.2013.6712530.
6Schnell, T., Cover, M., Engler, J., & Koczo, S. (2014). Traffic Aware Strategic Aircrew Requests (TASAR) Human-in-the-Loop
Assessment. NASA technical report No. NASA/CR-2014-000000. Manuscript submitted for publication.
7Karr, D., Vivona, R., Roscoe, D., Depascale, S., & Wing, D. (2012). Autonomous Operations Planner: A Flexible Platform
for Research in Flight-Deck Support for Airborne Self-Separation. American Institute of Aeronautics and Astronautics, AIAA-
2012-5417. doi:10.2514/6.2012-5417.
8Federal Aviation Administration, "Guidelines for the Certification, Airworthiness, and Operational Use of Portable
Electronic Flight Bags," AFS-400, AC 120-76B, Washington, DC, 2012.
9Henderson, J. (2013). Traffic Aware Strategic Aircrew Requests (TASAR) Concept of Operations. NASA technical report
No. NASA/CR-2013-218001. Hampton, VA: NASA Langley Research Center.
10Koczo, S., "Analysis of Operational Hazards and Safety Requirements for Traffic Aware Strategic Aircrew Requests
(TASAR)," NASA technical report No. NASA/CR-2013-218002, 2013.
11Federal Aviation Administration, "System Safety Analysis and Assessment for Part 23 Airplanes," AC 23.1309-1E,
Washington, DC, 2011.
12RTCA SC-135, "Environmental Conditions and Test Procedures for Airborne Equipment," RTCA, Inc., RTCA/DO-160G,
Washington, D.C., 2010.
13RTCA SC-205, "Software Considerations in Airborne Systems and Equipment Certification," RTCA, Inc., RTCA/DO-178C,
Washington, D.C., 2011.
14Wing, D., Ballin, M., Koczo, S., "Developing an Onboard Traffic-Aware Flight Optimization Capability for Near-Term
Low-Cost Implementation," Aviation 2013, AIAA-2013-4231, AIAA, Washington, DC, 2013.
15Roscoe, A., and Ellis, G., "A Subjective Rating Scale for Assessing Pilot Workload in Flight: A Decade of Practical Use,"
Royal Aerospace Establishment, ADA227864, Farnborough, U.K., 1990.
16Federal Aviation Administration, "Guidelines for the Certification, Airworthiness, and Operational Use of Portable
Electronic Flight Bags," AC 120-76B, Washington, DC, 2012.
17ARINC, "828-3 Electronic Flight Bag (EFB) Standard Interface," ARINC Incorporated, 828-3, Annapolis, MD, 2012.
18ARINC, "834-3 Aircraft Data Interface Function (ADIF)," ARINC Incorporated, 834-3, Annapolis, MD, 2012.