AoC number

97

Primary domain

ANS

Secondary domain

T

Description

Future ATM tools may achieve enhanced functionality using software intelligent agents or adaptive automation. The characteristics of these agents can differ significantly from those of most software tools in use today. They may be very complex in function and may include intent and reasoning systems not well understood by the controller. They may approach semi-autonomous status in the eyes of those interacting with them. They may have unique, unfamiliar, and unanticipated characteristics and interfaces. Rather than “weak” AI, which is programmed to perform a single task intelligently, many systems will use “strong” AI, which uses neural networks and deep learning to emulate the cognitive capacity of the human brain.

Neural networks can be used for data compression and rapid processing, integrating data from aircraft sensors to provide an optimal course of action. NASA plans to route controller flight information, along with other variables and environmental factors, through a centralized computer system to coordinate flights automatically, with testing occurring until 2020. Meanwhile, a Stanford paper described how the collision avoidance system ACAS X could use neural networks to compress a 2 GB database by a factor of 1,000 and optimize the data in real time.
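The compression idea above can be illustrated with a small sketch: train a neural network with far fewer parameters than a lookup table has entries, so the network replaces the table. The table contents, network size, and training setup below are illustrative assumptions, not the ACAS X design.

```python
# Hypothetical sketch: replacing a large score lookup table with a small
# neural network, the idea behind the ACAS X compression described above.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "lookup table": a score indexed by two normalized state variables.
grid = np.stack(np.meshgrid(np.linspace(0, 1, 50),
                            np.linspace(0, 1, 50)), -1).reshape(-1, 2)
scores = np.sin(3 * grid[:, 0]) * np.cos(2 * grid[:, 1])  # 2500 table entries

# Tiny one-hidden-layer MLP: far fewer parameters than table entries.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, (h @ W2 + b2).ravel()

losses, lr = [], 0.05
for _ in range(2000):
    h, pred = forward(grid)
    err = pred - scores
    losses.append(float(np.mean(err ** 2)))
    # Plain gradient descent; backprop gradients computed before updates.
    g2 = err[:, None] / len(grid)
    g1 = (g2 @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ g2; b2 -= lr * g2.sum(0)
    W1 -= lr * grid.T @ g1; b1 -= lr * g1.sum(0)

params = W1.size + b1.size + W2.size + b2.size
print(f"{params} parameters stand in for {len(scores)} table entries")
```

The network's 65 parameters approximate 2,500 table entries; scaled up, the same trade (a small function approximator in place of a huge table) is what makes the reported 1,000x compression plausible, at the cost of approximation error that must be bounded for certification.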
Many safety-related and critical systems warn of potentially dangerous events; for example, the Short Term Conflict Alert (STCA) system warns of potential conflicts between aircraft. Although installed with current technology, such critical systems may become out of date due to changes in the circumstances in which they function, in operational procedures, and in the regulatory environment. The User Request Evaluation Tool (URET) uses flight plan data, forecast winds, aircraft performance characteristics, and track data to derive expected aircraft trajectories. URET then predicts conflicts between aircraft, and between aircraft and special-use or designated airspace. It gives the controller a tool to test potential amendments to an aircraft’s route and/or altitude before issuing them. The scope and scale of these new ATM systems will require new approaches to software certification, as current validation procedures were not designed to contend with deep learning and nondeterministic behavior (see AoC_013). Furthermore, reluctance to take over after automation has failed is already a problem in aviation, so new approaches must be designed to avoid repeating those mistakes.
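The trajectory-probing step described above can be sketched in a few lines: extrapolate two tracks as straight lines and test whether horizontal separation drops below a threshold within a look-ahead window. This is a minimal illustration of a conflict probe, not URET’s actual algorithm; the thresholds and units are assumptions.

```python
# Illustrative conflict probe: straight-line extrapolation plus a
# closest-point-of-approach (CPA) check. Not the URET implementation.
import math

def conflict_probe(p1, v1, p2, v2, sep_nm=5.0, lookahead_min=20.0):
    """p = (x, y) in NM, v = (vx, vy) in NM/min.
    Returns (conflict_predicted, time_of_cpa_in_minutes)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]          # relative position
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]        # relative velocity
    dv2 = dvx * dvx + dvy * dvy
    # Time of closest approach, clamped to [0, look-ahead window].
    t_cpa = 0.0 if dv2 == 0 else max(0.0, min(lookahead_min,
                                              -(dx * dvx + dy * dvy) / dv2))
    miss = math.hypot(dx + dvx * t_cpa, dy + dvy * t_cpa)
    return miss < sep_nm, t_cpa

# Head-on tracks 40 NM apart closing at 8 NM/min: conflict in 5 minutes.
print(conflict_probe((0, 0), (4, 0), (40, 0), (-4, 0)))  # → (True, 5.0)
```

The hazard noted later in this entry (inadequate warning time due to computational delay) maps directly onto this loop: if the probe runs late, t_cpa shrinks and the alert may arrive too close to the loss of separation.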

Potential hazard

  1. Potential for controller error if these systems are given limited control of ATM functions, such as separation assurance, independently of the human.
  2. Actual or potential loss of separation where alerts and additional warning times are inadequate due to computational delay.
  3. Garbling is sometimes seen in non-Mode S radars when the oblique distances between each aircraft and the respective radars are very similar. The label for one aircraft can temporarily disappear from the radar screen and be replaced by two labels, one showing the correct flight level of the aircraft and another showing a different level. When two close aircraft are at similar distances from the radar, their replies can overlap, resulting in new tracks appearing on the screen and in the label for one aircraft splitting from the corresponding aircraft symbol.
  4. The problem with autonomous vehicles is not just that they’re incredibly advanced pieces of hardware, or that they have to operate in the chaotic environment of the streets; they also have to interact with the driver. That wouldn’t be much of an issue if it involved a switch flipped between manual and automatic, but Volkswagen points out that autonomous driving involves several possible stages, plus the handover point between mainly manual and mainly automatic driving. These vary from low-speed chores, such as parking assist, to taking over full control while driving at high speed on the motorway. According to Volkswagen, the simplest and currently the most widely used stage is assisted driving, where the driver retains permanent control over the car with the automated system helping with tasks such as parking or reversing. The next level up is the partly automated stage, where the system monitors the driver and takes over only when needed, such as applying the brakes when a pedestrian steps into the road or preventing a dangerous lane change. In the highly automated stage, the system takes over the actual driving, but the driver still has to remain alert and be ready to reclaim control when requested. Then there’s the highest stage, which is fully automated: the system drives the car, and if the driver fails to retake control when requested, the system carries on by itself.
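The automation stages and handover point in item 4 can be made concrete with a small state model. The stage names and the handover rule below are a simplification for illustration, not Volkswagen’s specification.

```python
# Sketch of the automation stages above and the takeover-request hazard.
# Stage names and the handover rule are illustrative assumptions.
from enum import Enum

class Stage(Enum):
    ASSISTED = 1          # driver in permanent control; system helps (parking)
    PARTLY_AUTOMATED = 2  # system intervenes when needed (emergency braking)
    HIGHLY_AUTOMATED = 3  # system drives; driver must reclaim on request
    FULLY_AUTOMATED = 4   # system continues even if driver fails to respond

def on_takeover_request(stage, driver_responds):
    """Who is in control after the system requests a takeover?"""
    if stage is Stage.FULLY_AUTOMATED and not driver_responds:
        return "system"   # carries on by itself
    if stage is Stage.HIGHLY_AUTOMATED and not driver_responds:
        return "unsafe"   # the hazard: no agent is effectively in control
    return "driver"

print(on_takeover_request(Stage.HIGHLY_AUTOMATED, False))  # → unsafe
```

The "unsafe" branch is the case this hazard list is concerned with: in intermediate automation stages, a non-responding human leaves a gap in control authority, whether the vehicle is a car or an ATM-managed aircraft.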

Corroborating sources and comments

http://www.sita.aero/file/2951/New_generation_cockpit_IT_integration_position_paper.pdf

http://download.intel.com/research/share/UAI03_workshop/Kipersztok/UAI-KipersztokO.doc

Next Generation Air Transportation System, Human Factors Research Status Report, May 1, 2012; http://www.jpdo.gov/library/2012_Human_Factors_Research_Status_v2.0.pdf
November 2013 – Testing Innovative Autopilot on F/A-18 Jet for NASA’s Space Launch System.

For NASA, this is the first application of an adaptive control concept to launch vehicles, adding the ability for an autonomous flight computer system to retune itself — within limits — while it’s flying the rocket. The system, called the Adaptive Augmenting Controller, learns and responds to unexpected differences between the actual flight and preflight predictions. This ability to react to unknown scenarios that might occur during flight and make real-time adjustments to the autopilot system improves system performance and flexibility, as well as increasing safety for the crew.

http://www.nasa.gov/centers/marshall/news/news/releases/2013/13-123.html#.Ux9DhqU410w

Jan 2011 – Mahadevan, Nagabhushan, Dubey, Abhishek, and Karsai, Gabor, A Case Study on the Application of Software Health Management Techniques, Technical Report SIS-11-101

Self-adaptive systems, while in operation, must be able to adapt to latent faults in their implementation and in the computing and non-computing hardware, even if these faults appear simultaneously. Software Health Management (SHM) is a systematic extension of classical software fault tolerance techniques that aims at implementing the vision of self-adaptive software using techniques borrowed from system health management. SHM is predicated on the assumptions that (1) specifications for nominal behavior are available for a system, (2) a monitoring system that detects anomalies can be automatically constructed from these specifications, and (3) a systematic design method and a generic architecture can be used to engineer systems that implement SHM. The verification of such adaptive systems is a major challenge for the research community.
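Assumption (2) above — automatically constructing an anomaly monitor from a nominal-behavior specification — can be sketched minimally. The per-signal-bounds spec format and the signal names here are illustrative assumptions, not the report’s actual technique.

```python
# Minimal sketch of SHM assumption (2): a monitor generated from a
# nominal-behavior specification that flags anomalies at runtime.
# Spec format (per-signal bounds) and signal names are hypothetical.
NOMINAL_SPEC = {
    "airspeed_kt": (120.0, 350.0),   # assumed nominal envelope
    "pitch_deg": (-15.0, 20.0),
}

def build_monitor(spec):
    """Automatically construct an anomaly detector from the specification."""
    def monitor(sample):
        # Report every signal whose value falls outside its nominal bounds.
        return [name for name, (lo, hi) in spec.items()
                if not lo <= sample.get(name, lo) <= hi]
    return monitor

check = build_monitor(NOMINAL_SPEC)
print(check({"airspeed_kt": 250.0, "pitch_deg": 5.0}))   # → []
print(check({"airspeed_kt": 90.0, "pitch_deg": 25.0}))   # → ['airspeed_kt', 'pitch_deg']
```

Real SHM monitors are generated from richer behavioral models than static bounds, but even this toy version shows why verification is hard: the monitor’s correctness depends entirely on the completeness of the nominal specification.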

Gosling, G.D. & Hockaday, S.L.M. (1984). Identification and Assessment of Artificial Intelligence Techniques in Air Traffic Control (Research Report UCB-ITS-RR-84-14). Institute for Transportation Studies, University of California: Berkeley, California.

D.A. Spencer, Applying Artificial Intelligence Techniques to Air Traffic Control Automation (1989)

Fieldsend, J. E. and Everson, R. M. (2004). ROC Optimisation of Safety Related Systems. Proceedings of ROCAI 2004, part of the 16th European Conference on Artificial Intelligence, 22nd August, Valencia, pages 37–44; http://www.dcs.ex.ac.uk/people/jefields/JF_17.pdf

B752/B752, en route, north of Tenerife Spain 2011 (Loss of Separation Human Factors)

http://www.skybrary.aero/index.php/B752/B752,_en_route,_north_of_Tenerife_Spain_2011_%28LOS_HF%29
https://www.techopedia.com/definition/31621/weak-artificial-intelligence-weak-ai (Clarification of the difference between weak and strong A.I. While weak A.I. is built for a single task, which it performs intelligently, a strong A.I. has the same cognitive capacity as a human brain, and all that entails. Siri counts as a weak A.I., since she can only handle things for which she was programmed.)

http://www.dailymail.co.uk/sciencetech/article-3662607/The-AI-air-traffic-control-eliminate-wasted-time-tarmac-travellers.html (NASA plans to route controller flight information, along with other variables, through a centralized computer system to coordinate flights. Testing period will last until 2020.)

https://engineering.stanford.edu/news/can-neural-networks-help-make-air-traffic-control-safer (A research paper from Stanford argued for using neural networks similar to Siri to improve air traffic control. The system in question, ACAS X, uses sensors to track and optimize data in real-time. Neural networks were utilized to streamline that data, compressing it by a factor of 1,000 from an initial 2-gigabyte database. As of November 2016, the research is being followed upon.)

https://engineering.stanford.edu/news/can-neural-networks-help-make-air-traffic-control-safer (AI researcher Stuart Russell’s take on the Artificial Intelligence boom writ large. He emphasizes the goal of keeping AI “beneficial”, rather than just “powerful”, and making sure it aligns with human values. Translation to ATC: ability to make judgment calls?)

Last update

2017-08-28