Augmented Reality in the Aerodrome Tower

In recent years, Augmented Reality has become one of the major focus points of user interface development. With rapid increases in computing power and advances in software and hardware, it has moved from theoretical approaches towards industry-wide application and mass production.

Augmented Reality combines virtual elements generated by a computer with the real world. Early developments from the military resulted in Head-up Displays (HUDs) and Head-mounted Displays (HMDs) for piloting purposes. More complex interface developments based on HMDs were taken up by the gaming industry some twenty years ago, but they lacked the technological sophistication of current devices. Today, devices are portable, lightweight and able to integrate complex 3D renderings into the real-world view.

Real-time simulation

Royal NLR has been working on applications of HMDs that provide an AR experience in the aerodrome tower ATC environment for more than a decade now. The first such device was tested in 2010 on the NARSIM Tower platform, NLR's in-house developed environment for highly realistic real-time simulations of tower operations.

More recently, NLR participated in a consortium for Digital Technologies for Tower (DTT), a project within the SESAR 2020 Programme, conducting real-time simulation experiments focused on the use of an AR device for Attention Capturing and Attention Guidance for tower controllers.

During the experiments we used two Microsoft HoloLens 2™ devices in a simulated aerodrome control tower environment for Amsterdam Airport Schiphol (EHAM), and we demonstrated an operational concept for tower controller attention capturing and guidance based on visual and auditory cues. Existing Schiphol runway controller alerting systems triggered the attention capturing and guidance process by providing alerts with different priorities and in different operational contexts; these alerts could potentially occur at the same time.

A team of Simulation and Human Performance experts at NLR elaborated the basic sequence of operational steps for guiding controller attention during safety-critical events and designed the necessary cues inside the AR device. The presentation of the cues, in combination with aircraft labels containing information provided by the surveillance and flight data processing system, increased the Situational Awareness of tower controllers.

In a typical sequence of attention capturing and guidance, the safety net tools detected an event and relayed that information to an attention guidance logic component. A non-intrusive element was then displayed in the centre of the tower controller's field-of-view, indicating the type of alert and the most relevant information, including a pointer symbol towards the location where the event took place. This attention capturing activity had to be acknowledged by the user. The callsigns of the aircraft or vehicles involved were also highlighted. They resembled radar labels and appeared as cues inside the field-of-view of the AR device whenever the user was not looking in the desired direction, and they snapped to the actual labels of the relevant aircraft and vehicles once the area of the safety-critical event was in view. In this way, they also guided the user towards the area of interest. 3D spatial auditory cues (a voice indicating the type of event) were added as another guidance element.
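To give a feel for how such an attention guidance logic component could be structured, here is a minimal sketch in Python. It is illustrative only: the event types, priority levels, field-of-view value and cue names are assumptions for the sketch, not the actual NARSIM or Schiphol alerting interfaces.

```python
import math
from dataclasses import dataclass
from enum import Enum

# Hypothetical alert priorities and event structure; the real Schiphol
# runway controller alerting systems and their interfaces are not public.
class Priority(Enum):
    INFORMATION = 1
    CAUTION = 2
    ALARM = 3

@dataclass
class SafetyNetEvent:
    event_type: str                  # e.g. "RUNWAY_INCURSION" (illustrative)
    priority: Priority
    position: tuple[float, float]    # event location (x, y) in metres
    callsigns: list[str]             # aircraft/vehicles involved

def bearing_to(observer: tuple[float, float], target: tuple[float, float]) -> float:
    """Compass bearing from the tower to the event location, in degrees [0, 360)."""
    dx, dy = target[0] - observer[0], target[1] - observer[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0

def guidance_cues(event: SafetyNetEvent, gaze_deg: float, acknowledged: bool,
                  tower: tuple[float, float] = (0.0, 0.0), fov_deg: float = 50.0):
    """One update cycle of the attention capturing and guidance logic."""
    cues = []
    # Signed angular offset between current gaze and event bearing, in [-180, 180).
    offset = (bearing_to(tower, event.position) - gaze_deg + 180.0) % 360.0 - 180.0

    if not acknowledged:
        # Capture attention: non-intrusive central element showing the alert
        # type plus a pointer towards the event; must be acknowledged by the user.
        cues.append(("central_alert", event.event_type, event.priority.name))
        cues.append(("pointer", "left" if offset < 0 else "right"))
        # 3D spatial audio cue: a voice announcing the type of event.
        cues.append(("spatial_audio", event.event_type))

    if abs(offset) > fov_deg / 2.0:
        # Event area out of view: show radar-like proxy labels inside the
        # field-of-view that pull the gaze towards the area of interest.
        cues += [("edge_label", cs) for cs in event.callsigns]
    else:
        # Event area in view: snap highlights onto the actual labels of the
        # relevant aircraft and vehicles.
        cues += [("snapped_highlight", cs) for cs in event.callsigns]
    return cues
```

In the experiment this kind of logic of course ran continuously against live surveillance and head-tracking data; the sketch only captures the decision structure described above.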

The test programme consisted of different events and combinations of events that occurred while two experienced tower controllers carried out routine work in the NARSIM environment for Schiphol airport. Pseudo-pilots were in control of aircraft movements and communicated with the tower controllers. Similar traffic scenarios were used to compare working with and without the AR device. Results were gathered through questionnaires after each test run and dedicated debriefing sessions.

Experiment results

Our experiment showed that the developed operational concept for capturing and guiding the attention of aerodrome tower controllers with an AR device can be considered feasible, despite a reduced operational scope and feedback calling for improvement of elements of the chosen concept. These improvements mainly concerned the symbology and timing of attention guidance cues. In general, though, this result means that we can continue working on the concept from a solid basis.

The information in the AR device correlated accurately with the objects in the simulated outside view, and tracking labels followed the aircraft without noticeable deviations. We assume that this may be different in a real tower environment with less perfect surveillance information, but we also know from our experience with the device that there are methods to compensate for such imperfections. Further, the visibility of the symbology sometimes competed with reflections of light from the surroundings, although such issues might be more prominent in a simulator due to the low light intensity and contrast in the out-the-window view. Finally, the AR attention guidance module received information from the alerting system inside the NARSIM environment and communicated with the AR device as expected.

The two experienced tower controllers participating in our experiment described the device in combination with the concept as a favourable addition to the controller working environment. While desired technical performance improvements (mostly related to user comfort and general adjustments) will depend on vendor development, the Microsoft HoloLens 2™ used was considered a technically useful device for implementing prototypes for Attention Capturing and Guidance with aural and visual cues. A major recommendation was to elaborate detailed guidelines for the use of the system and to adapt it to the different roles in the control tower.

Our vision

For a future vision of the use of AR devices in the control tower environment, several possible development paths for aerodrome tower control should be distinguished. There are, after all, very different existing approaches and applications of technology, some of which would reduce the potential of AR or even eliminate its use in tower operations in the first place.

The latter would lead us towards full automation of tower controller tasks and so-called AI algorithms that would completely eliminate the human operator from the tower environment, and thus the need for an out-the-window view. While such developments are taking place, we see them in the context of automation work that still needs to progress in several areas of human perception and cognition.

Further digitalization and remote operations, however, are no longer merely under development; they are here to stay. They still require human operators, but already include different means for visualizing the outside world and integrating relevant information for tower controllers as part of the visualization concept. Set-ups obviously vary in scope and size, depending on the complexity of the operation that is carried out. For smaller airports or operations, the additional system support and information provision will already bring benefits, reducing the number of controller roles or improving work efficiency. However, if more than one controller is supposed to work with the outside view, and if those controllers need to see different information content in it, AR devices could help and offer alternatives to additional screens on the remote tower working position or on smaller working positions added to the remote set-up (as can be seen at the Budapest or Fort Collins remote tower centres). The advantages might grow even larger if attention capturing and guidance mechanisms are added, not only for single remote tower set-ups, but perhaps to a larger degree for multiple remote tower set-ups, where one or more controllers need to maintain a mental picture of the operational situation at two different airports. Clearly, AR technology alone would not be enough to improve the situation, but it will certainly offer additional possibilities in combination with planning and alerting features.

If visual operations from existing tower buildings continue to exist and need to be improved, AR will certainly play a big role in these environments, as it offers the possibility to add relevant information to the outside view of a particular controller without adding further equipment to the working position or forcing the controller to look down at it. Furthermore, it could lead to a new definition of controller roles and responsibilities, where the AR logic determines (or is fed with) the sequence of operations and the course of actions that need to be carried out by a particular individual in the tower. Obviously, such novel arrangements would require a high degree of automation and a clear delegation of authority, particularly in system failure situations. Nevertheless, one of the next steps in the development could be an operational situation in which all current working arrangements are re-defined.

It should be the task of ATM research organizations to look into such novel concepts without being restrained by ANSP structures or industry limitations. This would at least lead to new insights regarding the use of technology and would give fresh impetus to rather conservative developments started in the past, such as the introduction of electronic flight strips (EFS). Many EFS implementations we have seen were built on existing operations, roles, structures, and responsibility and authority rules; they simply replaced a paper strip with an electronic representation. Of course, the mere fact that these strips could show additional information can be seen as a revolution, but the actual paradigm-shifting question for the airport operation is seldom asked: why do we use strips and bays as structural support aids for controlling aircraft at airports?

An elimination of such structures could direct the focus to what is needed in the real-world view to better plan and execute departure sequences and to better determine the current clearance status of an aircraft. Or do we even need to know a clearance status? It could be sufficient to use attention capturing and guidance methods to advise the controller to carry out certain actions that ensure both that the departure sequence can be realized and that aircraft do not have to wait for a required clearance.

This leads us back to the topic of automation and task delegation. AR in combination with attention guidance will clearly be a helpful means to carry out any control and monitoring tasks more efficiently, simply because a mental picture can be superimposed onto the real-world view. Symbols, colours and sounds can be used to tell controllers what is happening and what needs to be done. Going a step further, automation could already carry out some of the basic tasks, such as start-up, pushback, and landing clearances, via speech recognition, while attention capturing and guidance is used to maintain controller Situational Awareness when controllers mainly have a monitoring task. Even smarter automation could receive information from the AR device sensors about current controller workload and stress levels, and could thus take over some of the tasks in an adaptive manner to reduce the workload, as sketched below. This opens up many new fields of study.
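As a thought experiment, such adaptive task delegation could follow a simple hysteresis scheme. The sketch below is purely hypothetical: the task names, thresholds and the idea of condensing AR sensor data into a single workload estimate are our assumptions for illustration, not an existing NLR or HoloLens capability.

```python
from dataclasses import dataclass, field

# Routine clearance tasks that automation could take over (illustrative names).
ROUTINE_TASKS = frozenset({"startup_clearance", "pushback_clearance",
                           "landing_clearance"})

@dataclass
class AdaptiveAutomation:
    takeover_threshold: float = 0.7   # workload above which automation steps in
    release_threshold: float = 0.4    # workload below which tasks are handed back
    delegated: set[str] = field(default_factory=set)

    def update(self, workload: float) -> set[str]:
        """Re-balance routine tasks given an estimated workload in [0, 1].

        The estimate might be derived from AR device sensors (gaze dispersion,
        head movement, voice stress); how to do so reliably is itself an open
        research question.
        """
        if workload > self.takeover_threshold:
            self.delegated = set(ROUTINE_TASKS)   # automation takes over
        elif workload < self.release_threshold:
            self.delegated.clear()                # controller resumes all tasks
        # Between the two thresholds nothing changes: this hysteresis avoids
        # rapid hand-overs back and forth around a single cut-off value.
        return self.delegated
```

The gap between the two thresholds is a deliberate design choice: it prevents tasks from bouncing between controller and automation whenever a noisy workload estimate fluctuates around a single cut-off value.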

Another aspect, of course, is whether additional features could be integrated into the AR device view, such as video streams from cameras at gate positions that cannot be seen well from the tower, or video that zooms in on certain aspects of the operation at the gate to give an indication of the status of boarding and de-boarding, fuelling, catering and baggage handling. For some areas it might be useful to offer detailed views, e.g. for runways whose thresholds are far away from the tower or where part of the runway cannot be fully seen (gap fillers). Eventually, a complete overview of the surveillance picture could be added as well. Or the controller could be immersed in a mixed virtual and augmented reality world and choose views of a particular situation from any desired position or angle. The possibilities seem endless.

While some of these ideas still have a number of obstacles to overcome, both technically and operationally, and will not instantly lead to enthusiastic reactions across the aviation sector, we must be prepared for the fact that these technologies are coming, and we need to ask the right questions to be ready for such a future. Last but not least, we need to ensure that we have investigated the full potential of new technologies that might offer clear improvements if applied in the appropriate context. At Royal NLR, we are prepared to work with such technology and we will embrace the innovative potential it provides.

Want to know more?

About the author, Jürgen Teutsch:

With a background in space engineering and sector knowledge gathered at Airbus, Jürgen Teutsch joined NLR in 2000 as manager for ATM and airport simulation projects. His work is well-recognized by industry peers and has a strong focus on innovative solutions for air traffic controller support tools.
