Dr Paul Robertson | Artificial Intelligence in the Cockpit: New Systems Could Help Prevent Aviation Accidents

Feb 20, 2025 | Engineering & Computer Science

Despite significant advances in aviation safety over recent decades, accidents still occur that could potentially be prevented with better warning systems. Dr Paul Robertson of Dynamic Object Language Labs, Inc. (DOLL) is leading groundbreaking research into how artificial intelligence could help pilots avoid dangerous situations. His team’s work reveals promising developments and important cautions about implementing AI in aircraft cockpits, with implications for the future of aviation safety.

The Complex Challenge of Modern Aviation Safety

While commercial aviation has become remarkably safe, the sheer complexity of modern aircraft creates new challenges for pilots. Today’s cockpits feature sophisticated computers that monitor everything from engine performance to weather conditions. When problems arise, these systems generate alerts to warn pilots. In critical situations, however, pilots can be overwhelmed by a rare combination of events and alerts, with disastrous consequences. Even with a human first officer in the cockpit, serious problems that develop slowly over long periods can go unnoticed. The overwhelming majority of fatal aviation accidents are attributed to pilot error.

This challenge became tragically apparent in the 2009 Air France Flight 447 crash, where a relatively simple problem – blocked airspeed sensors – triggered a cascade of confusing alerts that contributed to the loss of all 228 people aboard. Similar incidents continue to occur, highlighting the need for better ways to help pilots quickly understand and respond to developing problems.

The Hidden Patterns in Aviation Accidents

While studying aviation accidents, Dr Paul Robertson noticed a concerning pattern: many crashes shared similar characteristics that might have been recognisable beforehand. Working with colleagues at DOLL and the Massachusetts Institute of Technology (MIT), Dr Robertson began investigating whether artificial intelligence could help identify these patterns and warn pilots before situations become critical.

The team’s analysis revealed that while individual accidents might seem unique, they often followed predictable sequences of events. Even more importantly, many accidents resulted not from catastrophic aircraft failures but from a series of smaller issues that pilots might have handled differently if they had recognised the developing pattern. The key is seeing the big picture that the individual systems, taken together, are describing.

Rethinking How Warning Systems Work

Traditional aircraft warning systems operate on a ‘bottom-up’ principle – individual components monitor specific parameters and generate alerts when something goes wrong. While this approach works well for clear-cut problems like engine failures, it can become problematic in complex situations involving multiple systems.
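
To make the distinction concrete, here is a minimal sketch of the bottom-up principle (the parameter names and limits are invented for illustration and are not taken from any real avionics system): each monitor knows only its own parameter and its own limit, so a single upstream fault can trigger several unrelated-looking alerts at once.

```python
# Hypothetical bottom-up monitoring: each check watches one parameter
# against a fixed limit and raises its own independent alert.
LIMITS = {
    "oil_pressure_psi": (25, 90),        # invented limits, for illustration only
    "indicated_airspeed_kt": (60, 250),
    "engine_rpm": (500, 2700),
}

def bottom_up_alerts(sensors: dict[str, float]) -> list[str]:
    alerts = []
    for name, (low, high) in LIMITS.items():
        value = sensors.get(name)
        if value is None or not (low <= value <= high):
            alerts.append(f"WARNING: {name} out of range ({value})")
    return alerts

# A single upstream fault (such as a blocked airspeed sensor reading zero)
# produces alerts that say nothing about the underlying cause.
print(bottom_up_alerts({"oil_pressure_psi": 55,
                        "indicated_airspeed_kt": 0,
                        "engine_rpm": 2400}))
```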

Dr Robertson and his team proposed a radical shift to a ‘top-down’ approach. Rather than focusing on individual system warnings, they developed an AI system that looks at the overall flight situation and compares it to patterns from previous accidents. This system, called Lightweight Interaction and Storytelling Archive (LISA) and nicknamed ‘First Officer’, sees the story the sensors are telling and provides pilots with clear, timely, and actionable information before problems become critical.

Building a New Kind of Safety System

LISA represents a significant departure from traditional aviation safety systems. Instead of waiting for specific parameters to exceed predetermined limits, it continuously analyses the aircraft’s current state – including factors like airspeed, altitude, configuration, and pilot inputs – and compares this information to a database of historical accidents and near-misses.
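
A minimal sketch of the top-down idea follows. The state features, accident ‘signatures’, and matching rule are simplified placeholders invented for this illustration; they are not DOLL’s actual LISA implementation, which draws on a much richer description of the flight and its accident database.

```python
from dataclasses import dataclass

@dataclass
class FlightState:
    """A simplified snapshot of the aircraft's overall situation."""
    airspeed_trend: str   # e.g. "falling", "stable", "unreliable"
    altitude_trend: str   # e.g. "climbing", "level", "descending"
    flap_setting: str     # e.g. "up", "partial", "full"
    pilot_input: str      # e.g. "nose_up", "nose_down", "neutral"

# Each entry pairs a recognisable accident precursor with the plain-language
# guidance a pilot needs before the situation becomes critical.
ACCIDENT_PATTERNS = [
    ({"airspeed_trend": "unreliable", "pilot_input": "nose_up"},
     "Indicated airspeed unreliable, establish level flight."),
    ({"airspeed_trend": "falling", "altitude_trend": "climbing",
      "flap_setting": "partial"},
     "Airspeed decaying in the climb, lower the nose and check configuration."),
]

def top_down_advisory(state: FlightState) -> str | None:
    """Return guidance for the historical pattern matching the overall
    situation, or None if no pattern matches."""
    for pattern, message in ACCIDENT_PATTERNS:
        if all(getattr(state, key) == value for key, value in pattern.items()):
            return message
    return None

print(top_down_advisory(FlightState("unreliable", "climbing", "up", "nose_up")))
```

The design point is that the advisory is tied to a recognised overall situation rather than to any single out-of-range parameter.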

The system’s design prioritises simplicity and clarity. Rather than generating multiple technical alerts, LISA provides succinct, context-aware warnings that help pilots understand not just what’s happening but why it matters. For example, instead of showing multiple system warnings during a developing situation, it might tell pilots, ‘Indicated airspeed unreliable, establish level flight’, followed by further guidance if necessary.
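
The same presentation principle can be sketched as a simple prioritised message queue (the categories and their ordering are invented here, loosely echoing the ‘aviate, navigate, communicate’ convention): only the single most urgent, plainly worded instruction is surfaced, and lower-priority detail is held back until the aircraft is under control.

```python
# Hypothetical prioritised advisory queue: one clear instruction at a time,
# with troubleshooting detail deferred until the immediate threat is handled.
PRIORITY = {"control_aircraft": 0, "navigate": 1, "troubleshoot": 2}

def present(advisories: list[tuple[str, str]]) -> str:
    """advisories: (category, plain-language instruction) pairs."""
    most_urgent = min(advisories, key=lambda item: PRIORITY[item[0]])
    return most_urgent[1]

queue = [
    ("troubleshoot", "Pitot heat may have failed, check the circuit breaker."),
    ("control_aircraft", "Indicated airspeed unreliable, establish level flight."),
]
print(present(queue))  # prints the control instruction first
```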

Testing in Real-World Conditions

Dr Robertson’s team conducted extensive testing using instrumented flight simulators to evaluate LISA’s effectiveness. They recruited 23 pilots spanning a range of experience levels, from private pilots to airline captains. The experiments recreated conditions similar to two real-world accident scenarios that had resulted in fatalities.

For comparison, some pilots used LISA while others worked with an AI assistant based on a large language model (similar to ChatGPT but specialised for aviation). This baseline system had access to extensive aviation knowledge and could engage in natural conversations with pilots about any aspect of flight operations.


Surprising Results Challenge Common Assumptions

The testing revealed several unexpected findings. In the first scenario, which involved an engine malfunction during take-off, LISA successfully aided 80% of the pilots in recognising the problem and avoiding an accident. In contrast, none of the pilots using the baseline AI assistant recognised the developing danger in time to prevent a crash.

The results from the second scenario, which involved challenging terrain navigation in high-altitude conditions, were even more striking. All pilots using LISA completed the flight safely, while 64% of those using the baseline assistant experienced fatal accidents. The research team found that pilot experience level had surprisingly little impact on these outcomes – both novice and veteran pilots showed similar patterns of success or failure depending on which system they used.

The Dangers of AI Hallucinations in Aviation

Perhaps the most concerning finding involved the baseline AI assistant’s tendency to sometimes provide dangerously incorrect information. In at least two cases, the system’s ‘hallucinations’ – plausible but incorrect responses – directly contributed to accidents. This occurred because the AI would occasionally generate confident but wrong answers about aircraft operations, which some pilots trusted due to the system’s otherwise knowledgeable responses.

One specific example involved advice about flap settings during take-off. The AI incorrectly suggested using partial flaps for take-off in high-altitude conditions based on its knowledge of general aviation practices. However, this advice was wrong for the specific aircraft being flown and contributed to several accidents during testing.

Critical Lessons for AI Implementation

Dr Robertson’s research revealed several crucial insights about implementing AI in aviation safety systems. First, AI that can invent plausible but wrong advice has no place in a cockpit. The focused, specialised LISA system consistently outperformed the LLM-based assistant by providing advice rooted in the aircraft’s documentation and established aviation procedures, suggesting that traceable knowledge and reliability matter more than conversational ability.

Second, the way information is presented proves crucial. LISA’s success stemmed partly from its ability to provide clear, actionable warnings without overwhelming pilots with technical details. This matches well with how pilots work in high-stress situations, where clear, simple information often proves more valuable than comprehensive but complex data. The pilot does not have time to engage an AI assistant in conversation when the situation is critical. The AI assistant must speak up when needed, not when asked.

Looking Beyond Technical Solutions

The human element remains crucial in aviation safety. Dr Robertson emphasises that the goal isn’t to replace pilot judgment but to provide better information for decision-making. This philosophy guided LISA’s development, ensuring the system serves as an aid to pilot judgment by telling the big-picture story of what is happening and, when necessary, things that must be done immediately to stabilise the situation.

The research also highlighted the importance of pilot trust in automated systems. Some pilots initially distrusted or ignored LISA’s warnings, while others placed too much faith in the baseline AI assistant’s capabilities. Finding the right balance between trust and healthy scepticism remains an ongoing challenge.

Dr Robertson and his team are exploring how the ‘First Officer’ AI assistant products can reduce pilot stress and bring greater safety to all aircraft, from single-engine general aviation aeroplanes to the most complex airliners. When we know how to prevent these deaths, it would be a crime not to pursue that knowledge.

The research also has implications beyond aviation. The principles developed for LISA – focusing on pattern recognition, clear communication, and support for human decision-making – could apply to any complex system where operators must make quick decisions based on multiple inputs.

Broader Implications for AI Safety

This work offers valuable lessons about implementing AI in safety-critical environments. The contrast between LISA’s success and the dangers posed by statistical generative AI suggests that careful, focused, knowledge-based implementations may prove more valuable than attempting to deploy unreliable generative systems in safety-critical applications.

As aviation continues to evolve, Dr Robertson’s research provides essential guidance for integrating AI into safety systems. His team’s work suggests that success lies in developing focused systems that provide reliable, actionable information when it matters most.

The research team is transitioning LISA into a future line of ‘First Officer’ products, while the core research team continues to refine its approach and to identify other areas where targeted AI assistance could improve safety in complex systems such as power distribution networks, nuclear reactors, and large oil refineries. The ‘First Officer’ products may help shape the future of aviation safety, potentially saving lives by helping pilots avoid dangerous situations before they become critical.


REFERENCE

https://doi.org/10.33548/SCIENTIA1226

MEET THE RESEARCHER


Dr Paul Robertson
Dynamic Object Language Labs Inc., Haverhill, Massachusetts, USA

Dr Paul Robertson obtained his BA in Computer Science from the University of Essex in 1977 and his DPhil in Engineering Science from the University of Oxford in 2001. He is currently the Chief Scientist and President of Dynamic Object Language Labs, Inc. (DOLL Inc.). Dr Robertson has over 30 years of experience leading research in areas including self-adaptive software architectures, symbolic learning systems, computer vision, robotics, planning, artificial intelligence, and artificial social intelligence (ASI). Some of his key accomplishments include developing DOLL’s Context-driven Active-sensing for Repair Learning (CARL) system for robotics, Pamela (a probabilistic modelling language for autonomous systems), DMCP (a Monte-Carlo generative planner), and serving as Principal Investigator on multiple DARPA programs. Previously, Dr Robertson held roles as a Senior Scientist at BBN Technologies, Research Scientist at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), and Assistant Professor at the University of Texas at Dallas. He has authored numerous publications.

CONTACT

E: paulr@dollabs.com

W: https://www.dollabs.com/

LinkedIn: https://www.linkedin.com/in/probertson/

Instagram: https://www.instagram.com/dr_paul_robertson/

X: https://x.com/paulrdollabs327

KEY COLLABORATORS

This work was developed in collaboration with the Massachusetts Institute of Technology (MIT) CSAIL.

FUNDING

This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. 140D0423C0108.

