Flying is safe thanks to data and cooperation – here’s what the AI industry could learn from airlines
Data analytics, putting safety out of bounds for competition, and collaboration among industry, labor and government are key to reducing a technology’s risks.

Approximately 185,000 people have died in civilian aviation accidents since the advent of powered flight over a century ago. Over the past five years, however, the risk of dying as a passenger on a U.S. airline has been almost zero. In fact, you have a much better chance of winning most lotteries than of dying as a passenger on a U.S. air carrier.
How did flying get so safe? And can we apply the hard-earned safety lessons from aviation to artificial intelligence?
When humanity introduces a new paradigm-shifting technology and that technology is rapidly adopted globally, the future consequences are unknown and often collectively feared. The introduction of powered flight in 1903 by the Wright brothers was no exception. There were many objections to this new technology, including religious, political and technical concerns.
It wasn’t long after powered flight was introduced that the first airplane accident occurred – and by not long I mean the same day. It happened on the Wright brothers’ fourth flight. The first person to die in an aircraft accident was killed five years later in 1908. Since then, there have been over 89,000 airplane accidents globally.
I’m a researcher who studies air travel safety, and I see how today’s AI industry resembles the early – and decidedly less safe – years of the aviation industry.
From studying accidents to predicting them
Although tragic, each accident and each fatality represented a moment for reflection and learning. Accident investigators attempted to recreate every accident and identify accident precursors and root causes. Once investigators identified what led up to each crash, aircraft makers and operators put safety measures into effect in hopes of preventing additional accidents.
For example, if a pilot in the early era of flight forgot to lower the landing gear prior to landing, a landing accident was the likely result. So the industry installed warning systems that would alert pilots to an unsafe landing gear state – a lesson learned only after accidents. This reactive process, while necessary, is a heavy price to pay to learn how to improve safety.
Over the course of the 20th century, the aviation world organized and standardized its operations, procedures and processes. In 1938, President Franklin Roosevelt signed the Civil Aeronautics Act, which established the Civil Aeronautics Authority. This precursor to the Federal Aviation Administration included an Air Safety Board.
The fully reactive safety paradigm shifted over time to proactive and eventually predictive. In 1997, a group of industry, labor and government aviation organizations formed a group called the Commercial Aviation Safety Team. They started to look at the data and attempted to find trends and analyze user reports to identify risks and hazards before they became full-blown accidents.
The group, which includes the FAA and NASA, decided early on that there would be no competition among airlines when it came to safety. The industry would openly share safety data. When was the last time you saw an airline advertising campaign claiming “our airline is safer than theirs”?
It’s down to data
The Commercial Aviation Safety Team helped the industry transition from reactive to predictive by adopting a data-driven, systemic approach to tackling safety issues. It generated this data using reports from people and data from aircraft.
Every day, millions of flights occur worldwide, and on every single one of those flights, thousands of data points are recorded. Aviation safety professionals now use Flight Data Recorders – long used to investigate accidents after the fact – to analyze data from every flight. By closely examining all this data, safety analysts can spot emerging and troublesome events and trends. For example, by analyzing the data, a trained safety scientist can spot if certain aircraft approaches to runways are becoming riskier due to factors like excessive airspeed and poor alignment – before a landing accident occurs.

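The kind of flight-data screening described above can be sketched as a simple threshold check. This is a minimal illustration only: the field names and the "stabilized approach" limits below are invented for the example, not actual airline or FAA criteria.

```python
# Illustrative sketch: flag risky approaches in recorded flight data.
# Field names and limits are assumptions for illustration, not real criteria.

def is_unstable_approach(approach: dict,
                         max_airspeed_kt: float = 145.0,
                         max_alignment_dev: float = 1.0) -> bool:
    """Return True if an approach exceeds the illustrative stability limits."""
    too_fast = approach["airspeed_kt"] > max_airspeed_kt
    misaligned = abs(approach["alignment_dev"]) > max_alignment_dev
    return too_fast or misaligned

def unstable_rate(approaches: list) -> float:
    """Fraction of approaches flagged as unstable; a rising rate is a warning sign."""
    if not approaches:
        return 0.0
    flagged = sum(is_unstable_approach(a) for a in approaches)
    return flagged / len(approaches)

flights = [
    {"airspeed_kt": 138, "alignment_dev": 0.2},   # stable
    {"airspeed_kt": 152, "alignment_dev": 0.1},   # too fast
    {"airspeed_kt": 140, "alignment_dev": 1.4},   # poorly aligned
]
print(unstable_rate(flights))  # 2 of 3 approaches flagged
```

A safety analyst tracking this rate per runway over time could spot a worsening trend before any landing accident occurs, which is the proactive shift the article describes.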
To further increase proactive and predictive capabilities, anyone who operates within the aviation system can submit anonymous and nonpunitive safety reports. Without guarantees of anonymity, people might hesitate to report issues, and the aviation industry would miss crucial safety-related information.
All of this data is stored, aggregated and analyzed by safety scientists, who look at the overall system and try to find accident precursors before they lead to accidents. The risk of dying as a passenger onboard a U.S. airline is now less than 1 in 98 million. You are more likely to die on your drive to the airport than in an aircraft accident. Now, more than 100 years since the advent of powered flight, the aviation industry – after learning hard lessons – has become extremely safe.
A model for AI
AI is rapidly permeating many facets of life, from self-driving cars to criminal justice actions and hiring and loan decisions. The technology is far from foolproof, however, and errors attributable to AI have had life-altering – and in some cases even life-and-death – consequences.
Nearly all AI companies are trying to implement some safety measures. But they appear to be making these efforts individually, just like the early players in the aviation field did. And these efforts are largely reactive, waiting for AI to make a mistake and then acting.
What if there were a group like the Commercial Aviation Safety Team where all AI companies, regulators, academia and other interested parties convened to start the proactive and predictive processes of ensuring AI doesn’t lead to calamities?
From a reporting perspective, imagine if every AI interface had a report button that a user could click to not only report potentially hallucinated and unsafe results to each company, but also report the same to an AI organization modeled on the Commercial Aviation Safety Team. In addition, data generated by AI systems, much like we see in aviation, could also be collected, aggregated and analyzed for safety threats.
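Such a shared report pool could be aggregated very simply. The sketch below is hypothetical – the report schema, system names and issue categories are invented to illustrate the idea of surfacing recurring problems across companies, loosely modeled on aviation’s nonpunitive reporting.

```python
from collections import Counter

# Hypothetical anonymized reports pooled across AI providers.
# All field names and categories are invented for illustration.
reports = [
    {"system": "chatbot-A", "category": "hallucinated_citation"},
    {"system": "chatbot-B", "category": "unsafe_advice"},
    {"system": "chatbot-A", "category": "hallucinated_citation"},
]

def emerging_issues(reports: list, threshold: int = 2) -> dict:
    """Count (system, category) pairs and surface those at or above threshold."""
    counts = Counter((r["system"], r["category"]) for r in reports)
    return {pair: n for pair, n in counts.items() if n >= threshold}

print(emerging_issues(reports))
# {('chatbot-A', 'hallucinated_citation'): 2}
```

Even this trivial aggregation shows the point of pooling: a pattern invisible to any single user becomes an actionable trend once reports are collected in one place.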
Although this approach may not be the ultimate solution to preventing harm from AI, if Big Tech adopts lessons learned from other high-consequence industries like aviation, it just might learn to regulate, control and, yes, make AI safer for all to use.
James Higgins receives funding from the FAA to conduct research on flight safety topics. He is also the co-founder of two companies: HubEdge, which helps airlines optimize their ground operations, and Thread, which helps utilities operate drones to collect information about their assets.