Federal regulators are once again turning their attention toward autonomous vehicle safety, this time focusing on Waymo after one of its driverless taxis allegedly committed a serious traffic violation involving a stopped school bus. The National Highway Traffic Safety Administration (NHTSA) has opened a formal investigation into the incident, which occurred in Atlanta, Georgia, on September 22, 2025.

According to the agency, the self-driving car, operating without a human safety driver, reportedly maneuvered around a school bus that had stopped to let children disembark — an act that is illegal in every U.S. state. The vehicle, controlled entirely by Waymo’s fifth-generation automated driving system (ADS), initially came to a halt but then steered around the bus, passing both its extended stop arm and crossing control arm while students were nearby. The event has reignited debate about whether autonomous vehicles are ready to safely navigate complex real-world traffic scenarios that involve unpredictable human behavior, such as children crossing the street.
The NHTSA’s Office of Defects Investigation (ODI) has classified the probe as a Preliminary Evaluation, covering approximately 2,000 Waymo vehicles that share the same software configuration. Investigators are seeking to determine whether the incident reflects an isolated system failure or a broader flaw in the logic guiding how Waymo’s cars respond to stopped school buses. Federal regulators have highlighted that encounters between AVs and school buses are particularly concerning because they demand rapid, reliable reading of visual cues — flashing red lights, extended arms, and the presence of pedestrians — which autonomous systems may still struggle to interpret consistently.
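Waymo's actual decision logic is proprietary and not described in the investigation notice, but the distinction regulators are drawing — an isolated perception miss versus a flawed rule for combining cues — can be illustrated with a purely hypothetical sketch. The `BusObservation` fields and both functions below are invented for illustration; the point is that a conservative system should treat any one legally significant cue as sufficient reason to stay stopped, whereas logic that requires multiple cues to agree can fail when a single sensor reading is missed.

```python
from dataclasses import dataclass

@dataclass
class BusObservation:
    """Toy perception output for a nearby school bus (hypothetical fields)."""
    flashing_red_lights: bool
    stop_arm_extended: bool
    pedestrians_nearby: bool

def should_remain_stopped(obs: BusObservation) -> bool:
    # Conservative rule: any single cue that signals an active school-bus
    # stop is enough to keep the vehicle halted.
    return (obs.flashing_red_lights
            or obs.stop_arm_extended
            or obs.pedestrians_nearby)

def flawed_should_remain_stopped(obs: BusObservation) -> bool:
    # Flawed rule: requires every cue to be detected simultaneously,
    # so one missed detection lets the vehicle proceed around the bus.
    return (obs.flashing_red_lights
            and obs.stop_arm_extended
            and obs.pedestrians_nearby)
```

Under the flawed rule, a bus with flashing lights and an extended stop arm but no detected pedestrians would be treated as safe to pass — exactly the kind of cue-fusion failure mode a Preliminary Evaluation across a shared software configuration is designed to surface.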
Waymo, which surpassed 100 million miles of autonomous driving earlier in 2025, currently logs about two million additional miles each week across its fleet. That mileage is an impressive milestone, but it also raises the statistical likelihood that more safety-critical events will emerge. The company has stated that it has already updated the software to prevent similar misinterpretations, but NHTSA officials remain cautious, emphasizing that updates alone do not absolve companies of accountability when public safety is at stake.
A Persistent Challenge for Autonomous Vehicles
The school bus incident underscores one of the most difficult challenges facing the autonomous driving industry: how to safely interact with vehicles and pedestrians in unpredictable, high-risk environments. These moments — when laws demand absolute compliance and awareness — expose the limits of even the most advanced algorithms.
Historically, AV developers have encountered recurring problems when dealing with emergency vehicles, construction zones, and school transportation. Even though machine learning systems can process vast datasets and simulate millions of potential driving situations, they still rely on sensors and object-recognition systems that can misread or overlook critical visual cues. Factors like lighting conditions, unusual vehicle positions, or obstructed signage can all contribute to misjudgments that a human driver might instinctively avoid.
Public Trust and Regulatory Oversight
This latest investigation comes amid growing skepticism about the readiness of self-driving technology. After years of hype surrounding “robotaxis,” many Americans remain wary, particularly following previous high-profile crashes involving autonomous vehicles. Although regulators have been slow to impose strict national standards, the increasing number of investigations signals a gradual shift toward more oversight.
At present, most of the safety testing for autonomous cars remains self-reported by the manufacturers. Companies must comply with state-level rules, but these often vary widely. Some states, such as California, require transparency reports on disengagements and collisions. Others impose minimal restrictions, allowing companies to test driverless cars with limited external auditing. This fragmented approach has led to growing calls for a unified federal safety framework that mandates consistent testing procedures, real-world validation, and public disclosure of incidents.
The Broader Decline of Full Autonomy Hype
The Waymo case also highlights a larger industry trend. Several major automakers have already retreated from pursuing fully autonomous systems. Ford and Volkswagen shuttered their joint venture, Argo AI, in 2022 after failing to achieve profitability. General Motors followed suit in 2024, winding down its Cruise robotaxi division after a series of safety controversies and intense local backlash. The reality is that scaling autonomous ride-hailing fleets has proven far more difficult — and far less popular — than expected.
In contrast, manufacturers are refocusing on advanced driver assistance systems (ADAS), such as adaptive cruise control, lane centering, and automatic emergency braking. These technologies, while useful, have also faced criticism for inconsistency and excessive repair costs. For example, the radar and camera arrays that enable these features can cost thousands to replace after even a minor collision. Furthermore, privacy advocates have raised alarms over how much data modern vehicles collect, including location tracking, driver habits, and even in-cabin footage.
Are We Ready for the Next Step?
Despite Waymo’s comparatively strong safety record among autonomous-vehicle operators, the current investigation may represent a turning point for how self-driving technology is perceived and regulated. The incident serves as a reminder that automation cannot yet replace human judgment in all driving conditions — especially those involving vulnerable road users like children.

Even as developers celebrate technical achievements, the central question persists: Is society gaining real safety benefits from this technology? The answer remains uncertain. While AVs have the potential to reduce human error, they also introduce new forms of risk, from software bugs to misinterpreted sensor data. Until these challenges are resolved, and until oversight catches up with innovation, the path to a fully driverless future will remain bumpy.
In the end, the Waymo investigation isn’t just about one car’s decision at a bus stop — it’s about whether the promise of autonomy can truly coexist with the complexity of human life on public roads. The outcome will not only determine Waymo’s reputation but could shape the entire trajectory of the self-driving vehicle industry.