
Tesla’s Project Rodeo Uses High-Risk Autonomous Vehicle Testing

By James Morales

Key Takeaways

  • Tesla’s “Project Rodeo” involves intentionally pushing the firm’s self-driving software to its limit.
  • Drivers involved in the program are encouraged not to intervene when the software makes a mistake.
  • Test drivers have reported feeling pressured to let vehicles veer into dangerous situations.

Tesla has a little-known autonomous vehicle safety testing program called Project Rodeo.

According to a Business Insider exposé, Project Rodeo encouraged test drivers to intentionally let Tesla’s self-driving software place cars in dangerous situations to see how it would respond.

Tesla’s Risky Safety Testing

The most controversial aspect of Project Rodeo concerns the work of “critical-intervention” test drivers, whose job is to let the software keep driving even after it makes a mistake, intervening only at the last possible moment.

According to the report, several critical-intervention drivers said they felt pressured to place themselves in dangerous situations and worried their jobs would be at risk if they intervened too soon.

Incidents that reportedly occurred during the testing program include cars running through red lights, swerving into other lanes, and failing to follow speed limits. 

Test Vehicles Commit Traffic Safety Violations

According to the report, one critical-intervention driver said they let their vehicle speed through yellow lights and drive 35 mph under the speed limit on an expressway without disengaging the autonomous driving software.

Another said: “It felt like the goal was almost to simulate a hit-or-miss accident and then prevent it at the last second.”

Five drivers who worked for Tesla in 2024 said they narrowly avoided collisions, including almost hitting a group of pedestrians.

Autonomous Driving Data Collection

Tesla’s safety program is designed to collect real-world data about high-risk situations to train the company’s self-driving AI models.

Other autonomous vehicle firms have taken similar approaches. Waymo, for example, analyzes counterfactual disengagement simulations, i.e., simulations of what might have happened had a driver not intervened.

“If the simulation outcome reveals an opportunity to improve the behavior of the [autonomous driving system], then the simulation is used to develop and test changes to software algorithms,” Waymo researchers explained in a 2020 paper.

However, there is no indication that Waymo deliberately encouraged drivers not to intervene in dangerous situations.


James Morales

Although his background is in crypto and FinTech news, these days James roams across CCN’s editorial breadth, focusing mostly on digital technology. Having always been fascinated by the latest innovations, he uses his platform as a journalist to explore how new technologies work, why they matter and how they might shape our future.