Years after Elon Musk promised to deliver self-driving cars, Tesla is expected to unveil a prototype called a “Cybercab” rather than a road-ready self-driving taxi.

Elon Musk and Cybertaxi of the future


Convincing regulators and passengers that the car is safe could prove far more difficult and take far longer. Meanwhile, its main competitors, like Waymo, are expanding the robo-taxis they already operate in some cities today.

Tesla has so far followed a different technological path than all of its major competitors in the self-driving space — one that is potentially more rewarding, but also riskier.

Tesla’s strategy relies solely on combining “computer vision,” which aims to use cameras the same way humans use their eyes, with an AI technology called end-to-end machine learning that instantly translates images into driving decisions.

In fact, the technology already powers a driver-assistance feature called “Full Self-Driving,” which, despite its name, cannot operate safely without a human driver. Musk has said Tesla is using the same approach to develop a fully autonomous robotaxi.

Tesla’s rivals, including Waymo, Amazon’s Zoox, General Motors’ Cruise, and a host of Chinese firms, use the same technology but typically layer it onto redundant systems and sensors such as radar, lidar, and sophisticated mapping to ensure safety and win regulatory approval for their self-driving cars.

Tesla’s strategy, it turns out, is simpler and much cheaper, but it has two critical flaws, industry executives, autonomous car experts, and a Tesla engineer told Reuters.

Without the layered technologies used by its peers, Tesla’s system struggles more with so-called “edge cases” — rare driving scenarios that self-driving systems and their human engineers have a hard time anticipating.

Another major problem: End-to-end AI technology is a “black box,” a Tesla engineer said, making it “virtually impossible” to “see what’s wrong when it misbehaves and causes a crash.”

The inability to pinpoint such failures, he said, makes it difficult to protect against them.

Tesla did not respond to a request for comment about its technology.

End-to-end AI involves training a computer to make decisions directly from raw data, without intermediate steps that require additional engineering or programming.
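To make the contrast concrete, here is a minimal, purely illustrative sketch in Python of the two design philosophies the article describes. All function names, numbers, and the toy "camera frame" are hypothetical stand-ins, not Tesla's or anyone else's actual system: the end-to-end policy maps raw pixels straight to a steering value through a single learned function, while the modular policy passes through explicit, engineered intermediate stages that an engineer can inspect when something goes wrong.

```python
import numpy as np

# Illustrative sketch only. A toy "end-to-end" controller maps raw camera
# pixels directly to a steering command with one learned function; a toy
# "modular" pipeline uses hand-engineered intermediate steps instead.
# All names and values are hypothetical stand-ins for real components.

rng = np.random.default_rng(0)

def end_to_end_policy(image, weights):
    """One learned mapping: raw pixels -> steering angle, no intermediate steps.

    Opaque by construction: the weights encode *why* it steered, so a failure
    is hard to trace back to any single cause (the "black box" problem).
    """
    return float(np.tanh(image.flatten() @ weights))

def modular_policy(image):
    """Traditional pipeline: explicit, engineered intermediate stages.

    Each stage can be logged and inspected separately after a failure.
    """
    lane_offset = float(image.mean() - 0.5)   # stand-in for a lane-detection module
    steering = -2.0 * lane_offset             # stand-in for a rule-based controller
    return steering

image = rng.random((8, 8))                    # stand-in for one camera frame
weights = rng.normal(0.0, 0.01, size=64)      # stand-in for trained parameters

print(end_to_end_policy(image, weights))      # a steering value in (-1, 1)
print(modular_policy(image))                  # inspectable at every stage
```

The sketch also shows why the "black box" complaint attaches to the end-to-end design: in the modular version, the intermediate `lane_offset` can be examined after a bad decision, whereas the end-to-end version offers only its final output.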

Nvidia, the world’s leading maker of AI chips, is also using end-to-end technology in the autonomous driving systems it is developing and plans to sell to automakers.

But Nvidia combines the approach with more traditional computing systems and additional sensors like radar and lidar, an executive told Reuters.

End-to-end technology usually — but not always — makes the best driving decisions, he explained, so Nvidia is taking a more conservative approach.

“We must build the future step by step. We cannot go into the future directly. It is too unsafe.”

Yuri Chekalin

Yuri Chekalin is a professor in the History Department at Tokyo University and a political analyst.

