A number of companies are on track to produce a fully autonomous car, and one of the major players in this race is Toyota Motor Corporation’s subsidiary Woven Planet Holdings Inc. The Tokyo-based technology company recently revealed that it will adopt a camera-only approach for its self-driving development, much like the electric and self-driving car giant Tesla Inc.
In an interview with a media outlet, Woven Planet stated that it will use cameras to collect data and efficiently train its self-driving system. The tech firm believes this vision-based technology is a “breakthrough” that will help drive down prices and scale out the technology.
The company added that using a large fleet of cars to collect diverse driving data is crucial to developing a reliable self-driving system, but that testing autonomous vehicles equipped only with pricey sensors is expensive and does not scale. It also stated that, despite adopting the new vision-based approach, Woven Planet will still use data collected from other sensors such as radar and lidar for training as well as for long-term deployment.
Michael Benisch, VP of Engineering at Woven Planet, said, “We need a lot of data. And it’s not sufficient to just have a small amount of data that can be collected from a small fleet of very expensive autonomous vehicles.” He added, “Rather, we’re trying to demonstrate that we can unlock the advantage that Toyota and a large automaker would have, which is access to a huge corpus of data, but with a much lower fidelity.”
Tesla Inc., currently one of the front runners in electric mobility and self-driving technology, has deployed camera systems in its vehicles for a long time. The American automaker has been relying on cameras to collect data from over 1 million vehicles on the road to build its automated driving technology.
During an earnings call last year, Tesla CEO Elon Musk stated that a camera-only approach to self-driving cars would benefit Tesla. He said, “When your vision works, it works better than the best human because it’s like having eight cameras, it’s like having eyes in the back of your head, beside your head, and has three eyes of different focal distances looking forward. This is — and processing it at a speed that is superhuman. There’s no question in my mind that with a pure vision solution, we can make a car that is dramatically safer than the average person.” He has also called LiDAR a “crutch”.
Woven Planet employs cameras that are 90% less expensive than the sensors it previously used and can easily be installed in fleets of passenger cars. The Tokyo-based company claimed that training the system on a majority of data from low-cost cameras raised its performance to the same level as when it was trained purely on high-cost sensor data.
However, Benisch added that for robotaxis and other autonomous vehicles deployed on the road, Toyota will continue to employ multiple sensors such as lidar and radar, since this currently appears to be the best and safest strategy.