Some time ago, Elon Musk expressed the opinion that since humans can drive a car using only their eyes, a self-driving car should need nothing more than a sufficiently powerful camera system.
In line with this view, today's Tesla models have had the forward RADAR sensor removed, while older models have received software updates so that the car's computer ignores it. Most experts in self-driving research and development oppose Elon Musk's position.
Recently, however, Woven Planet (Toyota's self-driving technology research subsidiary) has come to believe that Elon Musk's approach can lead to success.
Sensor locations on the Tesla Model 3. Photo: Michael Simari / Car and Driver. Translation: Minh Duc
One reason is that Toyota will still deploy its robotaxi system with a full set of sensors, alongside many other vehicles on public roads. Woven Planet believes that with such a large fleet, it can collect a great deal of data from these vehicles using camera systems alone. Compared with other sensors, cameras are much cheaper and can be easily retrofitted.
For companies developing self-driving systems, real-world data is as indispensable as drinking water is to humans. Indeed, Woven Planet's vice president of engineering, Michael Benisch, said: "We need a lot of data. If the data comes from only a small number of cars on the road, that's not enough."
A self-driving test vehicle from VinAI (a Vingroup subsidiary) equipped with a LiDAR sensor.
Michael Benisch added that the cameras Woven Planet uses cost 90 percent less than the RADAR and LiDAR sensors most other brands rely on. The idea is that these cameras can be fitted to the company's existing fleet of vehicles.
Using these cameras is expected to yield more data in less time. As a result, the artificial intelligence system behind the self-driving technology could rely primarily on camera data rather than on RADAR.
There have been many cases of Tesla owners activating self-driving mode and getting into accidents.
In fact, this approach, which Tesla pioneered, has drawn plenty of criticism, mainly on the grounds that cameras alone are not a good enough solution. Typically, when camera data is poor and difficult to analyze, engineers program the car's computer to fall back on data from other sensors in order to decide when to steer, when to brake, and by how much.
Recently, many Tesla users have complained about phantom braking when activating self-driving mode on vehicles that use only the company's cameras. Many commentators also suggest the incidents stem from Tesla's camera-only approach.
Phil Koopman, an expert at Carnegie Mellon University who studies the safety of autonomous vehicles, said: "Phantom braking occurs when the system's programmers have not properly defined the vehicle's behavior when it detects an obstacle, or when the input data is incorrect. Other manufacturers often use multiple types of sensors, and these sensors compensate for one another."
However, Michael Benisch of Woven Planet still believes that at some point in the future, cameras will be good enough to rely on without other sensors.
He added: "Over many, many years, self-driving technology using purely camera data could well grow to match and even surpass other sensors. The only question is when, and how long it will take, for cameras to reach that level of safety and reliability. I really don't know the answer yet."
Source: Soha.vn