Why do we think that there is no future in using lidars for self-driving?

This post was originally published by Artem Krivich at Medium [AI]

It’s expensive, needs constant maintenance, and increases vehicle cost. Are you ready to buy a Ford Focus for the price of a Porsche Panamera? I think not.

There are technologies and products that were ahead of their time; using them was not justified because the market was not ready. Lidars in self-driving are not that case. It’s more like how electricity was first distributed using direct current. After rethinking and recalculation, alternating current took over, which radically changed the possibilities of energy transfer and completely overturned the previously accepted model. AC has its drawbacks relative to DC, but they are not so important, because AC is cheap to scale.

The story of Ralient self-driving is similar. It’s about scalability. Driving using lidars with HD maps is working, safe, and relatively easy, but it is expensive, and in some countries, and in some areas of those countries, it is simply impossible. We are not talking just about Russia; we are talking about some regions of the USA, Canada, Japan, China, and Germany.

How do we drive? The general idea behind the Sol SDS system is almost the same as everyone else’s:

The fusion of a Computer Vision Module and a Radar Data Module, plus Navigation, Localization, Prediction, and Control.

The difference is in the details. Our localization module relies on GNSS + IMU + odometry and computer vision, and we can plan a route using ordinary maps with only the necessary signs and traffic lights. Our computer vision module is drastically different from other companies’. It is responsible for detecting road markings, other objects, traffic lights, and signs, as well as the distances to all objects and their dimensions. We are replacing very precise lidars with a less precise computer vision module, but we don’t need to know that an object is 34,338 millimeters away; it is enough to know that it is 34 meters away. We have many neural networks responsible for solving different problems, working in parallel and coherently, which allows us to get the results we want. Our self-driving vehicle will drive like a professional driver who never gets tired, never drinks, and doesn’t need to eat.
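To make the GNSS + IMU + odometry idea concrete, here is a minimal sketch of one common way to fuse such sources: dead-reckon position from odometry deltas between GNSS fixes, then nudge the estimate toward each fix with a complementary-filter blend. The function name, 2D state, and blend gain are all illustrative assumptions, not Ralient’s actual code.

```python
def fuse_step(est_x, est_y, dx, dy, gnss=None, gain=0.2):
    """One fusion update: apply an odometry delta (dx, dy) in meters,
    then blend in a GNSS fix (x, y) if one arrived this step."""
    est_x, est_y = est_x + dx, est_y + dy
    if gnss is not None:
        gx, gy = gnss
        # Pull the dead-reckoned estimate part-way toward the GNSS fix.
        est_x += gain * (gx - est_x)
        est_y += gain * (gy - est_y)
    return est_x, est_y

x, y = 0.0, 0.0
x, y = fuse_step(x, y, 1.0, 0.0)                  # odometry-only step
x, y = fuse_step(x, y, 1.0, 0.0, gnss=(2.5, 0.1))  # step with a GNSS fix
print(round(x, 2), round(y, 2))  # 2.1 0.02
```

In practice a production stack would use a Kalman-style filter with the IMU providing heading and short-term motion, but the same principle applies: cheap relative sensors carry the estimate between absolute, noisier GNSS fixes.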

Now we are testing different modules; one of the tests we are conducting together with partners from StarLine and Taxovichkof. The objectives of the test were to demonstrate precise movement along a trajectory without using ultra-precise GNSS systems or data from signal-correction stations, and to demonstrate the detection and recognition of an object, determining the distance to it and its dimensions, using a single camera.
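A single camera can yield distance estimates when the real-world size of a detected object class is roughly known, via the pinhole camera model: distance = focal length (in pixels) × real height / height in the image. The sketch below illustrates that geometry only; the function name and numbers are hypothetical, not the actual Sol SDS module.

```python
def monocular_distance(focal_length_px: float,
                       real_height_m: float,
                       bbox_height_px: float) -> float:
    """Pinhole-model distance (m) to an object of known real height,
    given its bounding-box height in pixels."""
    return focal_length_px * real_height_m / bbox_height_px

# Example: a 1.5 m tall car that appears 60 px tall in an image from
# a camera with a 1200 px focal length is about 30 m away.
print(monocular_distance(1200.0, 1.5, 60.0))  # 30.0
```

This is exactly the regime where meter-level accuracy suffices: a few percent of error in the assumed object height translates into a few percent of error in range, which is acceptable when "about 34 meters" is all the planner needs.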


Maybe you have noticed the big boxes on top of the vehicles of the famous self-driving companies. If an automaker wants to integrate such a system into its cars, it has to agree to restrict usage to certain areas and integrate all the sensors into its cars.

What about us? For Ralient, it’s only necessary to provide a camera with a 360-degree view and one radar, just in case. That’s all.

Lastly, to those who ask, “Google uses lidars, and you can’t be smarter”: your comments are also awaited at Tesla and Mobileye, which use a similar approach to ours. Google simply believes in and uses a different approach.


