New Event-based Tech Brings Advantages to Autonomous Driving Market

[Image: current autonomous driving systems with LiDAR and camera-based sensors]

Lidar vs. VoxelFlow: What’s the Difference?

Over the past decade, Lidar and camera-based technologies have been at the helm of autonomous vehicle (AV) systems, paving the way for safe driverless cars. But as smart cities and the autonomous driving market become a fast-approaching reality, new technologies are finding ways to improve industry standards, ensuring we have a realistic and, most importantly, safe path to the future of AV.

Though Lidar has become the standard for AV companies like Alphabet’s Waymo, its latency still has a long way to go. If the AV industry wants to make true autonomous driving a reality, updates to safety standards will need to be prioritized. That’s where VoxelFlow comes into play: an event-based 3D sensor system designed specifically with autonomous driving in mind.

The Old Industry Standard: Too Little Too Late

Lidar sensors have been popping up across a multitude of industries, using Light Detection and Ranging to make their mark on agriculture, weather monitoring and even iPhones. These sensors use invisible laser beams to detect objects within range; in the case of AVs, Lidar is currently used to detect obstacles in front of moving vehicles. Originally designed for satellite tracking, Lidar excels at distances beyond 30 to 40 meters and is extremely fast when compared with the human eye. But when operating within range, or crashing distance, of an approaching object, Lidar faces limitations in speed and distance. Similarly, automotive camera systems (often used in conjunction with Lidar) operate at 30 frames per second (fps), which amounts to a 33-millisecond processing delay per frame. This series of delays can add hundreds of milliseconds to a vehicle’s detection and reaction times, with potentially disastrous effects in dense urban environments.
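To make the frame-rate arithmetic concrete, the short Python sketch below converts 30 fps into a per-frame delay and shows how far a vehicle travels while a frame-based pipeline is still catching up. The 50 km/h speed and the 300 ms cumulative delay are illustrative assumptions; only the 30 fps / ~33 ms relationship comes from the figures above.

```python
# Illustrative only: converts frame rate into per-frame latency and shows the
# distance a vehicle covers during that delay. The speed and cumulative-delay
# values are assumptions for the example, not measured figures.

FPS = 30
frame_delay_s = 1 / FPS              # ~0.033 s (33 ms) per frame, as cited above
cumulative_delay_s = 0.300           # assumed total detection/reaction delay

speed_kmh = 50                       # assumed urban driving speed
speed_ms = speed_kmh / 3.6           # ~13.9 m/s

print(f"Per-frame delay: {frame_delay_s * 1000:.1f} ms")
print(f"Distance covered in one frame: {speed_ms * frame_delay_s:.2f} m")
print(f"Distance covered in {cumulative_delay_s * 1000:.0f} ms of delay: "
      f"{speed_ms * cumulative_delay_s:.2f} m")
```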

The parallel, frame-based approach to detection taken by Lidar and camera-based technologies leaves both systems subject to the speed limits of perception and reaction time, with the ability to recognize only about 33,000 light points per frame, a mere fraction of what’s possible with new technology. In short, this approach is nowhere near fast enough, and we need faster sensors if we want to stand a realistic chance of tackling the constant variables drivers contend with on a day-to-day basis.

Where VoxelFlow’s Motion Perception Tech Excels

Though Lidar shines at greater distances, VoxelFlow’s low latency is designed for object detection within crash range. Where a human driver sees an oncoming object and gradually slows down, VoxelFlow will be able to react to sudden, abrupt obstacles with a speed that humans and Lidar simply cannot match. Meanwhile, if weather conditions aren’t optimal or the object’s color doesn’t contrast enough with its background, a camera system may not detect the obstacle at all.
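As a rough sense of scale, the sketch below compares how far a vehicle travels during different reaction latencies. The specific latency values are assumptions chosen for illustration, not published figures for human drivers, Lidar pipelines or VoxelFlow.

```python
# Illustrative only: distance traveled while the driver or sensor pipeline is
# still reacting. All latency values below are assumptions for the example.

def distance_during_latency(speed_kmh: float, latency_s: float) -> float:
    """Meters covered before any braking or steering begins."""
    return (speed_kmh / 3.6) * latency_s

SPEED_KMH = 50  # assumed urban speed

latencies = [
    ("Human driver (~1.2 s, assumed)", 1.2),
    ("Frame-based sensor pipeline (~0.3 s, assumed)", 0.3),
    ("Low-latency event-based sensor (~0.005 s, assumed)", 0.005),
]

for label, latency_s in latencies:
    meters = distance_during_latency(SPEED_KMH, latency_s)
    print(f"{label}: {meters:.2f} m before reacting")
```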

When you consider Lidar’s origins in government and enterprise spaces, it’s not surprising that its cost is higher as well. Systems can run anywhere from a couple of thousand to hundreds of thousands of dollars, most of which is passed on to the end user. And with new premium models constantly hitting the market, it can be hard for users to keep up. Simply put, Lidar was not designed with the average consumer in mind, and keeping safety first must remain the priority in the autonomous driving market.

Lidar can be bulky. There’s no way around it. The system needs to be placed somewhere on the vehicle where it can have a clear sight line. And though some luxury vehicle owners might insist they don’t mind a rather large box being placed on the roof of their BMW, it definitely isn’t going to be the best look in the long term.

VoxelFlow to Drive the Future of Autonomous Vehicles with Precision & Speed

VoxelFlow’s revolutionary technology will enable vehicles to classify moving objects at extremely low latency using very little computational power. The tech can produce 10 million 3D points per second, as opposed to roughly 33,000 points per frame for frame-based systems, and the result is rapid edge detection without motion blur. With VoxelFlow, AVs will be able to navigate the road with more precision than a human, as the system’s ultra-low latency allows it to accelerate, brake or steer around objects that appear abruptly in a fraction of the time. Safety will remain paramount in the autonomous driving market.
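Putting the two throughput figures quoted above side by side gives a rough sense of the gap. Treating 33,000 as a per-frame figure captured at 30 fps is an interpretation of the numbers in this article, not a vendor specification.

```python
# Rough comparison of point throughput based on the figures quoted above.
# Interpreting 33,000 as points per frame at 30 fps is an assumption.

points_per_frame = 33_000
fps = 30
frame_based_pps = points_per_frame * fps        # ~0.99 million points per second

voxelflow_pps = 10_000_000                      # 10 million 3D points per second (quoted)

print(f"Frame-based: ~{frame_based_pps / 1e6:.2f} M points/s")
print(f"VoxelFlow (quoted): {voxelflow_pps / 1e6:.0f} M points/s")
print(f"Throughput ratio: ~{voxelflow_pps / frame_based_pps:.0f}x")
```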

VoxelFlow will empower the next generation of Level 1-3 advanced driver-assistance system (ADAS) safety performance while also realizing the promise of truly autonomous L4-L5 AV systems. This critical tech will consist of three event image sensors distributed throughout the vehicle and a centrally located continuous laser scanner, producing a dense 3D map that can tell the difference between a stationary mailbox and a puppy running across the road. And with its significantly reduced processing delay, it is geared to meet the actual real-time constraints of autonomous systems.
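The example below is a purely illustrative sketch of what an asynchronous, time-stamped 3D point stream might look like and how per-point timing makes it easy to separate a stationary object from a moving one. It is not VoxelFlow’s actual pipeline; the data layout, threshold and classification rule are assumptions made for the illustration.

```python
# Illustrative only: an asynchronous, time-stamped 3D point ("event") stream
# and a naive moving-vs-stationary test. This is not VoxelFlow's pipeline;
# it only shows why per-point timestamps allow reacting without waiting for
# a full frame to be captured and processed.

from dataclasses import dataclass

@dataclass
class PointEvent:
    t: float  # timestamp in seconds
    x: float  # 3D coordinates in meters, vehicle frame
    y: float
    z: float

def is_moving(prev: PointEvent, curr: PointEvent, speed_threshold: float = 0.5) -> bool:
    """Flag a tracked point as moving if its implied speed exceeds the threshold (m/s)."""
    dt = curr.t - prev.t
    if dt <= 0:
        return False
    dist = ((curr.x - prev.x) ** 2 + (curr.y - prev.y) ** 2 + (curr.z - prev.z) ** 2) ** 0.5
    return dist / dt > speed_threshold

# Example: a stationary mailbox vs. a puppy running across the road.
mailbox_prev = PointEvent(0.000, 5.0, 1.0, 0.5)
mailbox_curr = PointEvent(0.010, 5.0, 1.0, 0.5)     # no displacement in 10 ms
puppy_prev = PointEvent(0.000, 8.0, -0.50, 0.2)
puppy_curr = PointEvent(0.010, 8.0, -0.45, 0.2)     # 5 cm in 10 ms = 5 m/s

print("Mailbox moving?", is_moving(mailbox_prev, mailbox_curr))  # False
print("Puppy moving?  ", is_moving(puppy_prev, puppy_curr))      # True
```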

VoxelFlow’s sensor system will automatically and continuously calibrate itself to resist shock, vibration and blur while also providing the angular and range resolution required for ADAS and AV systems. It will also perform well in adverse weather conditions compared with Lidar systems, which degrade in those conditions due to excessive backscatter.

Autonomous Driving Market

Where Lidar works well at ranges beyond 40 meters (even if it costs a fortune), VoxelFlow is the complementary technology Lidar systems need. Used together, the two can significantly improve detection and, most importantly, enhance safety and reduce collisions, paving the way for a future where drivers can have confidence in the autonomous vehicles on the road. New technology will bring a safer autonomous driving market.
