Nvidia’s Self-Driving Tech Powers LA Test Drive

Nvidia's L2++ autonomous driving platform was tested in a real-world LA drive, showcasing its sensor fusion and decision-making capabilities. The company plans a nationwide rollout and Level 4 robo-taxi services, signaling strong progress in the autonomous vehicle market.



Nvidia is pushing the boundaries of autonomous driving with its L2++ platform, recently showcased in a real-world, unedited one-hour drive through downtown Los Angeles. The test, conducted in a Mercedes equipped with Nvidia’s technology, navigated the everyday chaos of LA traffic, including lane merges, sudden cut-ins, construction zones, and unpredictable pedestrians. This demonstration offers a glimpse into the progress of Nvidia’s automotive solutions and its standing in the competitive autonomous mobility market.

Inside the L2++ System

The system operates on Nvidia’s Hyperion architecture, utilizing 10 cameras, 5 radar units, and 12 ultrasonic sensors for parking. Notably, this L2++ configuration does not include lidar, as explained by Armen Connie, senior product manager for autonomous vehicle user experience at Nvidia. “For a level two plus product, we felt that we can achieve that with just the 10 cameras and the five radar with ultrasonics as well,” Connie stated. “But for our level three and level four initiatives, that’s when we’ll add the additional lidar to it.”
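As a rough illustration of that setup, the sensor suite could be expressed as a small configuration object. This is a sketch under stated assumptions: the `SensorSuite` class and its field names are invented for illustration and are not Nvidia’s Hyperion API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SensorSuite:
    """Hypothetical sensor-suite declaration; not Nvidia's actual config schema."""
    cameras: int
    radars: int
    ultrasonics: int   # short-range sensors used for parking maneuvers
    lidars: int = 0    # omitted at L2++; Nvidia says lidar arrives with L3/L4

# The L2++ configuration described in the demo: cameras and radar only.
L2PP_SUITE = SensorSuite(cameras=10, radars=5, ultrasonics=12)
print(L2PP_SUITE.lidars == 0)  # True: no lidar at this level
```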

The L2++ system is designed for driver collaboration. The car handles tasks like observing speed limits and executing lane changes when the driver taps the turn signal, while the driver can intervene at any time by steering or braking. “So, the car will follow all the speed limits. If he wanted to increase the speed, he can do that from pressing the steering wheel button,” Connie explained. The system also demonstrated its ability to recognize stop signs and follow right-of-way rules.
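A minimal sketch of that collaboration logic, assuming an invented arbitration function and a made-up steering-torque threshold; nothing here reflects Nvidia’s internal design:

```python
def arbitrate(system_speed_mps: float,
              speed_limit_mps: float,
              driver_speed_request_mps: float | None,
              driver_steering_torque_nm: float) -> dict:
    """Blend system behavior with driver input; the driver always wins."""
    # By default the car observes the posted speed limit.
    target_speed = min(system_speed_mps, speed_limit_mps)
    # A steering-wheel button press raises or lowers the set speed.
    if driver_speed_request_mps is not None:
        target_speed = driver_speed_request_mps
    # Measurable torque on the wheel means the driver is steering,
    # so the system yields rather than fighting the input.
    driver_steering = abs(driver_steering_torque_nm) > 0.5  # assumed threshold
    return {"target_speed_mps": target_speed, "driver_in_control": driver_steering}

print(arbitrate(30.0, 25.0, None, 0.0))   # car caps itself at the limit
print(arbitrate(30.0, 25.0, 28.0, 0.0))   # driver bumps the set speed
```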

Sensor Fusion: Building a World Model

Nvidia’s approach integrates data from multiple sensors to create a comprehensive “world model.” Cameras identify lane markings and detect objects, while radar provides crucial range and speed information. Ultrasonic sensors handle close-range detection, such as parking near curbs. “It can take input from all those sensors and it creates what we call the world model,” said Connie. This model allows the car to understand object velocities, identify drivable lanes, and even interpret right-of-way at intersections by observing when other vehicles arrive.
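A toy version of that fusion step might look like the following: camera detections contribute object class and bearing, radar contributes range and speed, and a nearest-bearing match stands in for real multi-sensor tracking. The data structures and the association rule are simplifying assumptions.

```python
from dataclasses import dataclass

@dataclass
class CameraDet:
    bearing_deg: float
    label: str           # e.g. "car", "pedestrian", "scooter"

@dataclass
class RadarDet:
    bearing_deg: float
    range_m: float
    speed_mps: float     # radial speed relative to the ego vehicle

@dataclass
class Track:
    label: str
    range_m: float
    speed_mps: float

def fuse(cams: list[CameraDet], radars: list[RadarDet],
         max_gap_deg: float = 3.0) -> list[Track]:
    """Pair each camera detection with the nearest radar return in bearing."""
    tracks = []
    for cam in cams:
        match = min(radars, default=None,
                    key=lambda r: abs(r.bearing_deg - cam.bearing_deg))
        if match and abs(match.bearing_deg - cam.bearing_deg) <= max_gap_deg:
            # The fused track knows both what the object is and how it moves.
            tracks.append(Track(cam.label, match.range_m, match.speed_mps))
    return tracks

print(fuse([CameraDet(10.0, "car")], [RadarDet(11.2, 42.0, -3.5)]))
```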

The system also showed its capability in complex urban scenarios. It successfully navigated around a scooter in a bike lane, maintained its lane, and reacted to a pedestrian in a crosswalk. Connie highlighted the synergy between the end-to-end model, which runs on front-camera input, and the world model, which supplies a 360-degree view. “So the end to end model is using kind of the front camera where it can see, and then it’s receiving inputs from that world model to see what’s also going on behind it as well,” he added.
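To make that division of labor concrete, here is a hypothetical planning step in which the world model can veto a lane change proposed by the front-camera model. The function names, zone labels, and the 2 m/s closing-speed threshold are all assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class WMTrack:
    zone: str                 # e.g. "rear_left", "front", "rear_right"
    closing_speed_mps: float  # positive means the object is gaining on us

def plan_step(front_frame, tracks: list[WMTrack], end_to_end_model) -> str:
    # The end-to-end model sees only what the front camera sees.
    maneuver = end_to_end_model(front_frame)  # e.g. "lane_change_left"
    if maneuver.startswith("lane_change"):
        side = maneuver.rsplit("_", 1)[-1]    # "left" or "right"
        # The world model supplies the missing 360-degree context:
        # fast-approaching traffic behind us in the target lane.
        if any(t.zone == f"rear_{side}" and t.closing_speed_mps > 2.0
               for t in tracks):
            return "keep_lane"
    return maneuver

# With a car closing fast from the rear left, the lane change is vetoed.
print(plan_step(None, [WMTrack("rear_left", 5.0)], lambda _: "lane_change_left"))
```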

Computing Power and Decision Making

The L2++ system runs on Nvidia’s Orin computing chip. For higher levels of autonomy, like L3 and L4, Nvidia plans to use its more powerful Thor chip. “The Thor has more computing power than the Orin. So with additional computing power, you can use bigger end-to-end models. We can take inputs from more signals,” Connie noted.
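A rough illustration of how compute budget might gate model choice: the TOPS figures below are widely cited headline numbers for the two chips, while the model tiers and sensor-stream counts are placeholders, not Nvidia specifications.

```python
# Approximate published peak-throughput figures; treat as ballpark only.
PLATFORM_TOPS = {"Orin": 254, "Thor": 2000}

# (minimum TOPS, model tier, max fused sensor streams): assumed values.
MODEL_TIERS = [
    (1000, "e2e-large", 32),
    (200,  "e2e-base",  16),
    (0,    "e2e-small", 8),
]

def pick_model(platform: str) -> tuple[str, int]:
    """Choose the largest model tier the platform's compute can support."""
    budget = PLATFORM_TOPS[platform]
    for min_tops, tier, streams in MODEL_TIERS:
        if budget >= min_tops:
            return tier, streams

print(pick_model("Orin"))  # ('e2e-base', 16)
print(pick_model("Thor"))  # ('e2e-large', 32)
```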

The system’s decision-making process was illustrated during a yellow light scenario. The car calculated its distance to the stop line and its speed to decide whether to stop or proceed, mimicking human judgment. “It’s calculating the distance between us and the stop line, how fast we’re moving, to make some of those decisions as well,” Connie explained.
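That judgment reduces to a kinematic comparison: stop only if the car can come to rest before the line. A minimal sketch, with an assumed comfortable braking limit and system reaction latency rather than anything measured in the demo:

```python
def should_stop(distance_to_line_m: float,
                speed_mps: float,
                comfort_decel_mps2: float = 3.0,   # assumed comfort limit
                reaction_time_s: float = 0.5) -> bool:
    """True if the car can stop before the line without hard braking."""
    # Distance covered during the reaction delay, plus braking
    # distance v^2 / (2a) under constant deceleration.
    stopping = speed_mps * reaction_time_s + speed_mps**2 / (2 * comfort_decel_mps2)
    return stopping <= distance_to_line_m

# At about 13.4 m/s (30 mph), stopping takes roughly 37 m:
print(should_stop(50.0, 13.4))  # True: brake for the yellow
print(should_stop(20.0, 13.4))  # False: too close, proceed through
```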

Handling Edge Cases and Future Rollout

The demonstration included navigating challenging situations, such as construction zones with lane closures and unexpected interventions from construction workers. In one instance, a worker threw a cone in front of the car to stop it for unloading. The system recognized the object and halted, demonstrating its ability to react to novel situations. “The car sees this object. So then it stops, right? But you’re like, ‘Okay, wait, what? I’ve never seen someone do that before,'” Connie recalled.
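A simplified version of that behavior treats geometry rather than recognition as the trigger: anything inside the car’s drive corridor forces a halt, even an object class the model has never seen before. The thresholds and field names below are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    lateral_offset_m: float  # distance from the lane center line
    distance_m: float        # longitudinal distance ahead of the car
    label: str               # may be "unknown" for novel objects

def must_halt(obstacles: list[Obstacle],
              corridor_half_width_m: float = 1.5,
              stop_horizon_m: float = 25.0) -> bool:
    """Stop for anything in our corridor, whatever the classifier calls it."""
    return any(abs(o.lateral_offset_m) <= corridor_half_width_m
               and o.distance_m <= stop_horizon_m
               for o in obstacles)

# A cone tossed into the lane 12 m ahead triggers a stop.
print(must_halt([Obstacle(0.2, 12.0, "traffic_cone")]))  # True
```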

Nvidia plans a beta release of this technology in Q2 of the current year, with a nationwide rollout for customers by the end of the year. This expansion will allow for data collection from a wider range of real-world driving conditions, which will be used to further enhance the autonomous driving models.

Looking ahead, Nvidia is collaborating with Uber to launch a Level 4 robo-taxi service in Los Angeles and San Francisco starting next year. The company envisions a future with various autonomous vehicle designs, from dedicated robo-taxis without steering wheels to consumer vehicles that can switch between manual and autonomous driving modes.

Market Impact and Investor Insights

Nvidia’s demonstration underscores its significant advancements in the autonomous driving sector. The company’s layered approach, spanning computing platforms like Orin and Thor, and its flexible sensor integration strategy position it well to address multiple levels of autonomy.

For investors, the progress in L2++ systems and the planned L4 robo-taxi deployments signal strong potential for Nvidia’s automotive division. The company’s ability to scale its technology from driver-assist features to fully autonomous systems, and its strategic partnerships such as the one with Uber, are key factors to watch.

Sensor fusion, robust decision-making logic, and continuous model improvement through real-world data collection are critical differentiators. As the industry moves toward higher levels of autonomy, Nvidia’s comprehensive platform and its commitment to safety and user experience are likely to be significant drivers of its market position.


Source: I Tested NVIDIA's Self Driving Car… Is Tesla In Trouble? (YouTube)

Written by

Joshua D. Ovidiu
