Nvidia uses neural networks to make a vehicle learn
Drive PX2 platform from Nvidia showcased at CES 2016
Autonomous driving, while still many years away from being commonplace, is becoming incrementally more viable through advances in processing power. This was a major point of emphasis for auto and chipmakers during the 2016 Consumer Electronics Show.
In the crowded space, Nvidia stood out from competitors in that the company puts all of the computational power – in the case of Drive PX2, eight teraflops, roughly the equivalent of 150 MacBook Pros – in the vehicle.
Dave Anderson, Nvidia senior manager of automotive integration, explained that the onboard compute power is “fully integrated into the vehicle. You really need to make real time decisions for the guidance of your vehicle, so you need to have all that processing power on board.”
Here’s an overview of Drive PX2 filmed on location in Las Vegas during CES.
“This is a brand new supercomputer for automotive applications and this supercomputer is powering the next generation of autonomous vehicle applications in vehicle,” Anderson told RCR Wireless News. “This platform is enabling…artificial intelligence in vehicle. The way that we’re doing that is we’re using deep learning and creating neural networks that will run on our Drive PX2 platform.”
At a high level, deep learning and neural networks allow a computer to learn from examples rather than being explicitly programmed, using layered networks loosely inspired by the way neurons connect in the human brain.
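To make that idea concrete, here is a minimal sketch (not Nvidia’s code, and far simpler than anything running on Drive PX2) of a tiny neural network in Python with NumPy. It “learns” the XOR function by repeatedly nudging its weights to reduce prediction error – the same basic training loop, at toy scale, that underlies deep learning:

```python
# Minimal sketch of neural-network training: a two-layer network learns XOR
# by gradient descent. Illustrative only; real systems use far larger networks.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR, which a single linear layer cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units; weights start random.
W1 = rng.normal(0, 1, (2, 8))
W2 = rng.normal(0, 1, (8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

mse_before = np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2)

lr = 0.5  # learning rate: how big each weight adjustment is
for _ in range(5000):
    # Forward pass: compute predictions from the current weights.
    h = sigmoid(X @ W1)
    pred = sigmoid(h @ W2)
    # Backward pass: propagate the error and nudge the weights to reduce it.
    d_pred = (pred - y) * pred * (1 - pred)
    d_h = (d_pred @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_pred)
    W1 -= lr * (X.T @ d_h)

mse_after = np.mean((pred - y) ** 2)
print(f"error before: {mse_before:.3f}, after: {mse_after:.3f}")
```

The network is never told the rule for XOR; it infers one from the four examples. Scaled up by many orders of magnitude – more layers, millions of parameters, and vast labeled driving data – this is the kind of training that produces the networks Drive PX2 runs in the vehicle.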
In this clip, filmed during CES 2015, Jonathan Cohen, Nvidia’s former senior manager for CUDA libraries and algorithms, explains deep learning.