Nvidia GPU-powered autonomous car teaches itself to see and steer

Nvidia’s breakthrough: the vehicle taught itself to drive by watching how a human drove

During the past nine months, an Nvidia engineering team built a self-driving car with one camera, one Drive-PX embedded computer and only 72 hours of training data. Nvidia published an academic preprint of the DAVE2 project’s results, entitled “End to End Learning for Self-Driving Cars,” on arXiv.org, hosted by the Cornell University Library.

The Nvidia project, called DAVE2, is named after a 10-year-old Defense Advanced Research Projects Agency (DARPA) project known as DARPA Autonomous Vehicle (DAVE). Although neural networks and autonomous vehicles may seem like brand-new technology, researchers such as Google’s Geoffrey Hinton, Facebook’s Yann LeCun and the University of Montreal’s Yoshua Bengio have collaboratively researched this branch of artificial intelligence for more than two decades. And the DARPA DAVE project’s application of neural networks to autonomous vehicles was preceded by the ALVINN project, developed at Carnegie Mellon in 1989. What has changed is that GPUs have made building on their research economically feasible.

Neural networks and image recognition applications such as self-driving cars have exploded recently for two reasons. First, the graphics processing units (GPUs) used to render graphics in mobile phones became powerful and inexpensive. GPUs densely packed onto board-level supercomputers are very good at solving massively parallel neural network problems, and they are inexpensive enough for every AI researcher and software developer to buy. Second, large, labeled image datasets have become available to train massively parallel neural networks implemented on GPUs to see and perceive the world of objects captured by cameras.

Mapping human driving patterns

The Nvidia team trained a convolutional neural network (CNN) to map raw pixels from a single front-facing camera directly to steering commands. Nvidia’s breakthrough is that the vehicle taught itself, by watching how a human drove, the internal representations of the processing steps needed to see the road ahead and steer, without being explicitly trained to detect features such as roads and lanes.
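The pixels-to-steering idea can be sketched in miniature. Everything below (the single tiny convolution kernel, the one dense layer, the random weights) is an illustrative toy, not Nvidia’s actual network; the point is only the shape of the mapping: camera frame in, one steering value out.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2-D convolution pass, as in one CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def predict_steering(image, kernel, weights, bias):
    """Map raw pixels directly to a single steering value."""
    features = np.maximum(conv2d(image, kernel), 0.0)  # ReLU activation
    return float(features.ravel() @ weights + bias)    # dense layer -> scalar

rng = np.random.default_rng(0)
image = rng.random((8, 8))           # stand-in for a camera frame
kernel = rng.standard_normal((3, 3))
weights = rng.standard_normal(36)    # (8-3+1)^2 = 36 feature values
angle = predict_steering(image, kernel, weights, 0.0)
print(angle)
```

A real network stacks many such convolutional and dense layers and learns the kernel and weight values from data rather than drawing them at random.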

Although in operation the system uses one camera and one Drive-PX embedded computer, the training system used three cameras and two computers to acquire three-dimensional video images and steering angles from a vehicle driven by a human; this data was used to train the system to see and drive.


Nvidia monitored changes in the steering angle as the training signal that mapped the human driving patterns into bitmap images recorded by the cameras. The system learned using the CNN to create the internal representations of the processing steps of driving, such as detecting useful road features like lanes, cars and road outlines.

The open-source machine learning system Torch 7 was used to render the learning into the processing steps that autonomously perceived the road, other vehicles and obstacles to steer the test vehicles. The actual training occurred at 10 frames per second (fps) because there wasn’t enough differentiation in adjacent frames at 30 fps to make learning valuable. The test vehicles were a 2016 Lincoln MKZ and a 2013 Ford Focus.
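The 30 fps to 10 fps reduction described above amounts to keeping every third frame. A minimal sketch (the function name and list-of-frames representation are illustrative assumptions, not Nvidia’s pipeline):

```python
def downsample_frames(frames, src_fps=30, dst_fps=10):
    """Keep every (src_fps // dst_fps)-th frame, e.g. 30 fps -> 10 fps,
    so adjacent training frames differ enough to be useful."""
    step = src_fps // dst_fps
    return frames[::step]

frames = list(range(30))                # one second of 30 fps frames
kept = downsample_frames(frames)
print(len(kept))                        # 10 frames remain per second
```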

The core of the machine-learning process was the simulated steering by the CNN using Torch 7. The steering commands the CNN issued in simulated response to the 10 fps images taken from a human-driven car were compared to the human’s steering angles. Analyzing the difference between the human steering angles and the CNN-simulated steering commands taught the system to see and steer. The data used in simulation was based on video recordings of three hours of driving over test routes, a total distance of about 100 miles.
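The comparison step can be sketched as an error measure between the CNN’s simulated commands and the human’s recorded angles; training then adjusts the network to shrink this error. The mean-squared-error form below is a common, generic choice for this kind of regression, shown as an assumption rather than as the paper’s exact loss:

```python
def steering_error(predicted, human):
    """Mean squared difference between CNN-simulated steering commands
    and the recorded human steering angles for the same frames."""
    assert len(predicted) == len(human)
    return sum((p - h) ** 2 for p, h in zip(predicted, human)) / len(predicted)

human_angles = [0.00, 0.05, -0.10, 0.02]      # radians, recorded from the driver
cnn_commands = [0.01, 0.04, -0.12, 0.00]      # what the CNN would have steered
print(steering_error(cnn_commands, human_angles))
```

Training repeatedly nudges the network’s weights in whatever direction reduces this number across the recorded drives.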

On-road testing

Once the CNN performed well in driving simulation, machine learning and testing stepped up to vehicles on the road. On-road testing improved the system, with a human driver supervising the autonomous car and intervening when the autonomous system erred. Each correction was fed back to the machine-learning system to improve the accuracy of the steering process. In the first 10 miles of driving on the New Jersey Turnpike, the vehicle operated 100 percent autonomously. Overall, in early testing, the vehicle operated 98 percent autonomously.
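Autonomy percentages like the 98 percent figure can be computed by charging a fixed time penalty for each human intervention. The six-seconds-per-intervention convention below follows Nvidia’s paper; treat the exact function as an illustrative sketch:

```python
def autonomy_pct(elapsed_s, interventions, penalty_s=6.0):
    """Percentage of the drive counted as autonomous, where each human
    takeover is charged as penalty_s seconds of non-autonomous driving."""
    return 100.0 * (1.0 - interventions * penalty_s / elapsed_s)

# Two takeovers in a 10-minute (600 s) drive:
print(autonomy_pct(600.0, 2))   # -> 98.0
```

By this measure, a 10-mile stretch with zero interventions scores exactly 100 percent, matching the Turnpike result.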

Nvidia demonstrated that CNNs can learn the entire task of lane detection and road following without manually and explicitly decomposing it into road or lane-marking detection, semantic abstraction, path planning and control. The system learned this using Torch 7 to process fewer than 100 hours of training data, building the internal processing needed to operate a vehicle autonomously in diverse weather and lighting conditions, on highways and side roads. Nvidia released a video with its paper that shows examples of the system autonomously steering the test vehicles.

The Nvidia team indicated that the system is not yet ready for production by stating in its paper:

“More work is needed to improve the robustness of the network, to find methods to verify the robustness, and to improve visualization of the network-internal processing steps.”

Based on the video, it’s fairly certain that the engineering team at every company building or planning to build an autonomous vehicle is reading this paper right now and discussing the results. Building this autonomous vehicle prototype could put Nvidia in the position to be a leading supplier of massively parallel GPU systems to all of the autonomous car manufacturers.


Steven Max Patterson

Network World