Oleg Zabluda's blog
Friday, November 04, 2016
In his keynote at GTC, NVIDIA CEO Jensen Huang showed a video of BB8, a self-driving car that uses "DaveNet", a convolutional net trained end to end to drive a car. The inputs are images from regular cameras, and the output is a steering angle.
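For the curious, the NVIDIA paper cited at the bottom of this post spells out the network: a 66x200 YUV input, five convolutional layers, and fully connected layers down to a single steering output. The sketch below is not an implementation, just a trace of the layer output shapes using the kernel sizes and strides reported in that paper:

```python
# Shape trace of the convnet described in NVIDIA's "End to End Learning
# for Self-Driving Cars" paper. Layer parameters are from the paper;
# this only computes feature-map sizes, not actual convolutions.

def conv_out(size, kernel, stride):
    """Output size of a 'valid' (no padding) convolution along one dimension."""
    return (size - kernel) // stride + 1

h, w = 66, 200  # input: 66x200 YUV image, per the paper
layers = [
    # (output channels, kernel size, stride)
    (24, 5, 2),
    (36, 5, 2),
    (48, 5, 2),
    (64, 3, 1),
    (64, 3, 1),
]
for ch, k, s in layers:
    h, w = conv_out(h, k, s), conv_out(w, k, s)
    print(f"conv {k}x{k} stride {s}: {ch} x {h} x {w}")

flat = 64 * h * w  # flattened features feeding the fully connected layers
print(f"flatten: {flat}")  # 64 * 1 * 18 = 1152
# Fully connected layers (100 -> 50 -> 10 -> 1) then produce the steering angle.
```

Running it shows the image shrinking to a 1x18 map before the flatten, which is why the first fully connected layer in the paper has 1152 inputs.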

After 3000 miles of watching over the shoulder of a human driver, DaveNet can drive BB8 in the rain, on a country road without markings, on highways, on dirt roads, etc.

DaveNet was developed by the NVIDIA autonomous driving lab in Holmdel, NJ. The team is led by my friend and long-time collaborator Urs Muller (whom you see in the video), and advised by my former Bell Labs department head Larry Jackel. You can hear Larry scream "holy cow!" when the barely-trained BB8 hits a road cone or misses a turn.

Urs and his team are actually based in Holmdel, NJ, in the very building where he, Larry, and I worked at Bell Labs in the late 80s and early 90s.

Before joining NVIDIA, Urs was the founder and CEO of a company called Net-Scale Technologies, which worked with my NYU lab on a number of federally funded mobile robot and deep learning projects.
One of these projects took place in 2003 and was called "Dave". It was the first mobile robot to use a convolutional net trained end to end to drive itself from visual input.

GTC 2016: NVIDIA Self-Driving Car Demo (part 11)

End to End Learning for Self-Driving Cars (2016) Nvidia [DaveNet]

CES 2016: NVIDIA DRIVENet Demo [car detection]
