Oleg Zabluda's blog
Monday, August 20, 2012
 

This week, we were able to sit down with Professor Geoffrey Hinton, who is spending time at Google through our Visiting Faculty Program. Professor Hinton hails from the University of Toronto and is especially interested in neural networks. In this interview, he describes the challenges involved in figuring out how to train large neural networks efficiently, and describes how his time here has contributed to his work:

Research at Google: Can you tell us a little bit about your research background and how you got started in Machine Learning?

Prof. Hinton: Ever since high school, I’ve been interested in how the brain works. At university, I studied psychology and physiology, but that didn’t get me close enough. I completed my PhD in AI and while doing that, I started making neural network models, which is how I got into Machine Learning. Neural networks are exciting but challenging because they have to learn everything; there’s no one to program knowledge into them. We just program the learning algorithm and choose an architecture for the network and then we watch what happens when the learning algorithm and the architecture interact with the data. When we do it right, the neural network discovers all sorts of fascinating structure in the data.

R@G: You’re currently visiting Google from the University of Toronto. What is your role there and what does your work entail?

Prof. Hinton: I’m a professor in the Computer Science Department and teach courses in machine learning and neural networks. I spend most of my time doing research with graduate students focused primarily on trying to come up with better learning algorithms for neural networks.

R@G: What made you pursue visiting Google and how does your academic work align with Google’s research agenda?

Prof. Hinton: Google invited me to come and I’ve been here in something like an intern capacity; it’s fun, it makes me feel young again. Google does Machine Learning research similar to that being done at the University of Toronto, but at a much bigger scale. I was attracted by the prospect of testing my research out with much more computational power and massive datasets.

R@G: What have you been working on while at Google and what are the biggest challenges you’ve faced?  

Prof. Hinton: I’ve been working on trying to improve the way neural networks are trained in general but more specifically, how they’re applied in Google’s speech recognition system. Not getting fat on the free food has been the biggest challenge... In all seriousness though, figuring out how to train really big neural networks quickly is a significant challenge. For speech, an ongoing challenge is to develop systems that train really big neural networks efficiently on lots of cores. The same issues also apply in computer vision.

R@G: What did you hope to accomplish while here, what progress has been made, and will you continue to expand on your work upon returning to the University of Toronto?

Prof. Hinton: I arrived here hoping to make our learning algorithms run in much bigger networks on much bigger datasets. It’s been a great opportunity to take curiosity-driven research work done at a University and turn it into something really useful by leveraging Google’s infrastructure. An example of this is the speech recognition algorithm used in Android 4.1, which was first developed at the University of Toronto and then run on bigger systems and made to work much better by Google. It’s crucial that Google has the computational power necessary to train on huge amounts of data. I very much hope to continue my collaboration with Google after returning to academia.

R@G: What is your vision for the future of Machine Learning and specifically, leveraging neural networks?

Prof. Hinton: My vision is that large neural networks will gradually take over. I suspect that in the future, expertise in neural networks will be combined with things like the knowledge graph to make better use of that stored knowledge. Also, neural networks will be used to help understand natural language and thereby better understand and return desired results on queries.

R@G: What has been the biggest benefit of collaborating with Google researchers and engineers and what have you enjoyed most about the experience?

Prof. Hinton: These are one and the same. I’ve been able to interact with lots of smart people, which is beneficial and has made my time here very worthwhile. I also got to experience what the current state-of-the-art is in training big neural networks. I can bring this back to my academic work and share it with my students, some of whom have done internships with Google or might in the future, which is a very important experience for them.
