On Reddit, Geoff Hinton talks Google and future of deep learning
« on: November 16, 2014, 05:29:33 PM »
On Reddit, Geoff Hinton talks Google and future of deep learning
Gigaom
by Derrick Harris, Nov. 14, 2014, 11:20 AM PST

Geoff Hinton. Source: University of Toronto

Geoff Hinton, one of the godfathers of deep learning and neural network research, did a fascinating Ask Me Anything on Reddit late last week. Hinton, who now splits his time between the University of Toronto and Google, touched on just about everything important in the field: which methods work best for what, whether neural networks can achieve general artificial intelligence, debates over deep learning hype and even his new theory on improving model performance with a new type of neuron that he calls “capsules.”

Here are some highlights focusing on questions about Google’s techniques and the experience of working for Google, as well as general questions about the next big things in the deep learning space. (The questions have been trimmed a bit but link to the full threads.) But seriously, anyone interested in artificial intelligence as a researcher, entrepreneur or even investor should go read the whole thing, which contains a lot more information.


On deep learning research at Google

Do you see more and more breakthroughs coming from industrial labs (e.g. Google, Facebook, etc.) rather than universities?

I think that Google, Facebook, Microsoft Research, and a few other labs are the new Bell Labs. I don't think it was a big problem that a lot of the most important research half a century ago was done at Bell Labs. We got transistors, Unix, and a lot of other good stuff.



Hi Professor Hinton, since you recently joined Google, will your research there be proprietary? I'm just worried that the research done by one of the most important researchers in the field is now closed off within a single company.

Actually, Google encourages us to publish. The main thing I have been working on is my capsules theory and I haven’t published because I haven’t got it to work to my satisfaction yet.


Are there diminishing returns for data at Google scale?

It depends how your learning methods scale. For example, if you do phrase-based translation that relies on having seen particular phrases before, you need hugely more data to make a small improvement. If you use recurrent neural nets, however, the marginal effect of extra data is much greater.
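To make the contrast concrete, here is a toy sketch in Python (an illustration invented for this post, not Google's translation system; the phrase table, vocabulary and weight sizes are made up): a phrase-based lookup can only translate phrases it has literally seen before, while a recurrent encoder shares its weights across positions and so can still represent word combinations it has never encountered, which is roughly why extra data goes further for it.

import numpy as np

# Toy phrase table built from hypothetical parallel data (invented for illustration).
phrase_table = {
    ("the", "cat"): ("le", "chat"),
    ("the", "dog"): ("le", "chien"),
    ("black", "cat"): ("chat", "noir"),
}

def phrase_translate(phrase):
    # Phrase-based lookup: it can only handle phrases it has literally seen before.
    return phrase_table.get(phrase)

print(phrase_translate(("the", "cat")))    # ('le', 'chat')
print(phrase_translate(("black", "dog")))  # None: unseen pair, even though both
                                           # words occur elsewhere in the table

# A recurrent encoder, by contrast, applies the same weights at every position,
# so it produces a representation for any word sequence, seen or not.
# All sizes here are arbitrary.
rng = np.random.default_rng(0)
vocab = {"the": 0, "cat": 1, "dog": 2, "black": 3}
emb = rng.standard_normal((len(vocab), 8))   # word embeddings
W_x = rng.standard_normal((8, 8))            # input-to-hidden weights
W_h = rng.standard_normal((8, 8))            # hidden-to-hidden weights, shared over time

def rnn_encode(words):
    h = np.zeros(8)
    for w in words:
        h = np.tanh(W_x @ emb[vocab[w]] + W_h @ h)
    return h

print(rnn_encode(["black", "dog"]))  # a usable encoding of a never-seen phrase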


Can we ever hope to train a recognizer to a [Google-like] degree of accuracy at home?

In 2012, Alex Krizhevsky trained the system that blew away the computer vision state of the art on two GPUs in his bedroom. Google (with Alex's help) has now halved the error rate of that system using more computation. But I believe it's still possible to achieve spectacular new deep learning results with modest resources if you have a radically new idea.
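For a sense of what "modest resources" can look like, here is a minimal sketch of a small convolutional classifier and one training step. It assumes PyTorch and 32x32 RGB inputs, both chosen purely for illustration; this is not Krizhevsky's network or Google's system, just the general shape of a model that trains comfortably on a single consumer GPU or even a CPU.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallConvNet(nn.Module):
    # A deliberately small convolutional classifier for 32x32 RGB images.
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
        self.fc = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)  # 32x32 -> 16x16
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # 16x16 -> 8x8
        return self.fc(x.flatten(1))

# One optimisation step on random data, just to show the shape of the training loop.
model = SmallConvNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
images = torch.randn(8, 3, 32, 32)          # a small batch of fake 32x32 images
labels = torch.randint(0, 10, (8,))
loss = F.cross_entropy(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))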

I find it likely that there are channels for fairly direct transfer of knowledge from companies like Google to U.S. (and possibly some other) spy agencies. Do you share my concerns about this, and is it something that people in the machine learning community around you discuss and try to deal with?

Technology is not inherently good or bad; the key is ethical deployment. So far as I can tell, Google really cares about ensuring technology is deployed responsibly. That's why I am happy to work for them but not happy to take money from the "defense" department.


On how the field will evolve over time

In your opinion, which of the following ideas contain the lowest hanging fruit for improving accuracy on today’s typical classification problems: 1) Better hardware and bigger machine clusters; 2) Better algorithm implementations and optimizations; 3) Entirely new ideas and angles of attack?

I think entirely new ideas and approaches are the most important way to make major progress, but they are not low-hanging. They typically involve a lot of work and many disappointments. Better machines, better implementations and better optimization methods are all important, and I don't want to choose between them. I think you left out slightly new ideas, which are what lead to a lot of the day-to-day progress. A bunch of slightly new ideas that play well together can have a big impact.


What frontiers and challenges do you think are the most exciting for researchers in the field of neural networks in the next ten years?

I cannot see ten years into the future. For me, the wall of fog starts at about 5 years. (Progress is exponential and so is the effect of fog, so it's a very good model for the fact that the next few years are pretty clear and a few years after that things become totally opaque.) I think that the most exciting areas over the next five years will be really understanding videos and text. I will be disappointed if in five years' time we do not have something that can watch a YouTube video and tell a story about what happened. I have had a lot of disappointments.


What do you believe is the future of the field in 10 to 20 years?

All good researchers will tell you that the most promising direction is the one they are currently pursuing. If they thought something else was more promising, they would be doing that instead.

I think the long-term future is quite likely to be something that most researchers currently regard as utterly ridiculous and would certainly reject as a NIPS paper. But this isn't much help.


https://gigaom.com/2014/11/14/on-reddit-geoff-hinton-talks-google-and-future-of-deep-learning/?utm_medium=content&utm_campaign=syndication&utm_source=yfinance&utm_content=on-reddit-geoff-hinton-talks-google-and-future-of-deep-learning_888925

 
