Saturday, August 16, 2014

Neural Networks and Deep Learning

One of the SCI FOO sessions I enjoyed the most this year was a discussion of deep learning by AI researcher Juergen Schmidhuber. For an overview of recent progress, see this paper. Also of interest: Michael Nielsen's pedagogical book project.

An application which especially caught my attention is described by Schmidhuber here:
Many traditional methods of Evolutionary Computation [15-19] can evolve problem solvers with hundreds of parameters, but not millions. Ours can [1,2], by greatly reducing the search space through evolving compact, compressed descriptions [3-8] of huge solvers. For example, a Recurrent Neural Network [34-36] with over a million synapses or weights learned (without a teacher) to drive a simulated car based on a high-dimensional video-like visual input stream.
More details here. They trained a deep neural net to drive a car using visual input (pixels from the driver's perspective, generated by a video game); the output consists of steering orientation and accelerator/brake activation. There was no hard-coded structure corresponding to physics -- the neural net optimized a utility function defined primarily by time between crashes. It learned how to drive the car around the track after fewer than 10k training sessions.
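The key trick in the quoted passage -- evolving a compact, compressed description instead of the million weights themselves -- can be sketched in a few lines. The snippet below is a toy illustration, not Schmidhuber's actual method: it encodes a long weight vector as a handful of low-frequency cosine (DCT-style) coefficients and runs a simple (1+1) evolution strategy over those coefficients. The decoder, the fitness function (a stand-in for "time between crashes"), and all sizes are hypothetical choices made for the sketch; the weight count is scaled down from "over a million" to keep memory small.

```python
import numpy as np

rng = np.random.default_rng(0)

N_WEIGHTS = 100_000  # full weight vector (scaled down for the sketch)
N_COEFFS = 16        # compact genome: only these values are evolved

# DCT-like cosine basis: maps a few low-frequency coefficients
# to a long weight vector (a minimal, hypothetical decoder).
n = np.arange(N_WEIGHTS)
basis = np.cos(np.pi * np.outer(np.arange(N_COEFFS), (n + 0.5) / N_WEIGHTS))

def decode(coeffs):
    """Expand the compact genome into the full weight vector."""
    return coeffs @ basis

def fitness(coeffs, target):
    """Toy stand-in for a driving score such as time between crashes:
    negative mean squared error against a hidden target weight vector."""
    return -np.mean((decode(coeffs) - target) ** 2)

# Hidden target built from the same few frequencies, so the task is solvable.
target = decode(rng.normal(size=N_COEFFS))

# (1+1) evolution strategy: mutate the 16 coefficients, keep improvements.
# The search space has 16 dimensions, never 100,000.
genome = np.zeros(N_COEFFS)
best = fitness(genome, target)
for _ in range(300):
    candidate = genome + 0.1 * rng.normal(size=N_COEFFS)
    f = fitness(candidate, target)
    if f > best:
        genome, best = candidate, f
```

The point of the compression is visible in the loop: mutation and selection act on 16 numbers, yet every evaluation exercises a 100,000-parameter "solver" -- which is why this style of search scales where direct weight-space evolution does not.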

For some earlier discussion of deep neural nets and their application to language translation, see here. Schmidhuber has also worked on Solomonoff universal induction.

These TED videos give you some flavor of Schmidhuber's sense of humor :-) Apparently his younger brother (mentioned in the first video) has transitioned from theoretical physics to algorithmic finance. Schmidhuber on China.


  1. pancakerabbitvirus 9:14 AM

    First I've heard of it... so GoogleCat became self-aware sometime in May of 2012 :-(

  2. dxie48 1:42 AM

    Not only has deep learning independently discovered internet cats, it has also independently discovered China!

    At one stage, Google Translate (which is known to rely on deep learning) was reported to have translated the Latin phrase
    "lorem ipsum ipsum ipsum lorem" to "China is the winner"

    Was it due to a googlebomb? Or was it that the Vatican made heavy use of GT for translating their internal memos? Don't know. The free speech of Google Translate appears to have been silenced :)

    PS. Most probably Google's indexing got stuck on a random Chinese lorem ipsum generator.

    Thus deep learning has not discovered the random lorem ipsum generator.