[Repost] The Future of Deep Learning Research

Source: https://github.com/llSourcell/7_Research_Directions_Deep_Learning

Topics

  • How does Backpropagation work?
  • What are the most popular deep learning algorithms today?
  • 7 Research Directions I've handpicked

At a recent AI conference, Geoffrey Hinton remarked that he was “deeply suspicious” of backpropagation and said, “My view is throw it all away and start again.”


The billion-dollar question: how does the brain learn so well from sparse, unlabeled data?

Let's first understand how backpropagation works


In 1986, Hinton co-authored the paper (with Rumelhart and Williams) that detailed an optimization strategy for neural networks called 'backpropagation'. That paper is a big reason the current deep learning boom is possible.

import numpy as np

# sigmoid nonlinearity
# note: when deriv=True, x is assumed to already be a sigmoid *output*,
# so x*(1-x) is the derivative of the sigmoid at that point
def nonlin(x, deriv=False):
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

# input data: 4 examples, 3 features each (the last column is a constant bias input)
X = np.array([[0, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]])

# output data (XOR of the first two columns)
y = np.array([[0],
              [1],
              [1],
              [0]])

# seed for reproducibility
np.random.seed(1)

# randomly initialize our weights with mean 0
syn0 = 2 * np.random.random((3, 4)) - 1   # weights: layer 0 -> layer 1
syn1 = 2 * np.random.random((4, 1)) - 1   # weights: layer 1 -> layer 2

3 concepts behind Backpropagation (from calculus)

  1. Derivative
  2. Partial Derivative
  3. Chain Rule (see the worked example below)
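
To make the chain rule concrete for the training loop below (this derivation is my addition, not part of the original slides): with squared error $E = \tfrac{1}{2}\lVert y - k_2 \rVert^2$, hidden layer $k_1 = \sigma(X\,\mathrm{syn0})$ and output $k_2 = \sigma(k_1\,\mathrm{syn1})$, the chain rule gives

$$
\frac{\partial E}{\partial\,\mathrm{syn1}}
= \frac{\partial E}{\partial k_2}\cdot
  \frac{\partial k_2}{\partial (k_1\,\mathrm{syn1})}\cdot
  \frac{\partial (k_1\,\mathrm{syn1})}{\partial\,\mathrm{syn1}}
= -\,k_1^{\top}\big[(y - k_2)\odot k_2\odot(1 - k_2)\big]
$$

which is exactly the k1.T.dot(k2_delta) term in the code; the minus sign is absorbed because the code adds the update rather than subtracting it. Applying the chain rule one layer further back (through syn1 and another sigmoid) gives the syn0 update.
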
for j in range(60000):

    # feed forward through layers 0, 1, and 2
    k0 = X
    k1 = nonlin(np.dot(k0, syn0))
    k2 = nonlin(np.dot(k1, syn1))

    # how much did we miss the target value?
    k2_error = y - k2

    if (j % 10000) == 0:
        print("Error:" + str(np.mean(np.abs(k2_error))))

    # in what direction is the target value?
    # were we really sure? if so, don't change too much.
    k2_delta = k2_error * nonlin(k2, deriv=True)

    # how much did each k1 value contribute to the k2 error (according to the weights)?
    k1_error = k2_delta.dot(syn1.T)

    # in what direction is the target k1?
    # were we really sure? if so, don't change too much.
    k1_delta = k1_error * nonlin(k1, deriv=True)

    # gradient-style weight updates (learning rate of 1)
    syn1 += k1.T.dot(k2_delta)
    syn0 += k0.T.dot(k1_delta)

This is the method of choice for virtually all supervised (labeled) deep learning models.


How do artificial & biological neural nets compare?

Artificial neural networks are inspired by the hierarchical structure of the brain's neural networks.


The brain has

  • ~100 billion neurons
  • Each neuron has a cell body with connections, numerous dendrites, and a single axon
  • Parallel chaining (each neuron is connected to 10,000+ others)
  • Great at connecting different concepts

Computers have

  • Not neurons, but transistors made in silicon!
  • Serially chained (each connected to 2-3 others (logic gates))
  • Great at storage and recall

Some key differences

  • All sensory or motor systems in the brain are recurrent
  • Sensory systems tend to have lots of lateral inhibition (neurons inhibiting other neurons in the same layer)
  • There is no such thing as a fully connected layer in the brain; connectivity is usually sparse (though not random).
  • Brains are born pre-wired to learn without supervision.
  • The brain is low power: AlphaGo consumed the power of 1202 CPUs and 176 GPUs just to run (not even to train), while the brain's power consumption is ~20 W.

"the brain is not a blank slate of neuronal layers
waiting to be pieced together and wired-up;
we are born with brains already structured
for unsupervised learning in a dozen cognitive
domains, some of which already work pretty well
without any learning at all." - Steven Pinker

Where are we today in unsupervised learning?

For classification

  • Clustering (e.g. k-means), dimensionality reduction, anomaly detection
  • Autoencoders

For Generation

  • Generative Adversarial Networks
  • Variational Autoencoders
  • Differentiable Neural Computer

https://en.wikipedia.org/wiki/Differentiable_neural_computer

  • The controller receives external inputs and, based on these, interacts with the memory using read and write operations known as 'heads' (a simplified numpy sketch of content-based read/write follows this list).
  • To help the controller navigate the memory, the DNC stores 'temporal links' to keep track of the order things were written in, and records the current 'usage' level of each memory location.
  • For example, a DNC can be trained to navigate a variety of rapid transit systems and then apply what it learned to get around on the London Underground, whereas a neural network without memory would typically have to learn each transit system from scratch.
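
The full DNC combines several addressing mechanisms; purely as a rough illustration (my simplification in numpy, not DeepMind's implementation), the core of content-based read and write against an external memory matrix looks something like this:

import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def content_weights(memory, key, beta):
    """Content-based addressing: compare a key emitted by the controller
    against every memory row with cosine similarity, sharpened by beta."""
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    similarity = memory @ key / norms
    return softmax(beta * similarity)

# a tiny external memory: 8 slots of width 4 (sizes are arbitrary here)
memory = np.zeros((8, 4))
usage = np.zeros(8)                      # crude stand-in for DNC usage tracking

# --- write head: blend new content into the addressed slots ---
write_key, write_vec = np.random.randn(4), np.random.randn(4)
w_write = content_weights(memory, write_key, beta=5.0)
memory += np.outer(w_write, write_vec)   # weighted write
usage += w_write                         # remember how heavily each slot was used

# --- read head: retrieve a weighted combination of memory rows ---
read_key = write_key                     # query for what we just stored
w_read = content_weights(memory, read_key, beta=5.0)
read_vector = w_read @ memory            # what the controller gets back
print(read_vector)

The real model adds temporal link matrices, usage-based allocation, and learns the keys end-to-end through the controller.
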

Basically, many of the best unsupervised methods still require backprop (GANs, autoencoders, language models, word embeddings, etc.).

So many GANs (https://deephunt.in/the-gan-zoo-79597dc8c347)

7 Research Directions

Thesis - Unsupervised learning and reinforcement learning must be the primary modes of learning, because labels mean little to a child growing up.

1 Bayesian Deep Learning (smarter backprop)

  • Deep learning struggles to model uncertainty.
  • Let's use smarter weight initialization via Bayes' theorem.
  • In a Bayesian setting, the weights of your neural network become random variables (each sampled from a distribution).
  • The parameters of that distribution are tuned via backpropagation (a minimal sketch follows this list).
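
A minimal sketch of that last point (my illustration, with a made-up OR toy problem; a full "Bayes by Backprop" setup would also add a KL term pulling the weight distribution toward a prior): each weight is a Gaussian with a learnable mean and standard deviation, weights are sampled via the reparameterization trick, and the distribution parameters are updated with ordinary gradient steps.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# toy, made-up data: learn OR of the first two columns (third column is a bias input)
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([[0], [1], [1], [1]], dtype=float)

# each weight is a distribution, not a point estimate: w ~ Normal(mu, softplus(rho)^2)
mu = rng.normal(0, 0.1, size=(3, 1))
rho = np.full((3, 1), -3.0)                    # small initial standard deviation
lr = 0.5

for step in range(2000):
    sigma = np.log1p(np.exp(rho))              # softplus keeps sigma positive
    eps = rng.normal(size=mu.shape)
    w = mu + sigma * eps                       # reparameterization trick: sample the weights

    pred = sigmoid(X @ w)
    grad_out = (pred - y) * pred * (1 - pred)  # backprop through squared error + sigmoid
    grad_w = X.T @ grad_out                    # gradient w.r.t. the sampled weights

    # push that gradient back into the distribution parameters
    mu -= lr * grad_w                          # dw/dmu = 1
    rho -= lr * grad_w * eps * sigmoid(rho)    # dw/drho = eps * dsoftplus/drho = eps * sigmoid(rho)

print("weight means:", mu.ravel(), "weight stds:", np.log1p(np.exp(rho)).ravel())
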

2 Spike-Timing-Dependent Plasticity

STDP is a rule that encourages neurons to 'pay more attention' to inputs that predict excitation. Suppose you usually only bring an umbrella if you have reason to think it will rain (weather report, you see rain outside, etc.). Then you notice your neighbor carrying an umbrella even though there is no rain in the forecast, and sure enough, a few minutes later you see an updated forecast (or it starts raining). This happens a few times, and you get the idea: your neighbor seems to be getting this information (whether it is going to rain) before your current sources do. So in the future, you pay more attention to what your neighbor is doing.

  • Suppose we have two neurons, A and B.
  • A synapses onto B ( A->B ).
  • The STDP rule states that if A fires and B fires after a short delay, the synapse will be potentiated (i.e. B will increase the 'weight' assigned to inputs from A in the future).
  • The magnitude of the weight increase is inversely proportional to the delay between A firing and B firing.
  • If A fires and then B fires ten seconds later, the weight change will be essentially zero. But if A fires and B fires ten milliseconds later, the weight update will be more substantial.
  • The reverse also applies: if B fires first and then A, the synapse will weaken, and the size of the change is again inversely proportional to the delay (a minimal sketch of this pairwise update follows this list).
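
A minimal numpy sketch of that pairwise rule (my illustration, with made-up constants; I use the common exponentially decaying window, so the change shrinks rapidly as the delay grows):

import numpy as np

def stdp_update(t_pre, t_post, a_plus=0.1, a_minus=0.1, tau=20.0):
    """Weight change for one pre/post spike pair (times in milliseconds).
    Positive delay (pre fires before post) -> potentiation; negative -> depression.
    Constants are illustrative, not biological values."""
    dt = t_post - t_pre
    if dt >= 0:
        return a_plus * np.exp(-dt / tau)
    return -a_minus * np.exp(dt / tau)

w = 0.5  # synaptic weight from A to B

# A fires at 100 ms, B fires 10 ms later -> noticeable strengthening
w += stdp_update(100.0, 110.0)

# A fires at 200 ms, B fires 10 *seconds* later -> essentially no change
w += stdp_update(200.0, 10200.0)

# B fires 10 ms *before* A -> weakening
w += stdp_update(320.0, 310.0)

print(round(w, 4))
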

TL;DR - You cannot properly backpropagate weight updates in a graph-based, asynchronous network (there are no layers with activations at fixed times), so instead you learn to trust the neurons that are faster than you at the task.

3 Self Organizing Maps

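The original slides for this direction are images, so here is a one-line summary plus a sketch (both my additions, not from the source): a self-organizing map learns without labels via competitive learning, where each input pulls its best-matching unit and that unit's grid neighbors toward it, so nearby map cells end up representing similar inputs.

import numpy as np

rng = np.random.default_rng(0)

grid_h, grid_w, dim = 10, 10, 3           # 10x10 map of 3-D prototype vectors (sizes are arbitrary)
weights = rng.random((grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1)

data = rng.random((500, dim))             # made-up unlabeled inputs (think RGB colors)

lr, radius = 0.5, 3.0
for epoch in range(20):
    for x in data:
        # 1. find the best-matching unit (closest prototype)
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)
        # 2. pull the BMU and its grid neighbors toward the input
        grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
        influence = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))[..., None]
        weights += lr * influence * (x - weights)
    lr *= 0.9        # decay the learning rate
    radius *= 0.9    # shrink the neighborhood over time

# after training, neighboring grid cells hold similar prototypes
print(weights[0, 0], weights[0, 1])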

4 Synthetic Gradients https://iamtrask.github.io/2017/03/21/synthetic-gradients/

  • The first layer forward propagates into the Synthetic Gradient generator (M_{i+1}), which then returns a gradient.
  • This gradient is used instead of the real gradient (which would take a full forward propagation and backpropagation to compute).
  • The weights are then updated as normal, pretending that this Synthetic Gradient is the real gradient.

Synthetic Gradient generators are nothing more than neural networks trained to take the output of a layer and predict the gradient that will likely occur at that layer.

The whole point of this technique is to allow individual neural networks to train without waiting on each other to finish forward and backward propagation.

  • Individual layers make a "best guess" for what they think the data will say, then update their weights according to this guess.
  • This "best guess" is called a Synthetic Gradient.
  • The data is only used to help update each layer's "guesser" or Synthetic Gradient generator.
  • This allows individual layers to learn in isolation (most of the time), which increases the speed of training (a minimal sketch follows this list).
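
A minimal numpy sketch of the decoupling idea (my simplification of the setup described in the linked post, not the original code): layer 1 updates immediately using a gradient predicted by a small linear generator M from its own output, and M is trained afterwards against the true gradient once the full backward pass becomes available.

import numpy as np

rng = np.random.default_rng(1)

# same XOR toy data as the backprop example above
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

syn0 = 2 * rng.random((3, 4)) - 1     # layer 0 -> 1
syn1 = 2 * rng.random((4, 1)) - 1     # layer 1 -> 2
M = np.zeros((4, 4))                  # synthetic gradient generator for layer 1 (linear)

lr, sg_lr = 0.5, 0.01
for j in range(30000):
    k1 = sigmoid(X @ syn0)

    # layer 1 does not wait for the rest of the network: it asks M for a
    # guessed (synthetic) gradient and updates its weights right away
    synthetic_delta = k1 @ M
    syn0 += lr * X.T @ (synthetic_delta * k1 * (1 - k1))

    # meanwhile the rest of the forward/backward pass runs as usual
    k2 = sigmoid(k1 @ syn1)
    k2_delta = (y - k2) * k2 * (1 - k2)
    true_delta = k2_delta @ syn1.T
    syn1 += lr * k1.T @ k2_delta

    # finally, train the generator to make better guesses next time
    M += sg_lr * k1.T @ (true_delta - synthetic_delta)

    if j % 10000 == 0:
        print("Error:", np.mean(np.abs(y - k2)))

In the full method each module gets its own generator (often a small network itself), so all of them can update asynchronously.
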

5 Evolutionary Strategies (https://blog.openai.com/evolution-strategies/)

  1. Create a random initial brain for the bird (this is the neural network, with 300 neurons in our case)
  2. At every epoch, create a batch of modifications to the bird’s brain (also called “mutations”)
  3. Play the game using each modified brain and calculate the final reward
  4. Update the brain by pushing it towards the mutated brains, proportionate to their relative success in the batch (the more reward a brain has been able to collect during a game, the more it contributes to the update)
  5. Repeat steps 2-4 until a local maximum for rewards is reached.

Code:
https://gist.github.com/karpathy/77fbb6a8dac5395f1b73e7a89300318d

  • Mutation, selection, crossover via a fitness function
  • ES only requires the forward pass of the policy and does not require backpropagation (or value function estimation), which makes the code shorter and between 2-3 times faster in practice.
  • RL is a “guess and check” on actions, while ES is a “guess and check” on parameters (a minimal numpy sketch of the update follows).
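
A minimal numpy sketch of the vanilla ES update (my condensed version of the kind of loop in the gist above; the game reward is replaced by a made-up function so it runs standalone):

import numpy as np

rng = np.random.default_rng(0)

def reward(params):
    # stand-in for "play the game with this brain and return the score":
    # higher reward means params are closer to some hidden target vector
    target = np.array([0.5, 0.1, -0.3])
    return -np.sum((params - target) ** 2)

params = rng.normal(size=3)          # step 1: a random initial "brain"
npop, sigma, alpha = 50, 0.1, 0.001  # population size, mutation scale, learning rate

for epoch in range(300):
    noise = rng.normal(size=(npop, params.size))                      # step 2: a batch of mutations
    rewards = np.array([reward(params + sigma * n) for n in noise])   # step 3: score each mutant
    advantage = (rewards - rewards.mean()) / (rewards.std() + 1e-8)   # normalize the scores
    params += alpha / (npop * sigma) * noise.T @ advantage            # step 4: move toward the winners

print(params)  # drifts toward the hidden target using forward passes only
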

6 Moar Reinforcement Learning


7 Better hardware.

  • neuromorphic chips
  • TPUs
  • Wiring up transistors in parallel like the brain!

My Conclusion? I agree with Andrej Karpathy

Let's create multi-agent simulated environments that heavily rely on reinforcement learning + evolutionary strategies.


It comes down to the exploration-exploitation trade-off. You need exploitation to refine existing deep learning techniques, but without exploration (of other techniques) you will never get the paradigm shift we need to go beyond classifying cat pictures and beating humans at artificial games.

