2018-07-11

[1805.09393] Pouring Sequence Prediction using Recurrent Neural Network
https://arxiv.org/abs/1805.09393

Value Propagation Networks: a method for planning in more complex dynamic environments | 机器之心
https://www.jiqizhixin.com/articles/2018-06-21

DeepMind proposes relational RNNs: the RMC memory module tackles relational reasoning | 机器之心
https://www.jiqizhixin.com/articles/070104

The fastest way to train neural networks today: the AdamW optimizer plus super-convergence | 机器之心
https://www.jiqizhixin.com/articles/2018-07-03-14

Graph Learning | Graph propagation algorithms (part 2) - 简书
https://www.jianshu.com/p/e7fb897b1d09

Paper notes: Deep Convolutional Networks on Graph-Structured Data - CSDN blog
https://blog.csdn.net/BVL10101111/article/details/53437940

Better than VAEs: adding the Wasserstein distance to the Gaussian mixture model, a universal approximator | 机器之心
https://www.jiqizhixin.com/articles/2018-07-07-4

Can convolutional neural networks handle graph-structured data? This article has the answer | 雷锋网
https://www.leiphone.com/news/201706/ppA1Hr0M0fLqm7OP.html

Academia | Neural networks meet Gaussian processes: DeepMind's back-to-back papers open a new direction for deep learning
https://mp.weixin.qq.com/s?__biz=MzA3MzI4MjgzMw==&mid=2650744847&idx=4&sn=6d04d771485c0970742e33b57dc452a9&chksm=871aec71b06d65671e386229eb75641539aef9e1525e45f2c0f70f6fe9f845d088af9c9cd9fa&scene=38#wechat_redirect

[1807.03402] IGLOO: Slicing the Features Space to Represent Long Sequences
https://arxiv.org/abs/1807.03402

量子位 | Shanghai Jiao Tong University's SRNN: a mere 135x faster than a vanilla RNN
https://mp.weixin.qq.com/s/wfOzCxe3L2t11VguYLGC9Q

[1807.03379] Online Scoring with Delayed Information: A Convex Optimization Viewpoint
https://arxiv.org/abs/1807.03379

Graph Convolutional Networks (GCNs) 简介 - AHU-WangXiao - 博客园
https://www.cnblogs.com/wangxiaocvpr/p/8059769.html

How should one understand Graph Convolutional Networks (GCN)? - 知乎
https://www.zhihu.com/question/54504471

How powerful are Graph Convolutions? (review of Kipf & Welling, 2016)
https://www.inference.vc/how-powerful-are-graph-convolutions-review-of-kipf-welling-2016-2/

Reinforcement learning’s foundational flaw
https://thegradient.pub/why-rl-is-flawed/

tkipf/gcn: Implementation of Graph Convolutional Networks in TensorFlow
https://github.com/tkipf/gcn

Graph Convolutional Networks | Thomas Kipf | PhD Student @ University of Amsterdam
http://tkipf.github.io/graph-convolutional-networks/
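
The GCN links above all revolve around the Kipf & Welling (2016) propagation rule, H' = σ(D̂^{-1/2} Â D̂^{-1/2} H W) with Â = A + I. A minimal NumPy sketch on a toy 3-node path graph (the random features and weights are illustrative stand-ins, not from any of the linked posts):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: ReLU(D^-1/2 (A + I) D^-1/2 @ H @ W)."""
    A_hat = A + np.eye(A.shape[0])                        # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))  # symmetric normalisation
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# Toy path graph 0-1-2, 2-d node features, weights mapping to 4 hidden units.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.random.randn(3, 2)
W = np.random.randn(2, 4)
print(gcn_layer(A, H, W).shape)  # (3, 4): one 4-d vector per node
```

Each node's output mixes its own features with its neighbours', which is the "convolution" the linked articles debate.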

[1807.03379] Online Scoring with Delayed Information: A Convex Optimization Viewpoint
https://arxiv.org/abs/1807.03379

We consider a system where agents enter in an online fashion and are evaluated based on their attributes or context vectors. There can be practical situations where this context is partially observed, and the unobserved part arrives after some delay. We assume that an agent, once departed, cannot re-enter the system. Therefore, the job of the system is to provide an estimated score for the agent based on her instantaneous score and possibly some inference of the instantaneous score over the delayed score. In this paper, we estimate the delayed context via an online convex game between the agent and the system. We argue that the error in the score estimate accumulated over T iterations is small if the regret of the online convex game is small. Further, we leverage side information about the delayed context in the form of a correlation function with the known context. We consider settings where the delay is fixed or chosen arbitrarily by an adversary. Furthermore, we extend the formulation to the setting where the contexts are drawn from some Banach space. Overall, we show that the average penalty for not knowing the delayed context while making a decision scales as O(1/√T), which can be improved to O(log T / T) in a special setting.
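
The abstract's core loop — issue a score using an estimate of the delayed context, then take an online-gradient step once the delayed feedback arrives — can be sketched with a toy linear model. The linear delayed context, squared loss, and step size below are assumptions for illustration, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(3)   # system's model of the delayed context given the observed one
eta = 0.05            # online-gradient step size (arbitrary choice)
penalty = 0.0

for t in range(1000):
    x = rng.standard_normal(3)        # observed part of the context
    z_true = 0.5 * x.sum()            # delayed part, correlated with x (toy assumption)
    z_hat = theta @ x                 # estimate used when the score is issued
    penalty += (z_hat - z_true) ** 2  # squared error of the issued score
    # The delayed information now arrives: one gradient step on the squared loss.
    theta -= eta * 2.0 * (z_hat - z_true) * x

print(penalty / 1000)  # average penalty shrinks because the game's regret is sublinear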
————————————————————

[1807.03402] IGLOO: Slicing the Features Space to Represent Long Sequences
https://arxiv.org/abs/1807.03402

We introduce a new neural network architecture, IGLOO, which aims at providing a representation for long sequences where RNNs fail to converge. The structure uses the relationships between random patches sliced out of the feature space of a backbone 1-dimensional CNN to find a representation. This paper explains the implementation of the method, provides results on benchmarks commonly used for RNNs, and compares IGLOO to other recently published structures. It is found that IGLOO can deal with sequences of up to 25,000 time steps. For shorter sequences it is also found to be effective, and it achieves the highest score in the literature for the permuted MNIST task. Benchmarks also show that IGLOO can run at the speed of the CuDNN-optimised GRU or LSTM without being tied to any specific hardware.
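
A loose sketch of the slicing idea: gather random, possibly non-contiguous patches over a (T, C) feature map and project each to a scalar, giving a fixed-size representation independent of T. The patch count, patch size, and random projection are illustrative stand-ins for the paper's learned architecture:

```python
import numpy as np

rng = np.random.default_rng(42)

def igloo_like(features, n_patches=16, patch_size=4):
    """Slice random patches out of a (T, C) feature map and project each one.

    A toy sketch of the idea in the abstract; not the paper's exact model.
    """
    T, C = features.shape
    out = []
    for _ in range(n_patches):
        idx = rng.choice(T, size=patch_size, replace=False)  # non-contiguous slice
        patch = features[idx].ravel()                        # (patch_size * C,)
        w = rng.standard_normal(patch.size)                  # stand-in projection
        out.append(patch @ w)
    return np.array(out)  # fixed-size vector, however long the sequence is

features = rng.standard_normal((1000, 8))  # stand-in for 1-D CNN backbone output
print(igloo_like(features).shape)  # (16,)
```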
————————————————————

[1807.03523] DLOPT: Deep Learning Optimization Library
https://arxiv.org/abs/1807.03523
Deep learning hyper-parameter optimization is a tough task. Finding an appropriate network configuration is key to success; however, most of the time this labor is done roughly. In this work we introduce a novel library to tackle this problem, the Deep Learning Optimization Library (DLOPT). We briefly describe its architecture and present a set of usage examples. This is an open-source project developed under the GNU GPL v3 license, freely available at this https URL

————————————————————
[1807.03710] Recurrent Auto-Encoder Model for Large-Scale Industrial Sensor Signal Analysis
https://arxiv.org/abs/1807.03710
The recurrent auto-encoder model summarises sequential data through an encoder structure into a fixed-length vector and then reconstructs the original sequence through the decoder structure. The summarised vector can be used to represent time-series features. In this paper, we propose relaxing the dimensionality of the decoder output so that it performs partial reconstruction; the fixed-length vector then represents features in the selected dimensions only. In addition, we propose a rolling fixed-window approach to generate training samples from unbounded time-series data. The change of time-series features over time can be summarised as a smooth trajectory path. The fixed-length vectors are further analysed using additional visualisation and unsupervised clustering techniques. The proposed method can be applied to sensor-signal analysis in large-scale industrial processes, where clusters of the vector representations can reflect the operating states of the industrial system.
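
The rolling fixed-window sampling mentioned in the abstract is straightforward to sketch; the window and step sizes here are arbitrary choices for illustration:

```python
import numpy as np

def rolling_windows(series, window, step=1):
    """Cut an unbounded (T, n_sensors) stream into fixed-length training samples."""
    return np.stack([series[i:i + window]
                     for i in range(0, len(series) - window + 1, step)])

series = np.random.randn(100, 3)  # 100 time steps from 3 sensors
samples = rolling_windows(series, window=20, step=5)
print(samples.shape)  # (17, 20, 3): 17 overlapping windows of 20 steps each
```

Each window is encoded to one fixed-length vector, so consecutive windows trace the trajectory path the abstract describes.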

————————————————————

[1807.03748] Representation Learning with Contrastive Predictive Coding
https://arxiv.org/abs/1807.03748
While supervised learning has enabled great progress in many applications, unsupervised learning has not seen such widespread adoption, and remains an important and challenging endeavor for artificial intelligence. In this work, we propose a universal unsupervised learning approach to extract useful representations from high-dimensional data, which we call Contrastive Predictive Coding. The key insight of our model is to learn such representations by using powerful autoregressive models to predict the future in latent space. We use a probabilistic contrastive loss which induces the latent space to capture information that is maximally useful to predict future samples. It also makes the model tractable by using negative sampling. While most prior work has focused on evaluating representations for a particular modality, we demonstrate that our approach is able to learn useful representations achieving strong performance on four distinct domains: speech, images, text and reinforcement learning in 3D environments.
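
The probabilistic contrastive loss with negative sampling can be illustrated with a toy InfoNCE-style computation: score the true future embedding against negatives, then apply a softmax cross-entropy. The plain dot-product score below is a simplification; CPC itself uses a learned bilinear score and an autoregressive context network:

```python
import numpy as np

def info_nce(z_context, z_future, z_negatives):
    """Contrastive loss for one step: -log softmax probability of the true future."""
    scores = np.array([z_context @ z_future] +
                      [z_context @ zn for zn in z_negatives])
    log_probs = scores - np.log(np.exp(scores).sum())  # log-softmax over candidates
    return -log_probs[0]                               # true future sits at index 0

rng = np.random.default_rng(0)
c = rng.standard_normal(16)                        # context embedding
z_pos = c + 0.1 * rng.standard_normal(16)          # future correlated with context
z_neg = [rng.standard_normal(16) for _ in range(7)]  # negative samples
print(float(info_nce(c, z_pos, z_neg)))            # small loss: positive wins easily
```

Minimising this loss pushes the latent space to keep exactly the information that distinguishes the true future from the negatives.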

[Colab / tf.keras + eager RNN text-generation example] "Text Generation using a RNN - end-to-end example of generating Shakespeare-like text using tf.keras + eager" [link]

[A tour of reinforcement learning, from a continuous-control viewpoint] "A Tour of Reinforcement Learning: The View from Continuous Control" by Benjamin Recht [UC Berkeley] [link]
