If you follow CVPR, you probably know that this year's CVPR includes a Workshop on Autonomous Driving (WAD), along with a semantic segmentation competition hosted on Kaggle. That makes for a valuable set of learning and practice resources, so over the coming stretch I'll be introducing the latest autonomous-driving papers from CVPR 2018 and doing hands-on work with the Kaggle dataset (my abilities are limited, but I'll do my best (′д` )…彡…彡).
You can find all the autonomous-driving papers accepted at this year's CVPR on the WAD website. A real treasure trove, right? Hehe. Below are the paper titles and authors I copied from the site; if any of them interest you, download and read them in detail.
Papers
The ApolloScape Dataset for Autonomous Driving
Xinyu Huang*, Baidu; Xinjing Cheng, Baidu; Qichuan Geng, Baidu; Binbin Cao, Baidu; Dingfu Zhou, Baidu; Peng Wang, Baidu USA LLC; Yuanqing Lin, Baidu; Yang Ruigang, Baidu
Scene Understanding Networks for Autonomous Driving based on Around View Monitoring System
Jeongyeol Baek*, LG Electronics; Ioana Veronica Chelu, Arnia; Livia Iordache, Arnia; Vlad Paunescu, Arnia; HyunJoo Ryu, LG Electronics; Alexandru Ghiuta, Arnia; Andrei Petreanu, Arnia; Yunsung Soh, LG Electronics; Andrei Leica, Arnia; ByeongMoon Jeon, LG Electronics
Training Deep Networks with Synthetic Data: Bridging the Reality Gap by Domain Randomization
Jonathan Tremblay*, Nvidia; Aayush Prakash, Nvidia; David Acuna, Nvidia; Mark Brophy, Nvidia; Varun Jampani, Nvidia Research; Cem Anil, Nvidia; Thang To, Nvidia; Eric Cameracci, Nvidia; Shaad Boonchoon, Nvidia; Stan Birchfield, NVIDIA
On the iterative refinement of densely connected representation levels for semantic segmentation
Arantxa Casanova*, MILA; Guillem Cucurull, Computer Vision Center; Michal Drozdzal, Facebook; Adriana Romero, FAIR; Yoshua Bengio, Universite de Montreal
Minimizing Supervision for Free-space Segmentation
Satoshi Tsutsui, Indiana University; Tommi Kerola*, Preferred Networks, Inc.; Shunta Saito, Preferred Networks, Inc.; David Crandall, Indiana University
Error Correction for Dense Semantic Image Labeling
Yu-Hui Huang*, KU Leuven; Xu Jia, KU Leuven; Stamatios Georgoulis, ETH Zurich; Tinne Tuytelaars, K.U. Leuven; Luc Van Gool, ETH Zurich
On the Importance of Stereo for Accurate Depth Estimation: An Efficient Semi-Supervised Deep Neural Network Approach
Nikolai Smolyanskiy, NVIDIA; Alexey Kamenev, NVIDIA; Stan Birchfield*, NVIDIA
Accurate Deep Direct Geo-Localization from Ground Imagery and Phone-Grade GPS
Shaohui Sun*, Lyft; Ramesh Sarukkai, Lyft; Jack Kwok, Lyft; Vinay Shet, Lyft
Efficient and Safe Vehicle Navigation Based on Driver Behavior Classification
Chor Hei Ernest Cheung*, The University of North Carolina at Chapel Hill; Aniket Bera, The University of North Carolina at Chapel Hill; Dinesh Manocha, University of North Carolina
Detection of Distracted Driver using Convolutional Neural Network
Bhakti Baheti*, SGGSIE&T, Nanded, MH; Suhas Gajre, S.G.G.S. Nanded; Sanjay Talbar, SGGSIET Nanded
Classifying Group Emotions for Socially-Aware Autonomous Vehicle Navigation
Aniket Bera*, The University of North Carolina at Chapel Hill; Tanmay Randhavane, The University of North Carolina at Chapel Hill; Emily Kubin, The University of North Carolina at Chapel Hill; Austin Wang, The University of North Carolina at Chapel Hill; Kurt Gray, The University of North Carolina at Chapel Hill; Dinesh Manocha, University of North Carolina
AutonoVi-Sim: Autonomous Vehicle Simulation Platform with Weather, Sensing, and Traffic Control
Andrew Best*, UNC Chapel Hill; Sahil Narang, UNC Chapel Hill; Lucas Pasqualin, University of Central Florida; Daniel Barber, University of Central Florida; Dinesh Manocha, University of North Carolina
Learning Hierarchical Models for Class-Specific Reconstruction from Natural Data
Arun CS Kumar*, University of Georgia; Suchendra Bhandarkar, University of Georgia; Mukta Prasad, Trinity College, Dublin
Subset Replay based Continual Learning for Scalable Improvement of Autonomous Systems
Pratik Brahma*, Volkswagen Electronics Research Lab; Adrienne Othon, Volkswagen Electronics Research Lab
Beyond reading papers, practice matters too; otherwise you end up like Wang Yuyan in Demi-Gods and Semi-Devils, who knows every martial-arts manual by heart but can't throw a single punch. A good programmer can't get by on looks alone, after all (smug face). Kaggle needs no introduction from me; you can read about the WAD competition and its data on the official site, which also hosts plenty of code and discussions worth learning from. It's a great platform for hands-on practice.
Enough chatter. Time to roll up my sleeves and start reading (the data is still downloading: 95.6 GB, and I'm not sure my poor laptop can take off with that load).
Paper 1: The ApolloScape Dataset for Autonomous Driving
As the title suggests, this paper introduces a dataset (presumably the one behind the Kaggle competition). I'll highlight the key points here; if you're interested, read the paper itself for the details. Also, a salute to Baidu and the other companies and labs that generously open up their datasets (manual salute).
Compared with other related datasets, ApolloScape has five main characteristics:
1. The dataset contains 143,906 frames of driving imagery, divided into easy, moderate, and hard subsets according to scene complexity (how complicated the scene is, measured by the number of vehicles and pedestrians in the image).
2. It is the first outdoor image dataset to provide pixel-level RGB-D data.
3. Lane markings are annotated with fine-grained detail.
4. It introduces an efficient joint 2D/3D annotation pipeline that cuts labeling time by about 70%; the dataset also includes 3D point clouds, making it the first open street-scene dataset with 3D annotations.
5. Annotations are provided at the instance level across video frames, which means users can build spatio-temporal models of moving objects for tasks such as prediction, trajectory tracking, and behavior analysis.
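To make point 1 concrete, here is a minimal Python sketch of the idea of bucketing frames into easy/moderate/hard splits by scene complexity. The thresholds and the per-frame instance counts below are illustrative assumptions of mine, not the official ApolloScape criteria:

```python
def difficulty(num_instances: int) -> str:
    """Bucket a frame by how many movable objects (vehicles +
    pedestrians) it contains. Thresholds are hypothetical."""
    if num_instances <= 5:
        return "easy"
    if num_instances <= 15:
        return "moderate"
    return "hard"

# Hypothetical per-frame instance counts.
frames = {"frame_0001": 3, "frame_0002": 11, "frame_0003": 27}
split = {name: difficulty(n) for name, n in frames.items()}
print(split)  # {'frame_0001': 'easy', 'frame_0002': 'moderate', 'frame_0003': 'hard'}
```

In the actual dataset the counts would come from the instance-level annotations rather than being hard-coded, but the splitting logic is this simple in spirit.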
The dataset download site will keep being updated with new data; it also has a more detailed introduction to the dataset in Chinese, plus information on other Baidu Apollo activities, such as IV 2018.
Let me also plug Baidu's Apollo open platform. I haven't studied it closely yet, but it looks pretty slick.
The first paper is fairly straightforward, so I won't say more about it. As it happens, today I also came across an article analyzing how autonomous-driving companies are doing; it should be a big help to anyone looking for work in the field, since even researchers need to eat.
In short, the startup window for autonomous driving in China has largely closed: a handful of standout startups are already running their own races, and giants like BAT and Huawei are piling in. Autonomous driving has plenty of room to grow (allow me two sentences of lament: sob, I can't find a job).
Finally, best wishes! May we all make progress together.