Synced (机器之心) & ArXiv Weekly Radiostation
Contributors: 杜伟, 楚航, 罗若天
This week's featured papers include an open-source, authoritative graph neural network benchmark with Bengio among its authors and a like from LeCun, and a new Nature study in which the image sensor itself acts as a neural network, running a thousand times faster than conventional methods.
Contents:
Benchmarking Graph Neural Networks
How Much Can A Retailer Sell? Sales Forecasting on Tmall
SLIDE: In Defense of Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning Systems
Knowledge Graphs
Ultrafast machine vision with 2D material neural network image sensors
DefogGAN: Predicting Hidden Information in the StarCraft Fog of War with Generative Adversarial Nets
Inverse Graphics GAN: Learning to Generate 3D Shapes from Unstructured 2D Data
ArXiv Weekly Radiostation: more selected papers in NLP, CV, and ML (with audio).
Paper 1: Benchmarking Graph Neural Networks
Authors: Vijay Prakash Dwivedi, Chaitanya K. Joshi, Yoshua Bengio, et al.
Paper link: https://arxiv.org/pdf/2003.00982.pdf
Abstract: A large body of recent work has demonstrated the strong potential of graph neural network (GNN) models, and many teams continue to improve on and build out their basic modules. Most of this research, however, uses small datasets such as Cora and TU, on which even non-graph neural networks perform respectably. The advantages of GNNs only become apparent when the comparison moves to medium-scale datasets.
Following the Open Graph Benchmark released by Stanford GNN expert Jure Leskovec and colleagues, another project aiming to build an "ImageNet for graph neural networks" has appeared. A paper from Nanyang Technological University, Loyola Marymount University, Université de Montréal, and MILA was recently posted to the preprint server. The authors introduce six medium-scale benchmark datasets (12k–70k graphs of 8–500 nodes) and evaluate a set of representative GNNs on them. Besides baseline models that use only node features, the evaluated GNNs fall into two families: with and without attention over edges. The GNN research community has long sought a common benchmark for assessing new models, and this tool may deliver it.
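The split between models with and without edge attention comes down to the aggregation rule inside the message-passing layer. Below is a minimal NumPy sketch of the two families (an isotropic GCN-style layer versus an anisotropic attention layer); it illustrates the general idea only, not the benchmark's implementation, and the parameters `W`, `a_src`, and `a_dst` are stand-ins for learned weights.

```python
import numpy as np

def gcn_layer(H, A, W):
    """Isotropic message passing: every neighbor contributes equally."""
    A_hat = A + np.eye(len(A))                      # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)
    return np.maximum((A_hat @ H @ W) / deg, 0.0)   # mean-aggregate + ReLU

def attention_layer(H, A, W, a_src, a_dst):
    """Anisotropic message passing: neighbors weighted by edge attention."""
    Z = H @ W
    logits = (Z @ a_src)[:, None] + (Z @ a_dst)[None, :]
    logits = np.where(A + np.eye(len(A)) > 0, logits, -np.inf)  # mask non-edges
    alpha = np.exp(logits - logits.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)       # softmax over neighbors
    return np.maximum(alpha @ Z, 0.0)

# Toy 5-node path graph with 4-dimensional node features.
rng = np.random.default_rng(0)
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0
H, W = rng.normal(size=(5, 4)), rng.normal(size=(4, 3))
out_iso = gcn_layer(H, A, W)
out_att = attention_layer(H, A, W, rng.normal(size=3), rng.normal(size=3))
```

Stacking several such layers, with or without residual connections, gives the model variants compared in the benchmark tables.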
Table 1: Summary statistics of the proposed benchmark datasets.
Sample images and superpixel graphs. The SLIC superpixel graphs (at most 75 nodes for MNIST and 150 for CIFAR10) are 8-nearest-neighbor graphs in Euclidean space, with node colors denoting mean pixel intensity.
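The construction of such superpixel graphs can be sketched as follows: assuming the SLIC superpixel centroids have already been extracted (random 2-D points stand in for them here), each node is connected to its 8 nearest neighbors in Euclidean space. A hypothetical helper, not the paper's code:

```python
import numpy as np

def knn_graph(coords, k=8):
    """Symmetric k-nearest-neighbor adjacency over 2-D node coordinates."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)            # pairwise Euclidean distances
    np.fill_diagonal(dist, np.inf)                  # forbid self-edges
    nn = np.argsort(dist, axis=1)[:, :k]            # k closest nodes per row
    A = np.zeros_like(dist)
    A[np.repeat(np.arange(len(coords)), k), nn.ravel()] = 1.0
    return np.maximum(A, A.T)                       # make the graph undirected

# Stand-in for superpixel centroids (e.g. up to 75 nodes for an MNIST image).
rng = np.random.default_rng(0)
centroids = rng.uniform(size=(75, 2))
A = knn_graph(centroids, k=8)
```

Symmetrizing at the end means some nodes end up with degree above 8, which matches the usual convention of treating the k-NN graph as undirected.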
Test results of different methods on the standard MNIST and CIFAR10 test sets (higher is better). Each number is averaged over four runs with four different seeds. Red marks the best result and violet a strong one; bold marks the better of the residual and non-residual variants of each model (both bold when they tie).
Recommendation: This work counts deep learning pioneer Yoshua Bengio among its authors and has also drawn the attention of Yann LeCun.
Paper 2: How Much Can A Retailer Sell? Sales Forecasting on Tmall
Authors: Chaochao Chen, Ziqi Liu, Xingyu Zhong, et al.
Paper link: https://arxiv.org/pdf/2002.11940.pdf
Abstract: Time series forecasting is an important task in both academia and industry, applicable to many real-world prediction problems such as stock, water-supply, and sales forecasting. In this paper, researchers from Ant Financial study sales forecasting for retailers on the Tmall platform. From data analysis they draw two observations: sales exhibit seasonality once retailers are grouped, and sales follow a Tweedie distribution once converted into forecast targets. Based on these observations, they design two sales forecasting mechanisms: seasonality extraction and distribution transformation.
Specifically, they first use Fourier decomposition to automatically extract each retailer's seasonality, which can then serve as additional features for any established regression algorithm. They then propose optimizing a Tweedie loss on sales after a log transform. Finally, they plug the two mechanisms into classic regression models, namely neural networks and gradient boosting decision trees.
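As a rough sketch of the two mechanisms (not the authors' code; retailer grouping and the full training loop are omitted, and `n_freq` and the Tweedie power `p` are illustrative choices), seasonality can be read off the dominant Fourier components, and the Tweedie assumption enters through the loss:

```python
import numpy as np

def seasonal_features(sales, n_freq=3):
    """Keep the strongest n_freq Fourier components of a sales series
    as a smooth seasonality signal, usable as an extra regression feature."""
    spec = np.fft.rfft(sales)
    amp = np.abs(spec)
    amp[0] = 0.0                                  # skip the constant term
    keep = np.argsort(amp)[-n_freq:]              # dominant periodicities
    filtered = np.zeros_like(spec)
    filtered[keep] = spec[keep]
    filtered[0] = spec[0]                         # retain the series mean
    return np.fft.irfft(filtered, n=len(sales))

def tweedie_loss(y, mu, p=1.5):
    """Tweedie negative log-likelihood (up to a constant), for 1 < p < 2."""
    return np.mean(-y * mu ** (1 - p) / (1 - p) + mu ** (2 - p) / (2 - p))

# Toy weekly-seasonal sales series.
rng = np.random.default_rng(0)
t = np.arange(112)
sales = np.exp(1.0 + 0.5 * np.sin(2 * np.pi * t / 7) + 0.1 * rng.normal(size=t.size))
season = seasonal_features(np.log1p(sales))       # log transform, then extract
loss = tweedie_loss(sales, mu=np.full_like(sales, sales.mean()))
```

The Tweedie loss above is minimized pointwise at `mu = y`, so a model trained against it is pushed toward the skewed, zero-inflated sales distribution rather than a Gaussian one.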
Sales of different retailers on the Tmall platform fluctuate seasonally.
Seasonality extraction results for two groups of retailers.
Recommendation: Based on experiments on the Tmall dataset, the researchers report that both proposed mechanisms substantially improve retailers' sales forecasting accuracy.
Paper 3: SLIDE: In Defense of Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning Systems
Authors: Beidi Chen, Tharun Medini, Anshumali Shrivastava, et al.
Paper link: https://

This week's 10 selected NLP papers include:
…uracy and document Structure for Answer Sentence Selection. (from Daniele Bonadiman, Alessandro Moschitti)
This week's 10 selected CV papers are:
1. Rethinking Zero-shot Video Classification: End-to-end Training for Realistic Applications. (from Biagio Brattoli, Joe Tighe, Fedor Zhdanov, Pietro Perona, Krzysztof Chalupka)
2. Creating High Resolution Images with a Latent Adversarial Generator. (from David Berthelot, Peyman Milanfar, Ian Goodfellow)
3. A U-Net based Discriminator for Generative Adversarial Networks. (from Edgar Schönfeld, Bernt Schiele, Anna Khoreva)
4. Towards Noise-resistant Object Detection with Noisy Annotations. (from Junnan Li, Caiming Xiong, Richard Socher, Steven Hoi)
5. Image Matching across Wide Baselines: From Paper to Practice. (from Yuhe Jin, Dmytro Mishkin, Anastasiia Mishchuk, Jiri Matas, Pascal Fua, Kwang Moo Yi, Eduard Trulls)
6. Holistically-Attracted Wireframe Parsing. (from Nan Xue, Tianfu Wu, Song Bai, Fu-Dong Wang, Gui-Song Xia, Liangpei Zhang, Philip H.S. Torr)
7. Inverse Graphics GAN: Learning to Generate 3D Shapes from Unstructured 2D Data. (from Sebastian Lunz, Yingzhen Li, Andrew Fitzgibbon, Nate Kushman)
8. Feature Extraction for Hyperspectral Imagery: The Evolution from Shallow to Deep. (from Behnood Rasti, Danfeng Hong, Renlong Hang, Pedram Ghamisi, Xudong Kang, Jocelyn Chanussot, Jon Atli Benediktsson)
9. Predicting Sharp and Accurate Occlusion Boundaries in Monocular Depth Estimation Using Displacement Fields. (from Michael Ramamonjisoa, Yuming Du, Vincent Lepetit)
10. Adversarial Deepfakes: Evaluating Vulnerability of Deepfake Detectors to Adversarial Examples. (from Paarth Neekhara, Shehzeen Hussain, Malhar Jere, Farinaz Koushanfar, Julian McAuley)
This week's 10 selected ML papers are:
1. SLEIPNIR: Deterministic and Provably Accurate Feature Expansion for Gaussian Process Regression with Derivatives. (from Emmanouil Angelis, Philippe Wenk, Bernhard Schölkopf, Stefan Bauer, Andreas Krause)
2. Fuzzy k-Nearest Neighbors with monotonicity constraints: Moving towards the robustness of monotonic noise. (from Sergio González, Salvador García, Sheng-Tun Li, Robert John, Francisco Herrera)
3. Correlated Feature Selection with Extended Exclusive Group Lasso. (from Yuxin Sun, Benny Chain, Samuel Kaski, John Shawe-Taylor)
4. Decentralized gradient methods: does topology matter?. (from Giovanni Neglia, Chuan Xu, Don Towsley, Gianmarco Calbi)
5. Adversarial Robustness Through Local Lipschitzness. (from Yao-Yuan Yang, Cyrus Rashtchian, Hongyang Zhang, Ruslan Salakhutdinov, Kamalika Chaudhuri)
6. Self-Supervised Object-Level Deep Reinforcement Learning. (from William Agnew, Pedro Domingos)
7. BERT as a Teacher: Contextual Embeddings for Sequence-Level Reward. (from Florian Schmidt, Thomas Hofmann)
8. Analyzing Accuracy Loss in Randomized Smoothing Defenses. (from Yue Gao, Harrison Rosenberg, Kassem Fawaz, Somesh Jha, Justin Hsu)
9. Hierarchically Decoupled Imitation for Morphological Transfer. (from Donald J. Hejna III, Pieter Abbeel, Lerrel Pinto)
10. Curriculum By Texture. (from Samarth Sinha, Animesh Garg, Hugo Larochelle)