ICLR 2015

Basic Information

View of the hotel

When

May 7 - 9, 2015

Where

The Hilton San Diego Resort & Spa

There is a negotiated room rate for ICLR 2015. Please use this link for reservations. If you have difficulty with the booking site, please call the Hilton San Diego's in-house reservation team directly at +1-619-276-4010 ext. 1.

Registration

Anyone registering after April 29, 2015 will need to see Karen Smith at the registration desk for a badge.

Late registration regular $800
Late registration student $600

Note that the registration fee includes breakfast, coffee breaks, dinner, and the joint ICLR/AISTATS reception. See the conference schedule for the timing of these events.

Online Registration Form

Important Dates

19 Dec. 2014 Authors submit papers to ICLR 2015 via CMT before 11:59 pm PST
26 Dec. 2014 Authors update their submissions with the arXiv number and URL if they were not available on 19 Dec. 2014.
02 Jan. 2015 Reviewers receive their assignments.
09 Feb. 2015 Reviewers submit their reviews.
27 Feb. 2015 Authors post their initial responses to the reviews.
09 Mar. 2015 End of discussion period for papers.
20 Mar. 2015 Decisions sent to authors.
06 Apr. 2015 Deadline for early registration and to register for the hotel at the conference rate.

Committee

General Chairs

Yoshua Bengio, Université de Montréal
Yann LeCun, New York University and Facebook

Program Chairs

Brian Kingsbury, IBM Research
Samy Bengio, Google
Nando de Freitas, University of Oxford
Hugo Larochelle, Université de Sherbrooke

Contact

iclr2015.programchairs@gmail.com

Discussion, Forum, Pictures on the ICLR Facebook Page

Sponsors

ICLR 2015 gratefully acknowledges the support of its sponsors.


Gold

Facebook

Silver

Bronze

NEC Laboratories America
Imagia


Conference Wireless Access

network: Hilton Resort
username: iclr2015
password: deeplearning

Conference Schedule

Date Start End Event Details
May 7 0730 0900 breakfast South Poolside – Sponsored by Baidu
0900 1230 Oral Session – International Ballroom
0900 0940 keynote Antoine Bordes (Facebook), Artificial Tasks for Artificial Intelligence (slides) Video1 Video2
0940 1000 oral Word Representations via Gaussian Embedding by Luke Vilnis and Andrew McCallum (University of Massachusetts Amherst) (slides) Video
1000 1020 oral Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN) by Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, Alan Yuille (Baidu and UCLA) (slides) Video
1020 1050 coffee break
1050 1130 keynote David Silver (Google DeepMind), Deep Reinforcement Learning (slides) Video1 Video2
1130 1150 oral Deep Structured Output Learning for Unconstrained Text Recognition by Max Jaderberg, Karen Simonyan, Andrea Vedaldi, Andrew Zisserman (Oxford University and Google DeepMind) (slides) Video
1150 1210 oral Very Deep Convolutional Networks for Large-Scale Image Recognition by Karen Simonyan, Andrew Zisserman (Oxford) (slides) Video
1210 1230 oral Fast Convolutional Nets With fbfft: A GPU Performance Evaluation by Nicolas Vasilache, Jeff Johnson, Michael Mathieu, Soumith Chintala, Serkan Piantino, Yann LeCun (Facebook AI Research) (slides) Video
1230 1400 lunch On your own
1400 1700 posters Workshop Poster Session 1 – The Pavilion
1730 1900 dinner South Poolside – Sponsored by Google
May 8 0730 0900 breakfast South Poolside – Sponsored by Facebook
0900 1230 Oral Session – International Ballroom
0900 0940 keynote Terrence Sejnowski (Salk Institute), Beyond Representation Learning Video1 Video2
0940 1000 oral Reweighted Wake-Sleep (slides) Video
1000 1020 oral The local low-dimensionality of natural images (slides) Video
1020 1050 coffee break
1050 1130 keynote Percy Liang (Stanford), Learning Latent Programs for Question Answering (slides) Video1 Video2
1130 1150 oral Memory Networks (slides) Video
1150 1210 oral Object detectors emerge in Deep Scene CNNs (slides) Video
1210 1230 oral Qualitatively characterizing neural network optimization problems (slides) Video
1230 1400 lunch On your own
1400 1700 posters Workshop Poster Session 2 – The Pavilion
1730 1900 dinner South Poolside – Sponsored by IBM Watson
May 9 0730 0900 breakfast South Poolside – Sponsored by Qualcomm
0900 0940 keynote Hal Daumé III (U. Maryland), Algorithms that Learn to Think on their Feet (slides) Video
0940 1000 oral Neural Machine Translation by Jointly Learning to Align and Translate (slides) Video
1000 1030 coffee break
1030 1330 posters Conference Poster Session – The Pavilion (AISTATS attendees are invited to this poster session)
1330 1700 lunch and break On your own
1700 1800 ICLR/AISTATS Oral Session – International Ballroom
1700 1800 keynote Pierre Baldi (UC Irvine), The Ebb and Flow of Deep Learning: a Theory of Local Learning Video
1800 2000 ICLR/AISTATS reception Fresco's (near the pool)

Keynote Talks

Antoine Bordes

Artificial Tasks for Artificial Intelligence

Despite great recent advances, the road towards intelligent machines able to reason and adapt in real time in multimodal environments remains long and uncertain. This final goal is so complex and so far off that it is impossible to perform experiments and research directly under the desired final conditions, so one has to use intermediate and/or proxy tasks as midway goals. Some of those tasks, like object detection in computer vision or machine translation in natural language processing, are very useful on their own and fuel many applications. However, such intermediate tasks are already very difficult, and it is not obvious that they are well-suited testbeds for designing intelligent systems: their inherent complexity makes it hard to precisely interpret the behavior and true capabilities of algorithms, in particular regarding key sophisticated capabilities like reasoning and planning. Hence, in this talk, we advocate the use of controlled artificial environments for developing research in AI: environments in which one can precisely study the behavior of algorithms and unambiguously assess their abilities.

This talk follows from joint work and discussions with Jason Weston, Sumit Chopra, Tomas Mikolov and Leon Bottou, among others.

David Silver

Deep Reinforcement Learning

In this talk I will discuss how reinforcement learning (RL) can be combined with deep learning (DL). There are several ways to combine DL and RL, including value-based, policy-based, and model-based approaches with planning. Several of these approaches have well-known divergence issues, and I will present simple methods for addressing these instabilities. These methods have achieved notable success in the Atari 2600 domain. I will present a selection of recent results that improve on the published state of the art in Atari and other challenging domains. Finally, I will discuss how RL can be used to improve DL, even when the native problem is supervised or unsupervised learning.
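The abstract comes with no code, but the value-based stabilization idea it alludes to can be sketched in a few lines: bootstrap the temporal-difference target from a periodically synced frozen copy of the value function, as in DQN. Everything below (random features standing in for states, the rewards, all constants) is a hypothetical toy, not the talk's actual method:

    # Minimal sketch of Q-learning with a frozen target network (toy setting).
    import numpy as np

    rng = np.random.default_rng(0)
    n_features, n_actions = 8, 4
    w = rng.normal(size=(n_actions, n_features))   # online value weights
    w_target = w.copy()                            # frozen target copy
    alpha, gamma, sync_every = 0.01, 0.99, 100     # arbitrary toy constants

    for step in range(1000):
        phi = rng.normal(size=n_features)          # stand-in state features
        a = int(np.argmax(w @ phi))                # greedy action choice
        r = rng.normal()                           # stand-in reward
        phi_next = rng.normal(size=n_features)     # stand-in next-state features
        # Bootstrap from the frozen target weights, not the online ones:
        td_target = r + gamma * np.max(w_target @ phi_next)
        w[a] += alpha * (td_target - w[a] @ phi) * phi
        if step % sync_every == 0:
            w_target = w.copy()                    # periodically sync the target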

Terrence Sejnowski

Beyond Representation Learning

As we build ever deeper networks with ever more sophisticated representations, it is a good time to pause and ask ourselves where this will end. Building ever taller skyscrapers gets our heads in the clouds, but will it get us to the moon? A good place to look for answers is nature. This lecture will start with a look at the hierarchy of cortical areas, from which much of our intuition about deep learning came, and will explore the essential brain regions these cortical areas communicate with that give rise to intelligent behavior.

Percy Liang

Learning Latent Programs for Question Answering

“The first Summer Olympics that had at least 20 nations took place in which city?” We tackle the problem of building a system to answer such questions, which involve computing the answer. We propose a methodology based on semantic parsing, where we map a question onto a latent program (logical form), whose execution yields the answer (denotation). To obtain both depth (complexity of the program) and breadth (diversity of the questions/domains), we define a new task of answering a complex question from semi-structured tables on the web. We show promising results on the new dataset and invite the community to take on this challenge.
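To make the pipeline concrete, here is a toy illustration (mine, not the system described in the talk): a hand-written latent program is executed against a small semi-structured table to produce the denotation. The table rows and nation counts are made-up stand-ins, and in the actual task logical forms are learned rather than hand-coded:

    # Hypothetical mini-table of Summer Olympics (counts are illustrative only).
    olympics = [
        {"year": 1896, "city": "Athens", "nations": 14},
        {"year": 1900, "city": "Paris", "nations": 24},
        {"year": 1904, "city": "St. Louis", "nations": 12},
    ]

    # A latent program (logical form) for the question in the abstract:
    # filter rows with nations >= 20, take the earliest, project its city.
    def program(table):
        rows = [r for r in table if r["nations"] >= 20]
        first = min(rows, key=lambda r: r["year"])
        return first["city"]

    print(program(olympics))  # the denotation: "Paris"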

Hal Daumé III

Algorithms that Learn to Think on their Feet

The classic framework of machine learning is: example in, prediction out. This is great when examples are fully available, but it is very different from how humans reason. We get some information and may make a prediction, or we may decide to get more information. For us, it is worth spending effort on hard and important decisions (e.g., foreign policy), but not on easy or low-cost ones (e.g., afternoon snacks).

I'll describe our recent work that focuses on information cost, value, and time. I'll show examples from three settings in natural language processing: syntactic parsing, question answering in competitions, and simultaneous machine translation. The last is the problem of incrementally producing a translation of a foreign sentence before the entire sentence is “heard”, and it is challenging even for well-trained humans.
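As a schematic illustration of the simultaneous setting (my sketch, not the speakers' system), the loop below “hears” one source word at a time and decides whether to commit a partial translation or wait for more input; the translate function and its confidence score are hypothetical placeholders:

    # Toy simultaneous translation: commit output before the input ends.
    def translate(prefix):
        # Placeholder: "translate" a prefix and report a made-up confidence
        # that grows with the amount of input seen.
        return " ".join(w.upper() for w in prefix), len(prefix) / (len(prefix) + 2)

    def simultaneous(source_words, threshold=0.6):
        committed, seen = [], []
        for w in source_words:
            seen.append(w)                        # hear one more source word
            draft, confidence = translate(seen)
            if confidence >= threshold:           # confident enough: commit now
                committed.append(draft)
                seen = []
        if seen:                                  # flush whatever remains
            committed.append(translate(seen)[0])
        return " ".join(committed)

    print(simultaneous("le chat est ici".split()))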

This is joint work with a number of fantastic collaborators: Jordan Boyd-Graber, Leonardo Claudino, Jason Eisner, Lise Getoor, Alvin Grissom II, He He, Mohit Iyyer, John Morgan, Jay Pujara and Richard Socher.

Pierre Baldi

The Ebb and Flow of Deep Learning: a Theory of Local Learning

In a physical neural system, where storage and processing are intertwined, the learning rules for adjusting synaptic weights can only depend on local variables, such as the activity of the pre- and post-synaptic neurons. Thus learning models must specify two things: (1) which variables are to be considered local; and (2) which kind of function combines these local variables into a learning rule. We consider polynomial learning rules and analyze their behavior and capabilities in both linear and non-linear networks. As a byproduct, this framework enables the discovery of new learning rules and important relationships between learning rules and group symmetries.
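For concreteness, here is a minimal sketch (not from the talk) of one such polynomial local rule: each weight update depends only on its own pre- and post-synaptic activities, here a bilinear Hebbian term plus weight decay. The layer sizes, nonlinearity, and constants are arbitrary choices:

    # One layer updated by a local (Hebbian + decay) polynomial learning rule.
    import numpy as np

    rng = np.random.default_rng(1)
    n_in, n_out, eta = 5, 3, 0.01
    W = rng.normal(scale=0.1, size=(n_out, n_in))

    for _ in range(100):
        pre = rng.normal(size=n_in)      # pre-synaptic activity (stand-in input)
        post = np.tanh(W @ pre)          # post-synaptic activity
        # dW[i, j] depends only on local variables: post[i], pre[j], W[i, j].
        W += eta * (np.outer(post, pre) - 0.1 * W)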

Stacking local learning rules in deep feedforward networks leads to deep local learning. While deep local learning can learn interesting representations, it cannot learn complex input-output functions, even when targets are available for the top layer. Learning complex input-output functions requires instead local deep learning, where target information is transmitted to the deep layers, thereby raising two fundamental issues: (1) the nature of the transmission channel; and (2) the nature and amount of information transmitted over this channel. This leads to the class of deep targets learning algorithms, which provide targets for the deep layers, and its stratification along the information spectrum, illuminating the remarkable power and uniqueness of the backpropagation algorithm. The theory clarifies the concept of Hebbian learning and what is learnable by Hebbian learning, and explains the sparsity of the space of learning rules discovered so far and the unique role backpropagation plays in this space.

Poster Sessions

May 9 Conference Poster Session

Board / Presentation
2 FitNets: Hints for Thin Deep Nets, Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio
3 Techniques for Learning Binary Stochastic Feedforward Neural Networks, Tapani Raiko, Mathias Berglund, Guillaume Alain, and Laurent Dinh
4 Reweighted Wake-Sleep, Jörg Bornschein and Yoshua Bengio
5 Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs, Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan Yuille
7 Multiple Object Recognition with Visual Attention, Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu
8 Deep Narrow Boltzmann Machines are Universal Approximators, Guido Montufar
9 Transformation Properties of Learned Visual Representations, Taco Cohen and Max Welling
10 Joint RNN-Based Greedy Parsing and Word Composition, Joël Legrand and Ronan Collobert
11 Adam: A Method for Stochastic Optimization, Diederik Kingma and Jimmy Ba
13 Neural Machine Translation by Jointly Learning to Align and Translate, Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio
15 Scheduled denoising autoencoders, Krzysztof Geras and Charles Sutton
16 Embedding Entities and Relations for Learning and Inference in Knowledge Bases, Bishan Yang, Scott Yih, Xiaodong He, Jianfeng Gao, and Li Deng
18 The local low-dimensionality of natural images, Olivier Hénaff, Johannes Ballé, Neil Rabinowitz, and Eero Simoncelli
20 Explaining and Harnessing Adversarial Examples, Ian Goodfellow, Jon Shlens, and Christian Szegedy
22 Modeling Compositionality with Multiplicative Recurrent Neural Networks, Ozan Irsoy and Claire Cardie
24 Very Deep Convolutional Networks for Large-Scale Image Recognition, Karen Simonyan and Andrew Zisserman
25 Speeding-up Convolutional Neural Networks Using Fine-tuned CP-Decomposition, Vadim Lebedev, Yaroslav Ganin, Victor Lempitsky, Maksim Rakhuba, and Ivan Oseledets
27 Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN), Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, and Alan Yuille
28 Deep Structured Output Learning for Unconstrained Text Recognition, Max Jaderberg, Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman
30 Zero-bias autoencoders and the benefits of co-adapting features, Kishore Konda, Roland Memisevic, and David Krueger
31 Automatic Discovery and Optimization of Parts for Image Classification, Sobhan Naderi Parizi, Andrea Vedaldi, Andrew Zisserman, and Pedro Felzenszwalb
33 Understanding Locally Competitive Networks, Rupesh Srivastava, Jonathan Masci, Faustino Gomez, and Juergen Schmidhuber
35 Leveraging Monolingual Data for Crosslingual Compositional Word Representations, Hubert Soyer, Pontus Stenetorp, and Akiko Aizawa
36 Move Evaluation in Go Using Deep Convolutional Neural Networks, Chris Maddison, Aja Huang, Ilya Sutskever, and David Silver
38 Fast Convolutional Nets With fbfft: A GPU Performance Evaluation, Nicolas Vasilache, Jeff Johnson, Michael Mathieu, Soumith Chintala, Serkan Piantino, and Yann LeCun
40 Word Representations via Gaussian Embedding, Luke Vilnis and Andrew McCallum
41 Qualitatively characterizing neural network optimization problems, Ian Goodfellow and Oriol Vinyals
42 Memory Networks, Jason Weston, Sumit Chopra, and Antoine Bordes
43 Generative Modeling of Convolutional Neural Networks, Jifeng Dai, Yang Lu, and Ying-Nian Wu
44 A Unified Perspective on Multi-Domain and Multi-Task Learning, Yongxin Yang and Timothy Hospedales
45 Object detectors emerge in Deep Scene CNNs, Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba

May 7 Workshop Poster Session

Board / Presentation
2 Learning Non-deterministic Representations with Energy-based Ensembles, Maruan Al-Shedivat, Emre Neftci, and Gert Cauwenberghs
3 Diverse Embedding Neural Network Language Models, Kartik Audhkhasi, Abhinav Sethy, and Bhuvana Ramabhadran
4 Hot Swapping for Online Adaptation of Optimization Hyperparameters, Kevin Bache, Dennis Decoste, and Padhraic Smyth
5 Representation Learning for cold-start recommendation, Gabriella Contardo, Ludovic Denoyer, and Thierry Artieres
6 Training Convolutional Networks with Noisy Labels, Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev, and Rob Fergus
7 Striving for Simplicity: The All Convolutional Net, Alexey Dosovitskiy, Jost Tobias Springenberg, Thomas Brox, and Martin Riedmiller
8 Learning linearly separable features for speech recognition using convolutional neural networks, Dimitri Palaz, Mathew Magimai Doss, and Ronan Collobert
9 Training Deep Neural Networks on Noisy Labels with Bootstrapping, Scott Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru Erhan, and Andrew Rabinovich
10 On the Stability of Deep Networks, Raja Giryes, Guillermo Sapiro, and Alex Bronstein
11 Audio source separation with Discriminative Scattering Networks, Joan Bruna, Yann LeCun, and Pablo Sprechmann
13 Simple Image Description Generator via a Linear Phrase-Based Model, Pedro Pinheiro, Rémi Lebret, and Ronan Collobert
15 Stochastic Descent Analysis of Representation Learning Algorithms, Richard Golden
16 On Distinguishability Criteria for Estimating Generative Models, Ian Goodfellow
18 Embedding Word Similarity with Neural Machine Translation, Felix Hill, Kyunghyun Cho, Sebastien Jean, Coline Devin, and Yoshua Bengio
20 Deep metric learning using Triplet network, Elad Hoffer and Nir Ailon
22 Understanding Minimum Probability Flow for RBMs Under Various Kinds of Dynamics, Daniel Jiwoong Im, Ethan Buchman, and Graham Taylor
23 A Group Theoretic Perspective on Unsupervised Deep Learning, Arnab Paul and Suresh Venkatasubramanian
24 Learning Longer Memory in Recurrent Neural Networks, Tomas Mikolov, Armand Joulin, Sumit Chopra, Michael Mathieu, and Marc'Aurelio Ranzato
25 Inducing Semantic Representation from Text by Jointly Predicting and Factorizing Relations, Ivan Titov and Ehsan Khoddam
27 NICE: Non-linear Independent Components Estimation, Laurent Dinh, David Krueger, and Yoshua Bengio
28 Discovering Hidden Factors of Variation in Deep Networks, Brian Cheung, Jesse Livezey, Arjun Bansal, and Bruno Olshausen
29 Tailoring Word Embeddings for Bilexical Predictions: An Experimental Comparison, Pranava Swaroop Madhyastha, Xavier Carreras, and Ariadna Quattoni
30 On Learning Vector Representations in Hierarchical Label Spaces, Jinseok Nam and Johannes Fürnkranz
31 In Search of the Real Inductive Bias: On the Role of Implicit Regularization in Deep Learning, Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro
33 Algorithmic Robustness for Semi-Supervised (ϵ, γ, τ)-Good Metric Learning, Maria-Irina Nicolae, Marc Sebban, Amaury Habrard, Éric Gaussier, and Massih-Reza Amini
35 Real-World Font Recognition Using Deep Network and Domain Adaptation, Zhangyang Wang, Jianchao Yang, Hailin Jin, Eli Shechtman, Aseem Agarwala, Jon Brandt, and Thomas Huang
36 Score Function Features for Discriminative Learning, Majid Janzamin, Hanie Sedghi, and Anima Anandkumar
38 Parallel training of DNNs with Natural Gradient and Parameter Averaging, Daniel Povey, Xiaohui Zhang, and Sanjeev Khudanpur
40 A Generative Model for Deep Convolutional Learning, Yunchen Pu, Xin Yuan, and Lawrence Carin
41 Random Forests Can Hash, Qiang Qiu, Guillermo Sapiro, and Alex Bronstein
42 Provable Methods for Training Neural Networks with Sparse Connectivity, Hanie Sedghi and Anima Anandkumar
43 Visual Scene Representations: sufficiency, minimality, invariance and approximation with deep convolutional networks, Stefano Soatto and Alessandro Chiuso
44 Deep learning with Elastic Averaging SGD, Sixin Zhang, Anna Choromanska, and Yann LeCun
45 Example Selection For Dictionary Learning, Tomoki Tsuchida and Garrison Cottrell
46 Permutohedral Lattice CNNs, Martin Kiefel, Varun Jampani, and Peter Gehler
47 Unsupervised Domain Adaptation with Feature Embeddings, Yi Yang and Jacob Eisenstein
49 Weakly Supervised Multi-embeddings Learning of Acoustic Models, Gabriel Synnaeve and Emmanuel Dupoux

May 8 Workshop Poster Session

Board / Presentation
2 Learning Activation Functions to Improve Deep Neural Networks, Forest Agostinelli, Matthew Hoffman, Peter Sadowski, and Pierre Baldi
3 Restricted Boltzmann Machine for Classification with Hierarchical Correlated Prior, Gang Chen and Sargur Srihari
4 Learning Deep Structured Models, Liang-Chieh Chen, Alexander Schwing, Alan Yuille, and Raquel Urtasun
5 N-gram-Based Low-Dimensional Representation for Document Classification, Rémi Lebret and Ronan Collobert
6 Low precision arithmetic for deep learning, Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David
7 Theano-based Large-Scale Visual Recognition with Multiple GPUs, Weiguang Ding, Ruoyan Wang, Fei Mao, and Graham Taylor
8 Improving zero-shot learning by mitigating the hubness problem, Georgiana Dinu and Marco Baroni
9 Incorporating Both Distributional and Relational Semantics in Word Representations, Daniel Fried and Kevin Duh
10 Variational Recurrent Auto-Encoders, Otto Fabius and Joost van Amersfoort
11 Learning Compact Convolutional Neural Networks with Nested Dropout, Chelsea Finn, Lisa Anne Hendricks, and Trevor Darrell
13 Compact Part-Based Image Representations: Extremal Competition and Overgeneralization, Marc Goessling and Yali Amit
15 Unsupervised Feature Learning from Temporal Data, Ross Goroshin, Joan Bruna, Jonathan Tompson, David Eigen, and Yann LeCun
16 Classifier with Hierarchical Topographical Maps as Internal Representation, Pitoyo Hartono, Paul Hollensen, and Thomas Trappenberg
18 Entity-Augmented Distributional Semantics for Discourse Relations, Yangfeng Ji and Jacob Eisenstein
20 Flattened Convolutional Neural Networks for Feedforward Acceleration, Jonghoon Jin, Aysegul Dundar, and Eugenio Culurciello
22 Gradual Training Method for Denoising Auto Encoders, Alexander Kalmanovich and Gal Chechik
23 Deep Gaze I: Boosting Saliency Prediction with Feature Maps Trained on ImageNet, Matthias Kümmerer, Lucas Theis, and Matthias Bethge
24 Difference Target Propagation, Dong-Hyun Lee, Saizheng Zhang, Asja Fischer, Antoine Biard, and Yoshua Bengio
25 Predictive encoding of contextual relationships for perceptual inference, interpolation and prediction, Mingmin Zhao, Chengxu Zhuang, Yizhou Wang, and Tai Sing Lee
27 Purine: A Bi-Graph based deep learning framework, Min Lin, Shuo Li, Xuan Luo, and Shuicheng Yan
28 Pixel-wise Deep Learning for Contour Detection, Jyh-Jing Hwang and Tyng-Luh Liu
29 Ensemble of Generative and Discriminative Techniques for Sentiment Analysis of Movie Reviews, Grégoire Mesnil, Tomas Mikolov, Marc'Aurelio Ranzato, and Yoshua Bengio
30 Fast Label Embeddings for Extremely Large Output Spaces, Paul Mineiro and Nikos Karampatziakis
31 An Analysis of Unsupervised Pre-training in Light of Recent Advances, Tom Paine, Pooya Khorrami, Wei Han, and Thomas Huang
33 Fully Convolutional Multi-Class Multiple Instance Learning, Deepak Pathak, Evan Shelhamer, Jonathan Long, and Trevor Darrell
35 What Do Deep CNNs Learn About Objects?, Xingchao Peng, Baochen Sun, Karim Ali, and Kate Saenko
36 Representation using the Weyl Transform, Qiang Qiu, Andrew Thompson, Robert Calderbank, and Guillermo Sapiro
38 Denoising autoencoder with modulated lateral connections learns invariant representations of natural images, Antti Rasmus, Harri Valpola, and Tapani Raiko
40 Towards Deep Neural Network Architectures Robust to Adversarial Examples, Shixiang Gu and Luca Rigazio
41 Explorations on high dimensional landscapes, Levent Sagun, Ugur Guney, and Yann LeCun
42 Generative Class-conditional Autoencoders, Jan Rudy and Graham Taylor
43 Attention for Fine-Grained Categorization, Pierre Sermanet, Andrea Frome, and Esteban Real
44 A Baseline for Visual Instance Retrieval with Deep Convolutional Networks, Ali Sharif Razavian, Josephine Sullivan, Atsuto Maki, and Stefan Carlsson
45 Visual Scene Representation: Scaling and Occlusion, Stefano Soatto, Jingming Dong, and Nikolaos Karianakis
46 Deep networks with large output spaces, Sudheendra Vijayanarasimhan, Jon Shlens, Jay Yagnik, and Rajat Monga
47 Efficient Exact Gradient Update for training Deep Networks with Very Large Sparse Targets, Pascal Vincent
49 Self-informed neural network structure learning, David Warde-Farley, Andrew Rabinovich, and Dragomir Anguelov

Presentation Guidelines

Conference Orals

* Each oral has a 20-minute time slot. Please prepare 15 minutes of material, and plan to use the last 5 minutes for questions and switching between speakers.

Poster Presentations

* The poster boards are 4' high x 8' wide (120 cm high x 240 cm wide).