ICLR 2017

Differences

iclr2017:conference_posters [2017/03/31 16:59] rnogueira
iclr2017:conference_posters [2017/04/23 09:26] (current) hugo
Line 2 → Line 2:
  
 Below are the Conference Track papers presented at each of the poster sessions (on Monday, Tuesday or Wednesday, in the morning or afternoon). To find a paper, look for the poster with the corresponding number in the area dedicated to the Conference Track.
 +
 +====== Note to the Presenters ======
 +Each poster panel is 2 meters wide and 1 meter tall.\\
 +If needed, tape will be provided to attach your poster.
 +
  
 <html><div id='monday_morning'></div></html>
Line 17 → Line 22:
 C11: Pruning Filters for Efficient ConvNets\\
 C12: Stick-Breaking Variational Autoencoders\\
-C13: Understanding deep learning requires rethinking generalization\\
+C13: Identity Matters in Deep Learning\\
 C14: On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima\\
 C15: Recurrent Hidden Semi-Markov Model\\
Line 53 → Line 58:
 C11: PixelCNN++: A PixelCNN Implementation with Discretized Logistic Mixture Likelihood and Other Modifications\\
 C12: Learning to Optimize\\
-C13: Training Compressed Fully-Connected Networks with a Density-Diversity Penalty\\
+C13: Do Deep Convolutional Nets Really Need to be Deep and Convolutional?\\
 C14: Optimal Binary Autoencoding with Pairwise Correlations\\
 C15: On the Quantitative Analysis of Decoder-Based Generative Models\\
Line 69 → Line 74:
 C27: Data Noising as Smoothing in Neural Network Language Models\\
 C28: Neural Variational Inference For Topic Models\\
-C29: Words or Characters? Fine-grained Gating for Reading Comprehension\\
+C29: Bidirectional Attention Flow for Machine Comprehension\\
 C30: Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic\\
 C31: Stochastic Neural Networks for Hierarchical Reinforcement Learning\\
Line 103 → Line 108:
 C25: Query-Reduction Networks for Question Answering\\
 C26: Machine Comprehension Using Match-LSTM and Answer Pointer\\
-C27: Bidirectional Attention Flow for Machine Comprehension\\
+C27: Words or Characters? Fine-grained Gating for Reading Comprehension\\
 C28: Dynamic Coattention Networks For Question Answering\\
 C29: Multi-view Recurrent Neural Acoustic Word Embeddings\\
Line 112 → Line 117:
  
 <html><div id='tuesday_afternoon'></div></html>
-====Tuesday Afternoon (April 25th, 2:30pm to 4:30pm)====
+====Tuesday Afternoon (April 25th, 2:00pm to 4:00pm)====
 C1: Sigma Delta Quantized Networks\\
 C2: Paleo: A Performance Model for Deep Neural Networks\\
Line 140 → Line 145:
 C26: TopicRNN: A Recurrent Neural Network with Long-Range Semantic Dependency\\
 C27: Frustratingly Short Attention Spans in Neural Language Modeling\\
-C28: Offline Bilingual Word Vectors Without a Dictionary\\
+C28: Offline Bilingual Word Vectors, Orthogonal Transformations and the Inverted Softmax\\
 C29: Learning a Natural Language Interface with Neural Programmer\\
 C30: Designing Neural Network Architectures using Reinforcement Learning\\
Line 171 → Line 176:
 C21: Temporal Ensembling for Semi-Supervised Learning\\
 C22: On Detecting Adversarial Perturbations\\
-C23: Identity Matters in Deep Learning\\
+C23: Understanding deep learning requires rethinking generalization\\
 C24: Adversarial Feature Learning\\
 C25: Learning through Dialogue Interactions\\
Line 199 → Line 204:
 C13: Support Regularized Sparse Coding and Its Fast Encoder\\
 C14: Discrete Variational Autoencoders\\
-C15: Do Deep Convolutional Nets Really Need to be Deep and Convolutional?\\
+C15: Training Compressed Fully-Connected Networks with a Density-Diversity Penalty\\
 C16: Efficient Representation of Low-Dimensional Manifolds using Deep Networks\\
 C17: Semi-Supervised Classification with Graph Convolutional Networks\\