ICLR 2017

====== Conference Posters ======
  
Below are the Conference Track papers presented at each of the poster sessions (on Monday, Tuesday or Wednesday, in the morning or evening). To find a paper, look for the poster with the corresponding number in the area dedicated to the Conference Track.

====== Note to the Presenters ======
Each poster panel is 2 meters wide and 1 meter tall.\\
If needed, tape will be provided to put up your poster.

  
<html><div id='monday_morning'></div></html>
C10: Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer\\
C11: Pruning Filters for Efficient ConvNets\\
C12: Stick-Breaking Variational Autoencoders\\
C13: Identity Matters in Deep Learning\\
C14: On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima\\
C15: Recurrent Hidden Semi-Markov Model\\
C11: PixelCNN++: A PixelCNN Implementation with Discretized Logistic Mixture Likelihood and Other Modifications\\
C12: Learning to Optimize\\
C13: Do Deep Convolutional Nets Really Need to be Deep and Convolutional?\\
C14: Optimal Binary Autoencoding with Pairwise Correlations\\
C15: On the Quantitative Analysis of Decoder-Based Generative Models\\
C16: Adversarial Machine Learning at Scale\\
C17: Transfer Learning for Sequence Tagging with Hierarchical Recurrent Networks\\
C18: Capacity and Learnability in Recurrent Neural Networks\\
C20: Exploring Sparsity in Recurrent Neural Networks\\
C21: Structured Attention Networks\\
C22: Learning to Repeat: Fine Grained Action Repetition for Deep Reinforcement Learning\\
C23: Variational Lossy Autoencoder\\
C24: Learning to Query, Reason, and Answer Questions On Ambiguous Texts\\
C27: Data Noising as Smoothing in Neural Network Language Models\\
C28: Neural Variational Inference For Topic Models\\
C29: Bidirectional Attention Flow for Machine Comprehension\\
C30: Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic\\
C31: Stochastic Neural Networks for Hierarchical Reinforcement Learning\\
====Tuesday Morning (April 25th, 10:30am to 12:30pm)====
C1: DeepDSL: A Compilation-based Domain-Specific Language for Deep Learning\\
C2: A Self-Attentive Sentence Embedding\\
C3: Deep Probabilistic Programming\\
C4: Lie-Access Neural Turing Machines\\
C25: Query-Reduction Networks for Question Answering\\
C26: Machine Comprehension Using Match-LSTM and Answer Pointer\\
C27: Words or Characters? Fine-grained Gating for Reading Comprehension\\
C28: Dynamic Coattention Networks For Question Answering\\
C29: Multi-view Recurrent Neural Acoustic Word Embeddings\\
  
<html><div id='tuesday_afternoon'></div></html>
====Tuesday Afternoon (April 25th, 2:00pm to 4:00pm)====
C1: Sigma Delta Quantized Networks\\
C2: Paleo: A Performance Model for Deep Neural Networks\\
C23: Variable Computation in Recurrent Neural Networks\\
C24: Deep Variational Information Bottleneck\\
C25: SampleRNN: An Unconditional End-to-End Neural Audio Generation Model\\
C26: TopicRNN: A Recurrent Neural Network with Long-Range Semantic Dependency\\
C27: Frustratingly Short Attention Spans in Neural Language Modeling\\
C28: Offline Bilingual Word Vectors, Orthogonal Transformations and the Inverted Softmax\\
C29: Learning a Natural Language Interface with Neural Programmer\\
C30: Designing Neural Network Architectures using Reinforcement Learning\\
C21: Temporal Ensembling for Semi-Supervised Learning\\
C22: On Detecting Adversarial Perturbations\\
C23: Understanding deep learning requires rethinking generalization\\
C24: Adversarial Feature Learning\\
C25: Learning through Dialogue Interactions\\
C9: Neural Photo Editing with Introspective Adversarial Networks\\
C10: A Learned Representation For Artistic Style\\
C11: Learning to Remember Rare Events\\
C12: Optimization as a Model for Few-Shot Learning\\
C13: Support Regularized Sparse Coding and Its Fast Encoder\\
C14: Discrete Variational Autoencoders\\
C15: Training Compressed Fully-Connected Networks with a Density-Diversity Penalty\\
C16: Efficient Representation of Low-Dimensional Manifolds using Deep Networks\\
C17: Semi-Supervised Classification with Graph Convolutional Networks\\
C28: Reasoning with Memory Augmented Neural Networks for Language Comprehension\\
C29: Dialogue Learning With Human-in-the-Loop\\
C30: Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations\\
C31: Learning to Play in a Day: Faster Deep Reinforcement Learning by Optimality Tightening\\
C32: Learning Visual Servoing with Deep Features and Trust Region Fitted Q-Iteration\\
C33: An Actor-Critic Algorithm for Sequence Prediction\\