ICLR 2017

Differences

This shows you the differences between two versions of the page.


iclr2017:workshop_posters [2017/03/31 17:00] rnogueira
iclr2017:workshop_posters [2017/04/23 09:27] (current) hugo
Line 2: Line 2:
  
 Below are the Workshop Track papers presented at each of the poster sessions (on Monday, Tuesday or Wednesday, in the morning or evening). To find a paper, look for the poster with the corresponding number in the area dedicated to the Workshop Track.
 +
 +======Note to the Presenters======
 +Each poster panel is 2 meters wide and 1 meter tall.\\
 +If needed, tape will be provided to attach your poster.
  
 <html><div id='monday_morning'></div></html>
Line 12: Line 16:
 W6: Accelerating Eulerian Fluid Simulation With Convolutional Networks\\
 W7: Forced to Learn: Discovering Disentangled Representations Without Exhaustive Labels\\
-W8: Deep Nets Don't Learn via Memorization\\
+W8: Dataset Augmentation in Feature Space\\
 W9: Learning Algorithms for Active Learning\\
 W10: Reinterpreting Importance-Weighted Autoencoders\\
 W11: Robustness to Adversarial Examples through an Ensemble of Specialists\\
-W12: Neural Expectation Maximization\\
+W12: (empty)\\
 W13: On Hyperparameter Optimization in Learning Systems\\
 W14: Recurrent Normalization Propagation\\
Line 23: Line 27:
 W17: Joint Embeddings of Scene Graphs and Images\\
 W18: Unseen Style Transfer Based on a Conditional Fast Style Transfer Network\\
 +
  
 <html><div id='monday_afternoon'></div></html>
Line 44: Line 49:
 W17: Adversarial Discriminative Domain Adaptation (workshop extended abstract)\\
 W18: Efficient Sparse-Winograd Convolutional Neural Networks\\
 +W19: Neural Expectation Maximization\\
 +
  
 <html><div id='tuesday_morning'></div></html>
Line 68: Line 75:
  
 <html><div id='tuesday_afternoon'></div></html>
-====Tuesday Afternoon (April 25th, 2:30pm to 4:30pm)====
+====Tuesday Afternoon (April 25th, 2:00pm to 4:00pm)====
 W1: Lifelong Perceptual Programming By Example\\
 W2: Neu0\\
Line 101: Line 108:
 W9: Trace Norm Regularised Deep Multi-Task Learning\\
 W10: Deep Learning with Sets and Point Clouds\\
-W11: Dataset Augmentation in Feature Space\\
+W11: Deep Nets Don't Learn via Memorization\\
 W12: Multiplicative LSTM for sequence modelling\\
 W13: Learning to Discover Sparse Graphical Models\\