ICLR 2017

Below are the Workshop Track papers presented at each of the poster sessions (on Monday, Tuesday or Wednesday, in the morning or evening). To find a paper, look for the poster with the corresponding number in the area dedicated to the Workshop Track.
  
======Note to the Presenters======
Each poster panel is 2 meters wide and 1 meter tall.\\
If needed, tape will be provided to fix your poster.

<html><div id='monday_morning'></div></html>
====Monday Morning (April 24th, 10:30am to 12:30pm)====
W1: Extrapolation and learning equations\\
W2: Effectiveness of Transfer Learning in EHR data\\
W3: Intelligent synapses for multi-task and transfer learning\\
W4: Unsupervised and Efficient Neural Graph Model with Distributed Representations\\
W5: Accelerating SGD for Distributed Deep-Learning Using an Approximated Hessian Matrix\\
W6: Accelerating Eulerian Fluid Simulation With Convolutional Networks\\
W7: Forced to Learn: Discovering Disentangled Representations Without Exhaustive Labels\\
W8: Dataset Augmentation in Feature Space\\
W9: Learning Algorithms for Active Learning\\
W10: Reinterpreting Importance-Weighted Autoencoders\\
W11: Robustness to Adversarial Examples through an Ensemble of Specialists\\
W12: (empty)\\
W13: On Hyperparameter Optimization in Learning Systems\\
W14: Recurrent Normalization Propagation\\
W15: Joint Training of Ratings and Reviews with Recurrent Recommender Networks\\
W16: Towards an Automatic Turing Test: Learning to Evaluate Dialogue Responses\\
W17: Joint Embeddings of Scene Graphs and Images\\
W18: Unseen Style Transfer Based on a Conditional Fast Style Transfer Network\\
  
<html><div id='monday_afternoon'></div></html>
====Monday Afternoon (April 24th, 4:30pm to 6:30pm)====
W1: Audio Super-Resolution using Neural Networks\\
W2: Semantic embeddings for program behaviour patterns\\
W3: De novo drug design with deep generative models: an empirical study\\
W4: Memory Matching Networks for Genomic Sequence Classification\\
W5: Char2Wav: End-to-End Speech Synthesis\\
W6: Fast Chirplet Transform Injects Priors in Deep Learning of Animal Calls and Speech\\
W7: Weight-averaged consistency targets improve semi-supervised deep learning results\\
W8: Particle Value Functions\\
W9: Out-of-class novelty generation: an experimental foundation\\
W10: Performance guarantees for transferring representations\\
W11: Generative Adversarial Learning of Markov Chains\\
W12: Short and Deep: Sketching and Neural Networks\\
W13: Understanding intermediate layers using linear classifier probes\\
W14: Symmetry-Breaking Convergence Analysis of Certain Two-layered Neural Networks with ReLU nonlinearity\\
W15: Neural Combinatorial Optimization with Reinforcement Learning\\
W16: Tactics of Adversarial Attacks on Deep Reinforcement Learning Agents\\
W17: Adversarial Discriminative Domain Adaptation (workshop extended abstract)\\
W18: Efficient Sparse-Winograd Convolutional Neural Networks\\
W19: Neural Expectation Maximization\\
  
<html><div id='tuesday_morning'></div></html>
====Tuesday Morning (April 25th, 10:30am to 12:30pm)====
W1: Programming With a Differentiable Forth Interpreter\\
W2: Unsupervised Feature Learning for Audio Analysis\\
W3: Neural Functional Programming\\
W4: A Smooth Optimisation Perspective on Training Feedforward Neural Networks\\
W5: Synthetic Gradient Methods with Virtual Forward-Backward Networks\\
W6: Explaining the Learning Dynamics of Direct Feedback Alignment\\
W7: Training a Subsampling Mechanism in Expectation\\
W8: Deep Kernel Machines via the Kernel Reparametrization Trick\\
W9: Encoding and Decoding Representations with Sum- and Max-Product Networks\\
W10: Embracing Data Abundance\\
W11: Variational Intrinsic Control\\
W12: Fast Adaptation in Generative Models with Generative Matching Networks\\
W13: Efficient variational Bayesian neural network ensembles for outlier detection\\
W14: Emergence of Language with Multi-agent Games: Learning to Communicate with Sequences of Symbols\\
W15: Adaptive Feature Abstraction for Translating Video to Language\\
W16: Delving into adversarial attacks on deep policies\\
W17: Tuning Recurrent Neural Networks with Reinforcement Learning\\
W18: DeepMask: Masking DNN Models for robustness against adversarial samples\\
W19: Restricted Boltzmann Machines provide an accurate metric for retinal responses to visual stimuli\\
  
<html><div id='tuesday_afternoon'></div></html>
====Tuesday Afternoon (April 25th, 2:00pm to 4:00pm)====
W1: Lifelong Perceptual Programming By Example\\
W2: Neu0\\
W3: Dance Dance Convolution\\
W4: Bit-Pragmatic Deep Neural Network Computing\\
W5: On Improving the Numerical Stability of Winograd Convolutions\\
W6: Fast Generation for Convolutional Autoregressive Models\\
W7: THE PREIMAGE OF RECTIFIER NETWORK ACTIVITIES\\
W8: Training Triplet Networks with GAN\\
W9: On Robust Concepts and Small Neural Nets\\
W10: Pl@ntNet app in the era of deep learning\\
W11: Exponential Machines\\
W12: Online Multi-Task Learning Using Biased Sampling\\
W13: Online Structure Learning for Sum-Product Networks with Gaussian Leaves\\
W14: A Theoretical Framework for Robustness of (Deep) Classifiers against Adversarial Samples\\
W15: Compositional Kernel Machines\\
W16: Loss is its own Reward: Self-Supervision for Reinforcement Learning\\
W17: REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models\\
W18: Precise Recovery of Latent Vectors from Generative Adversarial Networks\\
W19: Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization\\
  
<html><div id='wednesday_morning'></div></html>
====Wednesday Morning (April 26th, 10:30am to 12:30pm)====
W1: NEUROGENESIS-INSPIRED DICTIONARY LEARNING: ONLINE MODEL ADAPTION IN A CHANGING WORLD\\
W2: The High-Dimensional Geometry of Binary Neural Networks\\
W3: Discovering objects and their relations from entangled scene representations\\
W4: A Differentiable Physics Engine for Deep Learning in Robotics\\
W5: Automated Generation of Multilingual Clusters for the Evaluation of Distributed Representations\\
W6: Development of JavaScript-based deep learning platform and application to distributed training\\
W7: Factorization tricks for LSTM networks\\
W8: Shake-Shake regularization of 3-branch residual networks\\
W9: Trace Norm Regularised Deep Multi-Task Learning\\
W10: Deep Learning with Sets and Point Clouds\\
W11: Deep Nets Don't Learn via Memorization\\
W12: Multiplicative LSTM for sequence modelling\\
W13: Learning to Discover Sparse Graphical Models\\
W14: Revisiting Batch Normalization For Practical Domain Adaptation\\
W15: Early Methods for Detecting Adversarial Images and a Colorful Saliency Map\\
W16: Natural Language Generation in Dialogue using Lexicalized and Delexicalized Data\\
W17: Coupling Distributed and Symbolic Execution for Natural Language Queries\\
W18: Adversarial Examples for Semantic Image Segmentation\\
W19: RenderGAN: Generating Realistic Labeled Data\\
  
<html><div id='wednesday_afternoon'></div></html>
====Wednesday Afternoon (April 26th, 4:30pm to 6:30pm)====
W1: Song From PI: A Musically Plausible Network for Pop Music Generation\\
W2: Charged Point Normalization: An Efficient Solution to the Saddle Point Problem\\
W3: Towards "AlphaChem": Chemical Synthesis Planning with Tree Search and Deep Neural Network Policies\\
W4: CommAI: Evaluating the first steps towards a useful general AI\\
W5: Joint Multimodal Learning with Deep Generative Models\\
W6: Transferring Knowledge to Smaller Network with Class-Distance Loss\\
W7: Regularizing Neural Networks by Penalizing Confident Output Distributions\\
W8: Adversarial Attacks on Neural Network Policies\\
W9: Generalizable Features From Unsupervised Learning\\
W10: Compact Embedding of Binary-coded Inputs and Outputs using Bloom Filters\\
W11: Semi-supervised deep learning by metric embedding\\
W12: Changing Model Behavior at Test-time Using Reinforcement Learning\\
W13: Variational Reference Priors\\
W14: Gated Multimodal Units for Information Fusion\\
W15: Playing SNES in the Retro Learning Environment\\
W16: Unsupervised Perceptual Rewards for Imitation Learning\\
W17: Perception Updating Networks: On architectural constraints for interpretable video generative models\\
W18: Adversarial examples in the physical world\\