ICLR 2017

======Workshop Poster Sessions======
  
Below are the Workshop Track papers presented at each of the poster sessions (on Monday, Tuesday, or Wednesday, in the morning or afternoon). To find a paper, look for the poster with the corresponding number in the area dedicated to the Workshop Track.
  
======Note to the Presenters======
Each poster panel is 2 meters wide and 1 meter tall.\\
If needed, tape will be provided to fix your poster.
  
<html><div id='monday_morning'></div></html>
====Monday Morning (April 24th, 10:30am to 12:30pm)====
W1: Extrapolation and learning equations\\
W2: Effectiveness of Transfer Learning in EHR data\\
W3: Intelligent synapses for multi-task and transfer learning\\
W4: Unsupervised and Efficient Neural Graph Model with Distributed Representations\\
W5: Accelerating SGD for Distributed Deep-Learning Using an Approximated Hessian Matrix\\
W6: Accelerating Eulerian Fluid Simulation With Convolutional Networks\\
W7: Forced to Learn: Discovering Disentangled Representations Without Exhaustive Labels\\
W8: Dataset Augmentation in Feature Space\\
W9: Learning Algorithms for Active Learning\\
W10: Reinterpreting Importance-Weighted Autoencoders\\
W11: Robustness to Adversarial Examples through an Ensemble of Specialists\\
W12: (empty)\\
W13: On Hyperparameter Optimization in Learning Systems\\
W14: Recurrent Normalization Propagation\\
W15: Joint Training of Ratings and Reviews with Recurrent Recommender Networks\\
W16: Towards an Automatic Turing Test: Learning to Evaluate Dialogue Responses\\
W17: Joint Embeddings of Scene Graphs and Images\\
W18: Unseen Style Transfer Based on a Conditional Fast Style Transfer Network\\
  
<html><div id='monday_afternoon'></div></html>
====Monday Afternoon (April 24th, 4:30pm to 6:30pm)====
W1: Audio Super-Resolution using Neural Networks\\
W2: Semantic embeddings for program behaviour patterns\\
W3: De novo drug design with deep generative models: an empirical study\\
W4: Memory Matching Networks for Genomic Sequence Classification\\
W5: Char2Wav: End-to-End Speech Synthesis\\
W6: Fast Chirplet Transform Injects Priors in Deep Learning of Animal Calls and Speech\\
W7: Weight-averaged consistency targets improve semi-supervised deep learning results\\
W8: Particle Value Functions\\
W9: Out-of-class novelty generation: an experimental foundation\\
W10: Performance guarantees for transferring representations\\
W11: Generative Adversarial Learning of Markov Chains\\
W12: Short and Deep: Sketching and Neural Networks\\
W13: Understanding intermediate layers using linear classifier probes\\
W14: Symmetry-Breaking Convergence Analysis of Certain Two-layered Neural Networks with ReLU nonlinearity\\
W15: Neural Combinatorial Optimization with Reinforcement Learning\\
W16: Tactics of Adversarial Attacks on Deep Reinforcement Learning Agents\\
W17: Adversarial Discriminative Domain Adaptation (workshop extended abstract)\\
W18: Efficient Sparse-Winograd Convolutional Neural Networks\\
W19: Neural Expectation Maximization\\
  
<html><div id='tuesday_morning'></div></html>
====Tuesday Morning (April 25th, 10:30am to 12:30pm)====
W1: Programming With a Differentiable Forth Interpreter\\
W2: Unsupervised Feature Learning for Audio Analysis\\
W3: Neural Functional Programming\\
W4: A Smooth Optimisation Perspective on Training Feedforward Neural Networks\\
W5: Synthetic Gradient Methods with Virtual Forward-Backward Networks\\
W6: Explaining the Learning Dynamics of Direct Feedback Alignment\\
W7: Training a Subsampling Mechanism in Expectation\\
W8: Deep Kernel Machines via the Kernel Reparametrization Trick\\
W9: Encoding and Decoding Representations with Sum- and Max-Product Networks\\
W10: Embracing Data Abundance\\
W11: Variational Intrinsic Control\\
W12: Fast Adaptation in Generative Models with Generative Matching Networks\\
W13: Efficient variational Bayesian neural network ensembles for outlier detection\\
W14: Emergence of Language with Multi-agent Games: Learning to Communicate with Sequences of Symbols\\
W15: Adaptive Feature Abstraction for Translating Video to Language\\
W16: Delving into adversarial attacks on deep policies\\
W17: Tuning Recurrent Neural Networks with Reinforcement Learning\\
W18: DeepMask: Masking DNN Models for robustness against adversarial samples\\
W19: Restricted Boltzmann Machines provide an accurate metric for retinal responses to visual stimuli\\

<html><div id='tuesday_afternoon'></div></html>
====Tuesday Afternoon (April 25th, 2:00pm to 4:00pm)====
W1: Lifelong Perceptual Programming By Example\\
W2: Neu0\\
W3: Dance Dance Convolution\\
W4: Bit-Pragmatic Deep Neural Network Computing\\
W5: On Improving the Numerical Stability of Winograd Convolutions\\
W6: Fast Generation for Convolutional Autoregressive Models\\
W7: THE PREIMAGE OF RECTIFIER NETWORK ACTIVITIES\\
W8: Training Triplet Networks with GAN\\
W9: On Robust Concepts and Small Neural Nets\\
W10: Pl@ntNet app in the era of deep learning\\
W11: Exponential Machines\\
W12: Online Multi-Task Learning Using Biased Sampling\\
W13: Online Structure Learning for Sum-Product Networks with Gaussian Leaves\\
W14: A Theoretical Framework for Robustness of (Deep) Classifiers against Adversarial Samples\\
W15: Compositional Kernel Machines\\
W16: Loss is its own Reward: Self-Supervision for Reinforcement Learning\\
W17: REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models\\
W18: Precise Recovery of Latent Vectors from Generative Adversarial Networks\\
W19: Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization\\

<html><div id='wednesday_morning'></div></html>
====Wednesday Morning (April 26th, 10:30am to 12:30pm)====
W1: NEUROGENESIS-INSPIRED DICTIONARY LEARNING: ONLINE MODEL ADAPTION IN A CHANGING WORLD\\
W2: The High-Dimensional Geometry of Binary Neural Networks\\
W3: Discovering objects and their relations from entangled scene representations\\
W4: A Differentiable Physics Engine for Deep Learning in Robotics\\
W5: Automated Generation of Multilingual Clusters for the Evaluation of Distributed Representations\\
W6: Development of JavaScript-based deep learning platform and application to distributed training\\
W7: Factorization tricks for LSTM networks\\
W8: Shake-Shake regularization of 3-branch residual networks\\
W9: Trace Norm Regularised Deep Multi-Task Learning\\
W10: Deep Learning with Sets and Point Clouds\\
W11: Deep Nets Don't Learn via Memorization\\
W12: Multiplicative LSTM for sequence modelling\\
W13: Learning to Discover Sparse Graphical Models\\
W14: Revisiting Batch Normalization For Practical Domain Adaptation\\
W15: Early Methods for Detecting Adversarial Images and a Colorful Saliency Map\\
W16: Natural Language Generation in Dialogue using Lexicalized and Delexicalized Data\\
W17: Coupling Distributed and Symbolic Execution for Natural Language Queries\\
W18: Adversarial Examples for Semantic Image Segmentation\\
W19: RenderGAN: Generating Realistic Labeled Data\\

<html><div id='wednesday_afternoon'></div></html>
====Wednesday Afternoon (April 26th, 4:30pm to 6:30pm)====
W1: Song From PI: A Musically Plausible Network for Pop Music Generation\\
W2: Charged Point Normalization: An Efficient Solution to the Saddle Point Problem\\
W3: Towards "AlphaChem": Chemical Synthesis Planning with Tree Search and Deep Neural Network Policies\\
W4: CommAI: Evaluating the first steps towards a useful general AI\\
W5: Joint Multimodal Learning with Deep Generative Models\\
W6: Transferring Knowledge to Smaller Network with Class-Distance Loss\\
W7: Regularizing Neural Networks by Penalizing Confident Output Distributions\\
W8: Adversarial Attacks on Neural Network Policies\\
W9: Generalizable Features From Unsupervised Learning\\
W10: Compact Embedding of Binary-coded Inputs and Outputs using Bloom Filters\\
W11: Semi-supervised deep learning by metric embedding\\
W12: Changing Model Behavior at Test-time Using Reinforcement Learning\\
W13: Variational Reference Priors\\
W14: Gated Multimodal Units for Information Fusion\\
W15: Playing SNES in the Retro Learning Environment\\
W16: Unsupervised Perceptual Rewards for Imitation Learning\\
W17: Perception Updating Networks: On architectural constraints for interpretable video generative models\\
W18: Adversarial examples in the physical world\\