Timezone: UTC
MON 25 APR
4 p.m.
Invited Talk: Been Kim (ends 5:15 PM)
5:30 p.m. (ends 7:30 PM)
TUE 26 APR
midnight
Orals 12:00-1:30
[12:00] Language modeling via stochastic processes
[12:15] MIDI-DDSP: Detailed Control of Musical Performance via Hierarchical Modeling
[12:30] Real-Time Neural Voice Camouflage
[12:45] ProtoRes: Proto-Residual Network for Pose Authoring via Learned Inverse Kinematics
[1:00] Open-Set Recognition: A Good Closed-Set Classifier is All You Need
[1:15] Vision-Based Manipulators Need to Also See from Their Hands
(ends 1:30 AM)
Orals 12:00-1:30
[12:00] Hyperparameter Tuning with Renyi Differential Privacy
[12:15] PiCO: Contrastive Label Disambiguation for Partial Label Learning
[12:30] Poisoning and Backdooring Contrastive Learning
[12:45] Transform2Act: Learning a Transform-and-Control Policy for Efficient Agent Design
[1:00] The Information Geometry of Unsupervised Reinforcement Learning
[1:15] Provably Filtering Exogenous Distractors using Multistep Inverse Dynamics
(ends 1:30 AM)
1:30 a.m. (ends 3:30 AM)
8 a.m.
Orals 8:00-9:30
[8:00] Understanding over-squashing and bottlenecks on graphs via curvature
[8:15] Efficiently Modeling Long Sequences with Structured State Spaces
[8:30] Neural Structured Prediction for Inductive Node Classification
[8:45] A New Perspective on "How Graph Neural Networks Go Beyond Weisfeiler-Lehman?"
[9:00] CycleMLP: A MLP-like Architecture for Dense Prediction
[9:15] Variational Inference for Discriminative Learning with Generative Modeling of Feature Incompletion
(ends 9:30 AM)
Orals 8:00-9:45
[8:00] Expressiveness and Approximation Properties of Graph Neural Networks
[8:15] Neural Collapse Under MSE Loss: Proximity to and Dynamics on the Central Path
[8:30] Learning Strides in Convolutional Neural Networks
[8:45] The Hidden Convex Optimization Landscape of Regularized Two-Layer ReLU Networks: an Exact Characterization of Optimal Solutions
[9:00] Minibatch vs Local SGD with Shuffling: Tight Convergence Bounds and Beyond
[9:15] Discovering and Explaining the Representation Bottleneck of DNNs
[9:30] Representational Continuity for Unsupervised Continual Learning
(ends 9:45 AM)
Orals 8:00-9:30
[8:00] Filtered-CoPhy: Unsupervised Learning of Counterfactual Physics in Pixel Space
[8:15] Non-Transferable Learning: A New Approach for Model Ownership Verification and Applicability Authorization
[8:30] Data-Efficient Graph Grammar Learning for Molecular Generation
[8:45] iLQR-VAE: control-based learning of input-driven dynamics with applications to neural data
[9:00] Einops: Clear and Reliable Tensor Manipulations with Einstein-like Notation
[9:15] StyleAlign: Analysis and Applications of Aligned StyleGAN Models
(ends 9:30 AM)
9:30 a.m. (ends 11:30 AM)
4 p.m.
Invited Talk: John Amuasi (ends 5:15 PM)
WED 27 APR
1:30 a.m. (ends 3:30 AM)
8 a.m.
Invited Talk: Cordelia Schmid (ends 9:15 AM)
4 p.m.
Orals 4:00-5:30
[4:00] Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution
[4:15] Asymmetry Learning for Counterfactually-invariant Classification in OOD Tasks
[4:30] A Fine-Grained Analysis on Distribution Shift
[4:45] Sparse Communication via Mixed Distributions
[5:00] Frame Averaging for Invariant and Equivariant Network Design
[5:15] F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization
(ends 5:30 PM)
Orals 4:00-5:30
[4:00] Bootstrapped Meta-Learning
[4:15] Coordination Among Neural Modules Through a Shared Global Workspace
[4:30] Meta-Learning with Fewer Tasks through Task Interpolation
[4:45] Weighted Training for Cross-Task Learning
[5:00] Domino: Discovering Systematic Errors with Cross-Modal Embeddings
[5:15] Extending the WILDS Benchmark for Unsupervised Adaptation
(ends 5:30 PM)
5:30 p.m. (ends 7:30 PM)
THU 28 APR
1:30 a.m. (ends 3:30 AM)
8 a.m.
Orals 8:00-9:30
[8:00] Diffusion-Based Voice Conversion with Fast Maximum Likelihood Sampling Scheme
[8:15] Natural Language Descriptions of Deep Features
[8:30] Finetuned Language Models are Zero-Shot Learners
[8:45] Large Language Models Can Be Strong Differentially Private Learners
[9:00] GeoDiff: A Geometric Diffusion Model for Molecular Conformation Generation
[9:15] Pyraformer: Low-Complexity Pyramidal Attention for Long-Range Time Series Modeling and Forecasting
(ends 9:30 AM)
Orals 8:00-9:30
[8:00] Analytic-DPM: an Analytic Estimate of the Optimal Reverse Variance in Diffusion Probabilistic Models
[8:15] Comparing Distributions by Measuring Differences that Affect Decision Making
[8:30] Unsupervised Vision-Language Grammar Induction with Shared Structure Modeling
[8:45] RISP: Rendering-Invariant State Predictor with Differentiable Simulation and Rendering for Cross-Domain Parameter Estimation
[9:00] BEiT: BERT Pre-Training of Image Transformers
[9:15] Resolving Training Biases via Influence-based Data Relabeling
(ends 9:30 AM)
9:30 a.m. (ends 11:30 AM)
5:30 p.m. (ends 7:30 PM)
FRI 29 APR
1:30 a.m. (ends 3:30 AM)
1 p.m.
Workshop: 3rd Workshop on practical ML for Developing Countries: learning under limited/low resource scenarios (ends 7:45 PM)
4 p.m.
Workshop: (ends 1:00 AM)