Berkeley Lab’s machine learning experts will join the global AI and data science community in Vancouver, Canada, for NeurIPS 2024, the Conference on Neural Information Processing Systems and one of the premier venues for machine learning research. From December 10 to 15, this multidisciplinary event will showcase cutting-edge research through invited talks, demos, and oral and poster presentations. The program also features a professional exposition, hands-on tutorials, and workshops, offering a dynamic space for exchanging innovative ideas.

Explore our day-by-day guide below to discover how Berkeley Lab researchers are contributing to the future of machine learning at NeurIPS 2024.

Time | Type | Session / Presentation | Contributors | Location

Wednesday 12/11
4:30 p.m. – 7:30 p.m. PST | Poster | Efficient Leverage Score Sampling for Tensor Train Decomposition | Vivek Bharadwaj, Beheshteh Toloueirakhshan, Osman Asif Malik, Guillaume Rabusseau | East Exhibit Hall A-C #2008

Thursday 12/12
11 a.m. – 2 p.m. PST | Poster | KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization | Coleman Hooper, Sehoon Kim, Hiva Mohammadzadeh, Michael Mahoney, Sophia Shao, Kurt Keutzer, Amir Gholami | East Exhibit Hall A-C #2008
11 a.m. – 2 p.m. PST | Poster | Data-Efficient Operator Learning via Unsupervised Pretraining and In-Context Learning | Wuyang Chen, Jialin Song, Pu Ren, Shashank Subramanian, Dmitriy Morozov, Michael Mahoney | East Exhibit Hall A-C #4710

Friday 12/13
11 a.m. – 2 p.m. PST | Poster | Sharpness-diversity tradeoff: improving flat ensembles with SharpBalance | Haiquan Lu, Xiaotian Liu, Yefan Zhou, Qunli Li, Kurt Keutzer, Michael Mahoney, Yujun Yan, Huanrui Yang, Yaoqing Yang | West Ballroom A-D #5705
11 a.m. – 2 p.m. PST | Poster | AlphaPruning: Using Heavy-Tailed Self Regularization Theory for Improved Layer-wise Pruning of Large Language Models | Haiquan Lu, Xiaotian Liu, Yefan Zhou, Qunli Li, Kurt Keutzer, Michael Mahoney, Yujun Yan, Huanrui Yang, Yaoqing Yang | East Exhibit Hall A-C #4810
11 a.m. – 2 p.m. PST | Poster | How many classifiers do we need? | Hyunsuk Kim, Liam Hodgkinson, Ryan Theisen, Michael Mahoney | West Ballroom A-D #5708
11 a.m. – 2 p.m. PST | Poster | The Importance of Being Scalable: Improving the Speed and Accuracy of Neural Network Interatomic Potentials Across Chemical Domains | Eric Qu, Aditi Krishnapriyan | East Exhibit Hall A-C #3910

Saturday 12/14
8:15 a.m. – 5 p.m. PST | Paper | Visualizing Loss Functions as Topological Landscape Profiles [Part of the NeurIPS 2024 Workshop on Symmetry and Geometry in Neural Representations (NeurReps)] | Caleb Geniesse, Jiaqing Chen, Tiankai Xie, Ge Shi, Yaoqing Yang, Dmitriy Morozov, Talita Perciano, Michael Mahoney, Ross Maciejewski, Gunther Weber | West Ballroom C
3 p.m. – 4 p.m. PST | Paper | An Active Learning Performance Model for Parallel Bayesian Calibration of Expensive Simulation [Part of the NeurIPS Workshop on Bayesian Decision-making and Uncertainty] | Ozge Surer, Stefan M. Wild | East Meeting Room 8, 15

Sunday 12/15
8:15 a.m. – 5 p.m. PST | Workshop | NeurIPS 2024 Workshop: Machine Learning and the Physical Sciences | Siddharth Mishra-Sharma, Nicole Hartman, Vinicius Mikuni, Mariel Pettee, Sebastian Wagner-Carena, Antoine Wehenkel, Kyle Cranmer, Savannah Thais, Benjamin Nachman, Brian Nord | East Exhibition Hall B, C
8:15 a.m. – 5 p.m. PST | Workshop | Foundation Models for Science: Progress, Opportunities, and Challenges | Wuyang Chen, Pu Ren, Elena Massara, Yongji Wang, N. Benjamin Erichson, Laurence Perreault-Levasseur, Bo Li, Swarat Chaudhuri | Meeting Room #202 - 204
9:45 a.m. – 10:25 a.m. PST | Invited Talk | Foundation Models for Science: Progress, Opportunities, and Challenges Workshop | Michael Mahoney | Meeting Room #202 - 204
11:20 a.m. – 12:20 p.m. PST | Paper | Evaluating Loss Landscapes from a Topology Perspective [Part of the NeurIPS 2024 Workshop on Scientific Methods for Understanding Deep Learning (SciForDL)] | Tiankai Xie, Caleb Geniesse, Jiaqing Chen, Yaoqing Yang, Dmitriy Morozov, Michael Mahoney, Ross Maciejewski, Gunther Weber | West Meeting Room 205-207
Competition Track Program
Project | Berkeley Lab Contributors | Abstract | Contact
FAIR Universe – the challenge of handling uncertainties in fundamental science | David Rousseau, Wahid Bhimji, Paolo Calafiura, Ragansu Chakkappai, Yuan-Tang Chou, Sascha Diefenbacher, Steven Farrell, Aishik Ghosh, Isabelle Guyon, Chris Harris, Elham E Khoda, Benjamin Nachman, Yulei Zhang, Ihsan Ullah | We propose a challenge organised in conjunction with the Fair Universe project, a collaborative effort funded by the US Department of Energy and involving the Lawrence Berkeley National Laboratory, Université Paris-Saclay, University of Washington, and ChaLearn. This initiative aims to forge an open AI ecosystem for scientific discovery. The challenge will focus on measuring the physics properties of elementary particles with imperfect simulators due to differences in modelling systematic errors. Additionally, the challenge will leverage a large-compute-scale AI platform for sharing datasets, training models, and hosting machine learning competitions. Our challenge will bring together the physics and machine learning communities to advance our understanding and methodologies in handling systematic (otherwise known as epistemic) uncertainties within AI techniques. | [email protected]