
ICML 2024

Our cluster will present no fewer than 13 papers at the ICML 2024 conference and its workshops!

These are the works presented at the main conference:

·       Gradual Divergence for Seamless Adaptation: A Novel Domain Incremental Learning Method by Kishaan Jeeveswaran, Elahe Arani, and Bahram Zonooz

·       Efficient Exploration in Average-Reward Constrained Reinforcement Learning: Achieving Near-Optimal Regret With Posterior Sampling by Danil Provodin, Maurits Kaptein, and Mykola Pechenizkiy

·       MALIBO: Meta-learning for Likelihood-free Bayesian Optimization by Jiarong Pan, Stefan Falkner, Felix Berkenkamp, and Joaquin Vanschoren

·       TrustLLM: Trustworthiness in Large Language Models by Lichao Sun et al., including Joaquin Vanschoren

·       Scalable Safe Policy Improvement for Factored Multi-Agent MDPs by Federico Bianchi, Edoardo Zorzi, Alberto Castellini, Thiago D. Simão, Matthijs T. J. Spaan, and Alessandro Farinelli

·       Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity by Lu Yin, You Wu, Zhenyu Zhang, Cheng-Yu Hsieh, Yaqing Wang, Yiling Jia, Mykola Pechenizkiy, Yi Liang, Zhangyang Wang, and Shiwei Liu

·       Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity by Lu Yin, Shiwei Liu, Ajay Jaiswal, Souvik Kundu, and Zhangyang Wang

·       BiDST: Dynamic Sparse Training is a Bi-Level Optimization Problem by Jie Ji, Gen Li, Lu Yin, Minghai Qin, Geng Yuan, Linke Guo, Shiwei Liu, and Xiaolong Ma

·       CaM: Cache Merging for Memory-efficient LLMs Inference by Yuxin Zhang, Yuxuan Du, Gen Luo, Yunshan Zhong, Zhenyu Zhang, Shiwei Liu, and Rongrong Ji

·       Every Sparse Pattern Every Sparse Ratio All At Once by Zhangheng Li, Shiwei Liu, Tianlong Chen, Ajay Kumar Jaiswal, Zhenyu Zhang, Dilin Wang, Raghuraman Krishnamoorthi, Shiyu Chang, and Zhangyang Wang

 

We’ll also present the following workshop papers:

·       Accelerating Simulation of Two-Phase Flows with Neural PDE Surrogates by Yoeri Poels, Koen Minartz, Harshit Bansal, and Vlado Menkovski at the AI4Science workshop

·       Exploring the Development of Complexity over Depth and Time in Deep Neural Networks by Hannah Pinson, Aurélien Boland, Vincent Ginis, and Mykola Pechenizkiy at the HiLD workshop

·       Variational Stochastic Gradient Descent for Deep Neural Networks by Haotian Chen, Anna Kuzina, Babak Esmaeili, and Jakub M. Tomczak at the WANT workshop

 

See you in Vienna!
