Celestine Mendler-Dünner

I am a postdoctoral researcher at UC Berkeley interested in algorithmic aspects of machine learning and artificial intelligence. I hold an SNF Early Postdoc.Mobility fellowship and am hosted by Prof. Moritz Hardt. Prior to that, I worked as a research scientist at IBM Research Zurich, where I contributed to the development of the Snap ML library. I obtained my PhD from ETH Zurich, where I was affiliated with the Data Analytics Laboratory and supervised by Prof. Thomas Hofmann.


News

  • We will host a session on performative prediction at the WiML Un-workshop@ICML'20 -- [abstract]
  • I received the ETH Medal for my dissertation
  • I won the Fritz Kutter Award for the high industrial impact of my research on system-aware algorithm design
  • Our latest paper was accepted at NeurIPS 2019 as a spotlight presentation
  • I was awarded the SNF Early Postdoc.Mobility fellowship and will join UC Berkeley in summer 2019

Research Projects

I am interested in algorithmic challenges related to the practical deployment of machine learning algorithms. These are the two main projects I am currently working on:
  • System-Aware Algorithm Design

    When training machine learning models in production, we often care about speed: fast training enables short development cycles, offers fast time-to-insight and, not least, saves valuable compute resources. The goal of system-aware algorithm design is to achieve fast training through efficient utilization of the compute resources available in modern heterogeneous systems. We demonstrate [NeurIPS'18] that this approach can reduce training time by several orders of magnitude compared to standard system-agnostic methods.
    To achieve this, we incorporate knowledge about system-level bottlenecks into the algorithm design. In particular, we develop new principled tools and methods for training machine learning models, focusing on compute parallelism [NeurIPS'19][ICML'20], hierarchical memory structures [HiPC'19][NeurIPS'17], accelerator units [FGCS'17] and interconnect bandwidth in distributed systems [ICML'18]. Most innovations from this research have been integrated into the IBM Snap ML library and help diverse companies improve the speed, efficiency and scalability of their machine learning workloads. A toy sketch illustrating one selection idea from this line of work appears after this list.

  • Performative Prediction

    Whenever we use supervised learning in social settings, we almost never make predictions for predictions' sake, but rather to inform decision making within some broader context. Hence, our choice of predictive model can lead to changes in the way the broader system behaves. We call such predictions performative.
    We introduce the framework of performative prediction for supervised learning [ICML'20] and address the challenges that arise in stochastic optimization when the deployment of a model triggers performative effects in the data distribution it is trained on [ArXiv'20]. As a subfield of learning theory, performative prediction is only starting to receive attention from the community, and there are many exciting open challenges to explore. A second sketch below illustrates retraining dynamics under performative effects.
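To make the system-aware idea concrete, here is a minimal, self-contained sketch of gap-based coordinate selection for a limited-memory accelerator, loosely in the spirit of [NeurIPS'17]. The least-squares setup, the gradient-magnitude surrogate, and all names and parameters are my own simplifications for illustration, not the actual Snap ML implementation:

```python
import numpy as np

def gap_based_block(scores, budget):
    """Indices of the `budget` coordinates with the largest scores."""
    return np.argsort(scores)[-budget:]

def train(A, y, budget, rounds=10, inner_epochs=5):
    """Least-squares coordinate descent where only the selected block
    is assumed to fit into fast (e.g. GPU) memory at any given time."""
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(rounds):
        residual = A @ x - y
        # per-coordinate importance surrogate: gradient magnitude
        scores = np.abs(A.T @ residual)
        block = gap_based_block(scores, budget)
        # "offload" the selected block to the accelerator and
        # run several local coordinate-descent epochs on it
        for _ in range(inner_epochs):
            for j in block:
                a_j = A[:, j]
                # exact minimization along coordinate j
                x[j] -= a_j @ (A @ x - y) / (a_j @ a_j)
    return x

A = np.random.randn(200, 50)
y = np.random.randn(200)
x_hat = train(A, y, budget=10)
```

The design point being illustrated: rather than streaming the full problem through scarce accelerator memory, the algorithm spends the accelerator's cycles on the coordinates that currently matter most.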
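The performative setting can likewise be illustrated with a toy retraining loop. The distribution map below is entirely my own construction: each deployment of a model theta shifts the label-generating process, and for a mildly sensitive map (eps < 1) the retraining iterates contract to a performatively stable model, one that is optimal on the very distribution its own deployment induces, echoing the fixed-point perspective of [ICML'20]:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(theta, n=2000, eps=0.4):
    """Toy performative distribution map D(theta): the label-generating
    coefficient drifts with the currently deployed model theta."""
    x = rng.normal(size=n)
    y = (2.0 + eps * theta) * x + rng.normal(scale=0.1, size=n)
    return x, y

theta = 0.0
for t in range(15):
    x, y = sample(theta)          # deploy theta, observe data from D(theta)
    theta = (x @ y) / (x @ x)     # retrain: least-squares fit on new data
    print(f"round {t}: theta = {theta:.4f}")
# the iterates approach theta* = 2 / (1 - eps) = 3.33..., a performatively
# stable point: retraining on D(theta*) returns theta* itself
```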

Publications

2020
Stochastic Optimization for Performative Prediction
Celestine Mendler-Dünner*, Juan C. Perdomo*, Tijana Zrnic*, Moritz Hardt
arXiv preprint
Randomized Block-Diagonal Preconditioning for Parallel Learning
Celestine Mendler-Dünner, Aurelien Lucchi
International Conference on Machine Learning (ICML)
Performative Prediction
Juan C. Perdomo*, Tijana Zrnic*, Celestine Mendler-Dünner, Moritz Hardt
International Conference on Machine Learning (ICML)
2019
SySCD: A System-Aware Parallel Coordinate Descent Algorithm
N. Ioannou*, C. Mendler-Dünner*, T. Parnell
Advances in Neural Information Processing Systems (NeurIPS -- Spotlight)
On Linear Learning with Manycore Processors
E. Wszola, C. Mendler-Dünner, M. Jaggi, M. Püschel
IEEE International Conference on High Performance Computing (HiPC -- best paper finalist)
System-Aware Algorithms for Machine Learning
C. Mendler-Dünner
ETH Research Collection (PhD Thesis -- ETH medal)
2018
Snap ML: A Hierarchical Framework for Machine Learning
C. Dünner*, T. Parnell*, D. Sarigiannis, N. Ioannou, A. Anghel, G. Ravi, M. Kandasamy, H. Pozidis
Advances in Neural Information Processing Systems (NeurIPS)
Sampling Acquisition Functions for Batch Bayesian Optimization
A. De Palma, C. Mendler-Dünner, T. Parnell, A. Anghel, H. Pozidis
Workshop on Bayesian Nonparametrics (BNP@NeurIPS)
A Distributed Second-Order Algorithm You Can Trust
C. Dünner, M. Gargiani, A. Lucchi, A. Bian, T. Hofmann, M. Jaggi
International Conference on Machine Learning (ICML)
Addressing Interpretability and Cold-Start in Matrix Factorization for Recommender Systems
C. Dünner*, M. Vlachos*, R. Heckel, V. Vassiliadis, T. Parnell, K. Atasu
IEEE Transactions on Knowledge and Data Engineering (TKDE)
Tera-Scale Coordinate Descent on GPUs
T. Parnell, C. Dünner, K. Atasu, M. Sifalakis, H. Pozidis
Future Generation Computer Systems (FGCS)
2017
Efficient Use of Limited-Memory Accelerators for Linear Learning on Heterogeneous Systems
C. Dünner, T. Parnell, M. Jaggi
Advances in Neural Information Processing Systems (NIPS)
Understanding and Optimizing the Performance of Distributed Machine Learning Applications on Apache Spark
C. Dünner, T. Parnell, K. Atasu, M. Sifalakis, H. Pozidis
IEEE International Conference on Big Data (IEEE Big Data)
High-Performance Recommender System Training Using Co-Clustering on CPU/GPU Clusters
K. Atasu, T. Parnell, C. Dünner, M. Vlachos, H. Pozidis
International Conference on Parallel Processing (ICPP)
Large-Scale Stochastic Learning using GPUs
T. Parnell, C. Dünner, K. Atasu, M. Sifalakis, H. Pozidis
IEEE International Workshop on Parallel and Distributed Computing for Large Scale Machine Learning and Big Data Analytics (ParLearning)
Scalable and Interpretable Product Recommendations via Overlapping Co-Clustering
R. Heckel, M. Vlachos, T. Parnell, C. Dünner
IEEE International Conference on Data Engineering (ICDE)
2016
Primal-Dual Rates and Certificates
C. Dünner, S. Forte, M. Takac, M. Jaggi
International Conference on Machine Learning (ICML)
*equal contribution