Celestine Mendler-Dünner

I am a postdoctoral researcher at UC Berkeley interested in algorithmic aspects of machine learning and artificial intelligence, with a focus on practical challenges such as scalability, efficiency, and social impact. I hold an SNF Early Postdoc.Mobility fellowship and am hosted by Prof. Moritz Hardt. Before that, I was a research scientist at IBM Research Zurich, where I contributed to the development of the Snap ML library. I obtained my PhD from ETH Zurich, where I was affiliated with the Data Analytics Laboratory and supervised by Prof. Thomas Hofmann.

LinkedIn · Google Scholar · Twitter

News

  • IBM released Snap ML for public use: > pip install snapml
  • I am honored to be a member of the ELLIS society.
  • We are hosting a session on performative prediction at the WiML Un-workshop at ICML 2020.
  • I was awarded the ETH Medal for my dissertation.
  • I won the Fritz Kutter Award for the high industrial impact of my research on system-aware algorithm design.
  • Our latest paper was accepted at NeurIPS 2019 as a spotlight presentation.
  • I was awarded the SNF Early Postdoc.Mobility fellowship and will join UC Berkeley in summer 2019.
  • I successfully defended my PhD and am now a postdoctoral researcher at IBM Research, Zurich.
  • Snap ML in the press: Forbes, The Register, and EE Times write about our research.

Research Projects

  • System-Aware Algorithm Design

    When training machine learning models in production, speed and efficiency are critical factors. Fast training times allow short development cycles, reduce time-to-insight, and ultimately save valuable resources. Our approach to achieving fast training is to enable the efficient use of modern hardware through novel algorithm design. In particular, we develop principled tools and methods for training machine learning models, focusing on compute parallelism [NeurIPS'19][ICML'20], hierarchical memory structures [HiPC'19][NeurIPS'17], accelerator units [FGCS'17], and interconnect bandwidth in distributed systems [ICML'18]. We demonstrated [NeurIPS'18] that such an approach can reduce training time by several orders of magnitude compared to standard system-agnostic methods. The core innovations of this research have been integrated into the IBM Snap ML library (see the usage sketch below) and help companies improve the speed, efficiency, and scalability of their machine learning workloads.
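
    As an illustration, below is a minimal sketch of training a linear model through Snap ML's scikit-learn-compatible Python API. The estimator name follows the snapml pip package; constructor options (such as GPU or multi-threading settings) vary by version, so the defaults used here are an assumption to verify against the installed documentation.

        # Minimal Snap ML usage sketch (assumes the snapml pip package and
        # scikit-learn are installed; options for GPU and multi-threaded
        # training exist but are version-dependent, so defaults are used).
        from snapml import LogisticRegression
        from sklearn.datasets import make_classification
        from sklearn.metrics import accuracy_score

        # Synthetic binary classification data stands in for a real workload.
        X, y = make_classification(n_samples=100_000, n_features=50, random_state=0)

        clf = LogisticRegression()   # scikit-learn-style estimator from snapml
        clf.fit(X, y)
        print("train accuracy:", accuracy_score(y, clf.predict(X)))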

  • Dynamics of Decision Making

    Machine learning is increasingly used to support consequential decisions. When predictions inform decisions, they have the potential to change how the broader system behaves. In particular, they can alter the very data distribution the predictive model was trained on -- a dynamic effect that traditional machine learning fails to account for. To address this, we introduce the framework of performative prediction for supervised learning [ICML'20]. We analyze the dynamics of retraining strategies in this setting and address challenges that arise in stochastic optimization when the deployment of a model triggers performative effects in the data distribution it is trained on [NeurIPS'20]; the toy simulation below illustrates these retraining dynamics. As a subfield of learning theory, performative prediction is only starting to receive attention from the community, and there are many exciting, unexplored connections to questions in causality, control theory, economics, and sociology.
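
    To make the retraining dynamics concrete, here is a toy simulation (an illustrative construction, not an experiment from the papers): deploying a prediction theta shifts the mean of a Gaussian data distribution, and repeated retraining converges to a performatively stable point that differs from the static optimum. The distribution map and the shift strength eps are hypothetical choices made for this sketch.

        # Toy performative prediction: the deployed model theta shifts the
        # data distribution, D(theta) = N(mu + eps*theta, 1). Repeated
        # retraining (repeated risk minimization for the squared loss) sets
        # theta_{t+1} to the observed mean, so theta_{t+1} ~ mu + eps*theta_t,
        # which converges to the fixed point mu / (1 - eps) when |eps| < 1.
        import numpy as np

        rng = np.random.default_rng(0)
        mu, eps = 1.0, 0.5        # base mean and strength of performative effect

        def sample(theta, n=10_000):
            # Distribution map D(theta): deploying theta shifts the mean.
            return rng.normal(mu + eps * theta, 1.0, size=n)

        theta = 0.0
        for _ in range(20):
            data = sample(theta)  # data observed under the deployed model
            theta = data.mean()   # retrain: minimizer of the squared loss

        print(f"theta after retraining ~ {theta:.3f}")  # ~2.0, not mu = 1.0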

Publications

*equal contribution

Conference and Journal Publications

Differentially Private Stochastic Coordinate Descent
G.Damaskinos, C.Mendler-Dünner, R.Guerraoui, N.Papandreou and T.Parnell
AAAI Conference on Artificial Intelligence (AAAI), 2021.
Stochastic Optimization for Performative Prediction
C.Mendler-Dünner*, J.C.Perdomo*, T.Zrnic* and M.Hardt
Advances in Neural Information Processing Systems (NeurIPS), 2020.
Randomized Block-Diagonal Preconditioning for Parallel Learning
C.Mendler-Dünner and A.Lucchi
International Conference on Machine Learning (ICML), 2020.
Performative Prediction
J.C.Perdomo*, T.Zrnic*, C.Mendler-Dünner and M.Hardt
International Conference on Machine Learning (ICML), 2020.
SySCD: A System-Aware Parallel Coordinate Descent Algorithm
N.Ioannou*, C.Mendler-Dünner* and T.Parnell
Advances in Neural Information Processing Systems (NeurIPS -- Spotlight), 2019.
On Linear Learning with Manycore Processors
E.Wszola, C.Mendler-Dünner, M.Jaggi and M.Püschel
IEEE International Conference on High Performance Computing (HiPC -- best paper finalist), 2019.
System-Aware Algorithms for Machine Learning
C.Mendler-Dünner
ETH Research Collection (PhD Thesis -- ETH Medal), 2019.
Snap ML: A Hierarchical Framework for Machine Learning
C.Dünner*, T.Parnell*, D.Sarigiannis, N.Ioannou, A.Anghel, G.Ravi, M.Kandasamy and H.Pozidis
Advances in Neural Information Processing Systems (NeurIPS), 2018.
A Distributed Second-Order Algorithm You Can Trust
C.Dünner, M.Gargiani, A.Lucchi, A.Bian, T.Hofmann and M.Jaggi
International Conference on Machine Learning (ICML), 2018.
Addressing Interpretability and Cold-Start in Matrix Factorization for Recommender Systems
C.Dünner*, M.Vlachos*, R.Heckel, V.Vassiliadis, T.Parnell and K.Atasu
IEEE Transactions on Knowledge and Data Engineering (TKDE), 2018.
Tera-Scale Coordinate Descent on GPUs
T.Parnell, C.Dünner, K.Atasu, M.Sifalakis and H.Pozidis
Journal of Future Generation Computer Systems (FGCS), 2018.
Efficient Use of Limited-Memory Accelerators for Linear Learning on Heterogeneous Systems
C.Dünner, T.Parnell and M.Jaggi
Advances in Neural Information Processing Systems (NIPS), 2017.
Understanding and Optimizing the Performance of Distributed Machine Learning Applications on Apache Spark
C.Dünner, T.Parnell, K.Atasu, M.Sifalakis and H.Pozidis
IEEE International Conference on Big Data (IEEE Big Data), 2017.
High-Performance Recommender System Training Using Co-Clustering on CPU/GPU Clusters
K.Atasu, T.Parnell, C.Dünner, M.Vlachos and H.Pozidis
International Conference on Parallel Processing (ICPP), 2017.
Large-Scale Stochastic Learning using GPUs
T.Parnell, C.Dünner, K.Atasu, M.Sifalakis and H.Pozidis
IEEE International Workshop on Parallel and Distributed Computing for Large Scale Machine Learning and Big Data Analytics (ParLearning), 2017.
Scalable and Interpretable Product Recommendations via Overlapping Co-Clustering
R.Heckel, M.Vlachos, T.Parnell and C.Dünner
IEEE International Conference on Data Engineering (ICDE), 2017.
Primal-Dual Rates and Certificates
C.Dünner, S.Forte, M.Takac and M.Jaggi
International Conference on Machine Learning (ICML), 2016.

Peer-reviewed Workshop Contributions

Revisiting Design Choices in Proximal Policy Optimization
C.C.-Y.Hsu, C.Mendler-Dünner and M.Hardt
Workshop on Real World Challenges in RL (RWRL@NeurIPS), 2020.
Differentially Private Stochastic Coordinate Descent
G.Damaskinos, C.Mendler-Dünner, R.Guerraoui, N.Papandreou and T.Parnell
Workshop on Privacy Preserving ML (PPML@NeurIPS), 2020.
Breadth-first, Depth-next Training of Random Forests
A.Anghel*, N.Ioannou*, T.Parnell, N.Papandreou, C.Mendler-Dünner and H.Pozidis
Workshop on Systems for ML (MLSys@NeurIPS), 2019.
Snap ML
C.Mendler-Dünner and A.Anghel
Women in Machine Learning Workshop (WiML@NeurIPS), 2018.
Sampling Acquisition Functions for Batch Bayesian Optimization
A.De Palma, C.Mendler-Dünner, T.Parnell, A.Anghel and H.Pozidis
Workshop on Bayesian Nonparametrics (BNP@NeurIPS), 2018.
Parallel Training of Linear Models Without Compromising Convergence
N.Ioannou, C.Mendler-Dünner, K.Kourtis and T.Parnell
Workshop on Systems for ML (MLSys@NeurIPS), 2018.