Algorithms & Society

My research studies the foundations of machine learning with a focus on social questions. I work towards theoretical and practical tools that support safe, reliable, and trustworthy machine learning with a positive impact on society. This encompasses technical challenges in interactive machine learning, optimization in dynamic environments, and resource-efficient learning, as well as interdisciplinary questions: understanding social dynamics around algorithms, quantifying their impact on digital economies, and developing tools for the responsible use of machine learning models in social science research.


Featured Research Projects

Large language models (LLMs) offer a promising data source for social science research. As their capabilities increase, researchers have started to conduct surveys of all kinds on these models, with research questions ranging from learning about the biases of LLMs to extracting information about human subpopulations that are otherwise hard to reach. In our work we carefully examine the validity of surveys as a method for extracting population statistics from LLMs. We reveal pitfalls, design more statistically rigorous methods, and develop tools for systematically testing the ability of LLMs to emulate human outcomes.
Evaluating LLMs as risk scores

A Python library that systematically translates ACS survey data into natural-text prompts in order to benchmark LLM outputs against US population statistics. The individual prediction tasks are non-realizable and therefore probe a model's ability to express the natural uncertainty in human outcomes.
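The translation idea is simple enough to sketch: render each tabular survey record as natural text and pose the outcome as a multiple-choice question. The field names and prompt templates below are illustrative stand-ins, not the library's actual API:

```python
# Illustrative sketch of turning an ACS-style record into an LLM prompt.
# Field names and templates are hypothetical, not the library's real API.

RECORD = {"age": 42, "sex": "female", "education": "a bachelor's degree",
          "occupation": "software developer", "hours_per_week": 50}

def record_to_prompt(r: dict) -> str:
    """Render a tabular record as natural text plus a survey question."""
    description = (
        f"The respondent is a {r['age']}-year-old {r['sex']} with "
        f"{r['education']}, working {r['hours_per_week']} hours per week "
        f"as a {r['occupation']}."
    )
    question = ("Does this person earn more than $50,000 per year?\n"
                "A. Yes\nB. No\nAnswer:")
    return f"{description}\n{question}"

print(record_to_prompt(RECORD))
```

Benchmarking then compares the model's answer probabilities (e.g., the relative likelihood of the tokens "A" and "B") with the empirical frequency of the outcome in the matching Census subpopulation, which is what makes the tasks a test of calibrated uncertainty rather than of point prediction.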

Predictions, when deployed in societal systems, can trigger actions and reactions of individuals and thereby change how the broader system behaves -- a dynamic effect that traditional machine learning fails to account for. To formalize and reason about performativity in the context of machine learning, we developed the framework of performative prediction. It extends the classical risk minimization framework by allowing the data distribution to depend on the deployed model. This inherently dynamic viewpoint leads to new solution concepts and optimization challenges, and it brings forth interesting connections to causality, game theory, and control.
Performative Prediction
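In symbols, the framework replaces the fixed data distribution of classical risk minimization with a distribution map that depends on the deployed model (following the notation of the ICML 2020 paper listed below):

```latex
% Performative risk: the loss of model \theta is evaluated on the
% distribution D(\theta) that deploying \theta itself induces.
\mathrm{PR}(\theta) \;=\; \mathbb{E}_{z \sim \mathcal{D}(\theta)}\,\ell(z;\theta)

% Retraining (repeated risk minimization) fixes the distribution at the
% last deployment and minimizes the resulting static risk:
\theta_{t+1} \;\in\; \operatorname*{arg\,min}_{\theta}\;
    \mathbb{E}_{z \sim \mathcal{D}(\theta_t)}\,\ell(z;\theta)
```

A fixed point of this retraining map is performatively stable: the model is optimal for the distribution it induces, though not necessarily a minimizer of the performative risk itself. The gap between the two is one source of the new solution concepts and optimization challenges mentioned above.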

Algorithmic predictions play an increasingly important role in digital economies as they mediate services, platforms, and markets at societal scale. The fact that such services can steer consumption and behavior is a central concern in modern antitrust. I am exploring how the concept of performativity can help quantify economic power in digital economies and support digital market investigations. Intuitively, the more powerful a firm, the more performative its predictions -- a causal effect strength that can be measured from observational and experimental data.
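Roughly, and glossing over the technical conditions in the NeurIPS 2022 paper listed below, performative power is the largest average change in participant outcomes that a firm could cause through its feasible actions:

```latex
% P: performative power of a firm with feasible action set F;
% z_i is participant i's status-quo outcome, z_i(f) the potential outcome
% had the firm taken action f instead, and dist a suitable distance on
% outcomes. (Notation simplified from the paper.)
\mathrm{P} \;=\; \sup_{f \in \mathcal{F}}\;
    \frac{1}{n}\sum_{i=1}^{n}
    \mathbb{E}\big[\,\mathrm{dist}\big(z_i(f),\, z_i\big)\big]
```

A firm in a perfectly competitive market cannot move outcomes and has power near zero, whereas a monopolist that can steer behavior at will has large power; this is what makes the quantity a candidate measure for digital market investigations.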
Performative power of online search

Powermeter is a Chrome extension that measures how content arrangements impact user click behavior through randomized experiments.
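A minimal sketch of such a randomized arrangement experiment follows; the numbers, the position effect, and the simulated user are illustrative assumptions, not the extension's actual code:

```python
# Toy randomized-swap experiment: on a random half of sessions, swap the
# top two results and compare clicks on the originally top-ranked result.
import random

def simulate_session(swapped: bool) -> int:
    """Return 1 if the user clicks the originally top-ranked result.

    The 0.3 drop in click propensity when that result is demoted to
    rank 2 is an assumed position effect, purely for illustration.
    """
    ctr = 0.5 - (0.3 if swapped else 0.0)
    return int(random.random() < ctr)

random.seed(0)
treated, control = [], []
for _ in range(100_000):
    swapped = random.random() < 0.5      # coin flip: swap the top two?
    (treated if swapped else control).append(simulate_session(swapped))

# The difference in click-through rates across arms estimates the causal
# effect of the arrangement change on user behavior.
effect = sum(control) / len(control) - sum(treated) / len(treated)
print(f"estimated arrangement effect on clicks: {effect:.3f}")
```

Because the arrangement is randomized, the difference in means is an unbiased estimate of the causal effect, and a large effect certifies that the platform can steer behavior -- a lower bound on its performative power.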

Digital platforms critically rely on data provided by individuals. Data thus offers participants a lever for countering power imbalances and gaining some control over the system; it can be seen as a new, technology-specific alternative to traditional strikes. We study different learning settings and quantify the critical mass of individuals that needs to be mobilized to achieve concrete goals (a toy simulation of this threshold phenomenon is sketched below). The effectiveness of a strategy is closely tied to the performativity of the platform's algorithm. Research questions encompass the design of strategies, challenges of coordination, and connections to labor markets and collective action theory in political economy.
Algorithmic collective action

  • Theoretical framework. ICML 2023.
  • Application to sequential recommender systems. NeurIPS 2024.
  • A collection of documented use cases. GitHub.
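To illustrate the critical-mass phenomenon, here is a toy simulation in the spirit of the signal-planting strategy from the ICML 2023 paper: a collective of relative size alpha plants a signal in its data and labels it with a target class, and a simple majority-vote learner picks up the association once alpha crosses a threshold. All rates, the signal mechanism, and the learner are illustrative assumptions, not the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(0)

def success_rate(alpha: float, n: int = 5_000, trials: int = 200) -> float:
    """Probability that a majority-vote learner associates the planted
    signal with the collective's target label (label 1)."""
    wins = 0
    for _ in range(trials):
        collective = rng.random(n) < alpha          # who participates
        # Non-participants carry the signal by chance 10% of the time and
        # are labeled 1 only 40% of the time; participants always plant
        # the signal and label it 1. (All rates are illustrative.)
        signal = collective | (rng.random(n) < 0.10)
        labels = np.where(collective, 1, (rng.random(n) < 0.40).astype(int))
        wins += labels[signal].mean() > 0.5
    return wins / trials

for alpha in (0.005, 0.01, 0.02, 0.04, 0.08):
    print(f"alpha={alpha:.3f}  success rate={success_rate(alpha):.2f}")
```

In this toy setup the learned association flips to the collective's target once roughly two percent of users participate; the theoretical framework characterizes such thresholds for general learners and strategies.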

When building societal-scale machine learning systems, training time and resource constraints can become a critical bottleneck for dynamic system optimization. Efficient training algorithms that are aware of distributed architectures, interconnect topologies, memory constraints, and accelerator units form an important building block for using available resources most efficiently. As part of my PhD research, we demonstrated that system-aware algorithms can reduce training time by several orders of magnitude compared to standard system-agnostic methods. Today, the innovations of my PhD research form the backbone of the IBM Snap ML library, which has been integrated with several of IBM's core AI products.
IBM Snap Machine Learning

Snap ML is a library that provides resource-efficient and fast training of popular machine learning models on modern computing systems.
>400k downloads on PyPI
https://www.zurich.ibm.com/snapml/
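Snap ML deliberately mirrors the scikit-learn estimator interface so that it can serve as a drop-in replacement in existing pipelines. A minimal usage sketch follows; the use_gpu flag is quoted from memory, so check the Snap ML documentation for the exact constructor signature:

```python
# Minimal Snap ML usage sketch (pip install snapml); the estimator API
# follows scikit-learn conventions: construct, fit, predict.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from snapml import LogisticRegression

X, y = make_classification(n_samples=100_000, n_features=50, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(use_gpu=False)  # set True to offload to a GPU
clf.fit(X_train, y_train)
accuracy = (clf.predict(X_test) == y_test).mean()
print(f"test accuracy: {accuracy:.3f}")
```

The system-aware optimizations happen behind this familiar interface, which is what makes the speedups easy to adopt in existing workflows.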

Publications and Preprints

* denotes alphabetical author order
Decline Now: A combinatorial model for algorithmic collective action
D.Sigg, M.Hardt and C.Mendler-Dünner
arXiv, 2024.
Evaluating language models as risk scores
A.Cruz, M.Hardt and C.Mendler-Dünner
Advances in Neural Information Processing Systems (NeurIPS), 2024.
Questioning the Survey Responses of Large Language Models
R.Dominguez-Olmedo, M.Hardt and C.Mendler-Dünner
Advances in Neural Information Processing Systems (NeurIPS -- oral), 2024.
An engine not a camera: Measuring performative power of online search
C.Mendler-Dünner, G.Carovano and M.Hardt
Advances in Neural Information Processing Systems (NeurIPS), 2024.
Algorithmic Collective Action in Recommender Systems: Promoting Songs by Reordering Playlists
J.Baumann and C.Mendler-Dünner
European Workshop on Algorithmic Fairness (EWAF), 2024.
Advances in Neural Information Processing Systems (NeurIPS), 2024.
Causal Inference out of Control: Estimating Performativity without Treatment Randomization
G.Cheng, M.Hardt and C.Mendler-Dünner
International Conference on Machine Learning (ICML), 2024.
Performative Prediction: Past and Future
M.Hardt and C.Mendler-Dünner
arXiv, 2023.
Collaborative Learning via Prediction Consensus
D.Fan, C.Mendler-Dünner and M.Jaggi
Advances in Neural Information Processing Systems (NeurIPS), 2023.
Algorithmic Collective Action in Machine Learning
M.Hardt*, E.Mazumdar*, C.Mendler-Dünner* and T.Zrnic*
International Conference on Machine Learning (ICML), 2023.
Anticipating Performativity by Predicting from Predictions
C.Mendler-Dünner, F.Ding and Y.Wang
Advances in Neural Information Processing Systems (NeurIPS), 2022.
Performative Power
M.Hardt*, M.Jagadeesan* and C.Mendler-Dünner*
Advances in Neural Information Processing Systems (NeurIPS), 2022.
Regret Minimization with Performative Feedback
M.Jagadeesan, T.Zrnic and C.Mendler-Dünner
International Conference on Machine Learning (ICML), 2022.
Symposium on Foundations of Responsible Computing (FORC), 2022.
Test-time Collective Prediction
C.Mendler-Dünner, W.Guo, S.Bates and M.I.Jordan
Advances in Neural Information Processing Systems (NeurIPS), 2021.
Alternative Microfoundations for Strategic Classification
M.Jagadeesan, C.Mendler-Dünner and M.Hardt
International Conference on Machine Learning (ICML), 2021.
Differentially Private Stochastic Coordinate Descent
G.Damaskinos, C.Mendler-Dünner, R.Guerraoui, N.Papandreou and T.Parnell
AAAI Conference on Artificial Intelligence (AAAI), 2021.
Stochastic Optimization for Performative Prediction
C.Mendler-Dünner*, J.C.Perdomo*, T.Zrnic* and M.Hardt
Advances in Neural Information Processing Systems (NeurIPS), 2020.
Performative Prediction
J.C.Perdomo*, T.Zrnic*, C.Mendler-Dünner and M.Hardt
International Conference on Machine Learning (ICML), 2020.
Randomized Block-Diagonal Preconditioning for Parallel Learning
C.Mendler-Dünner and A.Lucchi
International Conference on Machine Learning (ICML), 2020.
SySCD: A System-Aware Parallel Coordinate Descent Algorithm
N.Ioannou*, C.Mendler-Dünner* and T.Parnell
Advances in Neural Information Processing Systems (NeurIPS -- Spotlight), 2019.
On Linear Learning with Manycore Processors
E.Wszola, C.Mendler-Dünner, M.Jaggi and M.Püschel
IEEE International Conference on High Performance Computing (HiPC -- best paper finalist), 2019.
System-Aware Algorithms for Machine Learning
C.Mendler-Dünner
ETH Research Collection (PhD Thesis -- ETH medal), 2019.
Snap ML: A Hierarchical Framework for Machine Learning
C.Dünner*, T.Parnell*, D.Sarigiannis, N.Ioannou, A.Anghel, G.Ravi, M.Kandasamy and H.Pozidis
Advances in Neural Information Processing Systems (NeurIPS), 2018.
A Distributed Second-Order Algorithm You Can Trust
C.Dünner, M.Gargiani, A.Lucchi, A.Bian, T.Hofmann and M.Jaggi
International Conference on Machine Learning (ICML), 2018.
Addressing Interpretability and Cold-Start in Matrix Factorization for Recommender Systems
C.Dünner*, M.Vlachos*, R.Heckel, V.Vasileiadis, T.Parnell and K.Atasu
IEEE Transactions on Knowledge and Data Engineering (TKDE), 2018.
Tera-Scale Coordinate Descent on GPUs
T.Parnell, C.Dünner, K.Atasu, M.Sifalakis and H.Pozidis
Journal of Future Generation Computer Systems (FGCS), 2018.
Efficient Use of Limited-Memory Accelerators for Linear Learning on Heterogeneous Systems
C.Dünner, T.Parnell and M.Jaggi
Advances in Neural Information Processing Systems (NIPS), 2017.
Understanding and Optimizing the Performance of Distributed Machine Learning Applications on Apache Spark
C.Dünner, T.Parnell, K.Atasu, M.Sifalakis and H.Pozidis
IEEE International Conference on Big Data (IEEE Big Data), 2017.
High-Performance Recommender System Training Using Co-Clustering on CPU/GPU Clusters
K.Atasu, T.Parnell, C.Dünner, M.Vlachos and H.Pozidis
International Conference on Parallel Processing (ICPP), 2017.
Scalable and Interpretable Product Recommendations via Overlapping Co-Clustering
R.Heckel, M.Vlachos, T.Parnell and C.Dünner
IEEE International Conference on Data Engineering (ICDE), 2017.
Primal-Dual Rates and Certificates
C.Dünner, S.Forte, M.Takac and M.Jaggi
International Conference on Machine Learning (ICML), 2016.

Peer-reviewed Workshop Contributions

Questioning the Survey Responses of Large Language Models
R.Dominguez-Olmedo, M.Hardt and C.Mendler-Dünner
Workshop on Reliable and Responsible Foundation Models (R2-FM@ICLR -- oral), 2024.
Revisiting Design Choices in Proximal Policy Optimization
C.C.-Y.Hsu, C.Mendler-Dünner and M.Hardt
Workshop on Real World Challenges in RL (RWRL@NeurIPS), 2020.
Differentially Private Stochastic Coordinate Descent
G.Damaskinos, C.Mendler-Dünner, R.Guerraoui, N.Papandreou and T.Parnell
Workshop on Privacy Preserving ML (PPML@NeurIPS), 2020.
Breadth-first, Depth-next Training of Random Forests
A.Anghel*, N.Ioannou*, T.Parnell, N.Papandreou, C.Mendler-Dünner and H.Pozidis
Workshop on Systems for ML (MLSys@NeurIPS), 2019.
Snap ML
C.Mendler-Dünner and A.Anghel
Women in Machine Learning Workshop (WiML@NeurIPS), 2018.
Sampling Acquisition Functions for Batch Bayesian Optimization
A.De Palma, C.Mendler-Dünner, T.Parnell, A.Anghel and H.Pozidis
Workshop on Bayesian Nonparametrics (BNP@NeurIPS), 2018.
Parallel training of linear models without compromising convergence
N.Ioannou, C.Mendler-Dünner, K.Kourtis and T.Parnell
Workshop on Systems for ML (MLSys@NeurIPS), 2018.
Large-Scale Stochastic Learning using GPUs
T.Parnell, C.Dünner, K.Atasu, M.Sifalakis and H.Pozidis
IEEE International Workshop on Parallel and Distributed Computing for Large Scale Machine Learning and Big Data Analytics (ParLearning), 2017.

Patents

US20210264320A1 - T. Parnell, A. Anghel, N. Ioannou, N. Papandreou, C. Mendler-Dünner, D. Sarigiannis, H. Pozidis.
US11562270B2 - M. Kaufmann, T. Parnell, A. Kourtis, C. Mendler-Dünner.
US11573803B2 - N. Ioannou, C. Dünner, T. Parnell.
US11295236B2 - C. Dünner, T. Parnell, H. Pozidis.
US11315035B2 - T. Parnell, C. Dünner, H. Pozidis, D. Sarigiannis.
US11461694B2 - T. Parnell, C. Dünner, D. Sarigiannis, H. Pozidis.
US11301776B2 - C. Dünner, T. Parnell, H. Pozidis.
US10147103B2 - C. Dünner, T. Parnell, H. Pozidis, V. Vasileiadis, M. Vlachos.
US10839255B2 - K. Atasu, C. Dünner, T. Mittelholzer, T. Parnell, H. Pozidis, M. Vlachos.