Towards Trustworthy Predictions: Theory and Applications of Calibration for Modern AI
5 May 2026
Tangier, Morocco
About the workshop
This workshop focuses on calibration, the alignment between predicted probabilities and observed frequencies, which is fundamental to reliable decision-making and trust in modern AI systems. Bringing together researchers from machine learning, statistics, theoretical computer science, and applied domains such as medicine and forecasting, the workshop aims to unify perspectives on calibration theory, evaluation, and practice. Through a tutorial, invited talks, contributed posters, and interactive discussions, we seek to foster a shared understanding of calibration and to build a lasting cross-disciplinary community around trustworthy probabilistic prediction.
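As a concrete illustration of what alignment between predicted probabilities and observed frequencies means, the following minimal Python sketch estimates the binned expected calibration error (ECE) of a binary forecaster. The data, bin count, and function name are illustrative choices, not a prescribed evaluation protocol:

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Binned ECE: average |observed frequency - mean confidence|
    over equal-width probability bins, weighted by bin mass."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (probs > lo) & (probs <= hi)
        if mask.any():
            gap = abs(labels[mask].mean() - probs[mask].mean())
            ece += mask.mean() * gap
    return ece

# Toy forecaster that predicts 0.9 for events occurring ~70% of the time:
rng = np.random.default_rng(0)
probs = np.full(1000, 0.9)
labels = (rng.random(1000) < 0.7).astype(float)
print(expected_calibration_error(probs, labels))  # roughly 0.2: miscalibrated
```

A perfectly calibrated forecaster would produce an ECE near zero, since within every confidence bin the observed event frequency would match the stated probability.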
Call for papers
The primary aim of this workshop is to bring together researchers and practitioners working on calibration across machine learning, statistics, theoretical computer science, and applied domains. We seek to clarify foundational questions, align evaluation practices, and explore the practical implications of calibration for reliable and trustworthy AI systems.
Topics
The potential topics include, but are not limited to:
- Foundations of calibration and probabilistic forecasting
- Calibration metrics and evaluation methodologies
- Proper scoring rules and decision-theoretic perspectives
- Calibration in high-dimensional and multiclass settings
- Post-hoc and end-to-end calibration methods (see the temperature-scaling sketch after this list)
- Calibration under distribution shift
- Calibration for generative models and large language models
- Calibration in high-stakes applications (e.g., medicine, forecasting, finance)
- Connections between calibration, uncertainty, and trust in AI
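To make one of these topics concrete: temperature scaling, perhaps the simplest post-hoc calibration method, rescales a model's logits by a single scalar fitted on held-out data. The sketch below uses synthetic logits and labels together with SciPy's bounded scalar minimizer; all names and data are illustrative assumptions, not code from any particular system:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(logits, labels):
    """Fit a single temperature T > 0 by minimizing the negative
    log-likelihood of softmax(logits / T) on held-out data."""
    def nll(t):
        p = softmax(logits / t)
        return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    return minimize_scalar(nll, bounds=(0.05, 20.0), method="bounded").x

# Toy overconfident model: logits twice as large as they "should" be.
rng = np.random.default_rng(0)
true_logits = rng.normal(size=(500, 3))
labels = np.array([rng.choice(3, p=p) for p in softmax(true_logits)])
T = fit_temperature(2.0 * true_logits, labels)
print(T)  # close to 2: dividing by T undoes the overconfidence
```

A fitted temperature T > 1 indicates the raw logits were overconfident; dividing by T flattens the predicted distribution without changing the argmax prediction.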
Submissions
Submit to our workshop and win a free registration for AISTATS 2026!
We will offer a free conference registration to the best workshop submission led by a student. Don't miss the opportunity to showcase your work and attend the conference for free!
We invite submissions of short papers presenting recent work on calibration. Submissions are accepted through OpenReview.
If your paper on calibration (or a closely related topic) has already been accepted at the main AISTATS 2026 conference (congratulations!), you can register to present it at our poster session by filling in the following form: main conference paper track.
Important dates
- Call for contributions: January 12, 2026
- Submission deadline: February 20, 2026 (Anywhere on Earth)
- Notification of acceptance: Early March 2026
- Workshop date: May 5, 2026
Format
Submissions should be formatted using the AISTATS LaTeX style. Papers are limited to 4 pages (excluding references and appendices). The review process is double-blind. Accepted contributions will be presented as posters during the workshop. If you include an appendix, keep in mind that reviewers may not read it carefully; your main ideas and contributions should be understandable from the main text alone.
Policies
Submissions under review at other venues are allowed. All accepted papers are non-archival and will be made publicly available on OpenReview.
Speakers
Peter Flach
Tutorial – Foundations of Calibration
Ewout W. Steyerberg
Keynote – Trustworthy Patient-level Predictions
Johanna Ziegel
Keynote – Calibration of Probabilistic Predictions
Futoshi Futami
Invited Talk – Statistical Perspectives
Florian Buettner
Invited Talk – Calibrated Uncertainty for Biomedical Applications
Nika Haghtalab
Invited Talk – Multi-objective Learning
Schedule
Tutorial – Peter Flach
Foundations of Calibration, Metrics, and Open Questions
Coffee Break
Keynote – Ewout W. Steyerberg
Towards Trustworthy Patient-level Predictions: A Multiverse of Uncertainty and Heterogeneity
Invited Talk – Futoshi Futami
Statistical Perspectives on Calibration
Lunch Break
Keynote – Johanna Ziegel
Calibration of Probabilistic Predictions
Invited Talk – Florian Buettner
Leveraging Calibrated Uncertainty Estimates for Biomedical Applications
Poster Session
Contributed Posters Showcasing Recent Work on Calibration
Coffee Break
Invited Talk – Nika Haghtalab
Multi-objective Learning: An Algorithmic Toolbox for Optimal Predictions on Any Downstream Task and Loss
Open Problems Session
Moderated Discussions on Open Challenges in Calibration
Organizers
Sebastian Gruber
KU Leuven
Teodora Popordanoska
KU Leuven
Yifan Wu
Microsoft Research
Eugène Berta
INRIA
Francis Bach
INRIA