Towards Trustworthy Predictions: Theory and Applications of Calibration for Modern AI

5 May 2026

Tangier, Morocco

About the workshop

This workshop focuses on calibration, the alignment between predicted probabilities and observed frequencies, which is fundamental to reliable decision-making and trust in modern AI systems. Bringing together researchers from machine learning, statistics, theoretical computer science, and applied domains such as medicine and forecasting, the workshop aims to unify perspectives on calibration theory, evaluation, and practice. Through a tutorial, invited talks, contributed posters, and interactive discussions, we seek to foster a shared understanding of calibration and to build a lasting cross-disciplinary community around trustworthy probabilistic prediction.
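
Concretely (using generic notation f, X, Y, p that is not fixed by the workshop text), the standard formalization of calibration for a binary probabilistic predictor can be stated as:

% Calibration of a binary predictor f: conditioned on the predicted
% probability being p, the event Y = 1 occurs with frequency exactly p.
\[
  \mathbb{P}\bigl(Y = 1 \mid f(X) = p\bigr) = p
  \qquad \text{for all } p \in [0, 1].
\]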

Call for papers

The primary aim of this workshop is to bring together researchers and practitioners working on calibration across machine learning, statistics, theoretical computer science, and applied domains. We seek to clarify foundational questions, align evaluation practices, and explore the practical implications of calibration for reliable and trustworthy AI systems.

Topics

Topics of interest include, but are not limited to:

  • Foundations of calibration and probabilistic forecasting
  • Calibration metrics and evaluation methodologies
  • Proper scoring rules and decision-theoretic perspectives
  • Calibration in high-dimensional and multiclass settings
  • Post-hoc and end-to-end calibration methods
  • Calibration under distribution shift
  • Calibration for generative models and large language models
  • Calibration in high-stakes applications (e.g., medicine, forecasting, finance)
  • Connections between calibration, uncertainty, and trust in AI

Submissions

🚨 Submit to our workshop and win a free registration for AISTATS 2026 🚨
We will offer a free conference registration for the best workshop submission led by a student. Don't miss the opportunity to showcase your work and attend the conference for free!

We invite submissions of short papers presenting recent work on calibration. Submissions are accepted through OpenReview.

If your paper on calibration (or a closely related topic) has already been accepted at the main AISTATS 2026 conference (congrats 🎉), you can register to present it at our poster session by filling in the following form: main conference paper track.

Important dates

  • Call for contributions: January 12, 2026
  • Submission deadline: February 20, 2026 (Anywhere on Earth)
  • Notification of acceptance: Early March 2026
  • Workshop date: May 5, 2026

Format

Submissions should be formatted using the AISTATS LaTeX style. Papers are limited to 4 pages (excluding references and appendices). The review process will be double-blind. Accepted contributions will be presented as posters during the workshop. If you include an appendix, keep in mind that reviewers are not expected to read it carefully; your main idea and contribution should be understandable from the main text alone.

Policies

Submissions under review at other venues are allowed. All accepted papers are non-archival and will be made publicly available on OpenReview.

Speakers

Peter Flach
Tutorial — Foundations of Calibration

Ewout W. Steyerberg
Keynote — Trustworthy Patient-level Predictions

Johanna Ziegel
Keynote — Calibration of Probabilistic Predictions

Futoshi Futami
Invited Talk — Statistical Perspectives on Calibration

Florian Buettner
Invited Talk — Calibrated Uncertainty for Biomedical Applications

Nika Haghtalab
Invited Talk — Multi-objective Learning

Schedule

Tutorial — Peter Flach
Foundations of Calibration, Metrics, and Open Questions

Coffee Break

Keynote — Ewout W. Steyerberg
Towards Trustworthy Patient-level Predictions: A Multiverse of Uncertainty and Heterogeneity

Invited Talk — Futoshi Futami
Statistical Perspectives on Calibration

Lunch Break

Keynote — Johanna Ziegel
Calibration of Probabilistic Predictions

Invited Talk — Florian Buettner
Leveraging Calibrated Uncertainty Estimates for Biomedical Applications

Poster Session
Contributed Posters Showcasing Recent Work on Calibration

Coffee Break

Invited Talk — Nika Haghtalab
Multi-objective Learning: An Algorithmic Toolbox for Optimal Predictions on Any Downstream Task and Loss

Open Problems Session
Moderated Discussions on Open Challenges in Calibration

Organizers

Sebastian Gruber — KU Leuven
Teodora Popordanoska — KU Leuven
Yifan Wu — Microsoft Research
Eugène Berta — INRIA
Francis Bach — INRIA
Edgar Dobriban — University of Pennsylvania