Towards Trustworthy Predictions: Theory and Applications of Calibration for Modern AI

5 May 2026

Tangier, Morocco

About the workshop

This workshop focuses on calibration, the alignment between predicted probabilities and observed frequencies, which is fundamental to reliable decision-making and trust in modern AI systems. Bringing together researchers from machine learning, statistics, theoretical computer science, and applied domains such as medicine and forecasting, the workshop aims to unify perspectives on calibration theory, evaluation, and practice. Through a tutorial, invited talks, contributed posters, and interactive discussions, we seek to foster a shared understanding of calibration and to build a lasting cross-disciplinary community around trustworthy probabilistic prediction.
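
For concreteness, one standard way to state this alignment for binary outcomes (a textbook formalization, not a definition specific to this workshop) is that a predictor f is calibrated when, among the cases it assigns probability p, the event actually occurs a fraction p of the time:

P(Y = 1 | f(X) = p) = p for all p ∈ [0, 1].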

Call for papers

The primary aim of this workshop is to bring together researchers and practitioners working on calibration across machine learning, statistics, theoretical computer science, and applied domains. We seek to clarify foundational questions, align evaluation practices, and explore the practical implications of calibration for reliable and trustworthy AI systems.

Topics

Potential topics include, but are not limited to:

  • Foundations of calibration and probabilistic forecasting
  • Calibration metrics and evaluation methodologies
  • Proper scoring rules and decision-theoretic perspectives
  • Calibration in high-dimensional and multiclass settings
  • Post-hoc and end-to-end calibration methods
  • Calibration under distribution shift
  • Calibration for generative models and large language models
  • Calibration in high-stakes applications (e.g., medicine, forecasting, finance)
  • Connections between calibration, uncertainty, and trust in AI

Submission

We invite submissions of short papers presenting recent work on calibration. Submissions will be handled via OpenReview. The submission link will be announced soon.

Important Dates

  • Call for contributions: Late January 2026
  • Submission deadline: Late February 2026
  • Notification of acceptance: Early March 2026
  • Workshop date: 5 May 2026

Format

Submissions should be formatted using the AISTATS LaTeX style. Papers are limited to 4 pages (excluding references). The review process will be double-blind. Accepted contributions will be presented as posters during the workshop.
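
As a rough sketch, a submission skeleton might look like the following; the style-file name and macros shown are assumptions based on typical AISTATS author kits, so please rely on the official kit once the submission link is announced:

\documentclass{article}
% Hypothetical year-specific style file; use the actual file from the AISTATS kit.
\usepackage{aistats2026}
\begin{document}
% Double-blind review: omit author names and affiliations in the submitted version.
\twocolumn[\aistatstitle{Your Title Here}]
Up to four pages of content; references do not count toward the limit.
\end{document}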

Policies

Submissions under review at other venues are allowed. All accepted papers are non-archival and will be made publicly available on OpenReview.

Speakers

Peter Flach
Tutorial — Foundations of calibration

Ewout W. Steyerberg
Keynote — Trustworthy patient-level predictions

Johanna Ziegel
Keynote — Calibration & scoring rules

Futoshi Futami
Invited Talk — Statistical perspectives

Florian Buettner
Invited Talk — Biomedical AI calibration

Nika Haghtalab
Invited Talk — ML & theory of calibration

Schedule

Tutorial — Peter Flach
Foundations of calibration, metrics, and open questions.

Coffee Break

Keynote — Ewout W. Steyerberg
Towards trustworthy patient-level predictions: uncertainty and heterogeneity.

Invited Talk — Futoshi Futami
Statistical perspectives on calibration.

Lunch Break

Keynote — Johanna Ziegel
Calibration and proper scoring rules.

Invited Talk — Florian Buettner
Applied aspects of calibration in biomedical AI.

Poster Session
Contributed posters showcasing recent work on calibration.

Coffee Break

Invited Talk — Nika Haghtalab
Machine learning and theoretical perspectives on calibration.

Open Problems Session
Moderated discussions on open challenges in calibration.

Organizers

Sebastian Gruber
KU Leuven

Teodora Popordanoska
KU Leuven

Yifan Wu
Microsoft Research

Eugène Berta
INRIA

Francis Bach
INRIA

Edgar Dobriban
University of Pennsylvania