Quantifying uncertainty in machine learning predictions is essential for trustworthy and robust decision-making. However, obtaining valid uncertainty estimates with non-asymptotic coverage guarantees across diverse data distributions remains a significant challenge.
Conformal Prediction offers a flexible, model-agnostic framework for uncertainty quantification, providing distribution-free, finite-sample statistical guarantees. Given a user-specified error level α, it constructs prediction sets that contain the true outcome with probability at least 1 − α, regardless of the underlying data distribution or predictive model.
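To make the guarantee concrete, here is a minimal sketch of split conformal prediction for univariate regression. It assumes a fitted point predictor `model` with a scikit-learn-style `.predict()` method and a held-out calibration set; the names `model`, `X_calib`, `y_calib`, `X_test`, and `alpha` are illustrative, and split conformal is just one standard variant of the framework.

```python
import numpy as np

def split_conformal_interval(model, X_calib, y_calib, X_test, alpha=0.1):
    """Prediction intervals with marginal coverage >= 1 - alpha,
    assuming calibration and test points are exchangeable."""
    # Nonconformity scores: absolute residuals on the calibration set.
    scores = np.abs(y_calib - model.predict(X_calib))
    n = len(scores)
    # Finite-sample-corrected quantile level ceil((n + 1) * (1 - alpha)) / n.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q_hat = np.quantile(scores, q_level, method="higher")
    preds = model.predict(X_test)
    return preds - q_hat, preds + q_hat
```

Any point predictor can be plugged in; the coverage statement is marginal over the randomness in the calibration and test data, not conditional on a particular test point.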
This talk introduces the core principles of conformal prediction and examines recent extensions to multi-response regression, where modeling dependencies among multiple outputs is key. We will review several methods for constructing prediction regions in this setting, highlighting their practical tradeoffs and theoretical properties.
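As a simple illustration of the multi-response setting (an illustrative baseline, not any specific method from the talk): replacing the absolute residual above with the Euclidean norm of the residual vector yields ball-shaped prediction regions centered at the point prediction. Such a score ignores dependence between outputs, which is precisely the kind of limitation that richer multi-response methods aim to address.

```python
import numpy as np

def split_conformal_ball(model, X_calib, Y_calib, X_test, alpha=0.1):
    """Centers and a common radius such that the Euclidean ball around each
    prediction covers the true response vector with probability >= 1 - alpha,
    assuming exchangeable calibration and test data."""
    residuals = Y_calib - model.predict(X_calib)   # shape (n, d)
    scores = np.linalg.norm(residuals, axis=1)     # one scalar score per point
    n = len(scores)
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    radius = np.quantile(scores, q_level, method="higher")
    return model.predict(X_test), radius
```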