UQ for AI Safety and Fairness

This project collects a line of my work on uncertainty quantification under distribution shift, especially covariate shift. We evaluate the quality of uncertainty estimates with a range of metrics, from marginal coverage and sharpness to (subgroup) expected calibration error. We aim to develop rigorous, principled methods that are also practical in real-world applications. For example, we work to improve and better balance the sample complexity and computational complexity of popular distribution-free methods such as conformal prediction. We also use data beyond the traditional labeled data of supervised learning for model calibration; for instance, we leverage human annotations to better “align” LLMs with human uncertainty.
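To make the metrics above concrete, here is a minimal Python sketch of split conformal prediction on toy data, checking empirical marginal coverage and average interval width (sharpness). It is only an illustration of the standard technique under simple assumptions; the data, point predictor, and variable names are hypothetical and not taken from the papers below.

```python
# Minimal sketch of split conformal prediction (illustrative toy data,
# not from the featured papers).
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = x + noise, with a stand-in point predictor.
x_cal, x_test = rng.normal(size=500), rng.normal(size=1000)
y_cal = x_cal + rng.normal(scale=0.5, size=500)
y_test = x_test + rng.normal(scale=0.5, size=1000)
predict = lambda x: x  # placeholder for any fitted model

alpha = 0.1  # target miscoverage level (aim for 90% marginal coverage)

# Nonconformity scores on the held-out calibration split.
scores = np.abs(y_cal - predict(x_cal))

# Conformal quantile with the usual finite-sample correction.
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction intervals, empirical marginal coverage, and sharpness on test data.
lower, upper = predict(x_test) - q, predict(x_test) + q
coverage = np.mean((y_test >= lower) & (y_test <= upper))
width = np.mean(upper - lower)
print(f"coverage ~ {coverage:.3f} (target {1 - alpha}), avg width ~ {width:.3f}")
```

Under covariate shift between calibration and test data, this vanilla recipe no longer guarantees the target coverage, which is one motivation for the methods studied in this line of work.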

Here are some featured papers in this line of work: