Distributionally Robust Learning under Covariate Shift

This project covers a series of my work, ranging from the fundamentals of distributionally robust learning under covariate shift to its integration into real-world safe exploration and domain adaptation tasks.

Distributionally robust learning involves a minimax game between a predictor and an adversary, which is typically constrained by the data. Instead of focusing on robustness against adversarial perturbations of the covariate variables, as in much of the recent literature, I focus on using conditional output distributions as adversaries. This formulation has two major advantages: (1) It provides a conservative way to quantify model uncertainty under covariate shift, which benefits data collection and experimental design in a variety of real-world tasks. (2) It yields consistent predictors for minimizing non-smooth loss functions, which is usually elusive under the empirical risk minimization framework.
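The minimax game described above can be sketched as follows; the notation here is illustrative rather than taken from any specific paper. The predictor $\hat{P}(y \mid x)$ minimizes, and the adversary $\check{P}(y \mid x)$ maximizes, the expected target-domain loss, with the adversary constrained to a set $\Xi$ of conditional distributions whose source-domain feature statistics match the training data:

```latex
\min_{\hat{P}(y \mid x)} \; \max_{\check{P}(y \mid x) \in \Xi} \;
\mathbb{E}_{x \sim P_{\mathrm{tgt}}(x),\; y \sim \check{P}(y \mid x)}
\bigl[ \ell\bigl(\hat{P}(\cdot \mid x), y\bigr) \bigr],
\quad
\Xi = \Bigl\{ \check{P} :
\mathbb{E}_{x \sim P_{\mathrm{src}}(x),\; y \sim \check{P}(y \mid x)}
\bigl[ \phi(x, y) \bigr] = \tilde{\phi} \Bigr\},
```

where $P_{\mathrm{src}}$ and $P_{\mathrm{tgt}}$ are the source and target covariate distributions, $\ell$ is the loss, $\phi$ is a feature function, and $\tilde{\phi}$ denotes empirical feature statistics; all of these symbols are assumptions made for this sketch.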

Here are some featured papers in this line of work: