Distributionally Robust Learning under Covariate Shift

This project covers a series of my work, ranging from the fundamentals of distributionally robust learning under covariate shift to its integration into real-world safe exploration and domain adaptation tasks.

Distributionally robust learning involves a minimax game between a predictor and an adversary, where the adversary is typically subject to constraints derived from data. Instead of focusing on robustness against adversarial perturbations of the covariate variables, as in much of the recent literature, I focus on using conditional output distributions as adversaries. This formulation has two major advantages: (1) it provides a conservative way to quantify model uncertainty under covariate shift, which benefits data collection and experimental design in various real-world tasks; (2) it yields consistent predictors for minimizing non-smooth loss functions, which is often elusive within the empirical risk minimization framework.
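
In schematic form (the notation here is mine, following the common structure of the papers below rather than any single one), the game is

$$\min_{\hat{P}(\hat{y}\mid x)} \;\max_{\check{P}(\check{y}\mid x)\,\in\,\Xi}\; \mathbb{E}_{x \sim P_{\mathrm{trg}}}\, \mathbb{E}_{\check{y} \sim \check{P}(\cdot\mid x)} \big[\ell\big(\hat{P}(\cdot\mid x), \check{y}\big)\big],$$

where the predictor $\hat{P}$ is evaluated under the target covariate distribution $P_{\mathrm{trg}}$, and the constraint set $\Xi$ requires the adversary's conditional label distribution $\check{P}$ to match feature statistics $\phi(x,y)$ measured on source-distribution data: $\mathbb{E}_{x \sim P_{\mathrm{src}},\, \check{y} \sim \check{P}(\cdot\mid x)}[\phi(x,\check{y})] = \tilde{\mathbb{E}}[\phi(x,y)]$. The adversary thus plays the worst-case labeling consistent with the source data, while the predictor hedges against it on the target distribution.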

Here are some featured papers in this line of work:

  • In the vanilla supervised learning setting:

    • The first paper to use log loss in this framework under covariate shift, introducing the robust bias-aware classifier (a minimal sketch of this prediction rule appears after this list): Anqi Liu and Brian D. Ziebart, “Robust Classification under Sample Selection Bias”, NeurIPS 2014 (spotlight).

    • The first paper to minimize non-smooth loss functions in this framework, providing consistent analytical solutions to constrained minimax games: Rizal Fathony, Anqi Liu, Kaiser Asif, and Brian D. Ziebart, “Adversarial Multiclass Classification: A Risk Minimization Perspective”, NeurIPS 2016.

    • The first paper to solve regression problems in this framework under covariate shift, directly predicting a Gaussian mean and variance for uncertainty estimation in continuous problems: Xiangli Chen, Mathew Monfort, Anqi Liu, and Brian D. Ziebart, “Robust Covariate Shift Regression”, AISTATS 2016.

    • The first paper providing unified solutions for the framework under general loss settings, such as cost-sensitive and abstaining loss functions: Rizal Fathony, Kaiser Asif, Anqi Liu, Mohammad Ali Bashiri, Wei Xing, Sima Behpour, Xinhua Zhang, and Brian D. Ziebart, “Consistent Robust Adversarial Prediction for General Multiclass Classification”, arXiv preprint, 2018.

  • In the interactive learning setting:

  • In the large-scale learning setting:

  • In the fair learning setting:

    • The first paper to address the intersection of covariate shift and fairness from a robust learning perspective: Ashkan Rezaei, Anqi Liu, Omid Memarrast, and Brian D. Ziebart, “Robust Fairness Under Covariate Shift”, AAAI 2021.
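
As a rough illustration of the conservative, bias-aware predictions referenced in the first bullet above, here is a minimal sketch of a robust-bias-aware-style prediction rule under log loss. The helper names (`feature_fn`, `density_ratio_fn`) are mine, and it assumes the parameters `theta` have already been fit by matching feature statistics on source data; this is a schematic of the idea, not the papers' implementation.

```python
import numpy as np

def rba_predict_proba(x, theta, feature_fn, density_ratio_fn, n_classes):
    """Sketch of a robust bias-aware style prediction rule.

    The class potentials theta @ phi(x, y) are scaled by the
    source/target density ratio p_src(x) / p_trg(x), so predictions
    revert toward uniform (maximal uncertainty) in regions poorly
    covered by source data -- the conservative behavior described above.
    """
    r = density_ratio_fn(x)  # estimate of p_src(x) / p_trg(x)
    logits = np.array([r * theta @ feature_fn(x, y) for y in range(n_classes)])
    logits -= logits.max()   # subtract the max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()
```

The key design point is the density ratio in the exponent: where the target density dominates the source density, the ratio shrinks, the logits flatten, and the predicted distribution approaches uniform, so the model expresses certainty only where training data actually exists.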