This project covers a line of my work, ranging from the fundamentals of distributionally robust learning under covariate shift to its integration into real-world safe exploration and domain adaptation tasks.
Distributionally robust learning involves a minimax game between a predictor and an adversary, where the adversary is typically subject to constraints derived from data. Instead of focusing on robustness against adversarial perturbations of the covariate variables, as in much of the recent literature, I focus on using conditional output distributions as adversaries. This formulation has two major advantages: (1) It provides a conservative way to quantify model uncertainty under covariate shift, which benefits data collection and experimental design in various real-world tasks. (2) It provides consistent predictors for minimizing non-smooth loss functions, which is often elusive under the empirical risk minimization framework.
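As a rough sketch of this formulation (notation is mine, simplified from the covariate-shift setting of the papers below): given a source density $P_{\mathrm{src}}(x)$, a target density $P_{\mathrm{trg}}(x)$, and a feature function $\phi(x,y)$, the predictor $\hat{P}$ and the adversarial conditional label distribution $\check{P}$ play

```latex
\min_{\hat{P}(y \mid x)} \;\max_{\check{P}(y \mid x) \in \Xi} \;
\mathbb{E}_{x \sim P_{\mathrm{trg}},\; y \sim \check{P}(\cdot \mid x)}
\Big[ \operatorname{loss}\big(\hat{P}(\cdot \mid x),\, y\big) \Big],
\quad \text{where} \quad
\Xi = \Big\{ \check{P} :\;
\mathbb{E}_{x \sim P_{\mathrm{src}},\; y \sim \check{P}(\cdot \mid x)}\big[\phi(x,y)\big]
= \tilde{\mathbb{E}}_{\mathrm{src}}\big[\phi(x,y)\big] \Big\}.
```

The adversary is constrained to match the empirical feature statistics $\tilde{\mathbb{E}}_{\mathrm{src}}[\phi]$ measured on the (biased) source data, while the loss is evaluated on the target distribution; for log loss this game yields a parametric predictor whose feature weights are scaled by the density ratio $P_{\mathrm{src}}(x)/P_{\mathrm{trg}}(x)$, so the model reverts toward a uniform (high-uncertainty) prediction where source data is scarce.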
Here are some featured papers in this line of work:
In the vanilla supervised learning setting:
The first paper to use log loss in this framework under covariate shift, introducing the robust bias-aware classifier: Anqi Liu and Brian D. Ziebart, “Robust Classification under Sample Selection Bias”, In NeurIPS 2014 (spotlight).
The first paper to minimize non-smooth loss functions in this framework, providing consistent analytical solutions for constrained minimax games: Rizal Fathony, Anqi Liu, Kaiser Asif, and Brian D. Ziebart, “Adversarial Multiclass Classification: A Risk Minimization Perspective”, In NeurIPS 2016.
The first paper to solve regression problems in this framework under covariate shift, directly predicting Gaussian means and variances for uncertainty estimation in continuous-output problems: Xiangli Chen, Mathew Monfort, Anqi Liu, and Brian D. Ziebart, “Robust Covariate Shift Regression”, In AISTATS 2016.
The first paper to provide unified solutions for this framework under general loss settings, such as cost-sensitive and abstaining loss functions: Rizal Fathony, Kaiser Asif, Anqi Liu, Mohammad Ali Bashiri, Wei Xing, Sima Behpour, Xinhua Zhang, and Brian D. Ziebart, “Consistent Robust Adversarial Prediction for General Multiclass Classification”, On arXiv 2018.
In the interactive learning setting:
The first paper to tackle the sample bias problem in active learning from a robust learning point of view: Anqi Liu, Lev Reyzin, and Brian D. Ziebart, “Shift-Pessimistic Active Learning using Robust Bias-Aware Prediction”, In AAAI 2015.
The first paper to integrate this conservative uncertainty quantification to improve safe-exploration efficiency and constraint satisfaction in real-world systems: Anqi Liu, Guanya Shi, Soon-Jo Chung, Anima Anandkumar, and Yisong Yue, “Robust Regression for Safe Exploration in Control”, In L4DC 2020.
The first paper to provide robust dynamics estimation and end-to-end guarantees for safe planning in stochastic control systems: Yashwanth Kumar Nakka, Anqi Liu, Guanya Shi, Anima Anandkumar, Yisong Yue, and Soon-Jo Chung, “Chance-Constrained Trajectory Optimization for Safe Exploration and Learning of Nonlinear Systems”, RA-L 2020.
In the large-scale learning setting:
The first paper to scale up distributionally robust learning under covariate shift to large-scale sim-to-real unsupervised domain adaptation tasks: Haoxuan Wang, Anqi Liu, Zhiding Yu, Yisong Yue, and Anima Anandkumar, “Distributionally Robust Learning for Unsupervised Domain Adaptation”, On arXiv 2020.
In the fair learning setting:
The first paper to address the intersection of covariate shift and fairness from a robust learning point of view: Ashkan Rezaei, Anqi Liu, Omid Memarrast, and Brian D. Ziebart, “Robust Fairness Under Covariate Shift”, In AAAI 2021.