I am an Assistant Professor in the Computer Science Department at the Whiting School of Engineering of Johns Hopkins University.
I am looking for motivated students to join my group. Details here.
My research interest lies in machine learning for trustworthy AI. I am broadly interested in developing principled machine learning algorithms for building more reliable, trustworthy, and human-compatible AI systems in the real world. This requires machine learning algorithms that are robust to changing data and environments, provide accurate and honest uncertainty estimates, and account for human preferences and values in interaction. I am particularly interested in high-stakes applications that concern the safety and societal impact of AI.
I develop, analyze, and apply methods in statistical machine learning, deep learning, and sequential decision making. One established line of work is distributionally robust learning under covariate shift. My recent projects cover different types of distribution shift, active learning, safe exploration, off-policy learning, fair machine learning, semi-supervised learning, cost-sensitive classification, and hierarchical classification.
I worked with Prof. Yisong Yue and Prof. Anima Anandkumar as a postdoc in the Department of Computing and Mathematical Sciences (CMS) at the California Institute of Technology (Caltech). Before that, I received my Ph.D. from the Department of Computer Science at the University of Illinois at Chicago (UIC), where I was fortunate to have Prof. Brian Ziebart as my advisor.
Selected Recent Papers:
Ashkan Rezaei, Anqi Liu, Omid Memarrast, and Brian D. Ziebart. “Robust Fairness Under Covariate Shift”, in AAAI 2021.
Eric Zhao, Anqi Liu, Anima Anandkumar, and Yisong Yue. “Active Learning under Label Shift”, in AISTATS 2021.
Yashwanth Kumar Nakka, Anqi Liu, Guanya Shi, Anima Anandkumar, Yisong Yue, and Soon-Jo Chung. “Chance-Constrained Trajectory Optimization for Safe Exploration and Learning of Nonlinear Systems”, in RA-L, 2020.
Haoxuan Wang, Anqi Liu, Zhiding Yu, Yisong Yue, and Anima Anandkumar. “Deep Distributionally Robust Learning for Calibrated Uncertainties under Domain Shift”, arXiv preprint, 2021.
Anqi Liu, Hao Liu, Tongxin Li, Saeed Karimi Bidhendi, Yisong Yue, and Anima Anandkumar. “Disentangling Observed Causal Effects from Latent Confounders using Method of Moments”, arXiv preprint, 2021.
Anqi Liu, Hao Liu, Anima Anandkumar, and Yisong Yue. “Distributionally Robust Off-Policy Evaluation”, PDF coming soon.
Anqi Liu, Guanya Shi, Soon-Jo Chung, Anima Anandkumar, and Yisong Yue. “Robust Regression for Safe Exploration in Control”, in L4DC 2020.
This project covers a series of my works, ranging from the fundamentals of distributionally robust learning under covariate shift to its integration into real-world safe exploration and domain adaptation tasks. Media Coverage: The Value of Saying ‘I Don’t Know’.
We aim to use AI techniques to build more trustworthy social media. We work on online trolling detection, public discussion monitoring, social network analysis, and related problems. Media Coverage: AI for #MeToo.