I am an Assistant Professor in the Computer Science Department at the Whiting School of Engineering of Johns Hopkins University. I am also affiliated with the Johns Hopkins Mathematical Institute for Data Science (MINDS) and the Johns Hopkins Institute for Assured Autonomy (IAA), and I collaborate extensively with the Center for Language and Speech Processing (CLSP) and the Laboratory for Computational Sensing and Robotics (LCSR).
My research lies in machine learning for trustworthy AI. I am broadly interested in developing principled machine learning algorithms for building more reliable, trustworthy, and human-compatible AI systems in the real world. This requires algorithms that are robust to changing data and environments, provide accurate and honest uncertainty estimates, and account for human preferences and values during interaction. I am particularly interested in high-stakes applications that concern the safety and societal impact of AI.
I develop, analyze, and apply methods in statistical machine learning, deep learning, and sequential decision making. One established line of work is distributionally robust learning under covariate shift. My recent projects cover different types of distribution shift, active learning, safe exploration, off-policy learning, and fair machine learning.
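For readers unfamiliar with the covariate shift setting, a useful point of reference is the classical importance-weighting baseline that distributionally robust approaches build on and aim to improve. The sketch below is purely illustrative, using synthetic data and hypothetical variable names; it is not code from any of my papers.

```python
# A minimal sketch of importance-weighted learning under covariate shift:
# reweight training examples by an estimated density ratio p_test(x)/p_train(x).
# All data and names here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic covariate shift: train and test inputs come from different
# Gaussians, but the labeling rule p(y|x) is shared.
X_train = rng.normal(loc=-1.0, scale=1.0, size=(500, 1))
X_test = rng.normal(loc=+1.0, scale=1.0, size=(500, 1))
label = lambda X: (X[:, 0] + 0.3 * rng.normal(size=len(X)) > 0).astype(int)
y_train, y_test = label(X_train), label(X_test)

# Step 1: estimate the density ratio with a domain classifier that
# separates train inputs (domain 0) from test inputs (domain 1).
# With equal sample sizes, P(d=1|x) / P(d=0|x) estimates p_test(x)/p_train(x).
X_dom = np.vstack([X_train, X_test])
d_dom = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])
dom_clf = LogisticRegression().fit(X_dom, d_dom)
p_test_given_x = dom_clf.predict_proba(X_train)[:, 1]
weights = p_test_given_x / np.clip(1.0 - p_test_given_x, 1e-6, None)

# Step 2: importance-weighted empirical risk minimization on the
# labeled training data, evaluated on the shifted test distribution.
model = LogisticRegression().fit(X_train, y_train, sample_weight=weights)
print("test accuracy:", model.score(X_test, y_test))
```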
I worked with Prof. Yisong Yue and Prof. Anima Anandkumar as a postdoc in the Department of Computing and Mathematical Sciences (CMS) at the California Institute of Technology (Caltech). Before that, I received my Ph.D. from the Department of Computer Science at the University of Illinois at Chicago (UIC), where I was very fortunate to have Prof. Brian Ziebart as my advisor.
Selected Recent News:
Received an Amazon Research Award!
Paper “Density-Regression: Efficient and Distance-Aware Deep Regressor for Uncertainty Estimation under Distribution Shifts” was accepted at AISTATS 2024.
Paper “Addressing the Binning Problem in Calibration Assessment through Scalar Annotation” was accepted to the Transactions of the Association for Computational Linguistics (TACL).
A collaboration with Prof. Suchi Saria received a grant from the Gordon and Betty Moore Foundation on safety monitoring of clinical machine learning devices.
Paper “Designing for Appropriate Reliance: The Role of AI Uncertainty Presentation, Initial User Decisions, and Demographics in AI-Assisted Decision Making” was accepted at CSCW 2024.
Received a grant from JHU IAA to support an AI Fairness Auditing project with UKRI TAS-Hub.
Received the JHU Discovery Award.
I am co-organizing the 2nd Safe RL Workshop at IJCAI 2023. The call for papers is out; please spread the word and consider contributing a paper!
Paper “Addressing Efficiency Bottlenecks of Conformal Prediction under Standard and Feedback Covariate Shift” was accepted at ICML 2023.
Paper “Double-Weighting for Covariate Shift Adaptation” was accepted at ICML 2023.
Paper “Learning Calibrated Uncertainties for Domain Shift: A Distributionally Robust Learning Approach” was accepted at IJCAI 2023.
This project spans a line of my work, ranging from the fundamentals of distributionally robust learning under covariate shift to its integration into real-world safe exploration and domain adaptation tasks. Media Coverage: The Value of Saying ‘I Don’t Know’.
We aim to tackle two key challenges in model auditing for safeguarding AI. The first is ubiquitous distribution shift, especially subpopulation shift. The second is that many uncertainty quantification (UQ) approaches require either intensive computation or quantities and quality of data that are impractical to obtain in real-world scenarios. Media Coverage: Putting trust to the test.