I am an Assistant Professor in the Department of Computer Science at the Whiting School of Engineering of Johns Hopkins University. I am also affiliated with the Johns Hopkins Mathematical Institute for Data Science (MINDS) and the Johns Hopkins Institute for Assured Autonomy (IAA). I collaborate extensively with the Center for Language and Speech Processing (CLSP) and the Laboratory for Computational Sensing and Robotics (LCSR).
My research interest lies in machine learning for trustworthy AI. I am broadly interested in developing principled machine learning algorithms for building more reliable, trustworthy, and human-compatible AI systems in the real world. This requires machine learning algorithms that are robust to changing data and environments, provide accurate and honest uncertainty estimates, and account for human preferences and values during interaction. I am particularly interested in high-stakes applications that concern the safety and societal impact of AI.
I develop, analyze, and apply methods in statistical machine learning, deep learning, and sequential decision making. One established line of work is distributionally robust learning under covariate shift. My recent projects span different types of distribution shift, active learning, safe exploration, off-policy learning, and fair machine learning.
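For readers unfamiliar with the covariate-shift setting, here is a minimal, self-contained Python sketch of the classical importance-weighting baseline that distributionally robust methods build on and improve; the Gaussian data, density models, and variable names are hypothetical and purely illustrative, not the method from my papers.

# Illustrative sketch: importance-weighted regression under covariate shift.
# Training inputs come from p_src, deployment inputs from p_tgt; labels are
# available only on the source. All data and densities here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
x_src = rng.normal(0.0, 1.0, 500)                 # training inputs ~ p_src
x_tgt_mean, x_tgt_std = 1.0, 1.0                  # deployment inputs ~ p_tgt
y_src = 2.0 * x_src + rng.normal(0.0, 0.1, 500)   # labels only on source

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Density ratio w(x) = p_tgt(x) / p_src(x); assumed known here, which is
# exactly the assumption that distributionally robust formulations relax.
w = gaussian_pdf(x_src, x_tgt_mean, x_tgt_std) / gaussian_pdf(x_src, 0.0, 1.0)

# Weighted least squares: minimize sum_i w_i * (y_i - theta * x_i)^2, which
# estimates the risk minimizer under the target distribution p_tgt.
theta = np.sum(w * x_src * y_src) / np.sum(w * x_src ** 2)
print(f"importance-weighted slope estimate: {theta:.3f}")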
I worked with Prof. Yisong Yue and Prof. Anima Anandkumar as a postdoc in the Department of Computing and Mathematical Sciences (CMS) at the California Institute of Technology (Caltech). Before that, I received my Ph.D. from the Department of Computer Science at the University of Illinois at Chicago (UIC), where I was very fortunate to have Prof. Brian Ziebart as my advisor.
Selected Recent News:
Paper “Variance-Aware Linear UCB with Deep Representation for Neural Contextual Bandits” was accepted to AISTATS 2025.
Paper “Training-Aware Risk Control for Intensity Modulated Radiation Therapies Quality Assurance with Conformal Prediction” was presented at the ML4H symposium.
Paper “Off-Dynamics Reinforcement Learning via Domain Adaptation and Reward Augmented Imitation” was presented at NeurIPS 2024.
Paper “SurgicAI: A Hierarchical Platform for Fine-Grained Surgical Policy Learning and Benchmarking” was presented at NeurIPS 2024 (Datasets and Benchmarks Track).
Paper “Conformal Validity Guarantees Exist for Any Data Distribution (and How to Find Them)” was accepted to ICML 2024.
Paper “Density-Softmax: Efficient Test-time Model for Uncertainty Estimation and Robustness under Distribution Shifts” was accepted to ICML 2024.
Received an Amazon Research Award!
Paper “Density-Regression: Efficient and Distance-Aware Deep Regressor for Uncertainty Estimation under Distribution Shifts” was accepted to AISTATS 2024.
This project covers a line of my work, ranging from the fundamentals of distributionally robust learning under covariate shift to its integration into real-world safe exploration and domain adaptation tasks. Media Coverage: The Value of Saying ‘I Don’t Know’.
We aim to tackle two key challenges in model auditing for safeguarding AI. The first is the ubiquitous problem of distribution shift, especially subpopulation shift. The second is that many uncertainty quantification (UQ) approaches demand either intensive computing power or amounts and quality of data that are impractical in real-world scenarios. Media Coverage: Putting trust to the test.