I'm a postdoctoral researcher in the VADER Lab at Arizona State University. My primary research interests are visualization and explainable AI. I am currently working on visualizing high-dimensional loss functions to enhance the transparency and reliability of machine learning models, with a particular focus on scientific machine learning applications (e.g., physics-informed neural networks and materials discovery).
My research integrates principles from visual analytics and AI to develop tools that allow researchers and practitioners to better understand the complex behaviors of models. During my Ph.D., I focused on Explaining the Vulnerabilities of Machine Learning Models through Visual Analytics, covering areas such as adversarial machine learning, ML fairness, and data robustness analysis. This work aimed to uncover and mitigate potential risks in AI systems, ensuring that they perform reliably and fairly across various applications. Currently, I am passionate about bridging the gap between advanced AI techniques and their practical applications, ensuring that models are not only powerful but also interpretable and trustworthy. Through my work, I aim to contribute to the broader field of AI by making it more accessible and reliable for scientific inquiry and real-world deployment.
I received my MS in computer science from Stevens Institute of Technology, where I focused on distributed systems, and earned my BS in computer science from Beijing Forestry University.