
Neurosymbolic Invariant Reasoner
Leveraging neurosymbolic reasoning for robotic-agent task planning discrimination
I am currently a Researcher (PhD Candidate) at Purdue University. Before coming to Purdue, I was a Research Engineer at the Red Lab, Indiana University. My research interests lie broadly at the intersection of Neurosymbolic Reasoning, Robot Learning, Human-Robot/Agent Interaction, and Safety Alignment. My long-term research goal is to build AI agents and robotic systems that can perceive, understand, and interact with the world by reasoning about the consequences of their actions. My research draws on multiple disciplines to realize this goal. My work has been published in a range of computing venues, including NeurIPS D&B (Spotlight Paper), ICRA, IROS, CHI, CSCW, and ICSE. I have been fortunate to collaborate with both industry and government partners, including Microsoft and the US EDA.
PrefCLM: Enhancing Preference-Based Reinforcement Learning with Crowdsourced Large Language Models
Adaptive Task Allocation in Multi-Human Multi-Robot Teams under Team Heterogeneity and Dynamic Information Uncertainty
Modeling and Evaluating Trust Dynamics in Multi-Human Multi-Robot Task Allocation
Investigating the Human Values Embedded in RLHF Datasets
A specialized data-intensive analytic platform developed for the US EDA.