About

I am an applied scientist at Amazon working on foundational AI models. I received my Ph.D. in Computer Engineering from Northwestern University, where I was advised by Prof. Qi Zhu. I obtained my B.S. in Electrical Engineering from Zhejiang University in 2019.

My long-term research goal is to build trustworthy AI agents, with an emphasis on:

  • Pre- and post-training optimization of LLMs to enhance interactive and trustworthy decision-making
  • Robust and explainable machine learning for safety-critical systems

Education

  • Northwestern University – Ph.D. Computer Engineering, 2019–2024
  • Zhejiang University – B.S. Electrical Engineering, 2015–2019

Research overview

My work sits at the intersection of machine learning, autonomy, and safety, focusing on building reliable decision-making systems in interactive environments. Recently, I have been working on knowledge distillation and reinforcement learning to develop state-of-the-art LLMs and agents for real-world applications at scale.

Selected publications

Can We Trust Embodied Agents? Exploring Backdoor Attacks against Embodied LLM-based Decision-Making Systems.

Ruochen Jiao*, Shaoyuan Xie*, Justin Yue, Takami Sato, Lixu Wang, Yixuan Wang, Qi Alfred Chen, Qi Zhu

ICLR 2025 · paper

Demonstrates that LLM-based embodied agents are vulnerable to backdoor attacks and proposes evaluations and defenses toward safer LLM-driven decision-making.

Kinematics-aware Trajectory Generation and Prediction with Latent Stochastic Differential Modeling.

Ruochen Jiao*, Yixuan Wang*, Xiangguo Liu, Simon Zhan, Chao Huang, Qi Zhu

IROS 2024 · paper

Introduces a kinematics-aware latent SDE model that generates physically consistent and diverse future trajectories for autonomous driving.

Semi-supervised Semantics-guided Adversarial Training for Robust Trajectory Prediction.

Ruochen Jiao, Xiangguo Liu, Takami Sato, Qi Alfred Chen, Qi Zhu

ICCV 2023 · paper

Uses semantics-guided adversarial training to improve trajectory prediction robustness under noisy and adversarial agent behaviors.

Enforcing Hard Constraints with Soft Barriers: Safe Reinforcement Learning in Unknown Stochastic Environments.

Yixuan Wang, Sinong Simon Zhan, Ruochen Jiao, Zhilu Wang, Wanxin Jin, Zhuoran Yang, Zhaoran Wang, Chao Huang, Qi Zhu

ICML 2023 · paper

Bridges safe control and RL by enforcing hard safety constraints through soft barrier functions under stochastic dynamics.

Experience

  • Applied Scientist, Amazon, Seattle, WA
    Store Foundational AI · 2024–present
  • Applied Scientist Intern, Amazon, Seattle, WA
    Jun. 2023 – Sep. 2023
  • Research Scientist Intern, Toyota InfoTech Labs, Mountain View, CA
    Jun. 2021 – Sep. 2021
  • Big Data Engineer, Intel, Shanghai, China
    Mar. 2019 – Jun. 2019

Service

Conference reviewer

NeurIPS, ICLR, ICML, ECCV, ICCV, CVPR, IROS, ICRA, AAAI, AISTATS, IV

Journal reviewer

TMLR, RA-L, TMM, TNNLS, TCAD, TCPS, TIV, IEEE JSAC