Ph.D. student in Robotics, University of Michigan, Ann Arbor
Email, Google Scholar, GitHub, LinkedIn
I'm a sixth-year Ph.D. student in the Robotics Institute at the University of Michigan, working with Professor Xi Jessie Yang at the Interaction & Collaboration Research Lab (ICRL).
I received a B.E. in Mechanical Engineering and Automation from Tsinghua University in 2012 and an M.S. in Mechanical Engineering from Carnegie Mellon University in 2014, where I worked with Professor Katia Sycara and Professor Nilanjan Chakraborty. Prior to joining ICRL, I worked with Professor Dmitry Berenson at the ARM Lab.
My research interests lie at the intersection of machine learning, robotics, and human factors, with a current focus on human-robot/AI interaction and collaboration. My goal is to study how machine learning can be used to understand humans, how robots can make decisions based on the inferred behavior of humans, and how to make robots/AI understandable to humans.
Successful shared control between human operators and autonomy in ground vehicles relies critically on mutual understanding and adaptation. Existing haptic shared control schemes, however, do not fully account for the human agent. To fill this research gap, we presented a haptic shared control scheme that adapts in real time to a human operator's workload, eyes-on-road status, and input torque. We proposed a Bayesian inference model for assessing the operator's workload that combines different machine learning models for different features.
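For a flavor of the idea, here is a minimal Python sketch (not the published model) that fuses likelihoods from separate per-feature classifiers into a posterior over two workload states via Bayes' rule, assuming conditional independence between features; the feature names and numbers are made up.

    import numpy as np

    # Naive-Bayes-style fusion of per-feature evidence into a posterior over
    # two workload states. All feature names and numbers are illustrative.
    STATES = ["low workload", "high workload"]

    def fuse_workload_posterior(prior, feature_likelihoods):
        # prior: shape (2,) over STATES; each likelihood: p(observation | state)
        posterior = prior.astype(float).copy()
        for lik in feature_likelihoods:
            posterior *= lik                    # combine evidence, assuming
        return posterior / posterior.sum()      # conditional independence; normalize

    prior = np.array([0.5, 0.5])
    eye_gaze_lik = np.array([0.3, 0.7])   # hypothetical eyes-on-road model output
    torque_lik = np.array([0.4, 0.6])     # hypothetical input-torque model output
    posterior = fuse_workload_posterior(prior, [eye_gaze_lik, torque_lik])
    print(dict(zip(STATES, posterior.round(3))))

For real-time assessment, the posterior at one time step could be fed back as the prior for the next.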
Publication: (* equal contribution)
[1] R. Luo*, Y. Wang*, Y. Weng, V. Paul, M. J. Brudnak, P. Jayakumar, M. Reed, J. L. Stein, T. Ersal, X. J. Yang, Toward Real-time Assessment of Workload: A Bayesian Inference Approach, in HFES 2019. [PDF]
[2] R. Luo*, Y. Weng*, Y. Wang, P. Jayakumar, M. J. Brudnak, V. Paul, V. R. Desaraju, J. L. Stein, T. Ersal, X. J. Yang, A Workload-Adaptive Haptic Shared Control Scheme for Semi-Autonomous Driving, in IEEE Transactions on Human-Machine Systems. [Under Review]
[3] Y. Weng, R. Luo, P. Jayakumar, M. J. Brudnak, V. Paul, V. R. Desaraju, J. L. Stein, X. J. Yang, T. Ersal, Design and Human-in-the-Loop Evaluation of a Workload-Adaptive Haptic Shared Control Framework for Semi-Autonomous Driving. in ACC 2020. [PDF]
To enhance autonomy transparency, we proposed an option-centric rationale display inspired by research on design rationale. The display details all the available next actions and the criteria for choosing among them, and highlights the final recommendation. Our results showed that conveying the intelligent assistant's intent and decision-making rationale via the option-centric rationale display led participants to trust the system more and to calibrate their trust faster.
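As a hypothetical illustration (not the display used in the study), the sketch below captures the kind of information an option-centric rationale display presents: each available action scored against explicit criteria, with the best-scoring option highlighted as the recommendation. Option names, criteria, and weights are placeholders.

    # Every available next action, its score on each decision criterion,
    # and the recommended option. All values below are illustrative only.
    options = {
        "hold position":       {"progress": 0.2, "safety": 0.9, "fuel": 0.9},
        "advance to waypoint": {"progress": 0.8, "safety": 0.5, "fuel": 0.6},
        "retreat and regroup": {"progress": 0.3, "safety": 0.8, "fuel": 0.4},
    }
    weights = {"progress": 0.5, "safety": 0.3, "fuel": 0.2}

    def score(criteria):
        return sum(weights[c] * v for c, v in criteria.items())

    best = max(options, key=lambda name: score(options[name]))
    for name, criteria in options.items():
        flag = "  <-- recommended" if name == best else ""
        print(f"{name}: {score(criteria):.2f}{flag}")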
Publication:
[1] R. Luo, N. Du, K. Y. Huang, X. J. Yang, Enhancing Transparency in Human-autonomy Teaming via the Option-centric Rationale Display, in HFES 2019. [PDF]
[2] R. Luo, N. Du, X. J. Yang, Enhancing Autonomy Transparency: an Option-centric Rationale Approach, in the International Journal of Human-Computer Interaction, 2020. [Under Review]
In museum education, a docent inspires visitors by asking questions and encouraging active participation. This interaction requires the docent to estimate a visitor's comfort level with the artwork in order to engage in a friendly and invigorating conversation. With human docents, however, visitors may feel uncomfortable asking questions or expressing themselves due to the preconceived notion that museums are exclusive to the wealthy and formally educated. In collaboration with the University of Michigan Museum of Art, we aim to design an interactive robot docent that can estimate a visitor's comfort level with art and, in response, guide them adaptively.
Publication:
[1] R. Luo, S. Benge, N. Vasher, G. VanderVliet, J. Turner, M. Ghaffari, X. J. Yang, Toward an Interactive Robot Docent: Estimating Museum Visitors’ Comfort Level with Art, in RSS workshop, 2019. [PDF]
This project focuses on human-robot collaboration in industrial manipulation tasks that take place in a shared workspace. In this setting, we consider both how the robot adapts to the human and how the human adapts to the robot.
Robot Adapts to Human: Given an observed part of a human's reaching motion, we wish to predict the remainder of the trajectory as quickly as possible, so that the robot can avoid interference while performing a complementary task. We proposed a two-layer framework of Gaussian Mixture Models and an unsupervised online learning algorithm that updates these models with newly observed trajectories. The proposed method builds models on the fly and adapts to new people and new motion styles as they emerge, with no manual labeling required. Our results show that the framework can use human motion predictions to select robot motions that reliably avoid the human in real-time applications.
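As a greatly simplified illustration of the prediction idea (one-dimensional, offline, synthetic data, and omitting the unsupervised online update), the sketch below fits a Gaussian Mixture Model over time-stamped positions and predicts future positions by Gaussian mixture regression, conditioning position on time.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Fit a joint GMM over time-stamped positions [t, x] from past reaching
    # motions, then predict future positions by conditioning x on t.
    rng = np.random.default_rng(0)
    t = np.tile(np.linspace(0.0, 1.0, 50), 20)            # 20 synthetic demos
    x = np.sin(3.0 * t) + 0.01 * rng.standard_normal(t.size)
    gmm = GaussianMixture(n_components=4, covariance_type="full",
                          random_state=0).fit(np.column_stack([t, x]))

    def gmr_predict(gmm, t_query):
        # E[x | t] = sum_k h_k(t) * (mu_xk + cov_xt / cov_tt * (t - mu_tk))
        preds = []
        for tq in np.atleast_1d(t_query):
            h, m = [], []
            for pi_k, mu, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_):
                var_t = cov[0, 0]
                # responsibility of component k for this time step
                h.append(pi_k * np.exp(-0.5 * (tq - mu[0]) ** 2 / var_t)
                         / np.sqrt(var_t))
                # component-conditional mean of x given t
                m.append(mu[1] + cov[1, 0] / var_t * (tq - mu[0]))
            h = np.asarray(h) / np.sum(h)
            preds.append(float(h @ np.asarray(m)))
        return np.asarray(preds)

    print(gmr_predict(gmm, [0.25, 0.75]))  # predicted positions at future times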
Human Adapts to Robot: Instead of selecting robot motions online, we consider offline motion planning for the robot. To be effective for human-robot collaboration, a robot should plan motion that is both safe and efficient. To achieve this, we propose two terms for the cost function of the robot's motion planner: (1) avoidance of the workspace previously occupied by the human, so that the motion is as safe as possible, and (2) consistency of the robot's motion, so that it is as predictable as possible for the human, who can then perform their task without paying undue attention to the robot.
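A rough sketch of how the two terms could be combined, under simplifying assumptions not taken from the paper: a voxel grid of human occupancy frequencies stands in for term (1), deviation from a nominal reference trajectory stands in for term (2), and the weights and cell size are placeholders.

    import numpy as np

    # traj, reference: (T, 3) arrays of Cartesian waypoints. occupancy maps a
    # discretized workspace cell to how often the human previously occupied it.
    def trajectory_cost(traj, reference, occupancy,
                        w_avoid=1.0, w_consist=0.5, cell=0.1):
        avoid = sum(occupancy.get(tuple(np.floor(p / cell).astype(int)), 0.0)
                    for p in traj)                              # term (1)
        consist = np.sum(np.linalg.norm(traj - reference, axis=1))  # term (2)
        return w_avoid * avoid + w_consist * consist

    traj = np.zeros((5, 3)); traj[:, 0] = np.linspace(0.0, 1.0, 5)
    print(trajectory_cost(traj, traj.copy(), {(0, 0, 0): 3.0}))

A planner could then minimize this cost over candidate trajectories, trading off safety against predictability via the two weights.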
Publication:
[1] R. Luo, D. Berenson, A Framework for Unsupervised Online Human Reaching Motion Recognition and Early Prediction, in IROS 2015. [PDF]
[2] R. Hayne, R. Luo, D. Berenson, Considering Avoidance and Consistency in Motion Planning for Human-Robot Manipulation in a Shared Workspace, in ICRA 2016. [PDF]
[3] R. Luo, R. Hayne, D. Berenson, Unsupervised Early Prediction of Human Reaching for Human-Robot Collaboration in Shared Workspaces, in Autonomous Robots 2017. [PDF] [code]