Puze Liu - 刘普泽

I’m pursuing my Ph.D. degree at the Intelligent Autonomous Systems group, TU Darmstadt, supervised by Prof. Jan Peters, Ph.D.

My research focuses on empowering robots with complex skills using advanced machine learning techniques. Specifically, I focus on the safety problems that arise when deploying learned policies on real robots.

RESEARCH INTERESTS

  • Robotics
  • Robot Learning
  • Safe Reinforcement Learning
  • Control and Optimization
  • Robot Air Hockey
  • Human-Robot Interaction

NEWS

  • 2022-09-27

    I won the "IROS Student Travel Award"!

  • 2022-07-01

    Our paper "Regularized Deep Signed Distance Fields for Reactive Motion Generation" is accepted at IROS 2022!

  • 2022-01-18

    Our paper "Dimensionality Reduction and Prioritized Exploration for Policy Search" is accepted at AISTATS 2022!

  • 2021-11-08

    Our paper "Robot Reinforcement Learning on the Constraint Manifold" is accepted at CoRL 2021 as oral presentation and selected as "Best Paper Award Finalist"!

  • 2021-09-30

    Our paper "Efficient and Reactive Planning for High Speed Robot Air Hockey" is accepted at IROS 2021
    and selected as "Best Entertainment and Amusement Paper Award Finalist"!

RESEARCH HIGHLIGHTS

SDF of Tiago and Human

09/2022

Safe RL: Manipulation, Navigation & Interactions
Safety is a crucial property of every robotic platform: any control policy should always comply with actuator limits and avoid collisions with the environment and humans. In reinforcement learning, safety is even more fundamental for exploring an environment without causing any damage. While there are many proposed solutions to the safe exploration problem, only a few of them can deal with the complexity of the real world. This paper introduces a new formulation of safe exploration for reinforcement learning of various robotic tasks. Our approach applies to a wide class of robotic platforms and enforces safety even under complex collision constraints learned from data by exploring the tangent space of the constraint manifold. Our proposed approach achieves state-of-the-art performance in simulated high-dimensional and dynamic tasks while avoiding collisions with the environment. We show safe real-world deployment of our learned controller on a TIAGo++ robot, achieving remarkable performance in manipulation and human-robot interaction tasks.
SDF of Tiago and Human

09/2022

ReDSDF: Regularized Deep Signed Distance Fields for Reactive Motion Generation
Autonomous robots should operate in real-world dynamic environments and collaborate with humans in tight spaces. A key component for allowing robots to leave structured lab and manufacturing settings is their ability to evaluate collisions with the world around them online and in real time. Distance-based constraints are fundamental for enabling robots to plan their actions and act safely, protecting both humans and their hardware. However, different applications require different distance resolutions, leading to various heuristic approaches for measuring distance fields w.r.t. obstacles, which are computationally expensive and hinder their application in dynamic obstacle avoidance use-cases. We propose Regularized Deep Signed Distance Fields (ReDSDF), a single neural implicit function that can compute smooth distance fields at any scale, with fine-grained resolution over high-dimensional manifolds and articulated bodies like humans, thanks to our effective data generation and a simple inductive bias during training. We demonstrate the effectiveness of our approach in representative simulated tasks for whole-body control (WBC) and safe Human-Robot Interaction (HRI) in shared workspaces. Finally, we provide proof of concept of a real-world application in an HRI handover task with a mobile manipulator robot.
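The core idea of using a distance field for reactive avoidance can be sketched as follows. This is a minimal illustration, not the ReDSDF implementation: an analytic sphere SDF stands in for the learned neural distance function, and the function names (`sphere_sdf`, `repulsive_velocity`) are my own, chosen for clarity.

```python
import numpy as np

def sphere_sdf(x, center, radius):
    """Analytic signed distance to a sphere: negative inside, positive outside.
    A learned distance field (like ReDSDF) would expose the same query
    interface for arbitrary articulated bodies."""
    return np.linalg.norm(x - center) - radius

def sdf_gradient(sdf, x, eps=1e-5):
    """Central-difference gradient of the distance field; it points away from
    the obstacle surface, giving a repulsive direction for reactive control."""
    grad = np.zeros_like(x)
    for i in range(len(x)):
        dx = np.zeros_like(x)
        dx[i] = eps
        grad[i] = (sdf(x + dx) - sdf(x - dx)) / (2 * eps)
    return grad

def repulsive_velocity(x, sdf, margin=0.2, gain=1.0):
    """Distance-based avoidance: push away when closer than `margin`."""
    d = sdf(x)
    if d >= margin:
        return np.zeros_like(x)
    return gain * (margin - d) * sdf_gradient(sdf, x)

# Obstacle: unit-centered sphere of radius 0.5 m.
obstacle = lambda x: sphere_sdf(x, center=np.zeros(3), radius=0.5)
p = np.array([0.6, 0.0, 0.0])        # query point 0.1 m from the surface
v = repulsive_velocity(p, obstacle)  # repulsive term pushes along +x
```

Because the distance query is a single differentiable function, the same gradient can serve both planning constraints and fast reactive corrections.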
Moving on the Manifold

09/2022

ATACOM: Acting on the Tangent Space of the Constraint Manifold
To safely explore the environment under constraints, we construct a high-dimensional constraint manifold. Using the tangent-space bases of the constraint manifold, the RL agent samples only safe actions, and the constrained optimization problem is converted into an unconstrained one.
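The tangent-space mapping can be sketched in a few lines. This is an illustrative toy, not the ATACOM implementation: the constraint, the tolerance, and the function names are my own, and the null-space basis is computed with a plain SVD.

```python
import numpy as np

def tangent_space_basis(J, tol=1e-10):
    """Orthonormal basis of the null space of the constraint Jacobian J(q),
    i.e. the tangent space of the constraint manifold c(q) = 0. Taken from
    the SVD: right-singular vectors whose singular values are (near) zero."""
    U, S, Vt = np.linalg.svd(J)
    rank = int(np.sum(S > tol))
    return Vt[rank:].T  # columns span null(J)

def safe_action(J, alpha):
    """Map the agent's unconstrained action `alpha` onto the tangent space,
    so the resulting velocity satisfies J q_dot = 0 and stays on the
    constraint manifold to first order."""
    N = tangent_space_basis(J)
    return N @ alpha[: N.shape[1]]

# Toy constraint: keep a 3-DoF configuration on the plane q1 + q2 + q3 = 1,
# so c(q) = q1 + q2 + q3 - 1 and the Jacobian J is constant.
J = np.array([[1.0, 1.0, 1.0]])
alpha = np.array([0.7, -0.3])   # 2-D action in tangent coordinates
q_dot = safe_action(J, alpha)   # J @ q_dot ≈ 0: motion never leaves the manifold
```

Because the agent acts directly in tangent coordinates, every sampled action is safe by construction, so no penalty term or post-hoc projection is needed during exploration.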

RECENT PUBLICATIONS

[All Publications]

Pre-Prints

2022

  1. Safe Reinforcement Learning of Dynamic High-Dimensional Robotic Tasks: Navigation, Manipulation, Interaction
    Puze Liu, Kuo Zhang, Davide Tateo, Snehal Jauhri, Zhiyuan Hu, Jan Peters, and Georgia Chalvatzaki
    2022

Conference Papers

2022

  1. Regularized Deep Signed Distance Fields for Reactive Motion Generation
    Puze Liu, Kuo Zhang, Davide Tateo, Snehal Jauhri, Jan Peters, and Georgia Chalvatzaki
    In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2022
  2. Robot Reinforcement Learning on the Constraint Manifold
    Best Paper Award Finalist
    Puze Liu, Davide Tateo, Haitham Bou-Ammar, and Jan Peters
    In Proceedings of the 5th Conference on Robot Learning, vol. 164, pp. 1357–1366, 2022
  3. Dimensionality Reduction and Prioritized Exploration for Policy Search
    Marius Memmel, Puze Liu, Davide Tateo, and Jan Peters
    In Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, vol. 151, pp. 2134–2157, 2022

2021

  1. Efficient and Reactive Planning for High Speed Robot Air Hockey
    Best Entertainment and Amusement Paper Award Finalist
    Puze Liu, Davide Tateo, Haitham Bou-Ammar, and Jan Peters
    In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 586–593, 2021
  2. Composable Energy Policies for Reactive Motion Generation and Reinforcement Learning
    Julen Urain, Anqi Li, Puze Liu, Carlo D’Eramo, and Jan Peters
    In Robotics: Science and Systems XVII (R:SS 2021), Jul, 2021

Workshop Papers

2022

  1. ReDSDF: Regularized Deep Signed Distance Fields for Robotics
    Puze Liu, Kuo Zhang, Davide Tateo, Snehal Jauhri, Jan Peters, and Georgia Chalvatzaki
    ICRA Workshop: Motion Planning with Implicit Neural Representations of Geometry, 2022