Overview
Robotic manipulation of rigid objects has progressed remarkably, as demonstrated by increasingly advanced manipulation skills and real-world deployments. This progress has been enabled by improved perception pipelines, standardized 6DoF rigid-object representations, and mature planning and control frameworks. However, real-world environments contain a diverse set of non-rigid objects: deformable ones, such as clothes, cables, and food, and articulated ones, such as dishwashers, laptops, and drawers, whose shape, topology, and behavior change dynamically during interaction. Manipulating these objects remains a fundamental open challenge that requires advances across the entire robotic pipeline, from perception and representation to modelling, planning, and control.
This workshop focuses on robotic manipulation of non-rigid objects, with particular emphasis on deformable and articulated ones. We aim to bring together researchers working on complementary approaches that span end-to-end learning systems and modular pipelines, addressing challenges across the entire manipulation process.
The relevance of this workshop stems from the growing need for robots that can operate robustly in unstructured, human-centered, and natural environments, where interaction with non-rigid objects is unavoidable. Progress in this area is critical for applications such as household assistance, logistics, agriculture, healthcare, and industrial automation, where handling deformable and articulated objects remains one of the key bottlenecks to autonomy and real-world deployment. Two dominant paradigms have emerged: end-to-end approaches that aim to learn manipulation policies from large-scale (often teleoperated) datasets, and modular pipelines inspired by the successes of rigid object manipulation. Each comes with distinct advantages and limitations in terms of robustness, generalization, data efficiency, and interpretability. By fostering discussion between researchers from both paradigms, this workshop seeks to identify common principles, open challenges, and promising directions toward scalable and reliable non-rigid object manipulation.
Speakers

Jeannette Bohg
Stanford University
Jeannette Bohg is an Assistant Professor of Computer Science at Stanford University and director of the Interactive Perception and Robot Learning Lab. Her research asks two central questions: What are the principles underlying robust sensorimotor coordination in humans, and how can we implement them in robots? Addressing these questions requires working at the intersection of robotics, machine learning, and computer vision. Her lab focuses particularly on robotic grasping and manipulation.

Niko Suenderhauf
Queensland University of Technology
Niko Suenderhauf is a Professor at Queensland University of Technology (QUT) in Brisbane and Deputy Director of the QUT Centre for Robotics, where he leads the Visual Understanding and Learning Program. He is also Deputy Director (Research) of the ARC Research Hub in Intelligent Robotic Systems for Real-Time Asset Management (2022-2027) and was a Chief Investigator of the Australian Centre for Robotic Vision (2017-2020). He conducts research in robotic vision and robot learning, at the intersection of robotics, computer vision, and machine learning. His research interests focus on robotic learning for manipulation, interaction, and navigation; scene understanding; and the reliability of deep learning under open-set and open-world conditions.

Siyuan Huang
Beijing Institute for General Artificial Intelligence
Siyuan Huang is a Research Scientist at the Beijing Institute for General Artificial Intelligence (BIGAI), where he directs the Embodied Robotics Center and the BIGAI-Unitree Joint Lab of Embodied AI and Humanoid Robot. He received his Ph.D. from the Department of Statistics at the University of California, Los Angeles (UCLA). His research aims to build a general agent capable of understanding and interacting with 3D environments like humans. Toward this goal, his work has made contributions to (i) developing scalable and hierarchical representations for 3D reconstruction and semantic grounding, (ii) modelling and imitating human interactions with the 3D world, and (iii) building robots proficient in interacting within the 3D world and with humans.

Speaker 4 - TBD
Schedule
This is a preliminary version of the schedule and may be subject to change.
| Time | Description | Talk Title |
|---|---|---|
| 8:50 | Opening Remarks by the Workshop Organizers | |
| 9:00 | Jeannette Bohg (Stanford University) | TBD |
| 9:40 | Niko Suenderhauf (Queensland University of Technology) | TBD |
| 10:20 | Coffee Break and Poster Session | |
| 11:00 | Siyuan Huang (Beijing Institute for General Artificial Intelligence) | TBD |
| 11:40 | Speaker 4 - TBD | TBD |
| 12:20 | Roundtable Discussion | |
| 12:50 | Closing Remarks | |
Call for Papers
We invite the submission of field reports or extended abstracts on the following topics of interest:
- Perception: How can robots perceive non-rigid objects using cameras and other multimodal sensors (e.g., tactile, force, and proprioception)?
- Representation: How can non-rigid objects be represented; should representations be learned or model-based; how can detail and precision be balanced against computational complexity?
- Modelling and planning: How can we model and integrate non-rigid object dynamics into planning; what role do world models play in non-rigid object manipulation?
- Approaches: What are the advantages and shortcomings of different paradigms for manipulation, with particular emphasis on end-to-end and modular ones?
- Evaluation and Benchmarks: How can progress best be measured; what are common benchmarks, datasets, and metrics for manipulating non-rigid objects?
All submitted papers will be reviewed on the basis of technical quality, relevance, significance, and clarity. The page limit for submissions is 4 pages, including references. We also accept submissions of previously presented work that has since been extended, as well as work being published as part of the RSS 2026 main conference. Upon acceptance, you will be able to present your submission as part of the poster session. All accepted submissions will be made available on this website (non-archival).
Please submit your extended abstracts following the RSS 2026 format guidelines. For details see the following link:
Please submit your workshop paper through OpenReview.
Important Dates
Extended Abstract Submission Deadline: 12 June, 2026
Decision Notification: 17 June, 2026
Workshop Date: 13 July, 2026
Organizers

Leonard Klüpfel
German Aerospace Center (DLR)

Júlia Borràs Sol
Institut de Robòtica i Informàtica Industrial, CSIC-UPC

David Blanco-Mulero
Institut de Robòtica i Informàtica Industrial, CSIC-UPC

Alberta Longhini
Stanford University

Andrej Gams
Jozef Stefan Institute

Maximilian Durner
German Aerospace Center (DLR)

Rudolph Triebel
German Aerospace Center (DLR), Karlsruhe Institute of Technology (KIT)
