About the Workshop
Generative AI (GenAI) techniques, including diffusion models such as Stable Diffusion, are transforming robotic perception and development by enabling rapid multimodal data generation and novel scene synthesis from prompts in text, images, and other modalities. This makes GenAI a scalable and diverse source of training data, complementary to physics simulators such as MuJoCo and Isaac Lab.
This workshop brings together leading experts in AI, robotics, computer vision, and simulation to explore how GenAI can enhance robotic development and deployment, focusing on combining GenAI's diversity and scalability in data generation with physics-aware simulation to improve real-world transferability. The workshop also aims to establish a cloud-based framework for benchmarking robotic performance using GenAI-generated datasets.
Call for Papers
We invite submissions presenting novel research, methodologies, and applications related to robotic data generation, evaluation, and the integration of generative AI with simulation and real-world deployment. All papers will be peer-reviewed for originality, relevance, technical quality, and clarity. Accepted papers will be presented as posters. At least one author must attend in person to present.
Topics of Interest:
- Generative AI for robotic data generation and simulation
- Task-aligned and physically realistic data synthesis for robotics
- Evaluation and benchmarking of generated data and trained models
- Multimodal data generation and prompt alignment for robotics tasks
- Bridging simulation and real-world deployment in robotics
- Cloud-based robotic evaluation platforms and benchmarking frameworks
- Applications of GenAI in manipulation, navigation, and teleoperation
- Datasets, metrics, and reproducibility in robotic data generation
Awards:
A best paper award will be given to the most outstanding submission as selected by the program committee.
Important Dates:
- Paper submission deadline: September 7, 2025
- Notification of acceptance: September 15, 2025
Late-Breaking Work Submissions:
- Paper submission deadline: September 20, 2025
- Notification of acceptance: September 27, 2025
Submission Instructions:
- Papers must not exceed 4 pages (excluding references and appendices)
- Please use the main conference’s format guidelines and template
- Submit your paper via CMT
Tentative Program
Time | Talk | Tentative Titles and Comments
---|---|---
8:30 – 8:40 | Opening Remarks | Introduction to the workshop theme and objectives. |
8:40 – 9:05 | Keynote Talk 1 | Exploring Multimodal Visual Language Models for Embodied Intelligence: Opportunities, Challenges, and Future Directions, Q&A. |
9:05 – 9:30 | Keynote Talk 2 | Data Generation for Visual-Tactile Sensing, Q&A. |
9:30 – 9:55 | Keynote Talk 3 | Generative AI for Language-driven Grasping, Q&A. |
9:55 – 10:30 | Coffee Break & Poster Session & Live Demo | Informal networking with poster presentations and live demonstrations. |
10:30 – 10:55 | Keynote Talk 4 | Advantages and challenges of learning methods in robotic manipulation, Q&A. |
10:55 – 11:20 | Keynote Talk 5 | Robot Proprioception Meets Differentiable Simulation, Q&A. |
11:20 – 11:45 | Keynote Talk 6 | Towards Data-Driven Robotic Manipulation Research at Scale, Q&A. |
11:45 – 12:20 | Panel Discussion | Keynote speakers and organizers discuss challenges, best practices, and future directions with the audience. |
12:20 – 12:30 | Closing Remarks | Summary of key takeaways and potential collaboration opportunities. |
Invited Speakers
Organizers
Supporting IEEE RAS technical committees
Supported by