Iván Hernández Dalas: AGIBOT introduces Genie Sim 3.0 simulation platform for embodied AI


AGIBOT says Genie Sim 3.0 signals a shift toward treating simulation not as a tool, but as a foundation for developing and evaluating embodied AI at scale. Source: AGIBOT

While recent progress in robotics has been driven by advances in models and algorithms, real-world deployment continues to be constrained by high data collection costs, limited scenario diversity, and fragmented benchmarking standards, according to AGIBOT. The company today said it has upgraded its Genie Sim development environment to Version 3.0.

AGIBOT said its platform now addresses three long-standing bottlenecks in embodied AI: environment generation, data scalability, and standardized evaluation. The Shanghai-based company said it designed Genie Sim 3.0 to integrate scene generation, simulation, data, and evaluation into a unified, reusable infrastructure.

Genie Sim World generates environments from language

Genie Sim 3.0 introduces a spatial world model that allows users to generate fully interactive 3D environments from simple text or image inputs. AGIBOT said its key capabilities include:

  • Multimodal input – No manual modeling or hardware setup required. Users can generate diverse environments with minimal input.
  • Minute-level scene creation – Neural network inference enables scene generation in minutes, compared with hours in traditional pipelines.
  • High-fidelity output – Synchronized RGB, depth, lidar, and other multimodal data ensure alignment with real robot perception.

Editor’s note: At the 2026 Robotics Summit & Expo on May 27 and 28 in Boston, there will be sessions on embodied and physical AI, as well as on humanoid robot development. Registration is now open.



Genie Sim 3.0 benchmark offers comprehensive evaluation framework

For the five core capabilities of robot algorithms (instruction understanding, spatial reasoning, atomic skill operation, disturbance adaptation, and training-to-deployment generalization), AGIBOT said it has designed five corresponding task suites. Genie Sim Benchmark supports mainstream models such as the GO-2, Pi series, and GR00T series and provides a multi-dimensional, systematic evaluation of the models’ comprehensive performance in complex scenarios.

The framework evaluates five core capabilities of embodied AI systems:

  • Instruction following (GenieSim-Instruction) – Measures alignment between natural language instructions and robot behavior
  • Spatial understanding (GenieSim-Spatial) – Evaluates reasoning over geometric and semantic spatial relationships
  • Manipulation skills (GenieSim-Manip) – Assesses execution of atomic skills and long-horizon task composition
  • Robustness (GenieSim-Robust) – Tests adaptability under real-world disturbances such as lighting changes, sensor noise, and environment variations
  • Sim2Real (GenieSim-Sim2Real) – Includes evaluation tasks for zero-shot transfer to real robots with high success rates
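AGIBOT has not published the benchmark's API in this announcement, but a suite-based scorecard like the one described above can be illustrated with a minimal, hypothetical harness. All class and method names here are assumptions; only the five suite names come from the article.

```python
# Hypothetical sketch of a suite-based benchmark scorecard; the suite names
# mirror the article, but the harness API itself is invented for illustration.
from dataclasses import dataclass


@dataclass
class SuiteResult:
    successes: int = 0
    trials: int = 0

    @property
    def success_rate(self) -> float:
        # Avoid division by zero for suites with no recorded trials.
        return self.successes / self.trials if self.trials else 0.0


class BenchmarkReport:
    SUITES = ("Instruction", "Spatial", "Manip", "Robust", "Sim2Real")

    def __init__(self) -> None:
        self.results = {s: SuiteResult() for s in self.SUITES}

    def record(self, suite: str, success: bool) -> None:
        r = self.results[suite]
        r.trials += 1
        r.successes += int(success)

    def summary(self) -> dict:
        # One success rate per suite: a "multi-dimensional" view of a model.
        return {s: round(r.success_rate, 3) for s, r in self.results.items()}


report = BenchmarkReport()
for outcome in (True, True, False):
    report.record("Manip", outcome)
report.record("Sim2Real", True)
print(report.summary()["Manip"])  # → 0.667
```

The point of the sketch is only that a benchmark spanning several capability suites reports one score per dimension rather than a single aggregate number.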

GenieSim x RLinf: Scaling reinforcement learning in simulation

Genie Sim 3.0 also introduces deep integration with the RLinf framework, enabling a complete reinforcement learning (RL) pipeline for embodied AI.

AGIBOT said this complements vision-language-action (VLA) models, using low-cost RL post-training to bridge the last mile from “generalized understanding” to “precise micromanipulation.” It listed the following features:

  • Decoupled physics and rendering engines – Supports high-frequency (1,000Hz) physics simulation alongside high-fidelity visual observation
  • Massively parallel simulation – Significantly increases data throughput and accelerates model convergence
  • Closed-loop training and evaluation – RL agents can be trained and evaluated directly within Genie Sim tasks, with built-in reward signals
  • Standardized Gym interfaces – Ensures compatibility with RLinf and broader ecosystem tools
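Genie Sim's actual environment code is not shown in the article, but the standardized Gym-style contract it references boils down to `reset()` and `step()` methods that any RL framework can drive. The toy task below is entirely hypothetical; it only sketches how that contract supports a closed training/evaluation loop with a built-in reward signal.

```python
# Minimal Gym-style environment sketch (hypothetical 1-D reaching task).
# The reset()/step() contract is what lets an external RL framework run
# closed-loop rollouts against the simulator.
import random


class ToyReachEnv:
    """Move a 1-D gripper toward a goal position; reward is negative distance."""

    def __init__(self, goal: float = 5.0, horizon: int = 20):
        self.goal, self.horizon = goal, horizon

    def reset(self, seed=None):
        self.rng = random.Random(seed)
        self.pos = self.rng.uniform(0.0, 10.0)
        self.t = 0
        return self.pos, {}  # observation, info

    def step(self, action: float):
        self.pos += max(-1.0, min(1.0, action))  # clip action to [-1, 1]
        self.t += 1
        reward = -abs(self.goal - self.pos)       # dense, built-in reward
        terminated = abs(self.goal - self.pos) < 0.1
        truncated = self.t >= self.horizon
        return self.pos, reward, terminated, truncated, {}


# Closed-loop rollout with a trivial proportional "policy".
env = ToyReachEnv()
obs, _ = env.reset(seed=0)
done = False
while not done:
    obs, reward, terminated, truncated, _ = env.step(env.goal - obs)
    done = terminated or truncated
print(abs(env.goal - obs) < 0.1)  # → True
```

Because the interface is standardized, swapping the toy policy for an RLinf-trained agent would not change the rollout loop, which is the interoperability point the article is making.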

“This integration enables a seamless pipeline from large-scale simulation training to evaluation – bridging the gap between general understanding and precise control,” said the company.


Genie Sim 3.0 integrates with the RLinf framework for a reinforcement learning pipeline. Source: AGIBOT

AGIBOT builds unified infrastructure for embodied AI

By combining large-scale simulation data, large language model (LLM)-driven environment generation, and standardized evaluation, AGIBOT asserted that Genie Sim 3.0 brings together the full development stack:

Environment → Data → Training → Evaluation

This can significantly reduce the engineering overhead traditionally required for robotics development, enabling faster iteration and broader experimentation, the company claimed.

“As the boundary between simulation and reality continues to narrow—and as environment generation scales from hours to minutes—Genie Sim 3.0 provides a critical foundation for the large-scale deployment of embodied AI,” it stated.

Open, shared infrastructure like Genie Sim could play a key role in accelerating the evolution of the global robotics ecosystem, said AGIBOT.

https://www.youtube.com/watch?v=AtRK6JDWUoA

The post AGIBOT introduces Genie Sim 3.0 simulation platform for embodied AI appeared first on The Robot Report.



