Iván Hernández Dalas: KinetIQ framework from Humanoid orchestrates robot fleets

KinetIQ is designed to operate across humanoid form factors, says Humanoid.

KinetIQ is a single AI model that can control different morphologies and end-effector designs. | Source: Humanoid

Humanoid, a developer of humanoid robots and mobile manipulators, this week introduced KinetIQ, the London-based company's AI framework for orchestrating robot fleets across industrial, service, and home applications.

With KinetIQ, a single system controls robots with different embodiments and coordinates interactions between them, said SKL Robotics Ltd., which does business as Humanoid. The architecture is cross-timescale: Four layers operate simultaneously, from fleet-level goal assignment to millisecond-level joint control.

Each layer treats the layer below as a set of tools, orchestrating them via prompting and tool use to achieve goals set from above. This agentic pattern, proven in frontier AI systems, allows components to improve independently while the overall system scales naturally to larger fleets and more complex tasks.
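The layers-as-tools pattern can be sketched in a few lines. This is an illustrative toy only, assuming nothing about Humanoid's actual API; the class and skill names (`Layer`, `pick`, `place`, `move_item`) are hypothetical:

```python
# Hypothetical sketch of the layered "tools" pattern: each layer exposes
# callables ("tools") to the layer above, which orchestrates them.

class Layer:
    def __init__(self, name, tools):
        self.name = name
        self.tools = tools  # tool name -> callable exposed by the layer below

    def invoke(self, tool, *args):
        return self.tools[tool](*args)

# The lowest layer exposes low-level skills as tools.
def pick(item):
    return f"picked {item}"

def place(item, dest):
    return f"placed {item} at {dest}"

system1 = Layer("System 1", {"pick": pick, "place": place})

# The layer above treats those skills as tools and sequences them.
def move_item(item, dest):
    return [system1.invoke("pick", item), system1.invoke("place", item, dest)]

system2 = Layer("System 2", {"move_item": move_item})

log = system2.invoke("move_item", "tote", "conveyor")
```

Because each layer only sees the one below as a set of named callables, any layer can be retrained or replaced without touching the others, which is the independence property the company describes.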

Humanoid said its wheeled-base robots run industrial workflows: back-of-store grocery picking, container handling, and packing across retail, logistics, and manufacturing.

The company's bipedal robot is a research and development platform for service and household robots. It features voice interaction, online ordering, and grocery handling as an intelligent assistant.

KinetIQ starts with an AI fleet agent

The highest layer in the system is an agentic AI layer that treats each robot as a tool and reacts within seconds to use them and optimize fleet operations. Humanoid called this “System 3.”

System 3 integrates with facility management systems across logistics, retail, and manufacturing. It is also applicable to service scenarios and smart-home coordination, explained the company.

The KinetIQ Agentic Fleet Orchestrator ingests task requests, expected outcomes, standard operating procedures (SOPs), real-time request updates, and facility context. The system also allocates tasks and information across wheeled and bipedal robots, coordinating robot swaps at workstations to maximize throughput and uptime.

Humanoid said the orchestrator directs two-way communication with facility systems to:

  • Receive new task requests and changes/reassignments
  • Track task progress and performance metrics
  • Report completion and issues
  • Ensure exceptions are handled and resolved in coordination with traditional or agentic facility management systems.
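A fleet-level allocator of the kind described above might, in the simplest case, assign each task to the least-loaded robot that has the required capability. This is a minimal sketch under that assumption; the function and robot names are invented, not KinetIQ's interface:

```python
# Hypothetical fleet-allocation sketch: assign each task to the
# least-loaded robot that can perform it; tasks no robot can handle
# are flagged for exception handling.

def assign_tasks(tasks, robots):
    """tasks: list of (task_id, required_capability) pairs;
    robots: dict of robot_id -> set of capabilities."""
    load = {r: 0 for r in robots}
    assignment = {}
    for task_id, needs in tasks:
        candidates = [r for r, caps in robots.items() if needs in caps]
        if not candidates:
            assignment[task_id] = None  # escalate as an exception
            continue
        robot = min(candidates, key=lambda r: load[r])
        assignment[task_id] = robot
        load[robot] += 1
    return assignment

robots = {"wheeled-1": {"pick", "pack"}, "wheeled-2": {"pick"}, "biped-1": {"serve"}}
tasks = [("t1", "pick"), ("t2", "pick"), ("t3", "serve"), ("t4", "weld")]
plan = assign_tasks(tasks, robots)
```

In practice the orchestrator would weigh much richer context (SOPs, battery state, workstation geometry), but the shape of the decision, matching capability then balancing load, is the same.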

System 2 handles robot-level reasoning

System 2 is a robot-level agentic layer that plans interactions with the environment to achieve goals set by System 3. It spans the second to sub-minute timescale, Humanoid explained.

System 2 uses an omni-modal language model to observe the environment and interpret high-level instructions from System 3. It decomposes goals into sub-tasks by reasoning about the required actions to complete its assignments, as well as the best sequence and approach.

KinetIQ dynamically updates plans from visual context instead of relying on fixed, pre-programmed sequences, similar to how agentic systems select and sequence tools. Users can save these plans as workflows/SOPs, re-execute them later, and share them across the fleet.
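The save-and-share mechanic reduces to a small library of named step sequences. A minimal sketch, assuming a simple list-of-steps plan representation (the class and workflow names are hypothetical):

```python
# Minimal sketch: a plan produced once from visual context is saved as a
# named workflow/SOP and replayed later, possibly by another robot.

class WorkflowLibrary:
    def __init__(self):
        self._sops = {}

    def save(self, name, steps):
        self._sops[name] = list(steps)

    def replay(self, name):
        # Return a copy so a replay can't mutate the stored SOP.
        return list(self._sops[name])

library = WorkflowLibrary()

# A plan System 2 might generate dynamically for one situation...
plan = ["locate tote", "pick tote", "carry to packing station", "place tote"]
library.save("backstore-tote-pick", plan)

# ...replayed later, or shared with another robot in the fleet.
steps = library.replay("backstore-tote-pick")
```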

System 2 also monitors execution and evaluates whether the System 1 vision-language-action (VLA) model is making progress, said Humanoid. If the system determines that it is unable to complete a task or needs assistance, it requests human support through the fleet layer, System 3.

Users can intervene through prompting at the System 2 level, or through teleoperation or direct joint control at the System 1 level, either remotely or on-site.
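The monitor-and-escalate loop can be sketched as a simple failure budget. All names here are hypothetical, and a real supervisor would reason over much richer signals than status strings:

```python
# Hedged sketch of System 2's supervision loop: watch System 1 status
# reports and, after repeated consecutive failures, request human support
# through the fleet layer (System 3).

def supervise(statuses, max_failures=2):
    """statuses: stream of System 1 reports
    ('success' / 'failure' / 'in_progress').
    Returns 'done', or 'request_support' once consecutive
    failures exceed the budget."""
    consecutive = 0
    for status in statuses:
        if status == "failure":
            consecutive += 1
            if consecutive > max_failures:
                return "request_support"  # escalate via System 3
        elif status == "success":
            consecutive = 0  # progress resets the failure budget
    return "done"
```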

KinetIQ System 1 tackles VLA-based task execution

Humanoid said the VLA neural network, which commands target poses for a subset of robot body parts such as the hands, torso, or pelvis, drives progress toward immediate low-level objectives set by System 2.

System 1 exposes multiple low-level capabilities to System 2 that users can invoke via different prompts. Examples include picking and placing objects, manipulating containers, packing, or moving.

The VLM-based reasoning of System 2 selects the capability most appropriate for the current situation and goal. Each low-level capability can also report its status (success, failure, or in progress) back to System 2 to facilitate progress tracking.
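A capability registry with status reporting might look like the following. This is illustrative only; the decorator, capability names, and status dictionary are assumptions, not Humanoid's API:

```python
# Hypothetical sketch: System 1 capabilities registered under invokable
# names, each reporting success / failure / in-progress back to System 2.

CAPABILITIES = {}

def capability(name):
    """Decorator registering a low-level skill under an invokable name."""
    def register(fn):
        CAPABILITIES[name] = fn
        return fn
    return register

@capability("pick_and_place")
def pick_and_place(obj, dest):
    # A real skill would run a VLA rollout; here we just report.
    return {"status": "success", "detail": f"{obj} -> {dest}"}

@capability("open_container")
def open_container(container):
    return {"status": "in_progress", "detail": container}

# System 2 selects a capability by name and reads back its status.
report = CAPABILITIES["pick_and_place"]("carton", "pallet")
```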

The KinetIQ VLA issues new predictions at a sub-second timescale, usually 5 to 10 Hz. Each prediction constitutes a chunk of higher-frequency actions (30 to 50 Hz, depending on the task) to be executed by System 0.

Humanoid added that action execution is fully asynchronous: a new action chunk is always being prepared while the previous one is still executing.

To ensure that an asynchronously produced chunk doesn’t contradict the reality that unfolded while it was produced, KinetIQ uses the prefix conditioning technique: Every chunk prediction is conditioned on the part of the previous chunk that is expected to be executed during inference.
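The idea can be shown with a toy predictor standing in for the real autoregressive or flow-matching VLA; all numbers and names below are illustrative assumptions:

```python
def predict_chunk(prefix, horizon=5):
    """Toy chunk predictor: continues from the last value of the prefix
    it is conditioned on (a stand-in for the VLA's chunk head)."""
    last = prefix[-1]
    return [last + i + 1 for i in range(horizon)]

def prefix_conditioned_rollout(n_chunks=3, horizon=5, infer_delay=2):
    """While a new chunk is being inferred, `infer_delay` actions of the
    current chunk execute; the new chunk is conditioned on exactly that
    prefix, so consecutive chunks stitch together without a jump."""
    executed = []
    chunk = predict_chunk([0], horizon)
    for _ in range(n_chunks):
        prefix = chunk[:infer_delay]        # actions expected to run during inference
        next_chunk = predict_chunk(prefix, horizon)
        executed += prefix                  # those actions actually execute
        chunk = next_chunk
    return executed
```

In this toy, the executed trajectory stays monotone and gap-free across chunk boundaries precisely because each new chunk starts from the prefix that ran during its own inference, which is the consistency property the technique is meant to guarantee.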

Unlike inpainting, this is a universal technique equally applicable to both autoregressive and flow-matching models, asserted Humanoid.

System 0 handles RL-based whole-body control

The goal of System 0 is to achieve pose targets set by System 1, while solving for the state of all robot joints in a way that continuously guarantees dynamic stability. System 0 runs at 50 Hz, said Humanoid.
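KinetIQ's System 0 is an RL-trained whole-body controller, but the shape of a 50 Hz control tick that drives joints toward System 1's pose targets can be illustrated with a much simpler, velocity-clamped proportional law. All gains and limits below are invented for illustration:

```python
# Illustrative only: a velocity-clamped P-controller showing the shape of
# a 50 Hz tick toward pose targets. KinetIQ's actual System 0 is an
# RL-trained policy that also guarantees dynamic stability.

def control_tick(joints, targets, dt=0.02, gain=5.0, max_vel=2.0):
    """One 50 Hz tick (dt = 1/50 s): step each joint toward its target,
    clamping commanded velocity to max_vel rad/s."""
    out = []
    for q, q_des in zip(joints, targets):
        v = max(-max_vel, min(max_vel, gain * (q_des - q)))
        out.append(q + v * dt)
    return out

joints, targets = [0.0, 1.0, -0.5], [0.8, 0.2, 0.0]
for _ in range(200):  # 4 seconds of simulated control at 50 Hz
    joints = control_tick(joints, targets)
```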

The KinetIQ implementation of System 0 uses reinforcement learning (RL)-trained whole-body control for both bipedal and wheeled robots. Humanoid said this approach allows KinetIQ to fully exploit synergy between the different platforms, benefiting from the power of RL in producing capable locomotion controllers.

Whole-body control is trained solely in simulation with online RL, requiring about 15,000 hours of experience to produce a capable model.

Working in unison across multiple embodiments and timescales, the four cognitive layers of KinetIQ can achieve complex goals that require fleet orchestration, reasoning, dexterous manipulation, dynamic recovery, and stability control, Humanoid claimed.



The post KinetIQ framework from Humanoid orchestrates robot fleets appeared first on The Robot Report.



