Iván Hernández Dalas: IEEE survey sheds light on how AI and humanoids will affect robotics in 2026

A collaborative robot working in a lab. The IEEE studied different ways in which AI could benefit robotics. Source: Adobe Stock

The Institute of Electrical and Electronics Engineers, or IEEE, recently announced the results of its “The Impact of Technology in 2026 and Beyond: an IEEE Global Study.” For the study, the IEEE spoke with technology leaders from Brazil, China, India, Japan, the U.K., and the U.S.

The organization found that 52% of technologists think the robotics industry will be one of the industries most impacted by artificial intelligence in the coming year. In addition, 77% of technologists agreed that while the novelty of humanoid robots can inject fun into the workplace, over time they will become commonplace "co-workers with circuits."

Bhushan Patel, an IEEE senior member of more than three years, gave The Robot Report more insight into the findings. His answers have been edited for clarity and brevity.

AI has been in robotics for years now. What’s pushing it to the forefront next year?

Patel: We are standing at a major inflection point right now, where AI is no longer just a supporting technology. It's becoming the brain of robotics.

What’s driving the shift is the convergence of three powerful forces: machine learning models, exponential growth in real-world robotic data, and computing advances.

Over the last decade, robotics has been primarily about precision, repeatability, and safety. In other words, just mastering motion. Robots were extremely capable, but they operated within structured environments, like assembly lines in manufacturing. Amazon was using robotics in its fulfillment centers for the timely delivery of packages. Those are pre-programmed workflows.

What’s happening now, thanks to AI, is that the robots are starting to perceive, learn, and adapt quickly. This evolution is transforming robots from static tools into dynamic collaborators that can function in a semi-structured environment.

From a technical standpoint, AI is giving robots contextual intelligence through computer vision, sensor fusion, and reinforcement learning. Robots can interpret their surroundings and make decisions in real time.

For example, in manufacturing, AI-driven robots can now detect variances and autonomously adjust forces or trajectories to maintain quality without human recalibration. Similarly, in healthcare, surgical robots can analyze tissue characteristics, recognize anatomical structures, and guide the surgeon toward the optimal path during the surgery.
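To make that concrete, here is a minimal sketch in Python of the kind of perceive-and-adjust loop described above. The sensor stub, force target, tolerance, and gain are hypothetical placeholders, not from the IEEE study or any particular controller; in a real system, a learned perception model would feed the same feedback skeleton.

```python
# Minimal sketch of a perceive-and-adjust loop: the robot measures a process
# variable (here, contact force), compares it to the target, and corrects its
# commanded force without human recalibration. All values are illustrative.

TARGET_FORCE_N = 20.0   # desired contact force in newtons (hypothetical)
TOLERANCE_N = 0.5       # acceptable variance before adjusting
GAIN = 0.4              # proportional correction factor

def read_force_sensor() -> float:
    """Stand-in for a real sensor read (or a learned perception estimate)."""
    return 21.3  # placeholder measurement in newtons

def control_step(commanded_force: float) -> float:
    """One control cycle: measure, detect variance, and correct the command."""
    measured = read_force_sensor()
    error = TARGET_FORCE_N - measured
    if abs(error) > TOLERANCE_N:
        # Variance detected: nudge the commanded force toward the target.
        commanded_force += GAIN * error
    return commanded_force

command = TARGET_FORCE_N
for _ in range(10):  # ten control cycles
    command = control_step(command)
print(f"adjusted command: {command:.2f} N")
```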

What is really accelerating adoption in 2025 and beyond is that organizations have begun to trust AI as a co-pilot, not just a calculator. Five years ago, AI was viewed as an experiment or even risky in regulated fields like healthcare and defense.

Today, it's seen as an enabler of precision, safety, and efficiency. This cultural and regulatory shift, alongside tangible results, is fueling confidence across the industry.

What effect will AI robotics have on the people who work with them and the places that deploy them?

Patel: On the human side, AI is amplifying expertise. We are moving toward a world where a less-experienced surgeon or technician can perform a complex procedure with just guidance from AI systems.

We are also seeing AI transforming the entire robotic value chain, not just the final product. For example, in design and manufacturing, AI tools are now optimizing robotic arm kinematics, simulating dynamic loads, and even predicting manufacturing tolerances in operations.

AI-driven data platforms are helping hospitals, factories, and logistics centers analyze fleet performance, manage their utilization, and continuously refine task planning.

I see it as recognition that we are crossing the threshold from possibility to inevitability. The models are robust enough, the computing is affordable enough, and the ROI is proven enough that AI in robotics is no longer just a research topic; it's an operational necessity.

What's driving the change is not just technological progress. I believe it's an alignment of maturity, motivation, and momentum. AI is ready, the world needs adaptable automation, and robotics is the perfect embodiment of that need.



How does IEEE think generative AI, specifically, will be applied to robotics?

Patel: Generative AI is reshaping robotics in some of the most profound ways we have seen in decades. It’s turning robotics development from a rule-based discipline into a creative, adaptive, and continuously learning ecosystem.

Traditional AI made robots smarter. Generative AI will make them more imaginative. At its core, generative AI enables machines to generate, meaning to create simulations, design control strategies, or even task plans, rather than simply following pre-coded rules. This is opening several key applications across the entire robotics life cycle, including design and simulation, autonomy, and human-robot interaction.

First, in the design and simulation phase, generative AI is dramatically reducing the time and cost to prototype a new robot and its components. For example, engineers are now using generative design algorithms to automatically create a lightweight robotic arm or joint optimized for strength, flexibility, and cost.

Similarly, in simulation and control, generative AI can create realistic, synthetic data that helps robots learn safely and quickly. Take surgical robots, for example. Instead of training on limited real-world video data, engineers can generate thousands of lifelike surgical scenarios with variable anatomy, lighting, and motion.
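As a rough illustration of that idea, the following Python sketch generates randomized synthetic training scenarios, a simple form of domain randomization. The parameter names and ranges are hypothetical, not drawn from the IEEE study or any specific surgical platform.

```python
import random

# Minimal sketch of domain randomization: generate synthetic training scenarios
# with varied anatomy, lighting, and motion instead of relying only on scarce
# real-world recordings. All parameter names and ranges are illustrative.

def generate_scenario(rng: random.Random) -> dict:
    return {
        "tissue_stiffness": rng.uniform(0.2, 1.0),   # normalized, hypothetical
        "lighting_lux": rng.uniform(200, 1500),      # operating-room lighting
        "camera_jitter_mm": rng.gauss(0.0, 0.3),     # simulated motion noise
        "instrument_speed_mms": rng.uniform(5, 40),  # tool tip speed
    }

rng = random.Random(42)                              # fixed seed for repeatability
dataset = [generate_scenario(rng) for _ in range(1000)]
print(dataset[0])
```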

The second major application is around autonomy and decision-making. Generative AI models, particularly large language models, or LLMs, can help robots reason about context. A robotic assistant could generate multiple possible sequences to complete a task and simulate outcomes before deciding which is safest or which is most efficient. Think of it as a form of predictive imagination.
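Here is a minimal sketch of that "predictive imagination" pattern, assuming hand-written candidate plans and a stubbed simulator; in practice, the candidate sequences could come from an LLM-based planner, and the scoring weights shown are hypothetical.

```python
# Minimal sketch of generate-then-evaluate planning: propose several candidate
# task sequences, score each in a (stubbed) simulator, and execute the best.

candidate_plans = [
    ["approach_left", "grasp", "lift", "place"],
    ["approach_right", "grasp", "lift", "place"],
    ["reorient", "approach_left", "grasp", "lift", "place"],
]

RISKY_STEPS = {"approach_left"}  # steps the stub simulator flags as higher risk

def simulate(plan: list[str]) -> tuple[float, float]:
    """Stub simulator returning (safety, efficiency) scores in [0, 1]."""
    risk_hits = sum(step in RISKY_STEPS for step in plan)
    safety = 1.0 - 0.2 * risk_hits
    efficiency = 1.0 / len(plan)
    return safety, efficiency

def choose_plan(plans: list[list[str]]) -> list[str]:
    # Weight safety above efficiency when ranking simulated outcomes.
    def score(plan: list[str]) -> float:
        safety, efficiency = simulate(plan)
        return 0.7 * safety + 0.3 * efficiency
    return max(plans, key=score)

print(choose_plan(candidate_plans))  # picks the safest, most efficient sequence
```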

The third thing I would say is human-robot collaboration. One of the biggest barriers in robotics has always been communication. Humans think in intent, while robots think in code. Generative AI bridges that gap with natural language understanding and multimodal input.

We are already seeing this in early-stage research, where surgeons can issue spoken commands or draw virtual annotations that the robot interprets and executes autonomously. In industrial settings, engineers can describe a task in plain English.

These capabilities are being made possible because foundation models are expanding beyond text. We now have large models that can process video, 3D data, and sensor data. At the same time, generative AI introduces questions about validation, explainability, and bias.

When a model creates a new robotic behavior, how do we ensure safety and regulatory compliance? That’s why I believe the near-term application will focus on human-in-the-loop systems where generative AI augments the designer or the operator but doesn’t replace them.

Two surgeons watching a surgical robot at work on a patient.

New AI approaches promise to improve human-robot interactions, reported IEEE. Source: Adobe Stock

How should roboticists ensure they’re deploying AI ethically?

Patel: As generative AI starts influencing how robots are designed, trained, and deployed, the responsibility shifts from "can we build this?" to "should we build this, and how do we build it responsibly?"

I usually look at ethical use in robotics through three lenses. The first lens is data, the second is decision, and the third is deployment.

Let's start with data. Generative AI models are only as good as the data they are trained on. In robotics, the data often involves humans, whether it's motion tracking, voice commands, or surgical footage. Robotics engineers need to ensure that the data is collected transparently, with consent, and, wherever possible, anonymized.

In healthcare robotics, for example, synthetic data generation can actually improve ethics by reducing reliance on patient data sets while still enabling realistic model training. The first step is building a data pipeline that is both diverse and privacy-preserving.

The second lens is decision. Generative AI can propose new design strategies or behaviors that seem perfectly logical to the algorithm but might have real-world consequences we didn't intend. Human oversight becomes crucial.

We should design human-in-the-loop systems where generative outputs are reviewed, validated, and tested before being implemented in any physical environment. In robotics, that means adding interpretability layers, tools that help engineers understand why an AI-generated control plan or design was chosen.

The last lens is deployment. Even after a robot ships, the ethical responsibility doesn't end. Generative AI enables robots to learn and adapt post-deployment, which means roboticists need to monitor for drift, which is when a system's behavior slowly deviates over time from its validated baseline.
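Here is a minimal sketch of what drift monitoring might look like in code, assuming a single behavioral metric and hypothetical thresholds; a production system would track many metrics and feed any flags into the audit and recertification mechanisms described next.

```python
from collections import deque
from statistics import mean

# Minimal sketch of post-deployment drift monitoring: track a behavioral metric
# (e.g., positioning error) over a rolling window and flag the system for review
# when it deviates from the validated baseline. All thresholds are illustrative.

BASELINE_ERROR_MM = 0.8     # error accepted at certification time (hypothetical)
DRIFT_THRESHOLD_MM = 0.3    # allowed deviation from the baseline
WINDOW = 50                 # number of recent cycles to average

recent_errors: deque = deque(maxlen=WINDOW)

def record_cycle(error_mm: float) -> bool:
    """Log one cycle's error; return True if drift exceeds the threshold."""
    recent_errors.append(error_mm)
    if len(recent_errors) < WINDOW:
        return False  # not enough data yet to judge drift
    return abs(mean(recent_errors) - BASELINE_ERROR_MM) > DRIFT_THRESHOLD_MM

# Example: errors creeping upward over time eventually trigger a drift flag.
for cycle in range(200):
    if record_cycle(0.8 + cycle * 0.005):
        print(f"drift detected at cycle {cycle}")
        break
```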

So, establish governance mechanisms like version control for AI models, audit trails for training data, and regulatory safety recertification to ensure the technology continues to behave as intended even after it has been deployed in the field.

I think one key mindset shift is moving from AI ethics as a checklist to AI ethics as a culture. It's about embedding ethical thinking right into the decision process, not treating it as a compliance checkbox.

Ethical robotics isn't about slowing innovation down. It's about building trust so that innovation can be done responsibly. Or, as I like to say, a robot's intelligence comes from data, but its integrity comes from its designers.

Do you think humanoid robots being seen as ‘fun’ in the workplace, as the IEEE study noted, will help or hurt deployments?

Patel: The idea of humanoids injecting fun into the workplace might sound light-hearted at first, but I actually think it speaks to a deeper truth about human-robot interaction. Emotional design and engagement are becoming just as important as mechanical performance.

In the early days of robotics, most deployments were entirely functional. Think of factory arms, warehouse machines, or surgical robots. The focus was on precision, throughput, and reliability.

As humanoid robots start entering more social and collaborative environments, like offices, hospitals, and retail spaces, the human experience becomes part of the success metric. So, I do believe the fun factor can accelerate early adoption. When a humanoid robot greets people with natural gestures, humor, or expression, it helps overcome the initial discomfort that humans often feel around autonomous systems.

That sense of novelty and playfulness makes people more receptive to working alongside a robot and, in that way, acts as a bridge to trust. We are already seeing this dynamic in action. For example, SoftBank's Pepper robot was designed not to perform heavy-duty tasks, but to engage people emotionally. It could read facial expressions, respond playfully, and create an approachable atmosphere.

Over time, as the survey suggests, the novelty fades, and that's actually a good thing. Once humanoids become commonplace co-workers with circuits, as 77% of technologists told IEEE, it means we have moved past the hype and into practical integration. The fun factor gets people to try the technology, but the sustained value comes from reliability, adaptivity, and meaningful human-robot collaboration.

Fun is the door opener, not the destination. It lowers barriers, sparks curiosity, and helps society embrace humanoids as part of daily life. But long-term adoption will depend on how the system adds value.

Humanoid robots working in a factory.

The potential of humanoid robots has sparked curiosity and investment. Source: Adobe Stock

When does the IEEE think humanoids might become common in the workplace?

Patel: We are probably looking at around five to seven years before humanoids start to feel like normal co-workers or assistants in an everyday environment.

But of course, that depends on how we define "commonplace," so let me explain what I mean by that. Right now, humanoid robots are moving rapidly from the research stage into early pilot deployments. You have companies like Agility Robotics, Figure AI, and Tesla all building general-purpose humanoids that can operate in a human environment.

These systems are still expensive and limited in numbers. But what’s happening is that they are demonstrating consistent reliability in semi-structured environments like logistics centers, manufacturing floors, and some healthcare pilot programs.

Now, if you think back to how industrial robots spread, they started in automotive plants in the 1960s and were commonplace across manufacturing by the 1990s. The adoption curve for humanoids will likely be much faster, mainly because now we have AI, we have sensors, and we have computing power that is evolving at exponential rates. We have also learned a lot about human-robot safety and regulation over the past decade, which helps this accelerated integration.

In the short term, the next two to three years, I think we will see humanoids become a familiar sight in controlled environments, like warehouses, research labs, and maybe hospitals for logistics support. The public will get used to seeing them in the background.

By around 2030 or 2035, we will start seeing wider deployments in semi-public spaces like corporate campuses, hospitals, hospitality, and assisted living facilities.

By the late 2030s or early 2040s, I believe humanoids will likely cross the threshold from novelty to normalcy. That's when people will start referring to them the same way we talk about delivery drones or voice assistants today. They are helpful, and nobody really stops to think twice about them.

But it’s not just about the hardware getting better. It’s about cultural adoption. I believe the IEEE survey captured that perfectly, because most technologists believe humanoids will initially bring a sense of fun and curiosity to the workplace, but over time, they will just blend in as reliable teammates. I completely agree with that trajectory.

Of course, there will still be pockets of resistance, like concerns about job displacement or simply discomfort with anthropomorphic machines. But as people experience their benefits, like reducing strain in healthcare and improving safety in factories, the value proposition will outweigh the anxiety.

Within a decade, humanoids won't just be headlines or demo videos. They will be quietly clocking in alongside us. A robot will be just another colleague helping humans focus on higher-value, creative work.

Humanoid robots working in a factory assembly line. Humanoids will eventually be accepted as co-workers, said IEEE survey respondents. Source: Adobe Stock

The post IEEE survey sheds light on how AI and humanoids will affect robotics in 2026 appeared first on The Robot Report.


