Oxford Robotics Institute director discusses the truth about AI and robotics

The Oxford Robotics Institute explores systems and applications across domains. Source: ORI

Nick Hawes stands at the cutting edge of robotics and artificial intelligence. As professor of AI and robotics at the University of Oxford and director of the Oxford Robotics Institute, he leads research that is redefining what robots can do — from long-lived autonomous systems to real-world applications in extreme environments.

With a career spanning indoor service robots, underwater vehicles, and robotics in nuclear settings, Hawes brings both visionary ideas and grounded experience. He is passionate about foundation models, autonomy, and the pragmatic challenges that come with integrating AI in business.

In this exclusive interview with The Champions Speakers Agency, we explore the most transformative technological breakthroughs for organizations, the trade-offs of AI becoming deeply embedded in the workplace, where autonomous robotics are already delivering impact, and the core messages Hawes hopes his audiences will remember.

From your perspective as a robotics and AI researcher, which technological breakthroughs do you consider most transformative for businesses today?

Hawes: There are a lot of really exciting technologies at the moment around both artificial intelligence and robotics. For robotics, one of the most exciting things for me is that autonomy is getting closer to being business as usual. These are robots that can operate by themselves without direct human intervention, using AI on board to make decisions.

Nick Hawes is professor of AI and robotics at the University of Oxford and director of the Oxford Robotics Institute.

This kind of autonomy is still limited in scope, but it is typically used for things like logistics, which is quite common now, and increasingly for inspection: for example, quadruped robots patrolling sites or drones automatically flying around them, looking for changes or issues that might require further inspection by humans. From a robotics perspective, that kind of autonomy is very interesting.
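To make that pattern concrete, here is a minimal sketch of the kind of automated inspection patrol Hawes describes: visit a set of waypoints, compare what the camera sees against a reference image, and flag anything that looks different for human follow-up. The robot-interface functions (goto_waypoint, capture_image, load_reference) are hypothetical placeholders, not code from ORI.

```python
# Minimal sketch of an autonomous inspection patrol. The robot interface is
# left as hypothetical placeholders; only the patrol/change-detection loop
# is the point of the example.
import numpy as np

WAYPOINTS = ["pump_room", "valve_3", "north_gate"]
CHANGE_THRESHOLD = 25.0  # mean absolute pixel difference that triggers an alert

def goto_waypoint(name: str) -> None:
    """Placeholder: drive or fly the robot to a named waypoint."""
    ...

def capture_image(name: str) -> np.ndarray:
    """Placeholder: return a grayscale frame taken at the waypoint."""
    return np.zeros((480, 640), dtype=np.float32)

def load_reference(name: str) -> np.ndarray:
    """Placeholder: reference image recorded on a previous patrol."""
    return np.zeros((480, 640), dtype=np.float32)

def patrol_once() -> list[str]:
    """Visit each waypoint and flag locations that look different from last time."""
    alerts = []
    for wp in WAYPOINTS:
        goto_waypoint(wp)
        current = capture_image(wp)
        reference = load_reference(wp)
        diff = float(np.mean(np.abs(current - reference)))
        if diff > CHANGE_THRESHOLD:
            alerts.append(wp)  # flag for human follow-up
    return alerts

if __name__ == "__main__":
    print("Waypoints needing human inspection:", patrol_once())
```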

Looking further ahead, there’s a huge amount of excitement about humanoids. If I were looking to bring robotics into my business right now, I wouldn’t be looking at humanoids unless I really wanted to take some risks. But within the next five to 10 years, there may be some use cases for humanoids.

Beyond that, in the broader AI scope, there’s huge excitement around foundation models — large language models and vision-language-action models — which effectively compress all of the knowledge of the internet or specialized datasets into something that you can query very quickly.

People in robotics are using that to understand the scenes around robots so they can interact with the world or humans better, or simply to give robots more general capabilities to act in an otherwise unstructured environment.
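As a rough illustration of that querying pattern, the sketch below asks an off-the-shelf vision-language model to describe a camera frame in natural language. It assumes the Hugging Face transformers library and uses the public BLIP captioning model purely as a stand-in for the foundation models Hawes mentions; the image path is hypothetical.

```python
# Minimal sketch: querying a vision-language model so a robot can describe its
# surroundings in natural language. Assumes the Hugging Face `transformers`
# library, with BLIP captioning as a stand-in for the models discussed above.
from transformers import pipeline

def describe_scene(image_path: str) -> str:
    """Return a short natural-language description of a camera frame."""
    captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
    result = captioner(image_path)
    return result[0]["generated_text"]

if __name__ == "__main__":
    # e.g. a frame saved from the robot's onboard camera (hypothetical path)
    print(describe_scene("frames/warehouse_aisle_03.jpg"))
```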



Growing autonomy helps robots reach their potential

You’ve worked on robotics projects in very different environments. Can you share some of the deployments that best demonstrate their potential?

Hawes: Over the years, I’ve deployed autonomous robots in a wide range of different places. Some of my earliest work looked at deploying autonomous mobile robots [AMRs] in indoor settings. We put robots into offices doing security and patrol tasks, and also into care homes or hospitals where they supported nursing staff.

These robots operated autonomously for months at a time, without needing human intervention. They were truly autonomous but capable of performing only a small range of tasks. Since then, I’ve deployed robots all over.

We had an underwater robot operating autonomously in Loch Ness, with colleagues here at Oxford and at the National Oceanography Centre. This robot collected data from a network of sensors.

We’ve also had robots operating in radioactive environments, both around the outside of the JET fusion reactor in Culham and performing inspection tasks at Sellafield, such as autonomously inspecting the Calder Hall power station while it is being decommissioned.

Beyond that, we’ve deployed robots in forests and grasslands — across the board, really. Everything from care homes to nuclear reactors — I’ve had robots operate autonomously in all of those areas.

We’re still learning to use AI

As AI becomes embedded into daily workflows, what do you see as the key opportunities and risks organizations should be aware of?

Hawes: Perhaps the biggest drawback is that we don’t yet know how to use AI very well. We don’t really understand some of the legal aspects, such as copyright, so there is quite a risk in introducing it into workflows.

Honestly, one of the biggest concerns to me is the energy requirements right now. Anyone using AI is really contributing to the climate crisis. We all use a lot of electronics, but the training and inference energy cost of AI is something people tend to overlook.

So, when you’re looking at your carbon footprint as an industry, I’m curious to know how AI is incorporated into that. People are getting good at dealing with some of the more widely known downsides of AI, such as hallucinations and unpredictability. There are many people looking at how to focus the use of AI, particularly language models, in specific ways and constrain their output to reasonably predictable areas.
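One common way to constrain a model's output, sketched below, is to ask for JSON that must validate against a fixed schema and to reject anything that doesn't conform. The llm_complete function here is a hypothetical stand-in for whatever model API is in use; only the validation pattern is the point.

```python
# Minimal sketch of constraining a language model's output: request JSON that
# matches a fixed schema and reject anything that doesn't validate.
# `llm_complete` is a hypothetical stand-in for a real model call.
import json
from jsonschema import validate, ValidationError

TICKET_SCHEMA = {
    "type": "object",
    "properties": {
        "category": {"enum": ["billing", "technical", "account", "other"]},
        "priority": {"enum": ["low", "medium", "high"]},
        "summary": {"type": "string", "maxLength": 200},
    },
    "required": ["category", "priority", "summary"],
    "additionalProperties": False,
}

def llm_complete(prompt: str) -> str:
    """Placeholder for a real model call; returns a canned response here."""
    return '{"category": "billing", "priority": "low", "summary": "Duplicate invoice."}'

def classify_ticket(text: str) -> dict | None:
    """Ask the model for a structured ticket classification and validate it."""
    prompt = f"Classify this support ticket as JSON matching the agreed schema:\n{text}"
    raw = llm_complete(prompt)
    try:
        data = json.loads(raw)
        validate(instance=data, schema=TICKET_SCHEMA)
        return data
    except (json.JSONDecodeError, ValidationError):
        return None  # fall back to a human or a retry rather than trusting free text

if __name__ == "__main__":
    print(classify_ticket("I was charged twice for my subscription last month."))
```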

That’s where the real benefits are — when you think about chatbots, data retrieval, prototyping visual designs, code, and documents. Previously, many of these tasks were not impossible to automate but were very difficult, and the kind of AI we’re seeing now allows us to automate a broader range of tasks.

For example, querying large unstructured documents or interacting with customers on very specific topics: we can now take on a broader range of tasks, and in a much more general form.
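Querying a large unstructured document usually starts with retrieving the most relevant passages before a model ever sees them. The sketch below uses naive word-overlap scoring in place of real embeddings, and the document path is hypothetical; it is meant only to show the shape of the retrieval step.

```python
# Minimal sketch of retrieving relevant passages from a large unstructured
# document before handing them to a language model. Word-overlap scoring
# stands in for real embeddings; illustrative only.
import re

def chunk(text: str, size: int = 80) -> list[str]:
    """Split a document into chunks of roughly `size` words."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(query: str, passage: str) -> int:
    """Count query words that also appear in the passage (a crude relevance proxy)."""
    q = set(re.findall(r"\w+", query.lower()))
    p = set(re.findall(r"\w+", passage.lower()))
    return len(q & p)

def top_passages(query: str, document: str, k: int = 3) -> list[str]:
    """Return the k chunks most relevant to the query."""
    chunks = chunk(document)
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

if __name__ == "__main__":
    doc = open("policy_manual.txt").read()  # hypothetical unstructured document
    for passage in top_passages("What is the refund window for damaged goods?", doc):
        print(passage, "\n---")
```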

If you think back to automation five or 10 years ago, with chatbots or scripting of apps, these systems were often very rigid and structured. You could only interact with them in a particular way, and you could only control their output in very specific ways, because those were the ways humans had decided they should work.

The advent of these large AI models allows a greater range of flexibility and generality within a task and means the input can be much less structured while the output can be more controlled. There is a real advantage in the approaches we see now, enabling us to tackle problems that previously couldn’t be addressed.

But we shouldn’t get too carried away. These are still largely single-shot processes. It might be a single dialogue with multiple steps or a single image generation, but there aren’t many systems that can autonomously complete a series of separate tasks to achieve a goal.

Booking a holiday or arranging a delivery, for instance, requires multiple independent parts to be coordinated. That’s one of the areas where current AI systems are lacking — the ability to plan and coordinate across multiple domains.
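To illustrate the gap, the sketch below shows holiday-booking sub-tasks being coordinated by hand-written glue code: the ordering and the dependencies between steps live in a program a human wrote, not in the model. All of the task functions are hypothetical placeholders.

```python
# Minimal sketch of the multi-step coordination that current AI systems still
# largely lack: a human wires the sub-tasks and their dependencies together.
# All task functions are hypothetical placeholders.
from typing import Callable

def book_flight(plan: dict) -> dict:
    plan["flight"] = "LHR->BCN 09:40"  # placeholder result
    return plan

def book_hotel(plan: dict) -> dict:
    # depends on knowing the flight dates and destination from the previous step
    assert "flight" in plan, "hotel booking needs the flight first"
    plan["hotel"] = "3 nights, city centre"
    return plan

def arrange_transfer(plan: dict) -> dict:
    assert "hotel" in plan, "transfer needs the hotel address first"
    plan["transfer"] = "airport shuttle"
    return plan

# The ordering and error handling live in human-written glue code, not in the model.
PIPELINE: list[Callable[[dict], dict]] = [book_flight, book_hotel, arrange_transfer]

def plan_holiday() -> dict:
    plan: dict = {}
    for step in PIPELINE:
        plan = step(plan)
    return plan

if __name__ == "__main__":
    print(plan_holiday())
```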

When addressing audiences, what core message do you want them to leave with about robotics and AI?

Hawes: When I talk about robotics and AI — and I hope you’ve got a sense of that in my other answers — I try to remain grounded. I think it’s important to demystify artificial intelligence and autonomous robotics. These are important and exciting tools that society will use in the future, but we shouldn’t get carried away with the hype.

We shouldn’t over-ascribe to them capabilities or even identities that are irrelevant. These are software and hardware tools, and we shouldn’t suddenly think they’re the solution to everything. There are a number of limitations in these technologies.

For me, it’s about communicating both the excitement and the capability — what they can do — as well as what they can’t do, and what you should remain cautious about. I’d like people to walk away from my talks with a better, more realistic understanding of these exciting technologies and the future we’re going to have with them.

About the author

Tabish Ali is a celebrity content and outreach executive at the Champions Speakers Agency, a leading European keynote speaker bureau. In this role, he leads exclusive interview campaigns with globally renowned experts across AI, cybersecurity, digital transformation, sustainability and leadership.

Ali has conducted more than 200 interviews that have been featured in such outlets as MSN, Benzinga, The Scotsman, Edinburgh Evening News, and Express & Star. His work transforms complex insights from industry leaders — including FTSE 100 advisors, bestselling authors and former government officials — into engaging thought leadership.

The post Oxford Robotics Institute director discusses the truth about AI and robotics appeared first on The Robot Report.


