More human than human: the tech enabling robots to exceed our capabilities

Richard van Hooijdonk
Tomorrow’s robots won’t just be faster or stronger than us – they’ll be able to feel, heal themselves, and interact with the world in ways we never could.

Executive summary

Today’s industrial robots excel at repetitive tasks but remain distinctly machine-like in their rigid, utilitarian design. The next generation of robotics will fundamentally reshape our understanding of what machines can achieve, driven by breakthrough technologies that don’t just mimic human capabilities but exceed them entirely.

  • Columbia University researchers have created tactile sensors that let robots ‘feel’ objects with human-like sensitivity.
  • WildFusion technology allows robots to navigate complex terrain by combining sight, sound, and touch into a unified environmental understanding.
  • Living skin and soft robotics blur the boundaries between biological and mechanical entities.
  • The global robotics market is projected to grow from approximately US$72 billion in 2025 to around US$150 billion by 2030.
  • The investment bank Goldman Sachs estimates that as many as 300 million jobs could be lost or significantly changed by AI and robotics.

The convergence of these technologies promises a future where robots don’t just assist humans but demonstrate capabilities we’ve never possessed. As we stand on the threshold of this transformation, the implications extend far beyond manufacturing floors to reshape healthcare, domestic life, and our fundamental relationship with artificial beings.

Robots have already carved out essential roles across countless industries, from the assembly-line arms that build our cars to the surgical bots that assist in delicate operations. Yet despite their growing presence, today’s robots remain surprisingly limited in what they can actually accomplish. Most are essentially sophisticated tools built for very specific jobs: clunky, industrial-looking machines that move with rigid, programmed motions. While they excel at repetitive tasks like welding car parts or stacking boxes, they struggle with anything that requires adaptability, creativity, or even basic problem-solving. Their movements are mechanical and predictable, their ‘vision’ is limited to cameras and sensors, and their decision-making is narrow and rule-based.

However, recent breakthroughs in AI, advanced materials, and sensor technology are giving birth to a new generation of robots that look increasingly human-like and can tackle far more complex challenges. We’re seeing humanoid robots that can walk with an eerily natural gait, hands that manipulate objects with the dexterity of a surgeon, and faces capable of expressing emotions that tug at our empathy. They are beginning to walk, talk, and interact with the world in ways that would have seemed almost unimaginable just a decade ago. The day is coming when robots won’t just match human capabilities – they’ll surpass them in almost every conceivable way. They’ll be stronger, faster, more precise, and more intelligent than we are. The question isn’t really whether this will happen; it’s how soon, and what that means for all of us.

A machine with the human touch

Machines can now be imbued with a sense of touch so refined that they can handle eggs without breaking them.

Human hands are truly extraordinary instruments. For example, we can instinctively adjust our grip when picking up a wine glass, sensing the exact amount of pressure we need to apply to lift the glass without shattering it. We can also feel the texture of a fabric, the weight of an object, or heat radiating off a surface. Robots, on the other hand, could do none of those things. Until now. A team of researchers at Columbia University has developed a groundbreaking system named 3D-ViTac, which integrates high-resolution tactile sensors into robotic fingers to give machines a sense of touch. The flexible, skin-like sensors cover the fingers and convert pressure into electrical signals, mimicking the way human nerve endings work. The system allows a robot to feel what it is gripping and adjust its force in real time, enabling it to handle fragile items like eggs or grapes without crushing them.

Tactile feedback enables robots to perform feats that would have been impossible with vision alone. A robot could now detect when an object starts slipping and tighten its grip instantly, or differentiate a soft plush toy from a rigid tool by touch. “This breakthrough also enables robots to handle occluded objects more reliably and effectively,” explains Binghao Huang, project lead and PhD researcher at Columbia Engineering. “As the demand for humanoid robots to assist with household chores grows, our bimanual system equipped with tactile sensors presents a promising solution for achieving such tasks.”
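
To make this concrete, below is a minimal sketch (in Python) of the kind of closed-loop grip control that tactile sensing enables. The gripper interface, sensor format, and thresholds are hypothetical stand-ins for illustration, not the actual 3D-ViTac API.

    import numpy as np

    # Illustrative only: the gripper interface, sensor format, and thresholds
    # below are hypothetical stand-ins, not the actual 3D-ViTac hardware API.

    class Gripper:
        """Minimal stand-in for a force-controlled two-finger gripper."""
        def __init__(self):
            self.force = 0.5  # current grip force (arbitrary units)

        def set_force(self, force):
            self.force = force

    def control_step(gripper, tactile_pad, target_pressure, gain=0.2, max_force=5.0):
        """One feedback iteration: tactile pressure in, grip-force adjustment out."""
        pressure = tactile_pad.mean()       # aggregate the skin-like sensor array
        error = target_pressure - pressure  # too little pressure -> squeeze harder
        gripper.set_force(float(np.clip(gripper.force + gain * error, 0.0, max_force)))
        return pressure

    def is_slipping(pressure_history, window=5, drop=0.15):
        """Heuristic slip check: a sharp pressure drop at constant grip force
        suggests the object is sliding out of the fingers."""
        if len(pressure_history) < window:
            return False
        first, last = pressure_history[-window], pressure_history[-1]
        return (first - last) / max(first, 1e-6) > drop

In a real controller, a loop like this would run hundreds of times per second, and a positive slip check would trigger an immediate force increase. That rapid feedback is what lets a robot grip an egg firmly enough to lift it but gently enough not to crush it.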

The Vancouver-based robotics company Sanctuary AI has already integrated similar sensors into its Phoenix humanoid robots to enable them to perform complex, touch-driven tasks in factories and warehouses. “Without tactile sensing, robots rely solely on video to detect contact, which can delay response times and reduce efficiency,” notes Dr. Jeremy Fishel, a researcher at Sanctuary AI. “The new sensors address this limitation by providing real-time touch feedback.” Advanced haptics gives robots a human-like sense of touch that enables fine manipulation, safer human-robot interaction through immediate collision detection, and greater autonomy in handling unpredictable objects. This means robots can now work in environments where visual feedback is limited – assembling components inside machinery, performing medical procedures, or handling objects in poorly lit conditions.

The multisensory experience of the world

Robots are learning to perceive the world through multiple senses simultaneously, resulting in heightened environmental awareness that could soon surpass human capabilities.

Human perception works through a seamless integration of multiple senses. We combine sight, sound, touch, and other senses to build a cohesive understanding of our surroundings. Robots, by contrast, have traditionally relied on single primary inputs – usually vision via cameras or laser scanners – to navigate their environment and make decisions. While this has proven more than adequate in controlled environments, it severely restricts what a robot can do in real-world settings. But not for much longer. Researchers are now actively working to overcome this limitation by developing multisensory fusion systems that would give robots a richer, more robust perception than any single sense could provide.

Developed at Duke University, WildFusion is an innovative framework that enables quadruped robots to navigate dense outdoor terrain by combining RGB camera and LiDAR data with vibrations recorded via foot-mounted microphones and force feedback from leg sensors. As the robot traverses a forest, it simultaneously ‘hears’ the crunch of leaves, ‘feels’ the ground’s firmness underfoot, and ‘sees’ obstacles like logs or rocks. The sensory streams are processed through specialised AI encoders and merged into a unified 3D environmental map. When vision becomes obscured by foliage, the vibrations and force sensors help the robot infer terrain characteristics and determine safe stepping locations. “WildFusion opens a new chapter in robotic navigation and 3D mapping,” says Professor Boyuan Chen, who led the project. “It helps robots to operate more confidently in unstructured, unpredictable environments like forests, disaster zones, and off-road terrain.”
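
The exact architecture is described in the Duke team’s publication; the toy sketch below only illustrates the general pattern of per-modality encoders feeding a shared fusion step. The signal sizes and random stand-in ‘encoders’ are invented for clarity and bear no relation to the real system.

    import numpy as np

    # Toy sketch of multisensory fusion in the spirit of WildFusion. The signal
    # sizes and random projection 'encoders' are invented for illustration; the
    # real system uses learned neural encoders and builds a full 3D map.

    rng = np.random.default_rng(0)

    # Fake raw streams captured at one instant of walking through a forest.
    streams = {
        "camera": rng.normal(size=256),  # flattened RGB patch
        "lidar":  rng.normal(size=128),  # range returns
        "audio":  rng.normal(size=64),   # foot-microphone vibrations (crunching leaves)
        "force":  rng.normal(size=16),   # leg sensors (firmness underfoot)
    }

    # One stand-in encoder per modality, each projecting into a shared 32-d space.
    encoders = {name: rng.normal(scale=0.1, size=(32, s.size))
                for name, s in streams.items()}

    def encode(name):
        """Project one raw sensor stream into the shared feature space."""
        return np.tanh(encoders[name] @ streams[name])

    # Merge the per-sense features into one unified representation, which a
    # downstream planner would use to pick safe footholds.
    fused = np.concatenate([encode(name) for name in streams])
    print(fused.shape)  # (128,): a single environmental embedding from four senses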

The multisensory approach solves a critical problem that has limited robotic deployment in real-world environments. Single-sense systems fail when that primary input is compromised – a camera-reliant robot becomes helpless in fog, whilst a LiDAR-dependent system struggles with transparent surfaces. By processing multiple sensory inputs simultaneously, robots can develop redundant perception that allows them to maintain functionality even when individual sensors are impaired. With further advancements in sensor technology, robots may one day even push beyond human limitations. They could, for example, process inputs across electromagnetic spectra invisible to us or detect vibrations beyond our hearing range. The potential applications of such superhuman sensing, from disaster response to off-road exploration, are easy to imagine.
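
One simple way to picture that redundancy: weight each sense by an estimate of how trustworthy it currently is, so a blinded camera is down-weighted rather than breaking the whole pipeline. The confidence numbers below are invented for illustration; a real system would derive them from sensor diagnostics or learned uncertainty estimates.

    import numpy as np

    # Hypothetical sketch of redundant perception: fuse per-sense feature
    # vectors weighted by confidence. All values are invented for illustration.

    rng = np.random.default_rng(1)
    features = {name: rng.normal(size=32)
                for name in ("camera", "lidar", "audio", "force")}

    def fuse(features, confidence):
        """Confidence-weighted average of per-sense feature vectors."""
        total = sum(confidence.values())
        return sum(confidence[name] * vec for name, vec in features.items()) / total

    # Clear conditions: every sense contributes equally.
    clear = fuse(features, {"camera": 1.0, "lidar": 1.0, "audio": 1.0, "force": 1.0})

    # Fog: the camera's confidence collapses, but the fused estimate degrades
    # gracefully because the other three senses still carry it.
    foggy = fuse(features, {"camera": 0.1, "lidar": 1.0, "audio": 1.0, "force": 1.0})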

“A future robot helper needs to be able to not just carry heavy things but also give someone a hug or shake hands.”

Robert Katzschmann, professor at ETH Zurich

Blurring the line between human and machine

Advancements in soft robotics technology are facilitating the development of robots with increasingly lifelike movement and appearance.

You’d be forgiven for thinking that the rigid, mechanical appearance we typically associate with robots is merely an aesthetic choice. But it’s not; it’s actually a necessity that stems from the current limitations of robotics technology. Hard metal joints and rigid actuators have historically produced robots that move with mechanical precision but lack the fluid grace and soft touch that characterise living beings. But even this is starting to change, as researchers around the world develop soft robotics systems that promise to revolutionise how the machines we build interact with the world and the people around them.

At ETH Zurich, a team of researchers led by Robert Katzschmann has created the first robot leg powered by electrohydraulic artificial muscles. Instead of heavy motors, the leg uses soft, fluid-filled actuators (essentially oil-filled elastic bags with electrodes) that contract and expand like real muscle fibres. This enables more fluid, human-like movement and delivers superior energy efficiency. The artificial muscles allow the small robotic leg to hop nearly 13 cm high – or about 40% of its length – and handle varied terrains, including grass, sand, and rocks, with remarkable ease. More importantly, the soft construction makes robots much safer around humans. “Conventional robots with rigid metal joints could be dangerous if one falls on you – it’s going to be quite painful,” explains Katzschmann. “A future robot helper needs to be able to not just carry heavy things but also give someone a hug or shake hands.”
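
As a rough illustration of how such actuators behave, the toy model below assumes the electrostatic ‘zipping’ force grows roughly with the square of the drive voltage and saturates at a maximum strain. The kilovolt range and 20% strain cap are assumed values, not the ETH team’s measurements; the only figures taken from the article are the 13 cm hop and the 40% ratio.

    # Back-of-the-envelope model of an electrohydraulic 'artificial muscle'.
    # Assumptions: contraction grows roughly with the square of the drive
    # voltage (typical of electrostatic actuation) and saturates at a maximum
    # strain. The kilovolt range and 20% strain cap are assumed, not measured.

    def muscle_strain(voltage, v_max=8000.0, max_strain=0.20):
        """Fractional contraction of the oil-filled actuator at a given voltage."""
        v = min(abs(voltage), v_max)
        return max_strain * (v / v_max) ** 2

    print(f"strain at 6 kV: {muscle_strain(6000):.0%}")  # about 11%

    # Sanity check on the article's figures: a ~13 cm hop that is 'about 40%
    # of its length' implies a leg roughly 13 / 0.40 = 32.5 cm long.
    print(f"implied leg length: {13 / 0.40:.1f} cm")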

While this is certainly a feat, researchers at the University of Tokyo may have gone one step further: they have successfully bonded lab-grown ‘living skin’ onto a robotic frame, creating machines with a remarkably lifelike appearance. The engineered skin, complete with collagen and living cells, adheres to the metallic surface through a perforated underlayer inspired by human skin ligaments. Perhaps the most exciting thing about the new skin is what happens when it gets damaged – the cells regenerate to seal minor wounds autonomously, just as human skin does. “Now that we can do this, living skin can bring a range of new abilities to robots,” says Professor Shoji Takeuchi, who led the team. “Self-healing is a big deal… Biological skin repairs minor lacerations as ours does, and nerves and other skin organs can be added for use in sensing and so on.”

The shape of things to come

The consensus among experts is that robots are coming fast, and by 2030, the world will look noticeably different because of it.

So, what does this mean for the future of our society? Tech analyst Rob Toews predicts that we will see more than one hundred thousand humanoid robots deployed in the real world by 2030. With further advancements in AI, Toews believes that humanoids will finally move out of R&D labs and find practical use in a wide range of real-world environments, including warehouses, retail, and perhaps even public spaces. DeepMind’s chief executive, Demis Hassabis, predicts that AI will match human intelligence within the next five years. By 2030, this could lead to “an era of abundance”, where most labour is handled by AI-powered robots, leaving the humans to do, well… something that isn’t as physically demanding.

The advancements in robotics technology are expected to drive massive economic shifts that will reshape entire industries within this decade. The global robotics market is projected to grow from approximately US$72 billion in 2025 to around US$150 billion by 2030. This could have profound implications for the employment landscape. In its future of work analysis, McKinsey & Company projects that roughly 30% of hours worked in the US economy could be automated by 2030, with routine tasks in manufacturing, transportation, and food service among the most acutely affected.
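
Expressed as an annual growth rate, those two market figures imply compound growth of roughly 16% per year, as the quick check below shows.

    # Implied compound annual growth rate (CAGR) from the projections above.
    start, end, years = 72e9, 150e9, 5  # US$72bn in 2025 to US$150bn in 2030

    cagr = (end / start) ** (1 / years) - 1
    print(f"implied growth: {cagr:.1%} per year")  # about 15.8%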

The investment bank Goldman Sachs goes even further in its estimates by suggesting that as many as 300 million jobs could be lost or significantly changed by AI and robotics. However, these projections don’t account for the new roles that advanced robotics will create. Robots with sophisticated sensing and soft bodies will require new forms of maintenance, training, and integration. The shift toward service robotics – machines that work alongside humans rather than replace them – suggests a future where human-robot collaboration becomes the norm rather than the exception.

Learnings

The technologies transforming robotics – advanced haptics, multisensory perception, living skin, distributed intelligence, and embodied cognition – aren’t developing in isolation. They’re converging to create machines that transcend traditional limitations. When robots can feel the texture of fabric, hear approaching footsteps, heal their own wounds, and learn through physical experience, they stop being tools and start becoming something altogether different.

We’re witnessing the birth of a new form of intelligence, one that combines computational power with physical presence, mechanical precision with biological adaptability. This new generation of robots won’t just perform tasks; they’ll understand context, adapt to circumstances, and interact with the world in ways that feel natural, even intimate. The question isn’t whether robots will match human capabilities, but how soon they’ll surpass them – and what that means for our species’ future role in the world we’ve built.
