Inside a worm’s brain, the future of artificial intelligence
14 January 2026
In a world where artificial intelligence (AI) seems to be growing ever more powerful, with models that consume mountains of data and demand enormous amounts of energy, a surprising discovery shows that “bigger” does not always mean “better”.
The inspiration comes from a tiny animal: the worm Caenorhabditis elegans, about one millimetre long and equipped with just 302 neurons. Scientists have taken their cue from its nervous system and how it works to design a new generation of artificial neural networks called “Liquid Neural Networks”.

To understand why this worm can teach us something about AI, we need to go back to basics. In the brain of C. elegans, neurons do not communicate only through well-defined “spiking” signals, as happens in many larger animals, but also through more gradual, analogue signals, with a more fluid dynamic. This allows the worm to adapt, explore and respond to the world with remarkable efficiency, given its simplicity.

Engineers at the company Liquid AI and at the Massachusetts Institute of Technology (MIT) say they wanted to “copy” the worm in order to capture some key principles of its nervous system, such as flexibility, continuous feedback and adaptive capacity. This is what makes these liquid networks special.

Traditional neural networks operate in a fairly static way: during training the connections between artificial “nodes” are set, and then the model “freezes” those relationships. When new data arrive, it is not always easy to change the model in any substantial way. Liquid networks, by contrast, work more dynamically: each input can influence not only the output, but also change the way the network carries out its calculations. In practice, the “computation” itself can change shape.

Another point is that this type of model can contain “loops”, or influences that run backwards through the network as well as forwards. This gives it greater adaptability. For example, if a self-driving car encounters rain or fog, the system can “adjust” in real time without having to start from scratch.

What practical advantages does this approach bring? First of all, these networks require far fewer computing resources than today’s dominant gigantic models.
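To make the idea of a “computation that changes shape” concrete, here is a minimal sketch of a liquid time-constant style update, loosely following the published LTC formulation: the gating term depends on both the input and the current state, so each neuron’s effective time constant shifts with the data rather than being frozen at training time. All weight matrices, names and sizes here are illustrative, not Liquid AI’s actual implementation.

```python
import numpy as np

def ltc_step(x, u, W_in, W_rec, b, tau, A, dt=0.01):
    """One Euler integration step of a liquid time-constant neuron layer.

    Because the gate f depends on both the input u and the state x,
    the decay rate -(1/tau + f) changes from moment to moment: the
    network's dynamics, not just its output, respond to the data.
    """
    f = np.tanh(W_in @ u + W_rec @ x + b)   # input- and state-dependent gate
    dxdt = -(1.0 / tau + f) * x + f * A     # state-dependent effective time constant
    return x + dt * dxdt

# Toy usage: 4 neurons driven by a 2-dimensional oscillating input.
rng = np.random.default_rng(0)
x = np.zeros(4)
W_in = rng.normal(size=(4, 2))
W_rec = rng.normal(size=(4, 4))
b, tau, A = np.zeros(4), np.ones(4), np.ones(4)
for t in range(100):
    u = np.array([np.sin(0.1 * t), np.cos(0.1 * t)])
    x = ltc_step(x, u, W_in, W_rec, b, tau, A)
```

The point of the sketch is only the structure of the update: in a conventional recurrent network the time constant `tau` is fixed, whereas here the term `f` folds the input into the dynamics themselves.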
The CEO of Liquid AI, Dr Ramin Hasani, explains that with these models “you can literally run them on a coffee machine”. This means that small devices, such as smart glasses or compact cars, could incorporate AI systems that do not rely on powerful cloud servers. From both an environmental and practical point of view, this is a major shift: less energy, less latency, greater privacy.

Liquid networks are also more transparent: in some cases it is easier to understand how they “think” and how their behaviour changes, compared with gigantic models that are often complete “black boxes”.

However, not everything is solved, and limitations exist. Liquid networks are particularly suitable for data that change over time (time series), such as video, audio signals and sensor data, whereas for static tasks like analysing still images or generating large amounts of text, traditional models are currently more effective. Another aspect to consider is that, although they behave more like biological systems, these more adaptable networks also risk being less predictable in some situations. A balance has to be struck between flexibility and control.

The fact that the inspiration comes from a tiny worm reminds us that sometimes simplicity hides profoundly effective solutions. In the future we may have everyday devices – from smartphones to glasses to cars – incorporating a “liquid” intelligence, able to adapt to the world we live in without consuming enormous amounts of energy or relying on distant servers. A small worm moving through the soil may be pointing the way to a major revolution in AI.