How a worm could save humanity from bad AI | Ramin Hasani
Why this matters
Auto-discovered candidate. Editorial positioning to be finalized.
Summary
Auto-discovered from TED Talks. Editorial summary pending review.
Perspective map
The amber marker shows the most Risk-forward score. The white marker shows the most Opportunity-forward score. The black marker shows the median perspective for this library item. Tap the band, a marker, or the track to open the transcript there.
An explanation of the Perspective Map framework can be found here.
Episode arc by segment
Early → late · height = spectrum position · colour = band
Risk-forward · Mixed · Opportunity-forward
Each bar is tinted by where its score sits on the same strip as above (amber → cyan midpoint → white). Same lexicon as the headline. Bars are evenly spaced in transcript order (not clock time).
Across 4 full-transcript segments: median -5 · mean -5 · spread -10–0 (p10–p90 -10–0) · 0% risk-forward, 100% mixed, 0% opportunity-forward slices.
Mixed leaning, primarily in the Governance lens. Evidence mode: interview. Confidence: high.
- Emphasizes governance
- Emphasizes safety
- Full transcript scored in 4 sequential slices (median slice -5).
Editor note
Auto-ingested from daily feed check. Review for editorial curation under intake methodology.
Play on sAIfe Hands
On-site playback is enabled when an episode-level media URL is connected. This entry currently has a show-level source URL, not an episode-level media URL.
Episode transcript
YouTube captions (TED associates this talk with a public YouTube mirror) · video x6oM9hQMjUY · stored Apr 10, 2026 · 82 caption segments
Captions are an imperfect primary: they can mis-hear names and technical terms. Use them alongside the audio and publisher materials when verifying claims.
No editorial assessment file yet. Add content/resources/transcript-assessments/how-a-worm-could-save-humanity-from-bad-ai-ramin-hasani.json when you have a listen-based summary.
My wildest dream is to design artificial intelligence that is our friend, you know. If you have an AI system that helps us understand mathematics, you can solve the economy of the world. If you have an AI system that can understand humanitarian sciences, we can actually solve all of our conflicts. I want this system to, given Einstein’s and Maxwell’s equations, take them and solve new physics, you know. If you understand physics, you can solve the energy problem. So you can actually design ways for humans to be better versions of themselves. I'm Ramin Hasani. I’m the co-founder and CEO of Liquid AI. Liquid AI is an AI company built on top of a technology that I invented back at MIT. It’s called “liquid neural networks.” These are a form of flexible intelligence, as opposed to today's AI systems, which are fixed, basically. So think about your brain. You can change your thoughts. When somebody talks to you, you can completely change the way you respond. You always have a mechanism that we call feedback in your system. So basically when you receive information from someone as an input, you process that information, and then you reply. For liquid neural networks, we simply took those feedback mechanisms and added them to the system. So that means it has the ability to think. That property is inspired by nature. We looked into brains of animals and, in particular, a very, very tiny worm called “C. elegans.” The fascinating fact about the brain of the worm is that it shares 75 percent of its genome with humans. We have the entire genome mapped. So we understand a whole lot about the functionality of its nervous system as well. So if you understand the properties of cells in the worm, maybe we can build intelligent systems that are as good as the worm and then evolve them into systems that are better than even humans.
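The feedback idea described above, where a neuron's own state feeds back into how it responds to new input, can be sketched in a few lines. The following is a simplified, illustrative liquid time-constant style neuron based on the published academic formulation; the function names and constants are hypothetical, and this is not Liquid AI's actual implementation:

```python
import math

def ltc_step(x, u, dt=0.05, tau=1.0, a=1.0, w=0.8, b=0.2):
    """One Euler step of a simplified liquid time-constant neuron.

    The gate f depends on both the input u and the current state x
    (the feedback Hasani describes), so the neuron's effective time
    constant adapts to what it is currently seeing.
    """
    f = 1.0 / (1.0 + math.exp(-(w * u + b * x)))  # input/state-dependent gate
    dx = -(1.0 / tau + f) * x + f * a             # LTC-style dynamics
    return x + dt * dx

# Drive the neuron with a step input: the state relaxes toward a level
# that depends on the input, at a speed that also depends on the input.
x = 0.0
for t in range(200):
    u = 1.0 if t >= 50 else 0.0
    x = ltc_step(x, u)
```

The key contrast with a fixed-weight network is that here the decay rate `1/tau + f` is itself a function of the signal, so the same neuron behaves differently in different contexts.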
The reason why we are studying nature is that it gives us a shortcut through exploring all the possible kinds of algorithms that you can design. You can look into nature; that gives you a shortcut to get to efficient algorithms much faster, because nature has done a lot of search, billions of years of evolution, right? So we learned so much from those principles. I just brought a tiny principle from the worm into artificial neural networks. And now they are flexible, and they can solve problems in an explainable way that was not possible before. AI is becoming very capable, right? The reason why AI is hard to regulate is that we cannot understand those systems; even the people who design the systems don't understand them. They are black boxes. With Liquid, because we are fundamentally using mathematics that are understandable, we have tools to really understand and pinpoint which part of the system is responsible for what. You're designing white-box systems. So if you have systems whose behavior you can understand, that means even if you scale them into something very, very intelligent, you can always have a lot of control over that system because you understand it. You can never let it go rogue. So all of the crises we are dealing with right now, you know, doomsday kinds of scenarios, are all about scaling a technology that we don't understand. With Liquid, our purpose is to really calm people down and show people that, hey, you can have very powerful systems where you have a lot of control and visibility into their working mechanisms. The gift of having something [with] superintelligence is massive, and it can enable a lot of things for us. But at the same time, we need to have control over that technology. Because this is the first time that we’re going to have a technology that is going to be better than all of humanity combined.