AI has been built like software.
But what if it should be grown like life?
In this episode of Eye on AI, Craig Smith sits down with Sebastian Risi, professor and leading researcher in neuroevolution and artificial life, to explore a fundamentally different approach to building intelligence, one inspired by how nature evolves, grows, and adapts.
Sebastian explains why traditional AI systems are limited by fixed architectures and one-time training, and how evolutionary algorithms can create systems that continuously learn, self-organize, and even grow their own neural structures over time.
They dive into concepts like plastic neural networks that keep updating during their lifetime, AI systems that can recover from damage, and models that develop from a single "cell" into complex structures, similar to biological organisms.
The conversation also explores how combining large language models with evolutionary search could unlock more creative and open-ended problem solving, from merging specialized models to building AI systems capable of generating and testing scientific ideas.
If you want to understand where AI is headed beyond today's transformer models, and why the future may look more like living systems than software, this episode offers a clear and thought-provoking perspective.
Subscribe for more conversations with the people building the future of AI and emerging technology.
Stay Updated:
Craig Smith on X: https://x.com/craigss
Eye on A.I. on X: https://x.com/EyeOn_AI
(00:00) Why copy nature's evolution for AI
(01:20) What neuroevolution actually means
(05:52) How evolutionary search replaces gradients
(08:03) Plastic neural networks and continuous learning
(11:53) Growing neural networks like living systems
(18:08) Scaling challenges and limits of growth
(23:16) Can evolving systems replace LLM training
(27:28) Continual learning and model merging
(30:27) Artificial life, self-repair, and resilience
(35:10) AI scientists and evolution with LLMs