EPISODES
1 | Architecture for Housing Crises, Wildfires, and Electric Cars (Dan Spiegel of SAW Architects) – 25 May 2025
Why does it cost so much to build housing? How can we build for a world of climate change and wildfires? How will electric cars and charging stations change gas station architecture?
Join Dan Spiegel of SAW Architects as he lays out his solutions to these problems.
Show Notes:
SAW Website
SAW Instagram Page
Community project – Lots Will Tear Us Apart
SANAA Japanese house steel plates
Looking After the Fires
Expo 2025 Pavilions in Osaka, Japan
Grave of the Fireflies
2 | AI Reliability and Humans Testing Language Models (Anastasios Angelopoulos of LM Arena)
How fast is AI really improving, and how do we know? What guarantees can we expect that AI systems will be robust and reliable? What is AGI, and have we reached it? Can AI systems show creativity or even sentience?
Join Anastasios Angelopoulos as he lays out his thoughts on these hard questions, while he and his partners build the world’s most sophisticated ways to test LLMs, which are getting better faster than everyone expects.
Show Notes:
Anastasios’s Personal Website
Conformal Prediction (Science of AI reliability)
LM Arena (Humans testing LLMs)
DeepSeek and DeepSeek R1
3 | Algorithmic Movies, AI Dogs, and the Afterlife (Miguel Novelo of Stanford)
How valuable are inanimate material objects, and can we have relationships with them – from computers to art? Why do we have dogs in our lives, which our laws treat as objects, when dogs have no say in the “relationship”? Will our new synthetic beings, our AI and robotic mind children, be more like our pets, or will we be their pets?
Join Miguel Novelo, an artist, researcher, and community organizer currently working on algorithmic movies about geological change, game-engine storytelling about the dog-human-technology afterlife, technoshamanisms, technology displacement, and technophobias. He is an independent artist and a lecturer in Stanford’s Department of Art and Art History and at San Jose State University.
Show Notes:
Miguel’s Personal Website
Interactive Installation: “Chupaflor: Whistling; Dog Spirit, Aqui” (2023)
Sonic Sculpture: “Judgment Pelicano: Impact” (2022)
Experimental Films: Super 8 and 16mm (2020s?)
4 | Teaching AI How to Be Moral (Jared Moore of Stanford) – 31 Jan 2026
Can we make moral AI agents? Can these agents get good enough to provide therapy and other personal services to humans, and even if they can, is that a good idea? Are language models sentient and deserving of moral concern – and how would we know? How do we incorporate a pluralistic set of views into AI systems?
Join Jared Moore, a computer scientist, AI alignment researcher, and educator probing how large language models understand (and sometimes misunderstand) human minds and values. Now at Stanford University, he investigates social reasoning, theory-of-mind, and the pitfalls of machine deception while co-creating courses like “How to Make a Moral Agent.” Jared blends rigorous research with creative outreach—publishing on pluralistic alignment, writing a satirical novel about conscious AI, and building installations that turn code into poetry—to push the question: how can we make AI systems reliably do what we want, for everyone’s benefit?
Show Notes:
Why LLMs Won’t Replace Therapists Anytime Soon
Are Large Language Models Consistent over Value-laden Questions?
The Strength of the Illusion: a satirical novel about AI