open.spotify.com/episode/3KKMitmNgWtyyIcOv9jXwA

Preview meta tags from the open.spotify.com website.

Linked Hostnames

1

Thumbnail

Search Engine Appearance

Google

https://open.spotify.com/episode/3KKMitmNgWtyyIcOv9jXwA

On the Ethics of AI

Listen to this episode from Firewalls Don't Stop Dragons Podcast on Spotify. Artificial Intelligence (AI) is the Big Tech buzzword of the day. Every company that wants investment (public or private) is scrambling to have an "AI story," adding chatbots and "agentic" features to its products wherever possible. The AI companies themselves are constantly expanding their models, ingesting as much data (including highly personal information) as possible. In this AI gold rush, companies are making flawed and often harmful products. Companies are firing workers and trying to replace them with AI bots. And it's forcing us all to question what's real, what has actual value, and what the impacts could and should be on society as a whole. Discussing deep questions like this is the purview of philosophers, and today I'll be welcoming back someone uniquely and supremely qualified to address them, Carissa Véliz.

Interview Notes

  • Carissa Véliz: https://www.carissaveliz.com/
  • Privacy is Power: https://www.carissaveliz.com/books
  • Carissa's research: https://www.carissaveliz.com/research
  • Moral Zombies: https://link.springer.com/article/10.1007/s00146-021-01189-x
  • ChatGPT suicide: https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html
  • TESCREAL: https://en.wikipedia.org/wiki/TESCREAL
  • John Oliver on AI Slop: https://www.youtube.com/watch?v=TWpg1RmzAbc
  • Proton Lumo: https://proton.me/blog/lumo-ai
  • EU's "public good" LLM: https://ethz.ch/en/news-and-events/eth-news/news/2025/07/a-language-model-built-for-the-public-good.html

Further Info

  • My book: https://fdsd.me/book
  • My newsletter: https://fdsd.me/newsletter
  • Support the mission: https://fdsd.me/support
  • Give the gift of privacy and security: https://fdsd.me/coupons
  • Get your Firewalls Don't Stop Dragons merch: https://fdsd.me/merch

Table of Contents

  • 0:00:00: Intro
  • 0:05:09: What does "artificial intelligence" really mean?
  • 0:13:21: Should STEM degrees require ethics training?
  • 0:17:20: Does anthropomorphising AI undermine our discourse?
  • 0:22:35: What is the TESCREAL view of AI?
  • 0:28:09: Can we infuse AI tools with human morality?
  • 0:34:31: What are the dangers of training AI on copyrighted works?
  • 0:42:16: What happens when AI starts ingesting its own output?
  • 0:44:27: Can we make AI systems that are truly private?
  • 0:48:08: How should we assign liability for AI harms?
  • 0:51:06: Is AI eroding our ability to trust anything?
  • 0:54:06: What happens when AI obviates the need to work at all?
  • 1:00:00: How do we maximize the benefits and minimize the harms of AI?
  • 1:03:20: Interview wrap-up
  • 1:06:06: Patron podcast preview
  • 1:07:08: Looking ahead



Bing

On the Ethics of AI

https://open.spotify.com/episode/3KKMitmNgWtyyIcOv9jXwA




DuckDuckGo

https://open.spotify.com/episode/3KKMitmNgWtyyIcOv9jXwA

On the Ethics of AI


  • General Meta Tags

    15
    • title
      On the Ethics of AI - Firewalls Don't Stop Dragons Podcast | Podcast on Spotify
    • charset
      utf-8
    • X-UA-Compatible
      IE=9
    • viewport
      width=device-width, initial-scale=1
    • fb:app_id
      174829003346
  • Open Graph Meta Tags

    179
    • og:site_name
      Spotify
    • og:title
      On the Ethics of AI
    • og:description
      Firewalls Don't Stop Dragons Podcast · Episode
    • og:url
      https://open.spotify.com/episode/3KKMitmNgWtyyIcOv9jXwA
    • og:type
      music.song
  • Twitter Meta Tags

    5
    • twitter:site
      @spotify
    • twitter:title
      On the Ethics of AI
    • twitter:description
      Firewalls Don't Stop Dragons Podcast · Episode
    • twitter:image
      https://i.scdn.co/image/ab6765630000ba8aedcea8cb36001d898b6e5483
    • twitter:card
      summary
  • Link Tags

    31
    • alternate
      https://open.spotify.com/oembed?url=https%3A%2F%2Fopen.spotify.com%2Fepisode%2F3KKMitmNgWtyyIcOv9jXwA
    • alternate
      android-app://com.spotify.music/spotify/episode/3KKMitmNgWtyyIcOv9jXwA
    • canonical
      https://open.spotify.com/episode/3KKMitmNgWtyyIcOv9jXwA
    • icon
      https://open.spotifycdn.com/cdn/images/favicon32.b64ecc03.png
    • icon
      https://open.spotifycdn.com/cdn/images/favicon16.1c487bff.png
  • Website Locales

    2
    • en
      https://open.spotify.com/episode/3KKMitmNgWtyyIcOv9jXwA
    • x-default
      https://open.spotify.com/episode/3KKMitmNgWtyyIcOv9jXwA
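
Taken together, the entries above correspond to markup along these lines in the page's `<head>`. This is a reconstruction from the values listed here, not the page's actual source (which contains many more tags, per the counts shown); the `type` attribute on the oEmbed link is an assumption based on the common convention for oEmbed discovery:

```html
<head>
  <meta charset="utf-8" />
  <meta http-equiv="X-UA-Compatible" content="IE=9" />
  <meta name="viewport" content="width=device-width, initial-scale=1" />
  <meta property="fb:app_id" content="174829003346" />
  <title>On the Ethics of AI - Firewalls Don't Stop Dragons Podcast | Podcast on Spotify</title>

  <!-- Open Graph: used by Facebook and most link-preview generators -->
  <meta property="og:site_name" content="Spotify" />
  <meta property="og:title" content="On the Ethics of AI" />
  <meta property="og:description" content="Firewalls Don't Stop Dragons Podcast · Episode" />
  <meta property="og:url" content="https://open.spotify.com/episode/3KKMitmNgWtyyIcOv9jXwA" />
  <meta property="og:type" content="music.song" />

  <!-- Twitter card: "summary" renders a small thumbnail beside the title -->
  <meta name="twitter:site" content="@spotify" />
  <meta name="twitter:title" content="On the Ethics of AI" />
  <meta name="twitter:description" content="Firewalls Don't Stop Dragons Podcast · Episode" />
  <meta name="twitter:image" content="https://i.scdn.co/image/ab6765630000ba8aedcea8cb36001d898b6e5483" />
  <meta name="twitter:card" content="summary" />

  <!-- Link tags: oEmbed discovery, Android app deep link, canonical URL, favicons -->
  <link rel="alternate" type="application/json+oembed"
        href="https://open.spotify.com/oembed?url=https%3A%2F%2Fopen.spotify.com%2Fepisode%2F3KKMitmNgWtyyIcOv9jXwA" />
  <link rel="alternate"
        href="android-app://com.spotify.music/spotify/episode/3KKMitmNgWtyyIcOv9jXwA" />
  <link rel="canonical" href="https://open.spotify.com/episode/3KKMitmNgWtyyIcOv9jXwA" />
  <link rel="icon" href="https://open.spotifycdn.com/cdn/images/favicon32.b64ecc03.png" />
  <link rel="icon" href="https://open.spotifycdn.com/cdn/images/favicon16.1c487bff.png" />

  <!-- Website locales: hreflang alternates, with x-default as the fallback -->
  <link rel="alternate" hreflang="en" href="https://open.spotify.com/episode/3KKMitmNgWtyyIcOv9jXwA" />
  <link rel="alternate" hreflang="x-default" href="https://open.spotify.com/episode/3KKMitmNgWtyyIcOv9jXwA" />
</head>
```

Note that both hreflang alternates point at the same URL, which is why the preview tool reports only one linked hostname.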

Links

9