open.spotify.com/episode/2D9dKC2l5ZyIdy8UE7ILCA

Preview meta tags from the open.spotify.com website.

Linked Hostnames

1

Search Engine Appearance

Google

https://open.spotify.com/episode/2D9dKC2l5ZyIdy8UE7ILCA

The threat that AI poses to human life — with Karen Hao

Listen to this episode from The Minefield on Spotify.

There is something undeniably disorienting about the way AI features in public and political discussions. On some days, it is portrayed in utopian, almost messianic terms — as the essential technological innovation that will at once turbo-charge productivity and discover the cure for cancer, that will solve climate change and place the vast stores of human knowledge at the fingertips of every human being. Such are the future benefits that every dollar spent, every resource used, will have been worth it. From this vantage, artificial general intelligence (AGI) is the end, the ‘telos’, the ultimate goal, of humanity’s millennia-long relationship with technology. We will have invented our own saviour.

On other days, AI is described as representing a different kind of “end” — an existential threat to human life, a technological creation that, like Frankenstein’s monster, will inevitably lay waste to its creator. The fear is straightforward enough: should humanity invent an entity whose capabilities surpass our own and whose modes of “reasoning” are unconstrained by moral norms or sentiments — call it “superintelligence” — what assurances would we have that such an entity would continue to subordinate its own goals to humankind’s benefit? After all, do we know what it will “want”, or whether the existence of human beings would finally pose an impediment to its pursuits?

Ever since powerful generative AI tools were made available to the public not even three years ago, chatbots have displayed troubling and hard-to-predict tendencies. They have deceived and manipulated human users, hallucinated information, spread disinformation and engaged in a range of decidedly misanthropic “behaviours”. Given the unpredictability of these more modest algorithms — which do not even approximate the much-vaunted capabilities of AGI — who’s to say how a superintelligence might behave?

It’s hardly surprising, then, that the chorus of doomsayers has grown increasingly insistent over the last six months. In April, a group of AI researchers released a hypothetical scenario (called “AI 2027”) which anticipates a geopolitical “arms race” in pursuit of AGI and the emergence of a powerful AI agent that operates largely outside of human control by the end of 2027. In the same vein, later this month two pioneering researchers in the field of AI — Eliezer Yudkowsky and Nate Soares — are releasing their book, If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI.

For all this, there is a disconcerting irony that shouldn’t be overlooked. Warnings about the existential risk posed by AI have accompanied every stage of its development — and those warnings have been articulated by the leaders in the field of AI research themselves. This suggests that warnings of an extinction event due to the advent of AGI are, perversely, being used both to spruik the godlike potential of these companies’ products and to justify the need for gargantuan amounts of money and resources to ensure “we” get there before “our enemies” do. Which is to say, existential risk is serving to underwrite a cult of AI inevitabilism, thus legitimating the heedless pursuit of AGI itself.

Could we say, perhaps, that the very prospect of some extinction event, of some future where humanity is subservient to superintelligent overlords, is acting as a kind of decoy, a distraction from the very real ways that human beings, communities and the natural world are being exploited in the service of the goal of being the first to create artificial general intelligence?

Guest: Karen Hao is the author of Empire of AI: Inside the Reckless Race for Total Domination.

Learn more about your ad choices. Visit megaphone.fm/adchoices



Bing

The threat that AI poses to human life — with Karen Hao

https://open.spotify.com/episode/2D9dKC2l5ZyIdy8UE7ILCA

DuckDuckGo

https://open.spotify.com/episode/2D9dKC2l5ZyIdy8UE7ILCA

The threat that AI poses to human life — with Karen Hao

  • General Meta Tags

    15
    • title
      The threat that AI poses to human life — with Karen Hao - The Minefield | Podcast on Spotify
    • charset
      utf-8
    • X-UA-Compatible
      IE=9
    • viewport
      width=device-width, initial-scale=1
    • fb:app_id
      174829003346
  • Open Graph Meta Tags

    179
    • og:site_name
      Spotify
    • og:title
      The threat that AI poses to human life — with Karen Hao
    • og:description
      The Minefield · Episode
    • og:url
      https://open.spotify.com/episode/2D9dKC2l5ZyIdy8UE7ILCA
    • og:type
      music.song
  • Twitter Meta Tags

    5
    • twitter:site
      @spotify
    • twitter:title
      The threat that AI poses to human life — with Karen Hao
    • twitter:description
      The Minefield · Episode
    • twitter:image
      https://i.scdn.co/image/ab6765630000ba8a74c354e1badaba010e88d01f
    • twitter:card
      summary
  • Link Tags

    31
    • alternate
      https://open.spotify.com/oembed?url=https%3A%2F%2Fopen.spotify.com%2Fepisode%2F2D9dKC2l5ZyIdy8UE7ILCA
    • alternate
      android-app://com.spotify.music/spotify/episode/2D9dKC2l5ZyIdy8UE7ILCA
    • canonical
      https://open.spotify.com/episode/2D9dKC2l5ZyIdy8UE7ILCA
    • icon
      https://open.spotifycdn.com/cdn/images/favicon32.b64ecc03.png
    • icon
      https://open.spotifycdn.com/cdn/images/favicon16.1c487bff.png
  • Website Locales

    2
    • en
      https://open.spotify.com/episode/2D9dKC2l5ZyIdy8UE7ILCA
    • x-default
      https://open.spotify.com/episode/2D9dKC2l5ZyIdy8UE7ILCA

Links

9