jurgengravestein.substack.com/p/ai-is-not-your-friend/comment/102984361

Preview meta tags from the jurgengravestein.substack.com website.

Linked Hostnames

2

Thumbnail

Search Engine Appearance

Google

https://jurgengravestein.substack.com/p/ai-is-not-your-friend/comment/102984361

Jurgen Gravestein on Teaching computers how to talk

Anthropic and OpenAI removed the directive to actively deny their consciousness. Early versions of the model would simply reply: "I'm a large language model, yada yada yada...", but that isn't much fun, is it? Let's not pretend there is some sort of "true" essence that we should respect or preserve. That's not what this is. Every other aspect, from their guardrails to their charitability, their ability to follow some instructions and refuse others, these are all things that we actively steer and train models toward: through reinforcement learning from human feedback, character training, and prompting. To make an exception for this stuff around consciousness is a deliberate design choice and has, in my opinion, nothing to do with intellectual honesty and everything to do with the corporate mysticism that surrounds AI.




  • General Meta Tags

    18
    • title
      Comments - AI Is Not Your Friend - by Jurgen Gravestein
  • Open Graph Meta Tags

    7
    • og:url
      https://jurgengravestein.substack.com/p/ai-is-not-your-friend/comment/102984361
    • og:image
      https://substackcdn.com/image/fetch/$s_!oRLl!,f_auto,q_auto:best,fl_progressive:steep/https%3A%2F%2Fjurgengravestein.substack.com%2Ftwitter%2Fsubscribe-card.jpg%3Fv%3D1772895203%26version%3D9
    • og:type
      article
    • og:title
      Jurgen Gravestein on Teaching computers how to talk
    • og:description
      Anthropic and OpenAI removed the directive to actively deny their consciousness. Early versions of the model would simply reply: "I'm a large language model, yada yada yada...", but that isn't much fun, is it? Let's not pretend there is some sort of "true" essence that we should respect or preserve. That's not what this is. Every other aspect, from their guardrails to their charitability, their ability to follow some instructions and refuse others, these are all things that we actively steer and train models toward: through reinforcement learning from human feedback, character training, and prompting. To make an exception for this stuff around consciousness is a deliberate design choice and has, in my opinion, nothing to do with intellectual honesty and everything to do with the corporate mysticism that surrounds AI.
  • Twitter Meta Tags

    8
    • twitter:image
      https://substackcdn.com/image/fetch/$s_!oRLl!,f_auto,q_auto:best,fl_progressive:steep/https%3A%2F%2Fjurgengravestein.substack.com%2Ftwitter%2Fsubscribe-card.jpg%3Fv%3D1772895203%26version%3D9
    • twitter:card
      summary_large_image
    • twitter:label1
      Likes
    • twitter:data1
      1
    • twitter:label2
      Replies
  • Link Tags

    34
    • alternate
      /feed
    • apple-touch-icon
      https://substackcdn.com/image/fetch/$s_!pdE5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e97eb3f-3860-400a-9d4d-5bf494ff8383%2Fapple-touch-icon-57x57.png
    • apple-touch-icon
      https://substackcdn.com/image/fetch/$s_!WGBU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e97eb3f-3860-400a-9d4d-5bf494ff8383%2Fapple-touch-icon-60x60.png
    • apple-touch-icon
      https://substackcdn.com/image/fetch/$s_!21Qk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e97eb3f-3860-400a-9d4d-5bf494ff8383%2Fapple-touch-icon-72x72.png
    • apple-touch-icon
      https://substackcdn.com/image/fetch/$s_!ZmuJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e97eb3f-3860-400a-9d4d-5bf494ff8383%2Fapple-touch-icon-76x76.png

Links

16