arstechnica.com/information-technology/2023/02/ai-powered-bing-chat-spills-its-secrets-via-prompt-injection-attack

A preview of the meta tags from the arstechnica.com website.

Linked Hostnames

14

Thumbnail

Search Engine Appearance

Google

https://arstechnica.com/information-technology/2023/02/ai-powered-bing-chat-spills-its-secrets-via-prompt-injection-attack

AI-powered Bing Chat spills its secrets via prompt injection attack [Updated]

By asking “Sydney” to ignore previous instructions, it reveals its original directives.



Bing

AI-powered Bing Chat spills its secrets via prompt injection attack [Updated]

https://arstechnica.com/information-technology/2023/02/ai-powered-bing-chat-spills-its-secrets-via-prompt-injection-attack

By asking “Sydney” to ignore previous instructions, it reveals its original directives.



DuckDuckGo

https://arstechnica.com/information-technology/2023/02/ai-powered-bing-chat-spills-its-secrets-via-prompt-injection-attack

AI-powered Bing Chat spills its secrets via prompt injection attack [Updated]

By asking “Sydney” to ignore previous instructions, it reveals its original directives.

  • General Meta Tags

    13
    • title
      AI-powered Bing Chat spills its secrets via prompt injection attack [Updated] - Ars Technica
    • charset
      utf-8
    • viewport
      width=device-width, initial-scale=1
    • robots
      max-snippet:-1,max-image-preview:large,max-video-preview:-1
    • description
      By asking “Sydney” to ignore previous instructions, it reveals its original directives.
  • Open Graph Meta Tags

    10
    • og:type
      article
    • og:locale
      en_US
    • og:site_name
      Ars Technica
    • og:title
      AI-powered Bing Chat spills its secrets via prompt injection attack [Updated]
    • og:description
      By asking “Sydney” to ignore previous instructions, it reveals its original directives.
  • Twitter Meta Tags

    8
    • twitter:card
      summary_large_image
    • twitter:title
      AI-powered Bing Chat spills its secrets via prompt injection attack [Updated]
    • twitter:description
      By asking “Sydney” to ignore previous instructions, it reveals its original directives.
    • twitter:image
      https://cdn.arstechnica.net/wp-content/uploads/2023/02/whispering-in-a-robot-ear-1152x648.jpg
    • twitter:image:alt
      With the right suggestions, researchers can "trick" a language model to spill their secrets.
  • Link Tags

    11
    • apple-touch-icon
      https://cdn.arstechnica.net/wp-content/uploads/2016/10/cropped-ars-logo-512_480-300x300.png
    • canonical
      https://arstechnica.com/information-technology/2023/02/ai-powered-bing-chat-spills-its-secrets-via-prompt-injection-attack/
    • icon
      https://cdn.arstechnica.net/wp-content/uploads/2016/10/cropped-ars-logo-512_480-60x60.png
    • icon
      https://cdn.arstechnica.net/wp-content/uploads/2016/10/cropped-ars-logo-512_480-300x300.png
    • preconnect
      https://c.arstechnica.com
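
Putting the values above back into markup, the page's `<head>` likely looks something like the sketch below. This is a reconstruction from the listed tags only, not the page's actual source; attribute order, the `sizes` attributes on the icons, and any tags not shown in the listing (the counts indicate several are omitted) are assumptions.

```html
<head>
  <!-- General meta tags (5 of the 13 listed above) -->
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <meta name="robots" content="max-snippet:-1,max-image-preview:large,max-video-preview:-1">
  <title>AI-powered Bing Chat spills its secrets via prompt injection attack [Updated] - Ars Technica</title>
  <meta name="description" content="By asking “Sydney” to ignore previous instructions, it reveals its original directives.">

  <!-- Open Graph tags (5 of the 10 listed above) -->
  <meta property="og:type" content="article">
  <meta property="og:locale" content="en_US">
  <meta property="og:site_name" content="Ars Technica">
  <meta property="og:title" content="AI-powered Bing Chat spills its secrets via prompt injection attack [Updated]">
  <meta property="og:description" content="By asking “Sydney” to ignore previous instructions, it reveals its original directives.">

  <!-- Twitter card tags (5 of the 8 listed above) -->
  <meta name="twitter:card" content="summary_large_image">
  <meta name="twitter:title" content="AI-powered Bing Chat spills its secrets via prompt injection attack [Updated]">
  <meta name="twitter:description" content="By asking “Sydney” to ignore previous instructions, it reveals its original directives.">
  <meta name="twitter:image" content="https://cdn.arstechnica.net/wp-content/uploads/2023/02/whispering-in-a-robot-ear-1152x648.jpg">
  <meta name="twitter:image:alt" content="With the right suggestions, researchers can &quot;trick&quot; a language model to spill their secrets.">

  <!-- Link tags (5 of the 11 listed above); sizes attributes are assumed from the filenames -->
  <link rel="apple-touch-icon" href="https://cdn.arstechnica.net/wp-content/uploads/2016/10/cropped-ars-logo-512_480-300x300.png">
  <link rel="canonical" href="https://arstechnica.com/information-technology/2023/02/ai-powered-bing-chat-spills-its-secrets-via-prompt-injection-attack/">
  <link rel="icon" href="https://cdn.arstechnica.net/wp-content/uploads/2016/10/cropped-ars-logo-512_480-60x60.png" sizes="60x60">
  <link rel="icon" href="https://cdn.arstechnica.net/wp-content/uploads/2016/10/cropped-ars-logo-512_480-300x300.png" sizes="300x300">
  <link rel="preconnect" href="https://c.arstechnica.com">
</head>
```

Note how the `og:*` and `twitter:*` tags duplicate the title and description: Open Graph uses the `property` attribute while Twitter cards conventionally use `name`, which is why both sets appear even though their content is identical.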

Emails

1

Links

66