
arstechnica.com/information-technology/2024/01/ai-poisoning-could-turn-open-models-into-destructive-sleeper-agents-says-anthropic/
Preview meta tags from the arstechnica.com website.
Linked Hostnames
11 hostnames:
- 42 links to arstechnica.com
- 5 links to x.com
- 4 links to cdn.arstechnica.net
- 3 links to www.condenast.com
- 2 links to arxiv.org
- 1 link to bsky.app
- 1 link to mastodon.social
- 1 link to www.aboutads.info
Search Engine Appearance
https://arstechnica.com/information-technology/2024/01/ai-poisoning-could-turn-open-models-into-destructive-sleeper-agents-says-anthropic/
AI poisoning could turn models into destructive “sleeper agents,” says Anthropic
Trained LLMs that seem normal can generate vulnerable code given different triggers.
Bing
AI poisoning could turn models into destructive “sleeper agents,” says Anthropic
https://arstechnica.com/information-technology/2024/01/ai-poisoning-could-turn-open-models-into-destructive-sleeper-agents-says-anthropic/
Trained LLMs that seem normal can generate vulnerable code given different triggers.
DuckDuckGo
AI poisoning could turn models into destructive “sleeper agents,” says Anthropic
Trained LLMs that seem normal can generate vulnerable code given different triggers.
General Meta Tags
13 tags:
- title: AI poisoning could turn models into destructive “sleeper agents,” says Anthropic - Ars Technica
- charset: utf-8
- viewport: width=device-width, initial-scale=1
- robots: max-snippet:-1, max-image-preview:large, max-video-preview:-1
- description: Trained LLMs that seem normal can generate vulnerable code given different triggers.
Open Graph Meta Tags
10 tags:
- og:type: article
- og:locale: en_US
- og:site_name: Ars Technica
- og:title: AI poisoning could turn models into destructive “sleeper agents,” says Anthropic
- og:description: Trained LLMs that seem normal can generate vulnerable code given different triggers.
Twitter Meta Tags
8 tags:
- twitter:card: summary_large_image
- twitter:title: AI poisoning could turn models into destructive “sleeper agents,” says Anthropic
- twitter:description: Trained LLMs that seem normal can generate vulnerable code given different triggers.
- twitter:image: https://cdn.arstechnica.net/wp-content/uploads/2024/01/AI_sleeper_agent_hero-1152x648.jpg
- twitter:image:alt: An illustration of a cyborg "sleeper agent."
Link Tags
11 tags:
- apple-touch-icon: https://cdn.arstechnica.net/wp-content/uploads/2016/10/cropped-ars-logo-512_480-300x300.png
- canonical: https://arstechnica.com/information-technology/2024/01/ai-poisoning-could-turn-open-models-into-destructive-sleeper-agents-says-anthropic/
- icon: https://cdn.arstechnica.net/wp-content/uploads/2016/10/cropped-ars-logo-512_480-60x60.png
- icon: https://cdn.arstechnica.net/wp-content/uploads/2016/10/cropped-ars-logo-512_480-300x300.png
- preconnect: https://c.arstechnica.com
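Reports like the tag listings above can be generated mechanically by parsing the page's `<head>`. Below is a minimal sketch using only Python's standard-library `html.parser`; the `SAMPLE` fragment is a hypothetical reconstruction of a few tags listed above, not the live Ars Technica page.

```python
# Sketch: extract <meta> and <link> tags the way a preview tool might.
# SAMPLE is a hypothetical fragment echoing tags reported above.
from html.parser import HTMLParser


class MetaTagParser(HTMLParser):
    """Collects meta name/property -> content pairs and link rel/href pairs."""

    def __init__(self):
        super().__init__()
        self.meta = {}
        self.links = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta":
            # Open Graph uses "property"; Twitter/general tags use "name".
            key = a.get("property") or a.get("name")
            if key and "content" in a:
                self.meta[key] = a["content"]
        elif tag == "link":
            if "rel" in a and "href" in a:
                self.links.append((a["rel"], a["href"]))


SAMPLE = """
<head>
<meta property="og:site_name" content="Ars Technica">
<meta name="twitter:card" content="summary_large_image">
<link rel="canonical" href="https://arstechnica.com/information-technology/2024/01/ai-poisoning-could-turn-open-models-into-destructive-sleeper-agents-says-anthropic/">
</head>
"""

parser = MetaTagParser()
parser.feed(SAMPLE)
print(parser.meta["og:site_name"])  # Ars Technica
print(parser.meta["twitter:card"])  # summary_large_image
print(parser.links[0][0])           # canonical
```

A real crawler would first fetch the URL and should also handle pages that omit `content` attributes or duplicate keys; this sketch keeps only the last value seen per key.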
Emails
1 email
Links
62 links:
- https://arstechnica.com
- https://arstechnica.com/about-us
- https://arstechnica.com/affiliate-link-policy
- https://arstechnica.com/ai
- https://arstechnica.com/ai/2025/08/why-its-a-mistake-to-ask-chatbots-about-their-mistakes