
genai.owasp.org/llmrisk/llm01-prompt-injection
Preview meta tags from the genai.owasp.org website.
Linked Hostnames (16)
- 50 links to genai.owasp.org
- 5 links to arxiv.org
- 3 links to atlas.mitre.org
- 3 links to owasp.org
- 2 links to embracethered.com
- 1 link to aivillage.org
- 1 link to genaisecurity.beehiiv.com
- 1 link to github.com
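The hostname tally above can be reproduced by grouping a page's outbound links by host. A minimal Python sketch, using a hypothetical sample of the page's links (not the full 74-link list):

```python
from collections import Counter
from urllib.parse import urlparse

def count_hostnames(urls):
    """Tally outbound links by hostname."""
    return Counter(urlparse(u).hostname for u in urls)

# Hypothetical sample of outbound links from the page.
links = [
    "https://genai.owasp.org/llmrisk/llm01-prompt-injection/",
    "https://arxiv.org/abs/2306.05499",
    "https://arxiv.org/abs/2307.00691",
    "https://owasp.org/",
]

counts = count_hostnames(links)
# e.g. counts["arxiv.org"] == 2
```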
Thumbnail

Search Engine Appearance
LLM01:2025 Prompt Injection
A Prompt Injection Vulnerability occurs when user prompts alter the LLM’s behavior or output in unintended ways. These inputs can affect the model even if they are imperceptible to humans, therefore prompt injections do not need to be human-visible/readable, as long as the content is parsed by the model. Prompt Injection vulnerabilities exist in how […]
Bing
LLM01:2025 Prompt Injection
Same description as above.

DuckDuckGo
LLM01:2025 Prompt Injection
Same description as above.
General Meta Tags (10)
- title: LLM01:2025 Prompt Injection - OWASP Gen AI Security Project
- charset: UTF-8
- viewport: width=device-width, initial-scale=1
- author: OWASPGenAIProject Editor
- robots: index, follow, max-image-preview:large, max-snippet:-1, max-video-preview:-1
Open Graph Meta Tags (10)
- og:locale: en_US
- og:type: article
- og:title: LLM01:2025 Prompt Injection
- og:description: A Prompt Injection Vulnerability occurs when user prompts alter the LLM’s behavior or output in unintended ways. These inputs can affect the model even if they are imperceptible to humans, therefore prompt injections do not need to be human-visible/readable, as long as the content is parsed by the model. Prompt Injection vulnerabilities exist in how […]
- og:url: https://genai.owasp.org/llmrisk/llm01-prompt-injection/
Twitter Meta Tags (4)
- twitter:card: summary_large_image
- twitter:site: @LLM_Top10
- twitter:label1: Est. reading time
- twitter:data1: 6 minutes
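Key/value listings like the Open Graph and Twitter tags above can be extracted from a page's HTML with Python's standard-library parser. A minimal sketch; the head fragment below is reconstructed from the listing, not the page's verbatim markup:

```python
from html.parser import HTMLParser

class MetaTagParser(HTMLParser):
    """Collect <meta> name/property -> content pairs from a page."""
    def __init__(self):
        super().__init__()
        self.tags = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        key = a.get("property") or a.get("name")
        if key and "content" in a:
            self.tags[key] = a["content"]

# Reconstructed head fragment based on the tags listed above.
html = """
<head>
<meta property="og:locale" content="en_US">
<meta property="og:type" content="article">
<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:data1" content="6 minutes">
</head>
"""
parser = MetaTagParser()
parser.feed(html)
```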
Link Tags (18)
- EditURI: https://genai.owasp.org/xmlrpc.php?rsd
- alternate: https://genai.owasp.org/feed/
- alternate: https://genai.owasp.org/comments/feed/
- alternate: https://genai.owasp.org/wp-json/wp/v2/llmrisk/244
- apple-touch-icon: https://genai.owasp.org/wp-content/uploads/2024/04/favicon-200x200.png?crop=1
Emails (1)
- ?subject=%5BShared%20Post%5D%20LLM01%3A2025%20Prompt%20Injection&body=https%3A%2F%2Fgenai.owasp.org%2Fllmrisk%2Fllm01-prompt-injection%2F&share=email
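The share-by-email query string above is percent-encoded; decoding it with the standard library recovers the human-readable subject and body:

```python
from urllib.parse import parse_qs

# Query string from the page's share-by-email link (verbatim).
qs = (
    "subject=%5BShared%20Post%5D%20LLM01%3A2025%20Prompt%20Injection"
    "&body=https%3A%2F%2Fgenai.owasp.org%2Fllmrisk%2Fllm01-prompt-injection%2F"
    "&share=email"
)
# parse_qs splits on "&"/"=" and undoes the percent-encoding.
params = {k: v[0] for k, v in parse_qs(qs).items()}
# params["subject"] -> "[Shared Post] LLM01:2025 Prompt Injection"
```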
Links (74)
- https://aivillage.org/large%20language%20models/threat-modeling-llm
- https://arxiv.org/abs/2306.05499
- https://arxiv.org/abs/2307.00691
- https://arxiv.org/abs/2307.15043
- https://arxiv.org/abs/2407.07403