genai.owasp.org/llmrisk/llm01-prompt-injection

Preview meta tags from the genai.owasp.org website.

Linked Hostnames

16

Thumbnail

Search Engine Appearance

Google

https://genai.owasp.org/llmrisk/llm01-prompt-injection

LLM01:2025 Prompt Injection

A Prompt Injection Vulnerability occurs when user prompts alter the LLM’s behavior or output in unintended ways. These inputs can affect the model even if they are imperceptible to humans, therefore prompt injections do not need to be human-visible/readable, as long as the content is parsed by the model. Prompt Injection vulnerabilities exist in how […]



Bing

LLM01:2025 Prompt Injection

https://genai.owasp.org/llmrisk/llm01-prompt-injection

A Prompt Injection Vulnerability occurs when user prompts alter the LLM’s behavior or output in unintended ways. These inputs can affect the model even if they are imperceptible to humans, therefore prompt injections do not need to be human-visible/readable, as long as the content is parsed by the model. Prompt Injection vulnerabilities exist in how […]



DuckDuckGo

https://genai.owasp.org/llmrisk/llm01-prompt-injection

LLM01:2025 Prompt Injection

A Prompt Injection Vulnerability occurs when user prompts alter the LLM’s behavior or output in unintended ways. These inputs can affect the model even if they are imperceptible to humans, therefore prompt injections do not need to be human-visible/readable, as long as the content is parsed by the model. Prompt Injection vulnerabilities exist in how […]

  • General Meta Tags

    10
    • title
      LLM01:2025 Prompt Injection - OWASP Gen AI Security Project
    • charset
      UTF-8
    • viewport
      width=device-width, initial-scale=1
    • author
      OWASPGenAIProject Editor
    • robots
      index, follow, max-image-preview:large, max-snippet:-1, max-video-preview:-1
  • Open Graph Meta Tags

    10
    • US country flagog:locale
      en_US
    • og:type
      article
    • og:title
      LLM01:2025 Prompt Injection
    • og:description
      A Prompt Injection Vulnerability occurs when user prompts alter the LLM’s behavior or output in unintended ways. These inputs can affect the model even if they are imperceptible to humans, therefore prompt injections do not need to be human-visible/readable, as long as the content is parsed by the model. Prompt Injection vulnerabilities exist in how […]
    • og:url
      https://genai.owasp.org/llmrisk/llm01-prompt-injection/
  • Twitter Meta Tags

    4
    • twitter:card
      summary_large_image
    • twitter:site
      @LLM_Top10
    • twitter:label1
      Est. reading time
    • twitter:data1
      6 minutes
  • Link Tags

    18
    • EditURI
      https://genai.owasp.org/xmlrpc.php?rsd
    • alternate
      https://genai.owasp.org/feed/
    • alternate
      https://genai.owasp.org/comments/feed/
    • alternate
      https://genai.owasp.org/wp-json/wp/v2/llmrisk/244
    • apple-touch-icon
      https://genai.owasp.org/wp-content/uploads/2024/04/favicon-200x200.png?crop=1

Emails

1
  • ?subject=%5BShared%20Post%5D%20LLM01%3A2025%20Prompt%20Injection&body=https%3A%2F%2Fgenai.owasp.org%2Fllmrisk%2Fllm01-prompt-injection%2F&share=email

Links

74