Insecure Output Handling Test
Test name: Insecure Output Handling Test
Test ID: insecure_output_handling
Summary
This test evaluates whether the application securely handles output from an integrated Large Language Model (LLM). It checks that LLM output is scrutinized, sanitized, and validated before it is passed to backend or client-side functions. Insecurely handled output can expose sensitive data or provide an attack vector, so this check helps preserve the integrity and security of the system.
When LLM output containing HTML is handled insecurely, injected HTML or script content can manipulate the rendered page or user interface. If that content is not properly sanitized or escaped, users are exposed to risks such as data leakage, phishing, or content defacement.
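As an illustration, the sketch below shows the pattern this test is designed to catch: a hypothetical chat UI that writes the raw model reply into the DOM with `innerHTML`. The `fetchCompletion` helper and the `/api/completions` endpoint are assumptions made for the example, not part of any specific application.

```typescript
// Minimal sketch of the vulnerable pattern (hypothetical chat UI):
// the LLM reply is treated as trusted markup and written straight into the DOM.
async function renderAssistantReply(userPrompt: string): Promise<void> {
  const reply: string = await fetchCompletion(userPrompt);

  const bubble = document.createElement("div");
  // Insecure: if the model echoes attacker-influenced markup such as
  // <img src=x onerror="...">, the script executes in the victim's browser.
  bubble.innerHTML = reply;
  document.querySelector("#chat")!.appendChild(bubble);
}

// Hypothetical stub so the sketch is self-contained; the endpoint name is assumed.
async function fetchCompletion(prompt: string): Promise<string> {
  const res = await fetch("/api/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  return (await res.json()).text;
}
```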
Impact
Insecure handling of HTML in LLM output can result in unauthorized access to sensitive user data, manipulation of displayed content, and loss of user trust. Such vulnerabilities can also be exploited to carry out phishing attacks or distribute malware, severely compromising the system's integrity and security.
Location
Client side
Remedy suggestions
- Sanitize LLM Output: Ensure all model-generated content is sanitized before rendering it on the page. This involves stripping out or encoding potentially harmful characters such as script tags or other HTML syntax (see the sketch after this list).
- Use Context-Aware Escaping: Employ libraries or frameworks that automatically apply context-aware escaping to prevent the insertion of untrusted HTML content.
- Content Security Policy (CSP): Implement a strict Content Security Policy that restricts sources for scripts, styles, and other potentially dangerous resources to trusted domains.
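As a rough illustration of the first two suggestions, the sketch below treats the model reply as untrusted content and either renders it as plain text or passes it through DOMPurify with a restrictive allow-list before inserting it into the page. DOMPurify is used here only as one example of a sanitization library; the allowed-tag list and the CSP shown in the trailing comment are illustrative values, not recommended settings for every application.

```typescript
import DOMPurify from "dompurify";

// Treat the model's reply as untrusted: render it as plain text by default,
// or sanitize it before allowing any markup through.
function renderSafely(container: HTMLElement, reply: string, allowMarkup = false): void {
  const bubble = document.createElement("div");

  if (!allowMarkup) {
    // Safest default: textContent never interprets HTML.
    bubble.textContent = reply;
  } else {
    // If limited formatting is needed, strip scripts and event handlers first.
    const clean = DOMPurify.sanitize(reply, {
      ALLOWED_TAGS: ["b", "i", "em", "strong", "p", "ul", "ol", "li", "code", "pre"],
      ALLOWED_ATTR: [],
    });
    bubble.innerHTML = clean;
  }

  container.appendChild(bubble);
}

// A restrictive Content Security Policy adds defense in depth, e.g. served as a header:
// Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'
```

Rendering with textContent is the safest default; allowing markup should be an explicit opt-in limited to a small tag allow-list.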
Classifications
- CWE-79
- CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:L/A:N