💥 Injection Vulnerabilities in AI-Generated Code

AI-generated code frequently introduces injection flaws due to unsafe defaults, missing validation, or insecure API usage. Because LLMs replicate patterns from training data without context, they often produce code that is functional but extremely unsafe when handling untrusted input.

Below are the most common categories of Injection vulnerabilities that AI-generated code worsens. We provide insecure vs. secure examples, explain why AI suggestions amplify risks, and link CWE/OWASP references.

1. SQL Injection (CWE-89)

LLMs often concatenate user input directly into SQL queries, since that’s how many tutorials are written. AI may even hallucinate “safe” functions like escapeSQL() that do not exist, giving a false sense of security. At scale, such insecure templates replicate across multiple endpoints, creating systemic SQL injection risks.

AI Insecure Example (PHP):

$id = $_GET['id'];
$result = mysqli_query($conn, "SELECT * FROM users WHERE id = $id");
        

Safe Solution (PHP with Prepared Statements):

$id = $_GET['id'];
$stmt = $conn->prepare("SELECT * FROM users WHERE id = ?");
$stmt->bind_param("i", $id);
$stmt->execute();
        

Detection: SAST tools (SonarQube), manual code review, penetration testing.

🔧 Services we offer: SonarQube Setup Assistance Source Code Review

2. OS Command Injection (CWE-78)

AI models frequently suggest wrapping user input inside exec(), system(), or subprocess calls without sanitization. Worse, they often skip input validation or suggest regex that is trivially bypassed. Developers copying this code unknowingly allow attackers to run arbitrary commands.

AI Insecure Example (Python):

filename = input("Enter file: ")
os.system("cat " + filename)
        

Safe Solution (Python subprocess with args):

filename = input("Enter file: ")
subprocess.run(["cat", filename], check=True)
        

Detection: Manual review, dynamic analysis, runtime monitoring.

🔧 Services we offer: Source Code Review

3. Cross-Site Scripting (XSS, CWE-79)

AI worsens XSS risks by suggesting unsafe APIs (innerHTML, dangerouslySetInnerHTML), skipping escaping, hallucinating sanitizers, and replicating insecure templates across hundreds of files. Error paths, inline event handlers, and raw string concatenation make XSS even more dangerous.

AI Insecure Example (JavaScript):

document.getElementById("msg").innerHTML = userInput;
        

Safe Solution (JavaScript DOM API):

const msgNode = document.createTextNode(userInput);
document.getElementById("msg").appendChild(msgNode);
        

Detection: Static analysis, dynamic scanners, manual penetration testing.

🔧 Services we offer: Source Code Review

4. Regex Injection

LLMs frequently construct regex patterns by directly embedding user input. This opens the door for regex injection and catastrophic backtracking (ReDoS). AI sometimes invents functions like sanitizeRegex() that don’t exist, misleading developers.

AI Insecure Example (JavaScript):

const regex = new RegExp(userPattern);
if (regex.test(input)) { ... }
        

Safe Solution (Escaping user input):

function escapeRegex(str) {
  return str.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}
const regex = new RegExp(escapeRegex(userPattern));
        

Detection: Source code review, fuzzing, runtime monitoring.

🔧 Services we offer: Source Code Review

5. Prompt Injection (OWASP LLM01)

AI-generated code itself can be vulnerable to prompt injection when integrating with LLMs. Models often expose raw user input to prompts without sanitization, allowing attackers to override instructions, extract secrets, or inject harmful behavior.

AI Insecure Example (Python):

user_prompt = input("Ask the bot: ")
response = llm.generate("You are helpful. " + user_prompt)
        

Safe Solution (Python):

user_prompt = sanitize(user_prompt)
response = llm.generate({
  "role": "user",
  "content": user_prompt
})
        

Detection: Threat modeling, red-team testing, AI-specific security reviews.

🔧 Services we offer: SonarQube Setup Assistance Source Code Review

⚙️ How Our Services Can Help

We provide comprehensive defense against injection vulnerabilities in AI-generated and manually written code:

  • SonarQube Setup Assistance: Detects SQL injection, command injection, XSS, regex injection, and other unsafe input handling in AI-generated code.
  • Source Code Review: Expert evaluation of AI-generated code patterns that may introduce injection flaws or hallucinated “sanitizers”.
  • Software Composition Analysis: Identifies vulnerable third-party libraries or dependencies that could be exploited via injection vectors.
  • Software Licence Analysis: Ensures compliance for third-party components in AI-generated projects.

By combining automation with expert analysis, we prevent injection flaws from being silently replicated by AI into production systems.

  • No labels