<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width,initial-scale=1" />
<title>đź’‰ Injection Vulnerabilities in AI-Generated Code</title>
<style>
:root{
--bg:#f9fafc;
--card:#ffffff;
--accent:#2563eb; /* teget boja za injection */
--error:#ef4444;
--safe:#10b981;
--text:#333;
}
body{
margin:0;
font-family: "Poppins", system-ui, -apple-system, "Segoe UI", Roboto, "Helvetica Neue", Arial;
background: var(--bg);
color: var(--text);
line-height:1.5;
}
.injection-guide{
max-width:1400px;
margin:0 auto;
padding:20px;
}
.injection-guide .title{
font-size:2rem;
font-weight:800;
color:var(--accent);
margin-bottom:10px;
}
.injection-guide .intro{
background:var(--card);
border-radius:12px;
padding:16px;
margin-bottom:20px;
box-shadow:0 4px 12px rgba(0,0,0,0.08);
font-size:1rem;
}
.injection-guide .vulnerability{
background: var(--card);
border-radius:12px;
padding:14px;
margin-bottom:16px;
box-shadow: 0 4px 12px rgba(0,0,0,0.07);
transition: transform .18s ease, box-shadow .18s ease;
}
.injection-guide .vulnerability:hover{
transform: translateY(-4px);
box-shadow: 0 10px 24px rgba(0,0,0,0.10);
}
.injection-guide .vulnerability .subtitle{
margin:0 0 8px 0;
color:var(--accent);
font-size:1.4rem;
font-weight:800;
letter-spacing:0.2px;
}
.injection-guide .description{
margin:0 0 6px 0;
font-size:1rem;
}
.injection-guide .detection{
margin:6px 0 8px 0;
font-size:1.05rem;
color:#222;
font-style:italic;
}
.injection-guide .detection strong{ font-weight:800; color:#111; }
.injection-guide .example{
display:flex;
gap:14px;
flex-wrap:wrap;
margin-top:10px;
}
.injection-guide .code,
.injection-guide .solution{
flex:1;
min-width:320px;
padding:10px;
border-radius:8px;
font-size:0.95rem;
}
.injection-guide .code{
background:#fff5f5;
border:1px solid var(--error);
}
.injection-guide .solution{
background:#f0fff8;
border:1px solid var(--safe);
}
.injection-guide .example-title{
margin:0 0 8px 0;
font-size:1.05rem;
font-weight:700;
}
.injection-guide pre{
margin:0;
font-family:"Courier New", monospace;
font-size:0.9rem;
overflow-x:auto;
white-space:pre-wrap;
word-break:break-word;
}
.injection-guide .services{
margin-top:12px;
font-size:0.92rem;
}
.injection-guide .services span{
display:inline-block;
background:#eef2ff;
color:var(--accent);
padding:4px 8px;
border-radius:6px;
margin:4px 6px 0 0;
font-weight:600;
font-size:0.85rem;
}
.injection-guide .links{
margin-top:8px;
font-size:1.05rem;
font-weight:600;
}
.injection-guide .links a{
color:var(--accent);
text-decoration:none;
}
.injection-guide .links a:hover{ text-decoration:underline; }
.injection-guide .final-section{
background:var(--card);
border-radius:12px;
padding:18px;
margin-top:30px;
box-shadow:0 4px 12px rgba(0,0,0,0.1);
}
.injection-guide .final-section h2{
color:var(--accent);
font-size:1.5rem;
margin-bottom:10px;
}
@media (max-width:760px){
.injection-guide .example{ flex-direction:column; }
.injection-guide .code, .injection-guide .solution{ min-width:100%; }
}
</style>
</head>
<body>
<section class="injection-guide">
<h1 class="title">đź’Ą Injection Vulnerabilities in AI-Generated Code</h1>
<div class="intro">
<p>
AI-generated code frequently introduces injection flaws due to unsafe defaults, missing validation, or insecure API usage.
Because LLMs replicate patterns from training data without context, they often produce code that is functional but extremely unsafe when handling untrusted input.
</p>
<p>
Below are the most common categories of Injection vulnerabilities that AI-generated code worsens.
We provide insecure vs. secure examples, explain why AI suggestions amplify risks, and link CWE/OWASP references.
</p>
</div>
<!-- SQL Injection -->
<div class="vulnerability">
<h2 class="subtitle">1. SQL Injection (CWE-89)</h2>
<p class="description">
LLMs often concatenate user input directly into SQL queries, since that’s how many tutorials are written.
AI may even hallucinate “safe” functions like <code>escapeSQL()</code> that do not exist, giving a false sense of security.
At scale, such insecure templates replicate across multiple endpoints, creating systemic SQL injection risks.
</p>
<div class="example">
<div class="code">
<h3 class="example-title">AI Insecure Example (PHP):</h3>
<pre>
$id = $_GET['id'];
$result = mysqli_query($conn, "SELECT * FROM users WHERE id = $id");
</pre>
</div>
<div class="solution">
<h3 class="example-title">Safe Solution (PHP with Prepared Statements):</h3>
<pre>
$id = $_GET['id'];
$stmt = $conn->prepare("SELECT * FROM users WHERE id = ?");
$stmt->bind_param("i", $id);
$stmt->execute();
</pre>
</div>
</div>
<p class="detection"><strong>Detection:</strong> SAST tools (SonarQube), manual code review, penetration testing.</p>
<div class="links">
Reference: <a href="https://cwe.mitre.org/data/definitions/89.html" target="_blank">CWE-89: SQL Injection</a>
</div>
<div class="services">
<strong>đź”§ Services we offer:</strong>
<span>SonarQube Setup Assistance</span>
<span>Source Code Review</span>
</div>
</div>
<!-- OS Command Injection -->
<div class="vulnerability">
<h2 class="subtitle">2. OS Command Injection (CWE-78)</h2>
<p class="description">
AI models frequently suggest wrapping user input inside <code>exec()</code>, <code>system()</code>, or <code>subprocess</code> calls without sanitization.
Worse, they often skip input validation or suggest regex that is trivially bypassed. Developers copying this code unknowingly allow attackers to run arbitrary commands.
</p>
<div class="example">
<div class="code">
<h3 class="example-title">AI Insecure Example (Python):</h3>
<pre>
filename = input("Enter file: ")
os.system("cat " + filename)
</pre>
</div>
<div class="solution">
<h3 class="example-title">Safe Solution (Python subprocess with args):</h3>
<pre>
filename = input("Enter file: ")
subprocess.run(["cat", filename], check=True)
</pre>
</div>
</div>
<p class="detection"><strong>Detection:</strong> Manual review, dynamic analysis, runtime monitoring.</p>
<div class="links">
Reference: <a href="https://cwe.mitre.org/data/definitions/78.html" target="_blank">CWE-78: OS Command Injection</a>
</div>
<div class="services">
<strong>đź”§ Services we offer:</strong>
<span>Source Code Review</span>
</div>
</div>
<!-- Cross-Site Scripting -->
<div class="vulnerability">
<h2 class="subtitle">3. Cross-Site Scripting (XSS, CWE-79)</h2>
<p class="description">
AI worsens XSS risks by suggesting unsafe APIs (<code>innerHTML</code>, <code>dangerouslySetInnerHTML</code>), skipping escaping,
hallucinating sanitizers, and replicating insecure templates across hundreds of files.
Error paths, inline event handlers, and raw string concatenation make XSS even more dangerous.
</p>
<div class="example">
<div class="code">
<h3 class="example-title">AI Insecure Example (JavaScript):</h3>
<pre>
document.getElementById("msg").innerHTML = userInput;
</pre>
</div>
<div class="solution">
<h3 class="example-title">Safe Solution (JavaScript DOM API):</h3>
<pre>
const msgNode = document.createTextNode(userInput);
document.getElementById("msg").appendChild(msgNode);
</pre>
</div>
</div>
<p class="detection"><strong>Detection:</strong> Static analysis, dynamic scanners, manual penetration testing.</p>
<div class="links">
Reference: <a href="https://cwe.mitre.org/data/definitions/79.html" target="_blank">CWE-79: Cross-site Scripting</a>
</div>
<div class="services">
<strong>đź”§ Services we offer:</strong>
<span>Source Code Review</span>
</div>
</div>
<!-- Regex Injection -->
<div class="vulnerability">
<h2 class="subtitle">4. Regex Injection</h2>
<p class="description">
LLMs frequently construct regex patterns by directly embedding user input.
This opens the door for regex injection and catastrophic backtracking (ReDoS).
AI sometimes invents functions like <code>sanitizeRegex()</code> that don’t exist, misleading developers.
</p>
<div class="example">
<div class="code">
<h3 class="example-title">AI Insecure Example (JavaScript):</h3>
<pre>
const regex = new RegExp(userPattern);
if (regex.test(input)) { ... }
</pre>
</div>
<div class="solution">
<h3 class="example-title">Safe Solution (Escaping user input):</h3>
<pre>
function escapeRegex(str) {
return str.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}
const regex = new RegExp(escapeRegex(userPattern));
</pre>
</div>
</div>
<p class="detection"><strong>Detection:</strong> Source code review, fuzzing, runtime monitoring.</p>
<div class="links">
Reference: <a href="https://owasp.org/www-community/attacks/Regular_expression_Denial_of_Service_-_ReDoS" target="_blank">OWASP ReDoS</a>
</div>
<div class="services">
<strong>đź”§ Services we offer:</strong>
<span>Source Code Review</span>
</div>
</div>
<!-- Prompt Injection -->
<div class="vulnerability">
<h2 class="subtitle">5. Prompt Injection (OWASP LLM01)</h2>
<p class="description">
AI-generated code itself can be vulnerable to prompt injection when integrating with LLMs.
Models often expose raw user input to prompts without sanitization, allowing attackers to override instructions, extract secrets, or inject harmful behavior.
</p>
<div class="example">
<div class="code">
<h3 class="example-title">AI Insecure Example (Python):</h3>
<pre>
user_prompt = input("Ask the bot: ")
response = llm.generate("You are helpful. " + user_prompt)
</pre>
</div>
<div class="solution">
<h3 class="example-title">Safe Solution (Python):</h3>
<pre>
user_prompt = sanitize(user_prompt)
response = llm.generate({
"role": "user",
"content": user_prompt
})
</pre>
</div>
</div>
<p class="detection"><strong>Detection:</strong> Threat modeling, red-team testing, AI-specific security reviews.</p>
<div class="links">
Reference: <a href="https://owasp.org/www-project-top-10-for-large-language-model-applications/" target="_blank">OWASP LLM01: Prompt Injection</a>
</div>
<div class="services">
<strong>đź”§ Services we offer:</strong>
<span>SonarQube Setup Assistance</span>
<span>Source Code Review</span>
</div>
</div>
<!-- Final Section -->
<div class="final-section">
<h2>⚙️ How Our Services Can Help</h2>
<p>
We provide comprehensive defense against injection vulnerabilities in AI-generated and manually written code:
</p>
<ul>
<li><strong>SonarQube Setup Assistance:</strong> Detects SQL injection, command injection, XSS, regex injection, and other unsafe input handling in AI-generated code.</li>
<li><strong>Source Code Review:</strong> Expert evaluation of AI-generated code patterns that may introduce injection flaws or hallucinated “sanitizers”.</li>
<li><strong>Software Composition Analysis:</strong> Identifies vulnerable third-party libraries or dependencies that could be exploited via injection vectors.</li>
<li><strong>Software Licence Analysis:</strong> Ensures compliance for third-party components in AI-generated projects.</li>
</ul>
<p>
By combining automation with expert analysis, we prevent injection flaws from being silently replicated by AI into production systems.
</p>
</div>
</section>
</body>
</html>
|