AI Code Risk Detection

AI writes code fast.
It doesn't write it safe.

VibeCoded apps ship in hours. The security gaps they carry ship just as fast.

AI assistants generate functional code, but they consistently skip security fundamentals: access control, input validation, ownership checks, and secure defaults. codelake is purpose-built to catch the specific patterns that AI-generated code gets wrong.

$ codelake scan --ai-risks .

▸ AI code risk analysis running...
  Detecting AI scaffolding patterns...
  Checking access control completeness...
  Analyzing schema security policies...

CRITICAL  CRUD endpoints without access control
  ↳ 4 endpoints in /api/posts/* — no auth middleware
CRITICAL  Database schema without RLS
  ↳ users, posts, comments — no row-level security
HIGH      Mass assignment on all models
  ↳ User, Post, Comment — all fields fillable
HIGH      Tutorial-grade auth implementation
  ↳ JWT with hardcoded secret, no expiry check

✓ 22 AI-specific risks · 6 critical · 9 high · 7 medium

The Problem

Why AI-generated code has unique security risks.

AI code assistants are trained on tutorials, Stack Overflow answers, and open-source projects — sources that prioritize getting things working, not getting them secure. The result is code that runs perfectly but is fundamentally insecure.

Optimized for "Works"

AI generates code that compiles and runs. Security is a non-functional requirement that AI consistently deprioritizes in favor of functional correctness.

Tutorial-Grade Patterns

AI models are trained on tutorials that skip authentication, use hardcoded secrets, and implement toy-grade error handling. Those patterns get reproduced in production code.

Speed Over Safety

VibeCoding ships fast — too fast for manual security review. When a developer scaffolds an entire app in an afternoon, security review often gets skipped entirely.

No Security Context

AI doesn't know your threat model, compliance requirements, or data sensitivity. It generates the same code for a blog as for a banking app.

Missing Cross-Cutting Concerns

AI generates individual features well but misses cross-cutting concerns like audit logging, rate limiting, and consistent error handling across the application.

Insecure Defaults

CORS set to *, debug mode enabled, verbose error messages, no CSRF protection — AI uses the easiest defaults, which are almost never the secure ones.

What We Detect

Purpose-built detection for AI code patterns.

codelake recognizes the specific anti-patterns that AI-generated code produces — patterns that traditional SAST tools miss because they were designed for human-written code.

CRUD Without Access Control

AI scaffolds complete CRUD endpoints but rarely adds authentication middleware or ownership checks. Any user can read, modify, or delete any record.

Schemas Without RLS

Database schemas generated without row-level security, tenant isolation, or ownership columns. Multi-tenant data leaks waiting to happen.

APIs Without Validation

API endpoints that accept any input without validation, type checking, or sanitization. Request bodies mapped directly to database operations.
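
The missing step can be sketched as a tiny request-body validator that rejects unknown fields and type-checks known ones before anything reaches the database. The schema shape and field names here are illustrative assumptions, not codelake output:

```javascript
// Hypothetical per-endpoint schema: each field maps to a predicate.
const postSchema = {
  title: (v) => typeof v === 'string' && v.length > 0 && v.length <= 200,
  content: (v) => typeof v === 'string',
};

// Return a list of validation errors; an empty list means the body is safe
// to pass onward. Unknown fields are rejected, not silently accepted.
function validateBody(schema, body) {
  const errors = [];
  for (const key of Object.keys(body)) {
    if (!(key in schema)) errors.push(`unexpected field: ${key}`);
  }
  for (const [key, check] of Object.entries(schema)) {
    if (!(key in body) || !check(body[key])) errors.push(`invalid field: ${key}`);
  }
  return errors;
}
```

In practice this logic usually lives in a middleware or a schema library; the point is that validation runs before the handler, not inside it.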

Mass Assignment

All model fields marked fillable, or no field protection at all. Users can set admin flags, internal IDs, or any other field simply by including it in the request.
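
A hedged sketch of the fix: apply an explicit fillable whitelist before any create or update. The field names (`name`, `email`) are illustrative, not tied to a specific ORM:

```javascript
// Only these fields may ever be written from client input.
const USER_FILLABLE = ['name', 'email'];

// Copy only whitelisted keys; client-supplied isAdmin, role, or id values
// are silently dropped instead of being written to the model.
function pickFillable(fillable, input) {
  return Object.fromEntries(
    Object.entries(input).filter(([key]) => fillable.includes(key))
  );
}
```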

Hardcoded Secrets

AI generates code with hardcoded JWT secrets, API keys, and database passwords. Constants like "secret" or "your-secret-key" used in production.
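
One common fix, sketched here assuming an environment variable named JWT_SECRET: resolve secrets from the environment and crash at startup when one is missing, rather than falling back to a hardcoded string.

```javascript
// Read a required environment variable, failing fast if it is absent.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// At boot:
//   const JWT_SECRET = requireEnv('JWT_SECRET');
```

Failing at startup turns a silent "works with the tutorial secret" deployment into an immediate, visible configuration error.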

Insecure Defaults

CORS: "*", debug: true, logging level: verbose, error details: exposed. AI always picks the path of least resistance.

Tutorial Auth Patterns

JWTs without expiry validation, sessions that are never invalidated, passwords stored without proper hashing — straight from tutorial code.
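
The expiry check that tutorials omit can be sketched as follows. This assumes the token's signature has already been verified and the payload decoded elsewhere; `exp` is the standard JWT expiry claim, in seconds since the epoch.

```javascript
// Reject tokens whose exp claim has passed.
function isTokenExpired(payload, nowSeconds = Math.floor(Date.now() / 1000)) {
  // Treat tokens with no exp claim as expired rather than eternal.
  if (typeof payload.exp !== 'number') return true;
  return payload.exp <= nowSeconds;
}
```

Mature JWT libraries perform this check for you; the failure mode is tutorial code that decodes the payload manually and never looks at `exp`.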

Missing Audit Trails

No logging of security-relevant events. No audit trail for data access, modifications, or admin actions. Invisible to incident response.
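
A minimal audit-event sketch: record who did what to which resource, with a timestamp, for every security-relevant action. The event shape and in-memory sink are illustrative assumptions; production code would write to durable, append-only storage.

```javascript
// Illustrative in-memory sink; replace with durable storage in production.
const auditLog = [];

function audit(actorId, action, resource) {
  const event = {
    actorId,                       // who performed the action
    action,                        // e.g. 'post.delete', 'user.role_change'
    resource,                      // e.g. 'posts/42'
    at: new Date().toISOString(),  // when it happened
  };
  auditLog.push(event);
  return event;
}
```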

Over-Exposed Data

API responses returning entire database rows including internal fields, password hashes, and PII that the client never needs.
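
The usual remedy is an explicit response serializer: build the response from a whitelist of fields instead of returning the raw row. The field names below are illustrative.

```javascript
// Serialize a database row into the public API shape. Anything not copied
// here (passwordHash, internal flags, PII columns) can never leak, because
// the response is built up from scratch rather than filtered down.
function serializeUser(row) {
  return {
    id: row.id,
    name: row.name,
  };
}
```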

How It Works

Designed to understand AI-generated code structures.

Step 1

Pattern Recognition

Identifies AI scaffolding patterns — boilerplate CRUD, generated models, cookie-cutter auth — by structural analysis, not code comments.

Step 2

Completeness Check

Compares generated code against a security completeness checklist: does every endpoint have auth? Every model have validation? Every schema have RLS?

Step 3

Default Analysis

Checks every configuration value against secure defaults. Flags CORS wildcards, debug modes, verbose errors, and permissive policies.

Step 4

Fix Recommendations

Generates specific, copy-paste-ready code fixes for each finding. Not just "add authentication" — actual middleware code for your framework.

Example Findings

Before and after: what codelake catches and how to fix it.

Critical

CRUD endpoints without authentication or authorization

AI-generated REST API with full CRUD operations but no middleware protecting any route.

Before — AI-generated

// routes/posts.js
router.get('/posts', PostController.index);
router.get('/posts/:id', PostController.show);
router.post('/posts', PostController.create);
router.put('/posts/:id', PostController.update);
router.delete('/posts/:id', PostController.destroy);

// No auth. No ownership. Any user can do anything.

After — codelake fix

// routes/posts.js
router.use(authenticate);
router.get('/posts', PostController.index);
router.get('/posts/:id', PostController.show);
router.post('/posts', validate(createSchema),
  PostController.create);
router.put('/posts/:id', authorize('post:owner'),
  validate(updateSchema), PostController.update);
router.delete('/posts/:id', authorize('post:owner'),
  PostController.destroy);

Critical

Database schema without row-level security or tenant isolation

AI-generated Supabase/PostgreSQL schema with no RLS policies, allowing any authenticated user to access all rows.

Before — AI-generated

-- schema.sql
CREATE TABLE posts (
  id UUID PRIMARY KEY,
  title TEXT NOT NULL,
  content TEXT,
  user_id UUID
);

-- No RLS. No policies. Full table access.

After — codelake fix

ALTER TABLE posts ENABLE ROW LEVEL SECURITY;

CREATE POLICY "Users can view own posts"
  ON posts FOR SELECT
  USING (user_id = auth.uid());

CREATE POLICY "Users can insert own posts"
  ON posts FOR INSERT
  WITH CHECK (user_id = auth.uid());

CREATE POLICY "Users can update own posts"
  ON posts FOR UPDATE
  USING (user_id = auth.uid());

High

Insecure default configuration

AI-generated Express app with CORS wildcard, no helmet, and verbose error responses.

Before — AI-generated

// app.js
app.use(cors({ origin: '*' }));

app.use((err, req, res, next) => {
  res.status(500).json({
    error: err.message,
    stack: err.stack
  });
});

After — codelake fix

// app.js
app.use(helmet());
app.use(cors({
  origin: process.env.ALLOWED_ORIGINS.split(',')
}));

app.use((err, req, res, next) => {
  logger.error(err);
  res.status(500).json({
    error: 'Internal server error'
  });
});

Ship AI-generated code with confidence.

VibeCoding is here to stay. Make sure your AI-generated code is production-ready with codelake's purpose-built detection. Scan free in under 2 minutes.