TLDR: PostgreSQL extension that adds ai.embed() and ai.classify() as IMMUTABLE functions. Local ONNX inference, no API calls. Works in generated columns out of the box. ~117 MiB per backend (connection pooler recommended). The design and constraints are human-driven; the implementation was AI-assisted under supervision. ~$127 in tokens spent.
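For context, a minimal sketch of the generated-column use case. The extension name, the function signatures, and the pgvector-style return type are my assumptions, so check the repo for the real API:

```sql
-- Hypothetical sketch: extension name, signatures, and the vector
-- return type are assumptions; see the repo for the actual API.
CREATE EXTENSION IF NOT EXISTS ai_native_pg;

CREATE TABLE reviews (
    id        bigserial PRIMARY KEY,
    body      text NOT NULL,
    -- Generated columns only accept IMMUTABLE expressions, which is
    -- exactly why marking ai.embed() IMMUTABLE makes this work.
    embedding vector GENERATED ALWAYS AS (ai.embed(body)) STORED
);

INSERT INTO reviews (body) VALUES ('Shipped fast, works as advertised.');
```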
The difference is that you can at least shame your colleagues into caring about security and coding standards during code review. With AI, it's like it learned from every tutorial that said "we'll skip input validation to keep this example simple" and took that as a strict rule.
You caught us!... and it turns out "we don't have all the data" isn't exactly the pitch VCs want to hear.
Jokes aside, I'd rather admit we are working with incomplete data than pretend otherwise. We are probably seeing 5-10% of what's actually happening out there. Most AI code bugs die quietly in projects that never see production. And it is perhaps better that way.
Not-so-fun fact: a colleague just told me how a rogue Claude agent ran `rm -rf ~/` in a background process earlier today. It might become #166 in our report.
Well, I don't deal with VCs, but from a technical perspective that's an odd way to phrase it. The perfectly valid explanation in your response is what people in the tech scene would expect, but if this is a VC money grab then I guess you know your intended audience.
That's absolutely a factor here. We are missing the stuff that no one is talking about: "AI generated an inefficient loop" or "AI forgot to close a file handle". The documented cases were documented precisely because they were noteworthy.
That said, even with survivorship bias, there's a pattern.
When humans write bad code, we see the full spectrum, from typos to total meltdowns. With AI, the failures cluster around specific security fundamentals:
- Input validation
- Auth checks
- Rate limiting
I've never seen an AI make a typo. Have you?
Does that mean AI learned to code from tutorials that skip the boring security chapters? Think about it.
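To make that concrete, here's the pattern sketched in PL/pgSQL, since we're in Postgres land anyway. Both functions are hypothetical illustrations, not code from any of the reported cases:

```sql
-- Hypothetical illustration of the pattern, not code from the report.
CREATE TABLE users (id int, name text);

-- Tutorial-grade version: user input concatenated into dynamic SQL,
-- the "we'll skip input validation to keep this simple" shape.
CREATE FUNCTION find_user_unsafe(uname text) RETURNS SETOF users AS $$
BEGIN
    -- Classic injection: uname is interpolated verbatim.
    RETURN QUERY EXECUTE
        'SELECT * FROM users WHERE name = ''' || uname || '''';
END;
$$ LANGUAGE plpgsql;

-- The boring-chapter version: format(%L) quotes the literal safely.
CREATE FUNCTION find_user_safe(uname text) RETURNS SETOF users AS $$
BEGIN
    RETURN QUERY EXECUTE
        format('SELECT * FROM users WHERE name = %L', uname);
END;
$$ LANGUAGE plpgsql;
```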
So yes, we are definitely seeing survivorship bias in severity reporting. But the "types" of survivors tell us something important about what AI consistently misses. The low-severity bugs probably exist; they're just not making headlines.
The real question: if this is just the visible part of the iceberg, what's underneath?
You're absolutely right about CVE inflation. I deal with the same Snyk/Trivy noise daily, where a prototype pollution in some deep dependency gets marked CRITICAL.
Our distribution (71% High, 18% Critical) is definitely skewed compared to normal CVEs. Part of this is selection bias: nobody reports when AI generates boring secure code. But even accounting for that, the pattern is real: AI seems to either nail security or fail spectacularly. Very few "medium" mistakes.
The key difference from your Snyk alerts: these aren't dependency updates or theoretical vulnerabilities. They're actual logic flaws.
I've open-sourced Synthetic Open Schema to make synthetic and automated monitoring easier and vendor-agnostic: monitors are defined as code against an open schema. I'd love feedback or contributions from anyone who believes the future of monitoring should be code-driven.
GitHub: https://github.com/dmonroy/ai-native-pg
Post: https://insert.dev/immutable-ai-functions-in-postgres/
Try it:

```sh
docker run --rm -p 5432:5432 -e POSTGRES_PASSWORD=postgres ghcr.io/dmonroy/ai-native-pg:dev
```
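Once the container is up, a quick smoke test. I'm guessing at the argument shapes (ai.classify() presumably wants a label set too), so treat this as a sketch:

```sql
-- Hypothetical smoke test; run via psql against localhost:5432.
-- Argument shapes are assumptions; see the post for real signatures.
SELECT ai.embed('hello world');                       -- local ONNX embedding
SELECT ai.classify('great product, would buy again'); -- local classification
```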
Happy to discuss the IMMUTABLE decision, the ONNX stack, or the AI workflow.