In February, Moltbook leaked 1.5 million authentication tokens and 35,000 email addresses. The social network was built almost entirely by AI agents. The founder never reviewed the infrastructure scaffolding, accepted the defaults, and shipped. That's the failure mode of vibe coding in production — and it's not theoretical.
Yet by March, Cursor crossed $2 billion in ARR on a $60B valuation. Claude Code passed $2.5B. Cognition acquired Windsurf for $250 million at roughly $82M in revenue, then watched its own ARR more than double on the integration. AI-generated code accounts for 41% of global commits. 87% of Fortune 500 companies run at least one vibe coding platform internally. The adoption debate is over. The argument now is about the next 18 months.
The term Karpathy coined is already obsolete
Andrej Karpathy named vibe coding in February 2025. By April 2026, on stage at Sequoia AI Ascent, he renamed it "agentic engineering" and called the original framing dead.
The pivot came in December 2025. In November, Karpathy wrote 80% of his code himself. By December, the ratio inverted: he was delegating 80% to agents. The shift wasn't about better prompts. It was about agents becoming good enough that supervising them was more productive than typing.
That naming change matters. Vibe coding sounds casual. Agentic engineering doesn't. The rebrand is the industry quietly admitting the discipline survives, just inverted: humans set specs and review diffs, agents do the typing.
The productivity numbers are real and boring
MIT ran a randomized controlled trial across 4,867 developers. Result: 26% more completed tasks, 13.55% more commits, 38.38% more builds among AI-assisted teams. Production studies put end-to-end speedup at 25–55%.
For senior engineers with 3+ years of experience, the gain lands around 40–50% on routine work — CRUD scaffolds, API integrations, glue code, refactors. The gain on novel architecture work is closer to zero. Agents don't have your taste yet.
The boring truth: gains are real, concentrated, and underwhelming if you expected 10x. Fortune 500 teams report 85–90% daily AI tool usage. Tech startups hit 73% adoption. Financial services sit at 34%. Healthcare lags at 28%. The gap isn't curiosity — it's regulation and review burden.
The security story is worse than the marketing
Escape.tech scanned 5,600 publicly deployed vibe-coded applications in early 2026. They found 2,000 highly critical vulnerabilities, 400 exposed secrets including API keys, and 175 instances of PII — including medical records and payment data.
Georgia Tech's Vibe Security Radar tracked 35 new CVEs directly caused by AI-generated code in March 2026 alone. In January, that number was 6. Nearly sixfold growth in two months.
AI-assisted commits leak secrets at 3.2% vs 1.5% for human-written code. A December 2025 Tenzai study tested five major coding agents — every single app with a URL-handling feature shipped with SSRF. AI-generated code is 1.7× more likely to contain logic errors and 2.74× more likely to contain security vulnerabilities than human code.
Lovable, the $6.6B vibe coding platform with eight million users, has been breached three times. Its most recent BOLA vulnerability sat open for 48 days after the company closed the bug bounty report without escalation. Not an outlier. The structural failure mode. Agents optimize for "the app runs." They do not optimize for "the app survives a pen test."
What actually changes for engineering teams
Three shifts are already visible in teams running this at scale.
First, the junior engineer's role expanded. A two-year engineer with Cursor and Claude Code now ships features that previously required four years of experience. Onboarding time on new codebases dropped from weeks to days because agents read the codebase faster than humans do.
Second, the senior engineer's job inverted. Less typing. More spec writing. More diff review. More evaluation loop design. More permission management. Karpathy described the new role as orchestrating fallible agents while preserving correctness, security, and taste. That last word is the moat.
Third, code review became the bottleneck. PRs are bigger and they arrive faster. Teams running agents successfully have moved to mandatory automated security scanning on every commit and a no-merge-without-human-review rule even for AI-generated changes. The teams that skipped this step are the ones in Escape.tech's scan results.
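The no-merge-without-human-review rule reduces to a small piece of gate logic. A minimal sketch, assuming review records shaped like the ones GitHub's pull-request reviews API returns (a "state" field and a user "type" of "User" or "Bot"); that shape is an assumption here, not a spec:

```python
# Merge-gate sketch: allow a merge only when the security scan passed AND
# at least one approval came from a human account, not a bot or an agent.
def mergeable(scan_passed, reviews):
    """True only if scanning passed and a human (non-bot) review approved."""
    human_approved = any(
        r["state"] == "APPROVED" and r["user"]["type"] == "User"  # bots report type "Bot"
        for r in reviews
    )
    return scan_passed and human_approved

# Illustrative review data: an agent's self-approval must not count.
reviews = [
    {"state": "APPROVED", "user": {"type": "Bot"}},   # agent self-approval: ignored
    {"state": "APPROVED", "user": {"type": "User"}},  # human approval: counts
]
```

The point of keeping the rule this small is that it composes: the scan result and the review list come from whatever CI and code host the team already runs, and the gate itself never needs to trust the agent.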
The playbook that works in 2026
Six rules separate teams shipping cleanly from teams shipping breaches.
Write a spec before prompting. Two paragraphs minimum. State constraints, data shape, and failure modes. Most "the AI built the wrong thing" complaints are spec failures dressed up as model failures.
Reserve manual coding for authentication, payment processing, and anything touching PII. Agents are not yet trustworthy here. The Moltbook breach was a database config the founder didn't read.
Run automated security scanning on every AI-authored commit. Snyk, Semgrep, or equivalent. No exceptions. This catches the majority of the secret-leak and obvious-injection bug class.
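A cheap first layer in front of a real scanner is a regex pre-filter over the staged diff. A minimal sketch, assuming illustrative patterns only; Snyk, Semgrep, and gitleaks ship far larger, maintained rule sets, and this does not replace them:

```python
import re

# Illustrative secret patterns, not a complete rule set.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key ID shape
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"]?[A-Za-z0-9_\-]{16,}"),
]

def find_secrets(diff_text):
    """Return added diff lines that match any secret pattern."""
    hits = []
    for line in diff_text.splitlines():
        # Only scan additions; skip the "+++ b/file" header lines.
        if line.startswith("+") and not line.startswith("+++"):
            if any(p.search(line) for p in SECRET_PATTERNS):
                hits.append(line)
    return hits
```

Wired into a pre-commit hook, the input would be the output of `git diff --cached`, and a non-empty result fails the commit before the secret ever reaches history.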
Use version-controlled instruction files. Cursor's .cursorrules, Claude Code's CLAUDE.md, equivalents for others. Treat them as code. Review changes to them in PRs.
Build evaluation loops, not just tests. Agents should self-check against a known-good behavior set before opening a PR. Most teams skip this and pay for it in production.
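The loop itself is small. A minimal sketch, assuming golden cases stored as input/output pairs; the `slugify` function and its cases are hypothetical stand-ins for an agent's candidate change:

```python
# Evaluation-loop sketch: run a candidate change against a known-good
# behavior set and refuse to open a PR on any regression.
def run_eval(candidate, golden_cases):
    """Return (passed, failures) for a candidate against golden I/O pairs."""
    failures = []
    for inputs, expected in golden_cases:
        try:
            got = candidate(*inputs)
        except Exception as exc:
            failures.append((inputs, f"raised {exc!r}"))
            continue
        if got != expected:
            failures.append((inputs, f"expected {expected!r}, got {got!r}"))
    return (not failures, failures)

# Hypothetical agent-written function, checked against golden behavior.
def slugify(title):
    return "-".join(title.lower().split())

GOLDEN = [
    (("Hello World",), "hello-world"),
    (("  Agentic  Engineering ",), "agentic-engineering"),
]
ok, failures = run_eval(slugify, GOLDEN)
```

The difference from a plain test suite is who runs it and when: the agent runs this before the PR exists, so a regression never consumes a human reviewer's time.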
Treat the agent as a junior engineer with no judgment. Never let it merge. Never let it touch production credentials. Never let it generate the migration without a senior signing off.
The bet to make this quarter
If your engineering team isn't running an agent on every pull request by end of Q3, you'll lose the talent race to teams that are. The 25–55% throughput gain is table stakes for hiring senior engineers — they expect it now, and they leave teams that don't have it.
If you're running agents without mandatory security gates, you're roughly 90 days from being the next Moltbook. The base rate of AI-code incidents is climbing month over month. Regulatory pressure is starting to follow. Wire the gates first. Ship the velocity second.
Vibe coding the marketing term is dying. Vibe coding the practice — typing fast, shipping fast, hoping the model got it right — is what kills you. Agentic engineering, the discipline Karpathy is now naming, is what survives.