Ask HN: Has your whole engineering team gone big into AI coding? How's it going?
I'm seeing individual programmers who have moved to 100% AI coding, but I'm curious as to how this is playing out for larger engineering teams. If you're on a team (let's say 5+ engineers) that has adopted Claude Code, Cursor, Codex, or some other agent, can you share how it's going? Are you seeing more LOCs created? Has PR velocity or PR complexity changed? Do you find yourself spending the same amount of time on PRs, less, or more?
Not a full team adoption story, but a relevant data point: I run a small engineering org (~40 engineers across teams) and we've been tracking AI coding tool adoption informally.
The split is roughly: 30% all-in (Claude Code or Cursor for everything), 50% selective users (use it for boilerplate, tests, docs but still hand-write core logic), 20% holdouts.
What I've noticed on PR velocity: it went up initially, then plateaued. The PRs got bigger, which means reviews take longer. We actually had to introduce a "max diff size" policy because AI-assisted PRs were becoming 800+ line monsters that nobody could review meaningfully.
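A policy like that is straightforward to enforce in CI. Here's a minimal sketch in Python — the 800-line threshold comes from the policy above, but the function names and the idea of summing `git diff --numstat` churn are my own illustration, not necessarily how any particular team wires it up:

```python
import subprocess

MAX_DIFF_LINES = 800  # threshold from the policy above; tune to taste


def diff_too_large(added: int, deleted: int, limit: int = MAX_DIFF_LINES) -> bool:
    """Return True when a PR's total churn (added + deleted lines) exceeds the limit."""
    return added + deleted > limit


def pr_churn(base: str = "origin/main") -> tuple[int, int]:
    """Sum added/deleted line counts reported by `git diff --numstat` against base."""
    out = subprocess.run(
        ["git", "diff", "--numstat", base],
        capture_output=True, text=True, check=True,
    ).stdout
    added = deleted = 0
    for line in out.splitlines():
        a, d, _path = line.split("\t", 2)
        if a != "-":  # binary files report "-" instead of a count
            added += int(a)
            deleted += int(d)
    return added, deleted
```

In CI you'd call `pr_churn()` against the merge base and fail the check when `diff_too_large(*counts)` is true; the nice side effect is that it nudges people toward stacking smaller PRs.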
The quality concern that keeps coming up: security. AI-generated code tends to take shortcuts on auth, input validation, error handling. We've started running dedicated security scans specifically tuned for patterns that AI likes to produce. That's been the biggest process change.
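For a sense of what such scans look for, here's a toy example: a regex pass over source text for patterns that commonly show up in generated code. The specific patterns and names are illustrative only — a real setup would use a proper SAST tool with curated rules, not ad-hoc regexes:

```python
import re

# Illustrative patterns only; a production scan would use a real SAST
# tool (e.g. curated Semgrep rules) rather than regexes like these.
RISKY_PATTERNS = {
    "string-built SQL": re.compile(
        r"execute\(\s*f?[\"'].*(SELECT|INSERT|UPDATE|DELETE)", re.I
    ),
    "broad exception swallow": re.compile(r"except\s+Exception\s*:\s*pass"),
    "shell=True subprocess": re.compile(r"subprocess\.\w+\(.*shell\s*=\s*True"),
    "hardcoded secret": re.compile(
        r"(api_key|password|secret)\s*=\s*[\"'][^\"']+[\"']", re.I
    ),
}


def scan(source: str) -> list[str]:
    """Return the names of risky patterns found in a source string."""
    return [name for name, pat in RISKY_PATTERNS.items() if pat.search(source)]
```

Run over the diff of each PR, a check like this flags exactly the shortcut categories mentioned above (auth, input validation, error handling) before a human reviewer ever sees the code.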
Net effect: probably 20-30% faster on feature delivery, but we're spending more time on review and security validation than before.