ClueCon Weekly with Wesley Fuchter [Sn. 15 Ep. 14]: Vibe Coding: What Actually Works

Host Jon Gray talks with Wesley Fuchter, Senior Principal Software Engineer at Modus Create, about what happens when you let AI code alongside your team.

Wesley ran a six-month experiment comparing two teams building the same app: one working traditionally, the other required to use agentic coding tools (Cursor/GitHub Copilot). They tracked time per Jira ticket, ran SonarQube on both repos, and put the code through a blind senior review to check quality. The result: the AI-required team, with fewer developers, delivered work ~40% faster on average at the ticket level, with comparable static-analysis results and review feedback. Wesley also shares where AI helps (ramping up in a new language, scaffolding, fast iteration) and where it struggles (architecture choices, AWS decisions that require domain expertise). We close with practical guidance on rolling this out safely: process, QA, documentation, and where to pilot before production.
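If you want to see the arithmetic behind a per-ticket speedup figure like the ~40% above, here is a minimal sketch of that comparison. The timing data and variable names are hypothetical illustrations, not numbers from the actual experiment.

```python
# Minimal sketch: comparing average per-ticket delivery time between two
# teams. All figures below are hypothetical, not from Wesley's experiment.
from statistics import mean

# Hypothetical completion times in hours, one entry per Jira ticket.
traditional_hours = [12.0, 8.5, 20.0, 15.5, 9.0]
ai_assisted_hours = [7.0, 5.5, 12.0, 9.5, 5.0]

trad_avg = mean(traditional_hours)   # 13.0 h/ticket
ai_avg = mean(ai_assisted_hours)     # 7.8 h/ticket

# Relative speedup at the ticket level: how much faster the AI-required
# team delivered on average.
speedup = (trad_avg - ai_avg) / trad_avg

print(f"Traditional avg: {trad_avg:.1f} h/ticket")
print(f"AI-assisted avg: {ai_avg:.1f} h/ticket")
print(f"Speedup: {speedup:.0%}")  # 40% with these illustrative numbers
```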

Key topics:
🔹Experiment design: tickets, time sheets, blind reviews, SonarQube
🔹Tools & models: Cursor → Copilot agents; Claude family models
🔹Results: throughput gains, quality parity, ~$180 tool cost for the trial
🔹Where AI shines vs. where human expertise still leads
🔹How to pilot “vibe coding” responsibly in an enterprise