How do you add AI to an existing application without ripping everything apart?
In this episode Dan Vega, Spring Developer Advocate at Broadcom, talks about what “adding AI” really looks like in real-world systems (especially in Java-heavy enterprise environments).
Dan breaks down why LLMs are best thought of as an integration layer, and how Spring AI helps teams avoid locking themselves into a single provider.
They dig into model abstraction (write once, switch models via configuration), why that flexibility matters when the “best model” changes every week, and what you actually need beyond a simple REST call once you start building production apps.
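As a rough illustration of the "switch models via configuration" idea, here is what provider selection looks like with Spring AI's starter properties. This is a minimal sketch, assuming the relevant Spring AI model starters are on the classpath; the exact model names are placeholders, not recommendations:

```properties
# Provider A: OpenAI (requires the Spring AI OpenAI starter dependency)
spring.ai.openai.api-key=${OPENAI_API_KEY}
spring.ai.openai.chat.options.model=gpt-4o

# Swapping to provider B is a dependency/property change, not a code change:
# spring.ai.anthropic.api-key=${ANTHROPIC_API_KEY}
# spring.ai.anthropic.chat.options.model=claude-sonnet-4-5
```

Application code talks to Spring AI's portable `ChatClient` abstraction either way, which is why the "best model changes every week" problem becomes a configuration concern rather than a rewrite.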
The conversation goes further into MCP (Model Context Protocol): why it took off so fast, what it enables (reusable “modules” for tools + context), and how developers are using MCP servers to automate repetitive work across different AI clients.
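To make the "reusable modules" point concrete, this is the general shape of how an MCP client registers a local MCP server so its tools become available across different AI clients. A hedged sketch: the server name and jar path are hypothetical, and the exact config file location varies by client:

```json
{
  "mcpServers": {
    "release-notes": {
      "command": "java",
      "args": ["-jar", "release-notes-mcp-server.jar"]
    }
  }
}
```

Because the protocol is client-agnostic, the same server (tools plus context) can be reused from multiple AI clients without re-implementing the integration each time.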
To wrap up, Dan shares a grounded take on local vs cloud models (privacy, governance, cost), plus advice for getting started: pick the smallest, most useful automation first rather than beginning with a giant agentic system.
Topics covered:
🔹Adding AI to existing apps without a rebuild
🔹Spring AI and model/provider abstraction
🔹Swapping LLMs via configuration
🔹Practical enterprise use cases (and where AI actually saves time)
🔹Computer vision + multimodal experimentation
🔹MCP servers: packaging tools + context for reuse
🔹Local vs cloud models: privacy, cost, and governance
🔹Why software fundamentals still matter
ClueCon Weekly with Dan Vega