Last week, Anitra and I attended both the Dev/Nexus conference and its companion conference, Advantage, an AI conference for CTOs, CIOs, VPs of Engineering, and other technical lead-types, which took place the day before Dev/Nexus. My thanks to Pratik Patel for the guest passes to both conferences!
I took copious notes and photos of all the Advantage talks and will be posting summaries here. This set of notes is from the fourth talk, Shift to Agentic Software Engineering, presented by Dave Parry.
Here's Dave's bio:

David Parry is an accomplished Director of Architecture with over 20 years of experience in software development. It all began in 1996, when he discovered the fascinating world of programming, with a particular focus on Java applets. Throughout his career, David has been involved in a number of noteworthy projects. He has built and implemented content management systems for a wide range of clients, including Johnnie Walker and its renowned keepwalking.com. As a consultant at a Big 4 firm, he played a pivotal role in solving critical issues for numerous customers, demonstrating his expertise in handling complex, high-traffic web platforms. Never one to shy away from innovation, David has also worked on cutting-edge technologies such as mobile and embedded Android TV systems, delivering streaming services that give customers an exceptional viewing experience.

Currently, David holds the position of Developer Advocate and Consultant, overseeing strategic planning and execution of architectural designs for customers. With a deep understanding of software development principles and extensive experience in Java programming, he excels at providing valuable insights and guidance to his team. Having witnessed the evolution of Java development from its early days to its current state, David brings a wealth of experience and a strategic perspective to any project or organization he is a part of.
And here’s the abstract of his talk:
AI is redefining how engineering organizations operate, shifting from traditional development to agentic development, where intelligent, context-aware agents partner with teams to drive measurable business outcomes. This presentation gives leaders a clear framework for understanding how agentic development improves cycle time, reduces operational risk, enhances quality, and scales organizational capacity without adding headcount. We will examine how to move beyond pilots, achieve meaningful adoption, embed governance and security controls, and connect engineering effort directly to enterprise KPIs. Leaders will leave with a strategic roadmap for guiding their organizations through this transformation with clarity, confidence, and control.
My notes from Dave’s talk are below.
The shift from AI-assisted to agentic is real, and most organizations aren’t ready
Dave opened by drawing a line between two distinct eras of AI in software development. The first era, AI-assisted coding (the GitHub Copilot model), still has a human in the loop at every step: a developer reviews suggestions, accepts or rejects them, and retains full decision-making authority. This is the model most development teams have actually adopted, and it's valuable. The second era, agentic software engineering, is something categorically different: autonomous systems that execute multi-step workflows without continuous human supervision.
Dave was candid that most organizations are still figuring out how to use AI-assisted tools well, even as the industry conversation has moved on to agents. The gap between where the hype is and where most teams actually are is significant, and leaders who try to leapfrog directly to full autonomy without establishing the right foundations tend to end up with agents that are expensive, unpredictable, and politically toxic inside the engineering organization. The smarter path, in Dave's experience, is to build the scaffolding — governance, measurement, structured experimentation — before letting agents loose on anything consequential.
Governance can't be bolted on after the fact
The governance message in Dave's talk was clear: security and access controls must be architected into agentic systems from the beginning, not added as an afterthought once the agent is already running. He illustrated this with a client story about a company whose repositories were so strictly siloed that individual developers weren't even allowed to know other repos existed, let alone access them. An agent given broad permissions in that environment would immediately violate carefully constructed security boundaries that humans had been respecting for years, simply because nobody thought to encode those constraints into the agent's operating parameters.
The practical implication is that every constraint your human engineers operate under (such as access controls, data isolation, permission scoping) needs to be explicitly defined for any agent working in the same environment. Agents don't have professional judgment or social awareness; they will access whatever they're technically permitted to access. If you onboard a new human developer, you scope their access carefully before they write a single line of code. Agents require the same rigor. Dave's recommendation was to look for frameworks that make these governance constraints first-class concepts rather than optional configurations, and to be deeply skeptical of any agentic solution that treats security as something you layer on later.
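Dave didn't show code for this, but the "explicit constraints, not optional configuration" idea can be sketched in a few lines. In this illustrative Python sketch (the repo names and the `guarded_read` helper are hypothetical, not from the talk), every repo-touching tool call an agent makes passes through an allow-list check, mirroring the access scoping a new human hire would get on day one:

```python
# Hypothetical allow-list: the repo names here are illustrative only.
ALLOWED_REPOS = {"payments-api", "payments-docs"}


class RepoAccessDenied(Exception):
    """Raised when an agent tool call targets a repo outside its scope."""


def guarded_read(repo: str, path: str, read_file) -> str:
    """Wrap a repo-reading tool in an explicit access check.

    The agent never decides for itself what it may touch; the check is a
    first-class part of the tool, so "technically permitted" and
    "actually allowed" stay the same thing.
    """
    if repo not in ALLOWED_REPOS:
        # Deny loudly rather than trusting the agent's judgment.
        raise RepoAccessDenied(f"agent is not scoped to repo {repo!r}")
    return read_file(repo, path)
```

The point of the sketch is only that the boundary lives in code the platform team owns, not in a prompt the agent might ignore.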
Enterprise-readiness also extends to the technology choices themselves. Dave pushed back against agentic frameworks built in languages or runtimes that don't fit naturally into enterprise operational environments. A security team asked to approve an agent that spins up an npx process that re-downloads dependencies on every run is going to say no…and they should! The same agent behavior built on Spring Boot, running in a container with Prometheus observability already wired in, is a fundamentally different conversation.
Measure everything! Agents aren’t self-evidently valuable
One of Dave's most pragmatic points was that the business case for any given agent needs to be proven, not assumed. The pressure from above to "do AI" is real, but implementing an agent that costs more in compute and maintenance than it would cost a developer to do the same task manually is not a win — it's a liability that will eventually get noticed and used to discredit the entire program. Leaders who can't quantify what their agents are actually delivering are in a precarious position when budget scrutiny arrives.
His recommendation was to tie every agent deployment to concrete, measurable KPIs from the start. For a PR risk agent, the relevant metrics might include change failure rate, time to production, and whether bug rates are actually going down or inadvertently going up as junior developers blindly accept AI suggestions. The five-star anecdote was a useful cautionary note: some teams have discovered that their agents were actively introducing more defects than they prevented, precisely because they hadn't built in the measurement infrastructure to detect it early.
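The two metrics Dave named are cheap to compute once you record per-change data. Here's a minimal Python sketch (the `Change` record and its field names are my assumption of what such tracking might look like, not anything Dave presented) of DORA-style change failure rate and a merge-to-production lead time:

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median


@dataclass
class Change:
    """One production change; a hypothetical record shape for illustration."""
    merged_at: datetime      # when the PR merged
    deployed_at: datetime    # when it reached production
    caused_incident: bool    # did this change fail in production?


def change_failure_rate(changes: list[Change]) -> float:
    """Share of production changes that triggered a failure or incident."""
    if not changes:
        return 0.0
    return sum(c.caused_incident for c in changes) / len(changes)


def median_lead_time_hours(changes: list[Change]) -> float:
    """Median merge-to-production time, a proxy for 'time to production'."""
    if not changes:
        return 0.0
    return median(
        (c.deployed_at - c.merged_at).total_seconds() / 3600 for c in changes
    )
```

Computing these for the months before and after an agent goes live is the simplest way to show, with data, whether the agent is helping or quietly making things worse.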
Dave also pushed back against the proof-of-concept mentality that treats agent work as inherently experimental. The POC era, in his view, is over. Organizations that frame every agent initiative as "let's see if this works" create the conditions for naysayers to kill it at the first sign of friction. His preferred framing is to pick a small, low-risk pilot, commit to shipping it to production, measure it rigorously, and use that concrete success to build momentum for the next one. Owning the conversation with data is the only reliable way to keep agentic programs alive long enough to deliver real compounding value.
Bring your existing developers into the agentic transition; don't route around them
A consistent thread throughout Dave's talk was that agentic AI is not a replacement for experienced engineers, but an amplifier of their knowledge. That amplification only works if those engineers are inside the tent. Developers who feel threatened by agents will find reasons for them to fail, and frankly, they'll often be right, because agents built without deep domain knowledge embedded in their prompts and tools tend to produce plausible-looking but subtly wrong outputs. The engineers who know where the bodies are buried in your codebase are exactly the people who should be shaping how your agents operate.
Dave's specific recommendation was that when outside expertise comes in to help stand up an agentic program, that expertise should be focused on upskilling the existing team rather than doing the work for them. An external consultant who delivers a finished agent and walks away leaves the organization with something it doesn't fully understand and can't maintain or evolve. An expert who works alongside the existing team, transfers knowledge, and helps them build the verification and governance capabilities they need to operate agents independently is creating something durable.
Dave made the point that custom MCP servers are one of the highest-leverage things an organization's own developers can build, because that's where domain-specific knowledge gets embedded in a form that agents can reliably use. A generic MCP that connects to a database and lets the LLM figure out the schema from scratch on every query is both expensive in tokens and fragile in output. A purpose-built MCP that encodes exactly what that database contains, how to query it correctly, and what the results mean — written by developers who actually know the system — is the kind of deterministic grounding that makes agentic systems genuinely trustworthy in production.
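To make the contrast concrete, here's a small Python sketch of the kind of function a purpose-built MCP server would expose as a tool. Everything here is hypothetical (an in-memory SQLite stand-in, an invented `orders` schema, and made-up function names); the real server would register a function like `lookup_order` as an MCP tool via an MCP SDK. The key property is that the schema knowledge lives in the tool, not in the model:

```python
import sqlite3

# Hypothetical schema, standing in for a real enterprise database.
SETUP = """
CREATE TABLE orders (id TEXT PRIMARY KEY, status TEXT, total_cents INTEGER);
"""


def open_demo_db() -> sqlite3.Connection:
    """In-memory stand-in so the sketch is self-contained."""
    conn = sqlite3.connect(":memory:")
    conn.executescript(SETUP)
    conn.execute("INSERT INTO orders VALUES ('o-1', 'shipped', 4999)")
    conn.commit()
    return conn


def lookup_order(conn: sqlite3.Connection, order_id: str) -> dict:
    """What a purpose-built MCP tool encodes: it already knows the table,
    the columns, and the units, uses a parameterized query, and returns
    labeled fields, so the model never has to infer schema from raw rows.
    """
    row = conn.execute(
        "SELECT id, status, total_cents FROM orders WHERE id = ?",
        (order_id,),
    ).fetchone()
    if row is None:
        return {"found": False, "order_id": order_id}
    return {
        "found": True,
        "order_id": row[0],
        "status": row[1],
        "total_usd": row[2] / 100,  # the tool, not the model, knows the units
    }
```

A generic database MCP would instead hand the model a raw SQL surface and hope it guesses the schema correctly each time; this version is cheaper in tokens and deterministic in shape, which is exactly the grounding Dave was describing.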
Tampa Bay AI Meetup is a community partner of Arc of AI, and we can help you save $50 off the ticket price! Just use the discount code 
