Categories
Artificial Intelligence Conferences

Notes from Advantage, part 4 — Dave Parry: Shift to Agentic Software Engineering

Last week, Anitra and I attended both the Dev/Nexus conference and its companion conference, Advantage, an AI conference for CTOs, CIOs, VPs of Engineering, and other technical lead-types, which took place the day before Dev/Nexus. My thanks to Pratik Patel for the guest passes to both conferences!

I took copious notes and photos of all the Advantage talks and will be posting summaries here. This set of notes is from the fourth talk, Shift to Agentic Software Engineering, presented by Dave Parry.

Here’s Dave’s bio:

David Parry is an accomplished Director of Architecture with over 20 years of experience in Software Development. It all began in 1996 when he discovered the fascinating world of programming, with a particular focus on Java applets. Throughout his illustrious career, David Parry has been involved in various noteworthy projects. He has successfully built and implemented content management systems for a wide range of clients, including the esteemed Johny Walker and its renowned keepwalking.com. Additionally, as a consultant at a Big 4 firm, David played a pivotal role in solving critical issues for numerous customers, demonstrating his expertise in handling complex and high-traffic web platforms. Never one to shy away from innovation, David Parry has expanded his skills to work on cutting-edge technologies such as mobile and embedded Android TV systems. Leveraging his expertise, he has delivered top-notch streaming services to customers, ensuring they have an exceptional viewing experience. Currently, David holds the position of Developer Advocate and Consultant overseeing strategic planning and execution of architectural designs for customers. With a deep understanding of software development principles and extensive experience in Java programming, he excels at providing valuable insights and guidance to his team. Having witnessed the evolution of Java development from its early days to its current state, David Parry’s wealth of experience and strategic perspective, combined with his consulting work at a Big 4 firm, make him an invaluable asset in any project or organization he is a part of.

And here’s the abstract of his talk:

AI is redefining how engineering organizations operate, shifting from traditional development to agentic development, where intelligent, context-aware agents partner with teams to drive measurable business outcomes. This presentation gives leaders a clear framework for understanding how agentic development improves cycle time, reduces operational risk, enhances quality, and scales organizational capacity without adding headcount. We will examine how to move beyond pilots, achieve meaningful adoption, embed governance and security controls, and connect engineering effort directly to enterprise KPIs. Leaders will leave with a strategic roadmap for guiding their organizations through this transformation with clarity, confidence, and control.

My notes from Dave’s talk are below.


The shift from AI-assisted to agentic is real, and most organizations aren’t ready

Dave opened by drawing a line between two distinct eras of AI in software development. The first era, the era of AI-assisted coding/the GitHub Copilot model, still has a human in the loop at every step. A developer reviews suggestions, accepts or rejects them, and retains full decision-making authority. This is the model most development teams have actually adopted, and it‘s valuable. The second era, agentic software engineering, is something categorically different: autonomous systems that execute multi-step workflows without continuous human supervision.

Dave was candid that most organizations are still figuring out how to use AI-assisted tools well, even as the industry conversation has moved on to agents. The gap between where the hype is and where most teams actually are is significant, and leaders who try to leapfrog directly to full autonomy without establishing the right foundations tend to end up with agents that are expensive, unpredictable, and politically toxic inside the engineering organization. The smarter path, in Dave‘s experience, is to build the scaffolding — governance, measurement, structured experimentation — before letting agents loose on anything consequential.

Governance can‘t be bolted on after the fact

The governance message in Dave‘s talk was clear: security and access controls must be architected into agentic systems from the beginning, not added as an afterthought once the agent is already running. He illustrated this with a client story about a company whose repositories were so strictly siloed that individual developers weren‘t even allowed to know other repos existed, let alone access them. An agent given broad permissions in that environment would immediately violate carefully constructed security boundaries that humans had been respecting for years, simply because nobody thought to encode those constraints into the agent‘s operating parameters.

The practical implication is that every constraint your human engineers operate under (such as access controls, data isolation, permission scoping) needs to be explicitly defined for any agent working in the same environment. Agents don‘t have professional judgment or social awareness; they will access whatever they‘re technically permitted to access. If you onboard a new human developer, you scope their access carefully before they write a single line of code. Agents require the same rigor. Dave‘s recommendation was to look for frameworks that make these governance constraints first-class concepts rather than optional configurations, and to be deeply skeptical of any agentic solution that treats security as something you layer on later.

Enterprise-readiness also extends to the technology choices themselves. Dave pushed back against agentic frameworks built in languages or runtimes that don‘t fit naturally into enterprise operational environments. A security team asked to approve an agent that spins up an npx process that re-downloads dependencies on every run is going to say no…and they should! The same agent behavior built on Spring Boot, running in a container with Prometheus observability already wired in, is a fundamentally different conversation.

Measure everything! Agents aren’t self-evidently valuable

One of Dave‘s most pragmatic points was that the business case for any given agent needs to be proven, not assumed. The pressure from above to “do AI” is real, but implementing an agent that costs more in compute and maintenance than it would cost a developer to do the same task manually is not a win — it‘s a liability that will eventually get noticed and used to discredit the entire program. Leaders who can‘t quantify what their agents are actually delivering are in a precarious position when budget scrutiny arrives.

His recommendation was to tie every agent deployment to concrete, measurable KPIs from the start. For a PR risk agent, the relevant metrics might include change failure rate, time to production, and whether bug rates are actually going down or inadvertently going up as junior developers blindly accept AI suggestions. The five-star anecdote was a useful cautionary note: some teams have discovered that their agents were actively introducing more defects than they prevented, precisely because they hadn‘t built in the measurement infrastructure to detect it early.

Dave also pushed back against the proof-of-concept mentality that treats agent work as inherently experimental. The POC era, in his view, is over. Organizations that frame every agent initiative as “let‘s see if this works” create the conditions for naysayers to kill it at the first sign of friction. His preferred framing is to pick a small, low-risk pilot, commit to shipping it to production, measure it rigorously, and use that concrete success to build momentum for the next one. Owning the conversation with data is the only reliable way to keep agentic programs alive long enough to deliver real compounding value.

Bring your existing developers into the agentic transition; don‘t route around them

A consistent thread throughout Dave‘s talk was that agentic AI is not a replacement for experienced engineers, but an amplifier of their knowledge. That amplification only works if those engineers are inside the tent. Developers who feel threatened by agents will find reasons for them to fail, and frankly, they‘ll often be right, because agents built without deep domain knowledge embedded in their prompts and tools tend to produce plausible-looking but subtly wrong outputs. The engineers who know where the bodies are buried in your codebase are exactly the people who should be shaping how your agents operate.

Dave‘s specific recommendation was that when outside expertise comes in to help stand up an agentic program, that expertise should be focused on upskilling the existing team rather than doing the work for them. An external consultant who delivers a finished agent and walks away leaves the organization with something it doesn‘t fully understand and can‘t maintain or evolve. An expert who works alongside the existing team, transfers knowledge, and helps them build the verification and governance capabilities they need to operate agents independently is creating something durable.

Dave made the point that custom MCP servers are one of the highest-leverage things an organization‘s own developers can build, because that‘s where domain-specific knowledge gets embedded in a form that agents can reliably use. A generic MCP that connects to a database and lets the LLM figure out the schema from scratch on every query is both expensive in tokens and fragile in output. A purpose-built MCP that encodes exactly what that database contains, how to query it correctly, and what the results mean — written by developers who actually know the system — is the kind of deterministic grounding that makes agentic systems genuinely trustworthy in production.

Categories
Current Events Meetups Tampa Bay

Tampa Bay tech, entrepreneur, and nerd events list (Monday, March 16 – Sunday, March 22)

Here’s what’s happening in the thriving tech scene in Tampa Bay and surrounding areas for the week of Monday, March 16 through Sunday, March 22!

This list includes both in-person and online events. Note that each item in the list includes:

✅ When the event will take place

✅ What the event is

✅ Where the event will take place

✅ Who is holding the event

This week’s events

Monday, March 16

Event name and location Group Time
Food, Fun & Games!
Monday, Mar 16 · 6:00 PM to 8:00 PM EDT
Gulfside Gatherings 7:00 PM
Venice Area Toastmasters Club #5486
Online event
Toastmasters District 48 7:30 AM to 9:00 AM EDT
Tea Tavern – Dungeons and Dragons
Monday, Mar 16 · 6:00 PM to 11:00 PM EDT
Tea Tavern Dungeons and Dragons Meetup Group – DMS WANTED 5:59 PM
PL-300 Study Group Power BI – Use Cases. Wave theme: Travel and Entertainment
Online event
Orlando Power BI User Group 6:00 PM to 7:00 PM EDT
Speakeasy Toastmasters #4698
Online event
Toastmasters District 48 6:00 PM to 8:00 PM EDT
ACE Advanced Toastmasters 3274480
Online event
Toastmasters Divisions C & D 6:00 PM to 7:30 PM EDT
Sarasota Blood on the Clocktower
Clocktower meetup
Board Games and Card Games in Sarasota & Bradenton 6:00 PM to 10:00 PM EDT
MTG: Commander Night
Critical Hit Games
Critical Hit Games 6:00 PM to 11:00 PM EDT
Building Claudes Brain – a guide to every layer that makes Claude Code smarter
Online event
Saint Petersburg AI Collaborative Intelligence Group 6:30 PM to 8:30 PM EDT
Toast of Lakewood Ranch Toastmasters Club
Lakewood Ranch Town Hall
Toastmasters District 48 6:30 PM to 7:30 PM EDT
North Port Toastmasters Meets Online!!
Online event
Toastmasters District 48 6:30 PM to 8:00 PM EDT
Mothership Monday Gradient Descent Campaign
Kitchen Table Games (New Location)
St Pete and Pinellas Tabletop RPG Group 6:30 PM to 9:30 PM EDT
Stirling Toastmasters Club #7461614 | Public Speaking & Leadership Development
Dunedin
Toastmasters District 48 7:00 PM to 8:30 PM EDT
Lakeland (FL) Toastmasters Club #2262
GFWC United Women’s Club of Lakeland
Toastmasters Division E 7:00 PM to 9:00 PM EDT
Let’s Talk Toastmasters
Online event
Toastmasters Divisions C & D 7:00 PM to 8:30 PM EDT
Virtual Event: Hiawassee Book Club
Online event
Library Book Clubs – OCLS 7:00 PM to 8:30 PM EDT
TRIVIA at GenX Tavern in Downtown Tampa
GenX Tavern
The 30/40 Social Club 7:00 PM to 9:30 PM EDT
Bonus Game Night!
Tampa 20’s and 30’s Social Crew 7:00 PM to 9:00 PM EDT
DigiMondays
Sunshine Games | Magic the Gathering, Pokémon, Yu-Gi-Oh!
Sunshine Games 7:30 PM to 9:30 PM EDT
Weekly General Meetup
Online event
Beginning Web Development 8:00 PM to 9:00 PM EDT
Where is Bitcoin Going?
Online event
Bitcoiners of Southwest Florida 9:00 PM to 10:00 PM EDT
Return to the top of the list

Tuesday, March 17

Event name and location Group Time
Gaining Experience Without A Job
Online event
Tampa Cybersecurity Training 10:00 AM to 11:00 AM EDT
Weekly Open Make Night
4931 W Nassau St
Tampa Hackerspace 6:00 PM to 9:00 PM EDT
Disney Lorcana Night
Critical Hit Games
Critical Hit Games 6:00 PM to 11:00 PM EDT
Hobby Night
Critical Hit Games
Critical Hit Games 6:00 PM to 11:00 PM EDT
Pinellas Writers and Authors Weekly Meeting (Online/Zoom)
Online event
Pinellas Writers Group 6:00 PM to 9:00 PM EDT
Electronics Soldering: Lesson I
MakerSpace Pinellas
Makerspaces Pinellas Meetup Group 6:30 PM to 7:30 PM EDT
D&D @ Critical Hit Games (Full)
Critical Hit Games
RPG-Pinellas 6:30 PM to 11:00 PM EDT
Tuesday Night Trivia at Henderson’s Kitchen and Bar
Henderson’s Bar & Kitchen
Gen Geek 6:30 PM to 9:30 PM EDT
The Sarasota Creative Writers
Sarasota Alliance Church
The Sarasota Creative Writers Meetup Group 6:30 PM to 9:30 PM EDT
[Virtual] Tampa Bay Bitcoin Meetup: News, Markets, & Community
Online event
Tampa Bay Bitcoin 7:00 PM to 9:00 PM EDT
Toast of Celebration Toastmasters
Celebration Community Field Complex
Toastmasters Division E 7:00 PM to 8:30 PM EDT
Winter Springs Toastmasters Club
Online event
Toastmasters Divisions C & D 7:00 PM to 8:15 PM EDT
St. Pete Beers ‘n Board Games Meetup for Young Adults
Pinellas Ale Works Brewery
St. Pete Beers ‘n Board Games for Young Adults 7:00 PM to 10:00 PM EDT
Boards & Bones Table Top RPGs
Tampa Bay Brewing Company
Nerdbrew Events 7:00 PM to 11:00 PM EDT
Trivia Nights @ Escape Brewing Company – Trinity
Escape Brewing Company
Tampa Bay Area Trivia Players 7:00 PM to 9:00 PM EDT
Yu-Gi-Oh Evening Tournament
Sunshine Games | Magic the Gathering, Pokémon, Yu-Gi-Oh!
Sunshine Games 7:00 PM to 11:00 PM EDT
Tampa Bay Technology Center (TBTC) Monthly Meeting
2079 Range Rd
Tampa Bay Coalition of Reason is disbanded 7:00 PM to 9:00 PM EDT
Keynotes and More Advanced Toastmasters Biweekly Meeting
Online event
Toastmasters Division E 7:07 PM to 8:37 PM EDT
Nic At Nite – Weekly Movie Night
Online event
Nerdbrew Events 7:30 PM to 9:30 PM EDT
Online Event: Shut Up & Write on Zoom
Online event
Shut Up & Write!® Tampa 7:45 PM to 9:15 PM EDT
Trading Tuesday
Online event
Bitcoiners of Southwest Florida 8:00 PM to 9:00 PM EDT
Return to the top of the list

Wednesday, March 18

Event name and location Group Time
World Toasters Toastmasters Club
Online event
Toastmasters Division E 7:05 AM to 8:00 AM EDT
Tampa Highrisers Toastmasters
Hyde Park United Methodist Church
Toastmasters District 48 7:45 AM to 8:45 AM EDT
Computer Repair Clinic
2079 Range Rd
Tampa Bay Technology Center 8:30 AM to 12:30 PM EDT
Wednesday Night Gaming
Nerdy Needs
Brandon Boardgamers 5:00 PM to 10:00 PM EDT
CNC Wednesday’s
MakerSpace Pinellas
Makerspaces Pinellas Meetup Group 5:30 PM to 7:30 PM EDT
Wednesday Board Game Night
Bridge Center
Tampa Gaming Guild 5:30 PM to 11:00 PM EDT
Orlando Chess Association
West Osceola Library
Greater Orlando Chess 5:30 PM to 8:30 PM EDT
Chess Club at Conworlds Emporium Every Wednesday
Conworlds Emporium
Tarpon Springs Community Fun & Games 5:30 PM to 7:00 PM EDT
Automation Tools – Simplifying daily tasks
Hillsborough County ECC
Tampa Artificial Intelligence Applications Meetup Group 6:00 PM to 8:00 PM EDT
3D Printing Orientation: Models and Slicers
Wednesday, Mar 18 · 7:00 PM to 9:00 PM EDT
Tampa Hackerspace 6:00 PM
Casual Commander Wednesdays
Sunshine Games | Magic the Gathering, Pokémon, Yu-Gi-Oh!
Sunshine Games 6:00 PM to 11:00 PM EDT
Board Game Night
Critical Hit Games
Critical Hit Games 6:00 PM to 11:00 PM EDT
Modernizing SQL Platforms for Azure & AI Readiness
Online event
Azure, SQL & AI Modernization – Global Community 6:30 PM to 7:30 PM EDT
Woodshop Router (Members Only)
Tampa Hackerspace West
Tampa Hackerspace 7:00 PM to 9:00 PM EDT
CigarCitySec Meetup
Cigar City Brewing
Central Florida CitySec 7:00 PM to 10:00 PM EDT
Carrollwood Toastmasters Meetings meet In-Person and Online
Jimmie B. Keel Regional Library
Toastmasters District 48 7:00 PM to 8:30 PM EDT
Apopka Foliage Toastmasters
Online event
Apopka Foliage Toastmasters 7:00 PM to 8:30 PM EDT
Games & Grog! Board game night @ Peabodies
Peabodies
Nerdbrew Events 7:00 PM to 11:00 PM EDT
Boethius’ The Consolation of Philosophy
Online event
Philosophy for Everyday Life – Talks and Classes in Florida 7:00 PM to 8:00 PM EDT
ONLINE / SPANISH: EPICTETO DISERTACIONES POR ARRIANO
Online event
Orlando Stoics 7:00 PM to 8:30 PM EDT
Our Voices, Our Community: Virtual Anthology Release and Reading
Online event
We Write Here Black and Women of Color Writing Group 7:00 PM to 9:00 PM EDT
Trivia Night at Tampa Tap Room – Carrollwood
Tampa Tap Room
Tampa Bay Area Trivia Players 7:30 PM to 9:30 PM EDT
Cardfight Vanguard!! OverDress Weekly
Sunshine Games | Magic the Gathering, Pokémon, Yu-Gi-Oh!
Sunshine Games 7:30 PM to 9:30 PM EDT
Game Night!
Florida Avenue Brewing Co.
Tampa 20’s and 30’s Social Crew 7:30 PM to 9:30 PM EDT
Return to the top of the list

Thursday, March 19

Event name and location Group Time
Sarasota Speakers Exchange Toastmasters
Online event
Toastmasters District 48 12:00 PM to 1:00 PM EDT
3D Printer Orientation: Printing at Tampa Hackerspace (THS Members only)
Thursday, Mar 19 · 7:00 PM to 9:00 PM EDT
Tampa Hackerspace 1:00 PM
Open Board Gaming Day at Dark Side
Dark Side Comics & Games
Board Games and Card Games in Sarasota & Bradenton 4:00 PM to 10:00 PM EDT
Omni Toastmasters Club 6861
Online event
Toastmasters Divisions C & D 5:45 PM to 7:00 PM EDT
Hands-On Workshop: Media IQ for Developers with Cloudinary and Next.js
ECC Ybor
Tampa Devs 6:00 PM to 8:00 PM EDT
Design Hangout @ Golden Isles Brewing (St Pete)
Golden Isles Brewing Co.
Tampa Bay Designers (Formerly Tampa Bay UX) 6:00 PM to 9:00 PM EDT
Tampa SEO and Digital Marketing Meetup with Steve Scott
Online event
Tampa SEO and Digital Marketing Meetup with Steve Scott 6:00 PM to 8:00 PM EDT
Vecna – Eye of Ruin (T4-APL19)
Coliseum of Comics Kissimmee
Adventurers of Central Florida 6:00 PM to 9:00 PM EDT
Board Game Night
Conworlds Emporium
Tarpon Springs Community Fun & Games 6:00 PM to 9:00 PM EDT
Warhammer Night
Critical Hit Games
Critical Hit Games 6:00 PM to 11:00 PM EDT
START YOUR OWN SIDE GIG! Small Business Thursdays!
MakerSpace Pinellas
Makerspaces Pinellas Meetup Group 6:30 PM to 8:30 PM EDT
Palm Harbor Toastmasters Club #8248
1500 16th St
Toastmasters District 48 7:00 PM to 8:30 PM EDT
FABulous Thursdays
Sunshine Games | Magic the Gathering, Pokémon, Yu-Gi-Oh!
Sunshine Games 7:00 PM to 11:00 PM EDT
One Piece Thursdays
Sunshine Games | Magic the Gathering, Pokémon, Yu-Gi-Oh!
Sunshine Games 7:00 PM to 10:00 PM EDT
Pathfinder Society
Critical Hit Games
Critical Hit Games 7:00 PM to 10:00 PM EDT
Live streaming production and talent
124 S Ring Ave
Live streaming production and talent 7:00 PM to 9:00 PM EDT
Thursday Tacos & Tax Write Offs
Online event
Nerdbrew Events 7:30 PM to 10:30 PM EDT
Weekly Hacks
Online event
Hacktivate – Hackathon Meetup Group 8:00 PM to 9:00 PM EDT
Return to the top of the list

Friday, March 20

Event name and location Group Time
Computer Repair Clinic
2079 Range Rd
Tampa Bay Technology Center 8:30 AM to 12:30 PM EDT
Coffee & AI for Women: Be the Most Interesting Person in the Room
cYou Boutique
EveryDay AI Learning & Social Meetup Group 10:00 AM to 11:00 AM EDT
Friday Board Game Night
Bridge Club
Tampa Gaming Guild 5:30 PM to 11:00 PM EDT
MTG: Commander FNM
Critical Hit Games
Critical Hit Games 6:00 PM to 11:00 PM EDT
“On Anger” – Seneca, Book 3 & Closing
The Skills Center
Tampa Stoics 6:30 PM to 8:30 PM EDT
Friday night AI writing fun
MakerSpace Pinellas
Makerspaces Pinellas Meetup Group 7:00 PM to 9:00 PM EDT
Taps & Drafts | EDH/MtG Night
1Up Entertainment, Tampa
Nerdbrew Events 7:00 PM to 9:00 PM EDT
Modern FNM
Sunshine Games | Magic the Gathering, Pokémon, Yu-Gi-Oh!
Sunshine Games 7:00 PM to 10:30 PM EDT
Friday Pokemon Tournament
Sunshine Games | Magic the Gathering, Pokémon, Yu-Gi-Oh!
Sunshine Games 7:30 PM to 11:30 PM EDT
Return to the top of the list

Saturday, March 21

Event name and location Group Time
March Book Club
Saturday, Mar 21 · 1:00 PM to 3:00 PM EDT
Tampa Bay Women’s Book Club Meetup Group 9:00 AM
COUNTDOWN 2 CRUNCH NIGHT—READY, SET, TACO! GAME NIGHT- Sat, March 21, 2026
Saturday, Mar 21 · 4:45 PM to 9:30 PM EDT
Tampa (Citrus Park Area) Games Meetup Group 10:40 AM
Steel MIG Welding Safety and Basic Usage (Members Only)
Tampa Hackerspace
Tampa Hackerspace 9:00 AM to 12:00 PM EDT
Hunters Creek Toastmasters
Hart Memorial Library 2nd Floor
Toastmasters Division E 9:30 AM to 11:00 AM EDT
Saturday Chess at Wholefoods in Midtown, Tampa
Whole Foods Market
Chess Republic 9:30 AM to 12:00 PM EDT
EZ Stock (Stock, Options, Market)
2079 Range Rd
Tampa Bay Technology Center 10:00 AM to 12:00 PM EDT
The Woman Question in Plato’s “Republic” Book 5.
North Sarasota Public Library
Plato’s Republicans 10:00 AM to 1:00 PM EDT
The Gang goes to MEGACON!
Orange County Convention Center
Gaming.net 12:00 PM to 5:00 PM EDT
Harry Potter Wizard Games
John S. Taylor Park
Gen Geek 1:00 PM to 4:00 PM EDT
FREE Fab Lab Orientation
Faulhaber Fab Lab
Suncoast Makers 1:30 PM to 2:30 PM EDT
D&D (5e) @ Black Harbor Gaming (FULL)
Black Harbor Gaming
St Pete and Pinellas Tabletop RPG Group 1:30 PM to 5:30 PM EDT
Completable Campaigns (5e DnD, Tier 1)
Coliseum of Comics
Adventurers of Central Florida 2:00 PM to 6:00 PM EDT
Wild Beyond the Witchlight (5e dnd)
Coliseum of Comics
Adventurers of Central Florida 2:00 PM to 6:00 PM EDT
Saturday Chess @ Cozy Kava St. Pete
Cozy Kava
Chess Republic 2:00 PM to 5:00 PM EDT
Project Hail Mary
Riverview 14 GDX
The Book Was Better 2:30 PM to 5:30 PM EDT
Before Women Had Wings by Connie May Fowler
New World Tampa
Tampa Book Club – Award-Winning Books 3:00 PM to 5:00 PM EDT
Playing Nintendo Games (Nintendo Switch and Switch 2)
Online event
Nintendo Meetup Central Florida 3:25 PM to 5:25 PM EDT
Game Project Therapy (Virtual)
Online event
Tampa Games Developer Guild 4:00 PM to 6:00 PM EDT
Parrish (Bradenton) Game Night (3rd Saturday of each Month 5 – 9 PM)
Hawk’s House
It’s All Fun & Games Bradenton, Parrish, Sarasota, & St Pete 5:00 PM to 9:00 PM EDT
Community Hang-out Night
Online event
Nerdbrew Events 6:00 PM to 9:00 PM EDT
Yu-Gi-Oh Evening Tournament
Sunshine Games | Magic the Gathering, Pokémon, Yu-Gi-Oh!
Sunshine Games 7:00 PM to 11:00 PM EDT
Return to the top of the list

Sunday, March 22

** PASTE SUNDAY’S TABLE HERE **

About this list

How do I put this list together?

It’s largely automated. I have a collection of Python scripts in a Jupyter Notebook that scrapes Meetup and Eventbrite for events in categories that I consider to be “tech,” “entrepreneur,” and “nerd.” The result is a checklist that I review. I make judgment calls and uncheck any items that I don’t think fit on this list.

In addition to events that my scripts find, I also manually add events when their organizers contact me with their details.

What goes into this list?

I prefer to cast a wide net, so the list includes events that would be of interest to techies, nerds, and entrepreneurs. It includes (but isn’t limited to) events that fall under any of these categories:

    • Programming, DevOps, systems administration, and testing
    • Tech project management / agile processes
    • Video, board, and role-playing games
    • Book, philosophy, and discussion clubs
    • Tech, business, and entrepreneur networking events
    • Toastmasters and other events related to improving your presentation and public speaking skills, because nerds really need to up their presentation game
    • Sci-fi, fantasy, and other genre fandoms
  • Self-improvement, especially of the sort that appeals to techies
  • Anything I deem geeky
Categories
Artificial Intelligence Conferences

Notes from Advantage, part 3 — Rod Johnson: Language Stacks and Gen AI

Last week, Anitra and I attended both the Dev/Nexus conference and its companion conference, Advantage, an AI conference for CTOs, CIOs, VPs of Engineering, and other technical lead-types, which took place the day before Dev/Nexus. My thanks to Pratik Patel for the guest passes to both conferences!

I took copious notes and photos of all the Advantage talks and will be posting summaries here. This set of notes is from the third talk, Language Stacks and Gen AI, presented by Rod Johnson.

Here’s Rod’s bio:

Rod is a developer, author, investor and entrepreneur. He has authored several best-selling books on Java EE. He is the creator of the Spring Framework and was cofounder and CEO of SpringSource. He has served on the board of Elastic, Neo Technologies, Apollo, Lightbend and several other successful companies. He is presently developing a structured RAG system using Spring and Kotlin.

And here’s the abstract of his talk:

Python is the language of data science and dominant in AI research. However, it is not the language of enterprise apps, and there are good reasons for this. In this session, Rod will discuss when to use what language and stack for AI success in enterprise. He’ll discuss the key adjacencies for success: LLMs, existing data and business logic, and how to choose what language, stack and framework for a particular problem.

My notes from Rod’s talk are below.


Your existing enterprise systems are an asset, not a liability

Rod opened with something that probably felt like a relief to many in the room: a clear-eyed argument that the overwhelming pace of AI change is not a reason to abandon what your organization has already built. Enterprise systems represent years of accumulated business logic, domain knowledge, and battle-tested reliability. These things change slowly, and in this case, the cliche is true: that’s not a  bug, but a feature. The pressure to throw out existing applications and start fresh with AI-native rewrites is, in Rod’s view, is not just misguided, but reckless.

He was equally direct about the organizational risk of letting AI enthusiasm displace experienced people. Every major technology wave produces a class of self-declared experts who rush in and crowd out the engineers who actually understand the business. Domain expertise doesn’t get replaced by a new framework. Instead, it gets more valuable, because it’s the thing that makes AI systems accurate and useful rather than fluently wrong.

The message to leaders was clear: protect your people, and make sure your AI strategy grows out of your existing institutional knowledge rather than treating it as an obstacle.

Personal assistants and business processes are fundamentally different. Stop conflating them.

One of the sharpest distinctions in Rod’s talk was between AI as a personal productivity tool and AI embedded in enterprise business processes. The personal assistant category (for example: chatbots, coding agents, tools like Cursor) operates under forgiving conditions. If a coding agent produces bad output, a developer catches it before it reaches production. The feedback loop is tight, human oversight is immediate, and the cost of failure is manageable. This is why maximizing agent autonomy makes sense in that context.

Business processes are an entirely different environment. Rod pointed to the Air Canada chatbot case, where the airline told a customer it would honor a discounted bereavement fare and then tried to disclaim responsibility when the customer held them to it. Unlike a coding error that gets caught in review, a business process error engages real customers, real employees, and real legal and financial consequences. You can’t roll back a workflow the way you can roll back a pull request. The asymmetry between these two domains is enormous, yet most of the noise driving enterprise AI strategy comes from the personal assistant space, where demos are impressive and the failure modes are invisible.

Rod is clearly frustrated with this conflation, and you should be too. The loudest voices in the generative AI conversation are the ones driving media coverage and executive attention, and they’re overwhelmingly people with no background in or interest in enterprise software. Leaders who let those voices set their enterprise AI agenda are optimizing for demo impressiveness rather than production reliability, and that’s a recipe for expensive disappointment.

Structure (almost always) beats natural language

Perhaps the most technically counterintuitive point in Rod’s talk was his argument that interacting with LLMs in natural language is often the wrong approach. Yes, LLMs are trained on vast amounts of natural language text, but the underlying Transformer architecture is fundamentally about predicting tokens. It’s not inherently about language at all. The seductive thing about natural language interfaces is that you can demo them impressively in minutes. The unsettling thing is that natural language is ambiguous, extremely difficult to test, and essentially opaque when something goes wrong.

Rod’s alternative is to structure your interactions with LLMs as much as possible: structured inputs, structured outputs, and as little free-form natural language in the critical paths as you can manage. His thought experiment about what a bank knows about its customers illustrated the point neatly. The vast majority of the high-value data a bank holds — transactions, account balances, product relationships — is already highly structured. The fringe cases that exist in text (notes from a branch visit, a customer service transcript) are real but marginal. Adding generative AI to that environment should leverage the structure that’s already there, not dissolve it into a sea of markdown and free text.

The practical consequences of over-relying on natural language are significant. Systems built around unstructured text accumulate context rapidly, which drives token counts (and therefore costs) through the roof. They become increasingly unpredictable as that context grows, and when they produce wrong outputs, there’s no clean way to explain or audit what went wrong. Rod’s point, reinforced by his analysis of OpenAI’s Operator product, is that even sophisticated AI systems hit a hard ceiling when they’re built on a foundation of loose text rather than structured data and deterministic logic.

Your language stack probably shouldn’t change, but your thinking about AI layers should

Rod was characteristically direct on the language debate that consumes a lot of oxygen in AI developer communities: Python is not magical for building enterprise AI agents, and the fact that most academic AI research is published in Python is not a reason for a Java or C# shop to rewrite everything. There are reasons your enterprise applications were written in the languages they were written in: stability, ecosystem maturity, existing tooling,  and team expertise, and those reasons haven’t changed. What sits in the generative AI layer is substantially shallower than your core application logic, and the risk-reward calculation for rewriting those core systems in a trendier language is deeply unfavorable.

That said, Rod drew a reasonable distinction: Python genuinely does have advantages for certain tasks like document processing, model fine-tuning, and data ingestion pipelines, where the research community’s tooling is simply more mature. The error isn’t using Python for those things. It’s letting data science people with a Python background architect the entire enterprise AI strategy, because data science and enterprise AI application development require genuinely different skills. Conflating them leads to frameworks that are academically interesting but operationally fragile when exposed to real enterprise requirements around security, observability, testability, and integration.

The practical implication for enterprise leaders is that you need a good agent framework. Rod’s example was Embabel, a framework his company developed. Agent frameworks should feel like a natural extension of your existing stack. It should play nicely with Spring, respect your existing domain model, integrate with your existing observability tooling, and support unit testing at every level. You shouldn’t have to introduce an entirely new operational paradigm just to add generative AI capabilities. Adjacency to your existing systems is where the value gets unlocked, and any framework that treats your existing applications as irrelevant legacy to be worked around is solving the wrong problem.

Categories
Artificial Intelligence Conferences

Notes from Advantage, part 2 — Pratik Patel: AI Architecture for Tech Leaders

Last week, Anitra and I attended both the Dev/Nexus conference and its companion conference, Advantage, an AI conference for CTOs, CIOs, VPs of Engineering, and other technical lead-types, which took place the day before Dev/Nexus. My thanks to Pratik Patel for the guest passes to both conferences!

I took copious notes and photos of all the Advantage talks and will be posting summaries here. This set of notes is from the second talk, AI Architecture for Tech Leaders: Building Blocks for AI Applications, presented by Pratik Patel.

Here’s Pratik’s bio:

Pratik Patel is VP of Developer Relations at Azul Systems. An all around software and hardware nerd with experience in the healthcare, telecom, financial services, and startup sectors.

And here’s the abstract of his talk:

The AI space is moving incredibly fast, it seems new methodologies and technologies are coming every week. How’s a technology leader (whether your a VP Engineering, Software Dev Manager or Team Lead) supposed to understand what are the true building blocks for this new class of applications. How do you scope an AI development project, both in terms of developer time and cloud & AI infrastructure? Should you buy AI hardware or pay for API access to OpenAI, Claude, Gemini, etc? Do you have sensitive information that you want to keep from leaking out to an external LLM provider? In this session, we’ll tackle these issues and also discuss the evolution of applications and the difference between: existing applications that have added AI capability as an accessory this new class of applications that are built with AI in mind from the start This session is intended to be interactive – I’ll start by laying the foundation for building AI applications today, and we’ll discuss the experiences of the tech leaders in the room so everyone can share and learn from each other.

My notes from Pratik’s talk are below.

Note: You can find a more developer-focused version of this talk in an earlier posting, from when Pratik came to Tampa to deliver this talk for the Tampa Bay Java User Group and Tampa Bay AI Meetup.


Skate to where the puck Is going

Pratik opened with the AI version of the Wayne Gretzky line: don’t build for where AI is today, build for where it will be in six to twelve months. The pressure many tech leaders currently feel to add AI to everything so the organization can say it’s doing AI is producing a wave of surface-level implementations that won’t hold up. Sprinkling a chatbot on top of an existing application is not a strategy, but a reaction.

The analogy Pratik kept returning to was the shift from manual, infrequent deployments to cloud-native, continuously-delivered software. That transition wasn’t just about adopting new tools. It required a fundamental rethinking of how teams design, build, and release software. Organizations that made that leap early didn’t just move faster; they built a compounding capability advantage. Pratik’s argument is that we’re at a similar inflection point with AI, and the leaders who recognize it now will be the ones whose systems look prescient rather than antiquated in two years.

AI-native vs. AI-augmented: A critical distinction

The conceptual core of Pratik’s talk is the difference between bolting AI onto an existing application and building an AI-native one from the ground up. An AI-native application doesn’t just use AI as a feature, and is organized around AI’s ability to learn, adapt, and act autonomously. Those three verbs matter. Most of what organizations are building today qualifies as AI-augmented at best: an agent that can act, but that doesn’t genuinely learn from interactions or adapt its behavior without human intervention.

Pratik illustrated this with a content management system example. A traditional CMS requires humans to manually tag articles. An AI-native CMS handles tagging automatically, continuously improves based on feedback, and integrates that intelligence into the editorial workflow without requiring a separate AI plugin to be configured and maintained. The business value isn’t just efficiency, but that the system gets better over time in a way that a bolted-on tool never will.

His hotel booking example pushed the concept further. A truly AI-native booking platform wouldn’t just filter hotels by amenities; it would learn individual user preferences from past behavior, weight them against contextual signals, and surface recommendations that reflect both explicit preferences and inferred ones. More importantly, it would adapt its pricing and inventory strategies automatically in response to real-world events (examples: a competitor hotel going offline for renovations, a major sporting event driving demand) without requiring a human to catch the signal and manually adjust rates.

Foundational data strategy is the real competitive moat

Pratik was clear that all the architectural sophistication in the world collapses without a serious approach to data. The core question every leader should be asking is “Is the data your organization holds actually usable by an AI system?” Not just stored somewhere, but clean, current, structured in ways that a model can reason about, and governed in ways that ensure its quality over time. Most companies, when they’re honest, have to answer that question with “not really.”

The cultural shift required here is moving from a “collect it and figure it out later” mentality to a data-first culture where data quality is treated as a continuous engineering concern, not a cleanup project. Pratik framed this as the AI equivalent of the DevOps automation mindset: just as teams had to change how they thought about deployment.

Instead of thinking of deployment as a periodic event but as a constant, automated process, teams now need to think about data not as a byproduct of operations but as the fuel that makes AI systems defensible.

Unstructured data adds another layer of complexity. RAG  is the most common approach to incorporating things like PDFs and documents into an AI system, but Pratik was careful to note that “just do RAG” massively undersells the challenge. He’s catalogued over 36 distinct RAG implementation techniques, each with different trade-offs around chunking strategies, retrieval quality, and error rates. Leaders who treat RAG as a checkbox rather than an engineering discipline will find their AI systems returning confidently wrong answers from their own documents.

The AI-native development lifecycle

Building AI-native systems requires updating how teams think about the software development lifecycle itself. Pratik drew a direct parallel to the DevOps transformation: just as continuous integration and deployment automated away the pain of manual releases, AI-native development requires building automation into the AI feedback loop,  from code generation assistance to automated testing of non-deterministic outputs.

The trickiest part of this is monitoring. Traditional software testing assumes deterministic behavior: you give it inputs, you check the outputs against known values. AI systems don’t work that way.

Pratik described two approaches that are gaining traction:

  1. Human-in-the-loop feedback: the five-star rating prompt that many AI products now show after a response, which feeds real quality signals back into the system.
  2. “LLM as judge”: using a second AI model (potentially a smaller, cheaper one) to evaluate the outputs of your primary model, essentially automating quality checks at scale.

The practical implication for tech leaders is that shipping an AI-native application is not a one-time event followed by monitoring dashboards. It requires building the infrastructure for continuous retraining, output validation, and drift detection from day one. The underlying model, the data it draws from, and the world it’s reasoning about all change over time. A system that doesn’t account for that will quietly degrade in ways that are hard to detect until users start complaining.

Categories
Artificial Intelligence Conferences

Notes from Advantage, part 1 — Frank Greco: A Leader’s Playbook for AI

Last week, Anitra and I attended both the Dev/Nexus conference and its companion conference, Advantage, an AI conference for CTOs, CIOs, VPs of Engineering, and other technical lead-types, which took place the day before Dev/Nexus. My thanks to Pratik Patel for the guest passes to both conferences!

I took copious notes and photos of all the Advantage talks and will be posting summaries here. This first set of notes is from the first talk, A Leader’s Playbook for AI, presented by Frank Greco.

Here’s Frank’s bio:

Frank is a senior technology consultant and enterprise architect working on cloud and AI/ML tools for developers. He is a Java Champion, Chairman of the NYJavaSIG (first JUG ever), and runs the International Machine Learning for the Enterprise conference in Europe. Co-author of JSR 381 Visual Recognition for Java API standard and strong advocate for Java and Machine Learning. Member of the NullPointers.

And here’s the abstract of his talk:

AI is no longer a side experiment. It is quickly becoming a standard part of enterprise IT, both in how systems get built and how teams get work done. For CIOs, CTOs, and team leads, the hard part is figuring out which AI efforts will actually pay off without creating unnecessary risk for the company. In this session, you will get a practical way to pick the right first pilots, define success metrics that matter, and avoid the most common traps. Those traps include leaking sensitive data, getting unreliable output, having no clear owner, and running pilots that never turn into real ROI. We will talk about how AI tools fit into everyday team workflows, how to balance value and risk so you know where to start, and what guardrails to put in place from day one. That includes data boundaries, human oversight, auditability, evaluation, and safe fallback behavior. You will leave with a simple checklist and an action plan you can use right away to launch a secure, measurable AI pilot that your team can ship and your organization can scale.

My notes from Frank’s talk are below.


Most people don’t know what AI is, and that’s okay

One of the most reassuring moments in Frank’s talk came early. It was a reality check about how widely AI is actually understood. Frank pushed back against the anxiety many tech leaders feel about falling behind, pointing out that most people (which includes plenty of CTOs at large companies) genuinely don’t know what generative AI is or how to use it effectively. The adoption curve is far flatter than the hype suggests, and people inside the I.T. bubble consistently overestimate how much the rest of the world has embraced this technology.

This doesn’t mean complacency is wise, but it does mean leaders can take a breath before making reactive, poorly-considered AI investments. The real competitive advantage is in taking the time to actually understand AI instead of rushing in blindly. Frank’s argument is that leaders who build foundational knowledge now will be far better positioned than those who bolt on AI tools under pressure and learn nothing durable in the process.

Three pillars of an AI strategy

Frank outlined a clean, actionable framework for leaders thinking about where to start: AI/business strategy, understanding the core technology, and implementation.

The first pillar is the business strategy. It’s about deciding what problem you’re actually trying to solve with AI, and why it matters to your organization. Without that anchor, AI initiatives tend to drift toward whatever is technically interesting rather than what’s genuinely valuable.

The second pillar, understanding the core technology, is where Frank pushed hardest. He argued that even developers often treat generative AI like just another framework to learn, which fundamentally misunderstands what makes it different.

LLMs are non-deterministic. Given the same input, they can produce different outputs, which is a conceptual break from over 60 years of computing where the same data reliably produced the same result. Leaders who don’t grasp this distinction will struggle to set appropriate expectations, evaluate outputs, or assess risk.

The third pillar, implementation, is where strategy meets reality. Frank recommended starting with a pilot project that is useful but not mission-critical. It should be something meaningful enough to teach you real lessons, but not so central to operations that failure results in dire consequences. It’s similar to how most organizations handled the move to cloud, where they didn’t migrate their core banking system first, but instead learned on something lower-stakes, built confidence and competency, and scaled from there.

The security and legal risk nobody is taking seriously enough

Frank was emphatic on one point that he felt wasn’t getting enough attention: LLMs are inherently insecure, and organizations need to treat them that way from day one. He demonstrated this himself, describing how he was able to manipulate a chatbot into behaving like a pizzeria employee simply by using prompt injection. The bigger concern today is AI coding assistants that incorporate third-party skills and prompts without vetting them, potentially executing malicious code inside a developer’s environment.

The legal dimension is equally underappreciated. Frank flagged recent changes to platform liability law that affect companies deploying chatbots. Where organizations once had certain protections if a third party misused a service, that shield has eroded. If someone misuses your company’s chatbot, the legal exposure may now land squarely on you. His advice was direct: before deploying any customer-facing AI, talk to your lawyers.

Data privacy is another key risk. Frank noted that roughly 90% of people using AI tools at work don’t realize they’re sending potentially sensitive data to an external service. An employee typing internal business details into a public chatbot is effectively sharing that information with a third party, regardless of what the vendor’s terms of service say about data use. Vendors get acquired, policies change, and by then the data is already out there.

Build an internal AI “center of gravity”

Frank’s final set of recommendations centered on organizational structure rather than technology. His experience educating middle management at Google taught him that top-down mandates to “use AI” rarely work. People need to see practical, relatable examples of AI making their actual jobs easier before they engage. The model that worked at Google was a recurring internal showcase: a weekly lunch where AI practitioners demonstrated small, concrete wins to colleagues across the organization. Over time, 500 people were showing up voluntarily.

The broader lesson is that companies should deliberately build a team of internal AI experts who can shepherd the technology across the organization and serve as resources, translators, and guardrails simultaneously. This goes beyond training developers. It’s about creating the infrastructure for responsible adoption at every level. That includes establishing model risk management practices, particularly in regulated industries like financial services and healthcare, where the consequences of a non-deterministic system making a wrong call can be severe.

Finally, measure the ROI. If you can’t demonstrate that your AI initiatives are delivering value, you can’t justify continued investment or make the case for scaling them. Leaders who want AI to take root in their organizations need to make the results visible (successes and failures) so the organization can learn and iterate rather than just chase the next tool.

Categories
Artificial Intelligence Conferences Meetups Programming What I’m Up To

I’m speaking at “Arc of AI” in Austin, Texas — April 13 – 16!

I just got added to the list of speakers who’ll be presenting at the Arc of AI conference, which takes place April 13 – 16 in Austin, Texas!

Arc of AI is the premier AI conference for deep technical talks on everyone’s favorite two-letter field. If you’re one of these kinds of people interested in AI…

  • Software developer
  • Architect
  • Data engineer
  • Technology leader

…and you want to learn the latest strategies, tools, and practices for building AI-powered applications and boosting your development workflows, with AI, this is your conference!

The early bird ticket price is $799, but that lasts only until this Saturday, March 14th. It goes up to $899 until April 4th, after which the price becomes $999.

Tampa Bay AI Meetup is a community partner of Arc of AI, and we can help you save $50 off the ticket price! Just use the discount code TampaBayAIMeetup when checking out.

There’s another way to attend Arc of AI for even less: come to this Thursday’s Tampa Bay AI Meetup, where we’re covering vibe coding, and find out how you can win a ticket to Arc of AI for FREE!

I’ll be writing more about Arc of AI soon — watch this space!

 

Categories
Artificial Intelligence Meetups Tampa Bay What I’m Up To

Vibe Coding Workshop with Tampa Bay AI Meetup – THIS THURSDAY!

If you’re curious about vibe coding and want to know how to get started, bring your laptop to Tampa Bay AI Meetup’s Vibe Coding Workshop, taking place on Thursday, March 12 at 6:00 p.m. at the Hays office (4350 W Cypress, Suite 1000, Tampa)!

The phrase “vibe coding” was coined just over a year ago by Andrej Karpathy, founding member of OpenAI and Tesla’s former Director of AI. You’ve probably read a post or article saying that it’s the future of programming and that if you don’t take it up, you’ll be left behind. But have you tried it yet?

You’ve seen people on LinkedIn, Reddit, and other social media talk about how much they’ve been vibe coding, but perhaps you haven’t had the chance yet. That’s okay: MOST of the *actual* work that needs to get done doesn’t involve vibe coding, at least for now.

We’ll help you get started with a couple of vibe coding exercises, including building a Chrome plugin that you’ll find useful for vibe coding.

This is another one of Tampa Bay AI Meetup’s signature “code along with me” exercises. Bring your laptop, make sure you’ve got VS Code installed, and get ready to vibe code!