Categories
Artificial Intelligence Conferences

Notes from Schutta and Vega’s Arc of AI Workshop, part 4: Own your career, learn how to learn, and don’t become a dependent

I caught the Fundamentals of Software Engineering in the Age of AI workshop yesterday at the Arc of AI conference’s workshop day, led by Nathaniel Schutta (cloud architect at Thoughtworks, University of Minnesota instructor) and Dan Vega (Spring Developer Advocate at Broadcom, Java Champion).

Nate and Dan are the co-authors of a book on the subject, Fundamentals of Software Engineering, and they’re out here workshopping the ideas with developers who are living through the same AI-saturated moment we all are.

Fair warning: this post is long. The session was dense, the conversation was good, and I took a lot of notes.

Here’s part four of several notes from the all-day session; you might want to get a coffee for this one.

Here are links to my previous notes:


The afternoon session of this workshop shifted away from the technical and toward the personal: career management, professional skill-building, how to actually learn things in an industry that never stops changing, and how to stay sane while it’s all happening. Nate carried most of this section alongside Dan, with some sharp contributions from the audience. It was a good room for this kind of conversation: people who’d been in the industry a while, who’d seen waves come and go, trying to figure out what the current wave means for them specifically.

You are your own career manager, and that’s non-negotiable

Dan opened by acknowledging what a lot of people in the room were probably thinking: the career path they imagined when they started — get good at coding, keep getting better at coding, code until retirement — is not the only path, and for a lot of people it turned out not to be the right one either.

His framework for figuring out what direction to go: pay attention to what actually energizes you when you’re working. What problems do you want to solve? Do you prefer building interfaces or working with data and algorithms? Does debugging a gnarly problem feel like a puzzle you want to crack, or a tax you want to stop paying? Do you like the creative side of software, or the precision and correctness side? Side projects, he argued, are one of the best ways to run these experiments without quitting your job to do it.

The paths he outlined go well beyond the traditional “developer or manager” binary: software architect, staff engineer, engineering manager, technical product manager, developer advocate (his own role), sales engineer, and the increasingly relevant entrepreneur. Each has a different center of gravity, and none of them requires you to stop being technical.

His advice for navigating toward one of these: walk backwards from where you want to be. If you want to be an architect in five years, figure out what that role actually requires, then map it back to what you should be doing in years three to five, and years one to two. You’re already doing the mental motion of decomposing complex problems. Apply it to your own career.

Nate added the practical mechanics: use your personal development budget. A lot of people don’t, often out of a quiet fear of standing out or seeming like they’re trying too hard. He was blunt about this: “If you’ve got it and you’re not using it, you’re leaving part of your comp on the table. Any good manager should be thrilled you want to get better at your job.”

The technology radar: a personal framework for staying current without losing your mind

One of the more immediately actionable tools the workshop introduced was the Technology Radar concept. It’s familiar to a lot of people from Thoughtworks’s public-facing version, but here applied personally rather than organizationally.

The idea: organize technologies and techniques into four buckets. Adopt (things you’re currently using and mastering). Trial (things you’re actively experimenting with). Assess (things you’re watching but not diving into yet). Hold (things you’re deliberately not learning right now, even if people keep telling you to).
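The four buckets are from the workshop, but as a concrete illustration (the entries and code below are my own sketch, not the speakers’), a personal radar maps naturally onto a tiny data model:

```java
import java.util.List;

// A minimal sketch of a personal technology radar. Entries are illustrative.
public class TechRadar {
    enum Ring { ADOPT, TRIAL, ASSESS, HOLD }

    record Entry(String name, Ring ring, String rationale) {}

    static final List<Entry> RADAR = List.of(
        new Entry("Java virtual threads", Ring.ADOPT, "using in production"),
        new Entry("Claude Code", Ring.TRIAL, "experimenting on side projects"),
        new Entry("MCP servers", Ring.ASSESS, "watching, not building yet"),
        new Entry("Rust", Ring.HOLD, "Go is the higher priority at work"));

    // Everything in a given ring -- e.g. what to revisit each quarter.
    static List<String> inRing(Ring ring) {
        return RADAR.stream()
                .filter(e -> e.ring() == ring)
                .map(Entry::name)
                .toList();
    }

    public static void main(String[] args) {
        System.out.println("Trial: " + inRing(Ring.TRIAL));
        System.out.println("Hold: " + inRing(Ring.HOLD));
    }
}
```

The point of writing it down, in any form, is that the Hold bucket is a deliberate decision with a rationale attached, not a guilty omission.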

The audience exercise around this got interesting quickly. People shared their lists. “Rust on hold because Go is a higher priority at my company” was one contribution — and that’s exactly the right way to think about it. Your radar isn’t the same as someone else’s radar. Boris at Anthropic running five parallel Claude Code instances in his terminal doesn’t mean that’s the right workflow for you. Dan was emphatic: “Don’t see what someone else is doing and feel like you’re behind. You’re not.”

The schedule layer Nate added was useful: once you’ve identified something you want to learn, think through the cadence. Weekly, maybe a podcast or a short video. Monthly, maybe a meetup. Quarterly, maybe a deeper hands-on session. Annually, maybe a conference. Small, consistent investment over time beats cramming every time.

Record your wins, and be specific about the numbers

This was a section I wish someone had told me about fifteen years ago, and I suspect most people in the room felt similarly.

Dan’s recommendation: maintain a running wins document. Not elaborate. Not ceremonial. Just a note in Apple Notes or Google Docs where you record things you accomplished, feedback you received, skills you built, presentations you gave. The point is to have the material when you need it — annual reviews, promotion conversations, job searches, award nominations.

The key, and this is where most people go wrong: be specific, and attach numbers wherever possible.

“I improved performance in our flagship application” is forgettable. “I improved performance by 25% by implementing virtual threads” is a data point. “I reduced memory usage across a thousand instances over 300 apps” is a business case. The person making decisions about your raise or your promotion can’t make that case for you if you don’t give them the ammunition. Your manager is not necessarily keeping track of your contributions with the same level of care you are.

Nate extended this with a point about visibility: you want your manager to be able to walk into a room and tell a specific story about you. Not “Nate’s a solid engineer,” but “Nate’s Azure lunch and learn series pulled 200 people in the first session and our Chief Strategy Officer shared the metrics upward.” When your name comes up in rooms you’re not in, you want there to be a story attached to it — and that story needs to be true, specific, and ideally tied to a dollar amount or a measurable outcome.

His framing: “If your boss can say ‘Dan saved us 1.8 million dollars last year in Cloud costs,’ it’s a lot harder to put Dan on the non-regrettable attrition list.”

How we actually learn things (and why most approaches don’t work)

Nate took over for the learning science portion, and it was some of the best material of the day.

The core claim: in order to remember something, it needs to be elaborate, meaningful, and have context. Which is why story is so powerful — stories create context and meaning around facts that would otherwise evaporate. He mentioned that an AV technician once stopped him after a talk specifically to say she noticed he told stories, because most speakers just recite facts, and the stories were why she stayed engaged. He took that as confirmation of what he already believed: stories are the actual unit of memory, not information.

Spaced repetition matters. Brute-forcing your way through something until you think you’ve got it and then never returning to it is how you lose it. The Forgetting Curve is real. Little bits over time beats big chunks all at once. This is why blocking regular learning time on your calendar — Friday afternoons, Tuesday lunches, fifteen minutes of morning coffee before your day explodes — actually works where “I’ll get to it eventually” does not.
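To put a rough shape on that, the Forgetting Curve is commonly modeled as exponential decay, R = e^(−t/S): retention R after t days, for a memory of strength S. The toy calculation below is my own illustration with made-up numbers (including the assumption that each review roughly multiplies strength), not anything from the slides:

```java
// Toy illustration of the Ebbinghaus forgetting curve: R = e^(-t/S).
// All numbers are illustrative, not empirical.
public class ForgettingCurve {
    // Retention after `days`, given memory strength `s` (higher = more durable).
    static double retention(double days, double s) {
        return Math.exp(-days / s);
    }

    public static void main(String[] args) {
        double s = 5.0; // strength after a single cram session
        System.out.printf("After 1 day:   %.0f%%%n", 100 * retention(1, s));
        System.out.printf("After 30 days: %.0f%%%n", 100 * retention(30, s));

        // Assume (illustratively) each spaced review doubles strength.
        double reinforced = s * 2 * 2 * 2; // three reviews later
        System.out.printf("After 30 days with 3 spaced reviews: %.0f%%%n",
                100 * retention(30, reinforced));
    }
}
```

The shape of the curve is the argument for the calendar block: reviews cost minutes, while relearning from scratch costs the original effort again.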

He was also honest about the limits of memory: forgetting is normal, not a personal failing. He now uses Gemini to re-explain things like OSI layers that he learned thirty years ago and hasn’t needed day-to-day. “I don’t freaking deal with it constantly. Getting a nice, concise refresher is fine, as long as I verify when it matters.”

The Dreyfus model of skill acquisition came up here, and it’s worth understanding. Five stages: novice (needs explicit recipes, follow the steps exactly), advanced beginner (can start combining recipes), competent (can troubleshoot, begins to self-correct), proficient (can self-correct in the moment), expert (operates on intuition, can’t always explain what they’re doing). The punchline: most developers don’t have ten years of experience; they have one year of experience ten times. And LLMs are permanently stuck somewhere around advanced beginner. They can combine recipes. They will never have intuition, the felt sense that something is wrong before you can articulate why.

Rules are essential for novices. Rules kill experts. Checklists, a slightly different tool, are powerful across all levels, as the aviation and surgery examples illustrated. The distinction matters for how you think about AI-assisted development: AI needs guardrails because it can’t develop the intuition to know when to break the rules. You set the guardrails. That requires knowing the rules well enough to encode them.

You cannot read it all. Stop trying.

The death of a thousand subscriptions. Nate described the pile of unread magazines accumulating on his kitchen island and his wife’s gentle suggestion that most of it should go in the recycling as a near-perfect metaphor for the state of our industry’s information environment.

His rough estimate: the amount of content added to YouTube while the workshop was running would take more than a week to watch straight through, even without eating or sleeping. The amount of content added to the internet while they were in the room is unfathomable. Heat death of the universe is going to happen before you read it all.

His solution: cultivate a network of trusted people who read different things and share the signal. He and Glenn, he mentioned, exchange texts constantly, each person watching a different slice of the landscape, forwarding things worth attention. If something is genuinely important, it will hit you from multiple directions regardless. You don’t need to be first to every wave.

This connects back to the Technology Radar: FOMO is real, but you cannot surf every wave. Being a fast follower, letting other people take version 1.0 and joining at 1.1 once the shakeout has happened, is a completely legitimate strategy. The people who are struggling right now, Nate suggested, are the ones saying “nope, not my thing, not engaging,” not the ones choosing deliberately where to focus.

On AI, anxiety, and not feeling like you’re behind

Dan closed with a section that felt necessary: acknowledgment that the current moment is genuinely overwhelming, and that AI fatigue is real even if nobody talks about it.

He referenced an Andrej Karpathy tweet about feeling like a powerful alien tool had been handed to everyone simultaneously without a manual, while a magnitude-9 earthquake is rocking the profession. Nobody knows how to hold it yet. The expectation that developers should now be 10x as productive is not a reality for most people. They’re still learning the tools, still figuring out what works, still dealing with the new cognitive load of evaluating AI output on top of doing the actual work.

His practical guidance on where to start, because the list of things you’re “supposed to know” (MCP, evals, prompt chaining, vibe coding, function calling, embeddings, constitutional AI, token sampling, and so on) is legitimately intimidating:

Start with playing with multiple models. Try the same prompt in Claude, Gemini, GPT. Notice the differences. That alone builds intuition. Then understand context and memory. What are the limitations of these systems, and how do you work within them? Then tools: the idea that you can give an LLM access to actions in the world. Then MCP servers as a way of packaging that capability. Then, eventually, agents and agentic workflows. But not before the foundational layers make sense.

And critically: don’t let someone else’s advanced workflow make you feel behind. The Boris-at-Anthropic-running-five-Claude-Code-instances workflow exists in a context you don’t share. Build your own relationship with these tools from wherever you actually are.

The closing argument

Nate closed the day, and I want to quote him here as directly as I can from my notes, because the framing was right:

“Fundamentals will always serve you well. I am adamantly of the opinion that they are even more important now than they were five years ago, and I thought they were pretty damn important five years ago when we started this book.”

Two mindsets available to you: define yourself by what you’ve done in the past, or define yourself by the problems you’re going to solve in the future. Reactive or proactive. Either way, change is coming. It always has been. He’s been doing this for almost thirty years and has not yet seen an instance where the industry just… stopped. The pendulum swings, the landscape shifts, and the people who navigate it best are the ones who maintain the fundamentals while staying curious enough to pick up the new tools.

He admitted he’s nervous about the cohort of people entering the industry right now: the steep drop in junior hiring, the Stanford placement numbers, the companies that have convinced themselves AI obsoletes entry-level work. But he thinks the snapback is coming. We need juniors to become seniors. Seniors don’t appear from nowhere. At some point, that math becomes undeniable.

His last line stuck with me: “I’d rather be the lead sled dog, because at least the view changes.”


Notes from Schutta and Vega’s Arc of AI Workshop, part 3: Clean code, influence skills, and why your legacy code pays the bills


Here’s part three of several notes from the all-day session; you might want to get a coffee for this one.

Here are links to my previous notes:


Start with the big picture before you touch anything

After lunch, Nate and Dan shifted gears from the big themes of reading code and navigating unfamiliar systems into something more granular: what actually makes code good, how to work with the humans around that code, and why the people problems in software are harder than the technical problems. If Part 1 was the philosophical case for fundamentals and Part 2 was about reading and navigating code, Part 3 was the craft and culture of actually writing it well – and getting your organization to care.

Dan opened this segment with a point that gets skipped constantly: before diving into a codebase, understand why it exists. Who are the stakeholders? What does this project mean to the business? Who are the actual humans using it?

He made a point I appreciated: LLMs can’t produce empathy. They can describe a system, but they can’t tell you that the insurance claims processing app you think is boring is the thing that determines whether a family gets their house repaired after a flood. That kind of context changes how carefully you work.

On documentation: read it, but don’t treat it as gospel. Dan spent three days once trying to understand a complex system by carefully reading what he thought was current documentation, then discovered it was two major versions out of date. The code had been completely rewritten. His rule: documentation can lie, but code never does. Read both, verify what’s actually running, and don’t be afraid to ask a colleague for three minutes of context before burning three days spinning your wheels.

He also made a point about documentation as an opportunity: if there isn’t much of it, that’s your chance to contribute right away. Your fresh perspective on an underdocumented system is genuinely valuable; you’ll notice things longtime contributors have stopped seeing.

Navigating unfamiliar code: entry points and mental models

Dan walked through his framework for getting oriented in a large, unknown codebase. The key concept: find the entry points. In Java, that’s the main method. But more broadly, it’s anything that answers “how does something get into this system?” – public APIs, web UIs, event handlers, message consumers, scheduled tasks, lifecycle hooks.
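In code terms, the entry-point checklist might look like this sketch. The class and method names are hypothetical, and the Spring annotations are named in comments rather than imported so the example stays dependency-free:

```java
// Hypothetical sketch of the kinds of entry points to look for in a codebase.
public class OrderSystemEntryPoints {

    // 1. The process entry point: how does the JVM start?
    public static void main(String[] args) {
        String result = new OrderSystemEntryPoints()
                .handleCreateOrder("{\"sku\":\"A-1\"}");
        System.out.println(result);
    }

    // 2. A public API: how do requests get in? (@PostMapping in Spring)
    public String handleCreateOrder(String requestBody) {
        return "order-created";
    }

    // 3. A message consumer: what events flow in? (@KafkaListener or similar)
    public void onPaymentEvent(String event) { /* ... */ }

    // 4. A scheduled task: what runs without anyone asking? (@Scheduled)
    public void nightlyReconciliation() { /* ... */ }
}
```

Tracing forward from each of these, rather than reading files alphabetically, is the difference between exploring a system and wandering it.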

If you don’t know what questions to ask, you can’t ask them, whether of a teammate, or of an AI. That’s the part that requires actual knowledge. Once you know you’re looking for entry points, you can use AI tools to help find them. Without that conceptual frame, you’re just asking “what does this do?” and hoping for a useful answer.

From there, he talked about building mental models. Not necessarily elaborate UML diagrams, but some kind of internal representation of how the system works. A sketch on paper. A flow chart from entry point to output. Something that externalizes the structure so you can reason about it and share it with someone else who can tell you what’s missing.

Nate added something I want to highlight: AI tools can tell you what code is doing, but they still can’t tell you why it’s doing it. That gap between the code’s behavior and the intent behind it is where human expertise lives. The code may be technically correct and historically wrong, a deliberate workaround that made sense in 2014 that nobody documented.

Make changes carefully, incrementally, and reversibly

Nate was emphatic on this: when you’re modifying existing code, especially under time pressure, make small, reversible changes. Not 3,000-line PRs. Not agents running loose making sweeping modifications. Atomic commits, each representing one logical change, that can be understood, reviewed, and reverted independently.

His version control points were basic but worth restating:

  • Commit frequently, not in massive batches
  • Write meaningful commit messages (this is, he admitted, something he now largely delegates to AI – letting it summarize what he changed before committing)
  • You are accountable for every PR you submit, regardless of whether you or an agent wrote the code

That last point deserves emphasis. Dan was clear: “If I have questions about a PR, you better be able to answer them. You can’t just say ‘my AI did it.’ You have to understand these decisions.”

He also raised a thought experiment worth sitting with: imagine your boss tells you to take Friday off, and over the long weekend, an AI agent will be let loose on your most critical production system: fixing bugs, adding features. You’ll review what it did on Monday. Are you excited about the three-day weekend, or terrified?

If your answer is “terrified,” that’s the correct answer. And the reason you’re terrified points directly to the value of the fundamentals: documentation, tests, diagrams, clear architecture. Those are the things that make an AI’s work reviewable rather than a mystery you have to reverse-engineer.

What makes code good (and bad)

This section was dense. The key ideas, in rough sequence:

  • The Ikea effect and code ownership. Nate: “Every one of you has looked at some code and uttered some variant of ‘what idiot wrote this,’ only to realize you were the idiot who wrote it a couple months ago.” We value our own code more than we should. Code reviews exist partly as a corrective for this.
  • Languages are tools, not identities. Both Nate and Dan are Java Champions, and both were clear: Java is just a tool, not a religion. The Blub Paradox (from Paul Graham) explains why developers get dogmatic: you can’t easily see the limitations of your chosen language because it’s your baseline for normal. AI tools are helping break this a bit; they’re using more languages and frameworks than they used to, and that breadth makes them better programmers.
  • The lazy programmer ethos is real and good. Before writing code, spend 20 minutes making sure someone else hasn’t already solved this. Use language features before reaching for a library. Use a library before writing your own. Dan told a great story about being new to a project, discovering a utility function that took 14 parameters just to capitalize a string, and quietly using the built-in string method instead, then watching the entire senior team’s heads explode when he revealed this in a meeting. The built-in had been there for years. Nobody had looked.
  • Lines of code is a terrible metric. Dan said this directly: shipping 37,000 lines of code is not an accomplishment. Code is a liability. More code means more surface area for bugs, more maintenance, more complexity for the next person (including future you). The vibe coding community’s tendency to measure apps by lines of code is backwards. Code deleted is almost always the better outcome.
  • Cyclomatic complexity matters. This came up repeatedly. Nate’s heuristics: low single digits is good, high single digits means you should be actively refactoring, double digits means it’s time to leave the project. He mentioned encountering real production code – written by a human – with a cyclomatic complexity of 82. The brackets were labeled “start for loop one / end for loop one” just to keep track. Not good. The punchline about cyclomatic complexity as a guardrail for AI agents was sharp: if you don’t give an agent a directive like “cyclomatic complexity must stay below four,” it won’t apply that constraint. And if you don’t know what cyclomatic complexity is, you won’t know to ask. Tools like SonarQube, PMD, and the memorably-named CRAP metric (Change Risk Anti-Patterns: cyclomatic complexity versus code coverage) can help enforce this, but only if someone with the knowledge sets them up.
  • Short methods, high cohesion, low coupling. Nate: “A method should do one thing and do it very, very well. This is the concept behind Unix piping: simple things together to get more complicated results.” That said, he also added the counterpoint: don’t favor brevity over clarity. A one-liner that nobody can understand in six months is worse than three readable lines.
  • AI tends toward verbosity and complexity. Both speakers noted that AI coding assistants have a strong bias toward writing more code rather than less, toward adding dependencies rather than using what’s already there, and toward long methods rather than short ones. They will solve the problem – but they won’t necessarily solve it simply. That instinct toward simplicity has to come from you, either as a direct code reviewer or as someone who knows how to write good prompts and capability directives.
  • Composition over inheritance. Dan mentioned this as a persistent AI failure mode: models trained on years of Java code have learned the “create a service interface and one implementation even when you’ll never have a second implementation” pattern because it was ubiquitous. That doesn’t mean it’s good. It just means it’s common in the training data.
  • Copies of copies degrade. Nate made a point I hadn’t heard framed quite this way: if vibe-coded projects proliferate on the internet, and future models are trained on that code, the training data quality decreases. Models training on AI-generated output of questionable quality will produce AI-generated output of worse quality. We’re already seeing this in written content on LinkedIn and elsewhere. We should expect to see it in code.
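To make the cyclomatic complexity point concrete, here is a before/after sketch of my own (not an example from the workshop). Each decision point (`if`, `&&`, `case`) adds a branch; extracting small single-purpose methods keeps each method in the low single digits Nate recommends:

```java
// Illustrative only: one gnarly method vs. small extracted methods.
public class ShippingRules {

    // Before: one method carrying every decision, cyclomatic complexity ~7.
    static double quoteBefore(boolean inStock, boolean paid, int weightKg,
                              String country) {
        if (inStock && paid && weightKg > 0 && weightKg <= 30) {
            if (country.equals("US")) {
                return weightKg * 1.5;
            } else if (country.equals("CA")) {
                return weightKg * 2.0;
            } else {
                return weightKg * 4.0;
            }
        }
        return -1; // sentinel for "cannot ship"
    }

    // After: each method does one thing; complexity stays in low single digits.
    static boolean canShip(boolean inStock, boolean paid, int weightKg) {
        return inStock && paid && weightKg > 0 && weightKg <= 30;
    }

    static double ratePerKg(String country) {
        return switch (country) {
            case "US" -> 1.5;
            case "CA" -> 2.0;
            default -> 4.0;
        };
    }

    static double quote(boolean inStock, boolean paid, int weightKg, String country) {
        if (!canShip(inStock, paid, weightKg)) return -1;
        return weightKg * ratePerKg(country);
    }

    public static void main(String[] args) {
        System.out.println(quote(true, true, 10, "US"));
    }
}
```

The behavior is identical; the difference is that `canShip` and `ratePerKg` are each individually testable, reviewable, and small enough for the next reader to hold in their head.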

Heritage code, not legacy code

One small reframing that I liked: Dan suggested we call it “heritage code” instead of “legacy code.” Legacy has a negative connotation. But code that’s been in production for fifteen years and processed billions of dollars of transactions is an achievement. It deserves some respect.

That said, Nate was clear: all code eventually becomes legacy. Sometimes immediately after you commit it. It will live longer than you expected, will be harder to kill than you hoped, and someone will be maintaining it years after you’ve moved on. Write with that person in mind.

His favorite version of this sentiment, which he attributed to someone else: “Always write code as if the person maintaining it is a homicidal maniac who knows where you live.”

The influence skills nobody taught you

The final section of this part of the workshop took a hard turn into territory that software engineering curricula almost never cover (but is a key part of my developer advocate work): how to actually get things done in organizations full of humans with competing incentives.

Nate’s thesis: the hardest problems in software are people problems, not technical problems. And the skills to navigate people problems: influence, empathy, listening, finding common ground; all of these don’t come with a CS degree.

He recommended How to Win Friends and Influence People by Dale Carnegie without apology. “It is older than everyone in this room. It is evergreen. I guarantee it will help your career.” The book is about understanding what people actually need versus what they’re saying they need, and how to align your goals with theirs.

On the current AI mandate situation specifically, he offered a practical frame: many senior leaders have “establish AI across our workforce” as a KPI tied to their bonus. They don’t necessarily care how you use AI. They need to be able to say you’re using it. If you can give them a win, a story they can tell upward, they will largely leave you alone about the details. Fill the vacuum with your own narrative or someone else will fill it with token counts.

Two approaches to influence:

  1. The hammer approach: brute-force people into agreeing with you. Works occasionally, burns trust, creates enemies.
  2. The ninja approach: make it their idea. Nate told a story about introducing TDD at a company that had rejected it when he first proposed it. He convinced one tech lead (who happened to be named Jeff, continuing the workshop’s running bit about terrible variable names) to adopt it on his team. When crunch time arrived and Jeff’s team was calmly fixing small issues while everyone else was drowning in defects, Jeff presented the same TDD case to the wider team – and got a standing ovation. Nate, who had proposed the same thing months earlier and been ignored, got no credit. But the practice got adopted. That was the goal.

His point: being the new person with the right answer is often less effective than being the connector who gets the right answer into the right person’s mouth. Letting go of the credit is a skill. It’s not a natural skill. Practice it anyway.

Code reviews: the underrated force multiplier

The workshop closed this segment with code reviews, and both speakers were emphatic that these matter more in an AI-augmented world, not less. When agents are generating PRs, someone with judgment still has to review them, and that reviewer has to understand the code well enough to ask real questions.

Some norms they pushed:

  • No snarky comments. Ever. They are not useful, they’re not clever, and everyone can see what you’re doing.
  • No 3,000-line PRs. Reviewers should refuse to engage with them.
  • Assume positive intent. You don’t know what’s happening in someone’s life. The code that looks lazy might have constraints you’re unaware of.
  • Ask questions instead of making proclamations. “Did you consider what happens when user load ramps up?” is better than “this won’t scale.” Especially when you haven’t done the math.
  • You are not your code. Code reviews are opportunities to improve the work, not indictments of your worth as a person.

Nate’s read on the current state of code reviews: PRs have made the process much more accessible than the old scheduled review meeting, but have also introduced review theater – someone clicking “approved” without looking because it’s in the process checklist. The form without the substance.

Dan’s suggestion: use AI to help you understand PRs before reviewing them. Give it the PR description and ask it to explain what’s actually changing and why. You’ll ask better questions.


Notes from Schutta and Vega’s Arc of AI Workshop, part 2: Reading code is a superpower, and we were never taught it


Here’s part two of several notes from the all-day session; you might want to get a coffee for this one. You can read the previous set of notes here.


How you got here doesn’t matter. That you got here does.

Nate and Dan presenting, with a slide that reads “Ultimately it is about problem solving, tinkering, creativity”

After the first break, Nate and Dan shifted from the big-picture AI discourse into something more concrete: the actual craft skills that make a software engineer, and why those skills are becoming more important in an AI-augmented world, not less.

Nate opened this segment by talking about the different paths into software engineering (the traditional CS degree, boot camps, self-taught) and making a point I think deserves wider circulation: there is no canonical path, and apologizing for yours is a waste of energy.

What matters, in his view, isn’t the credential. It’s whether you have the tinkering mindset. Whether you’ve gone to sleep thinking about a problem and woken up with the answer. Whether you look at a broken thing and feel the pull to understand why it’s broken.

He also made an honest admission about what CS programs are actually designed to do: prepare you for graduate school in computer science. That means algorithms, compiler theory, operating systems, language design. Practically useful for building production software? Debatable. Practically useful for becoming a researcher? Yes. Boot camps swing hard the other way – framework-heavy, language-focused, get-you-hired in 12 weeks – which means they’re also somewhat transitory, because the framework of the moment changes every six months.

Neither path gives you everything. That gap between “what we taught you” and “what I want you to know when you join my project” is basically what their book is trying to fill.

The skill we teach least is the one we use most: reading code

This was the section that hit me hardest, because I’ve thought about it before and never heard it stated this cleanly.

Nate’s observation: we teach people to write code almost exclusively. We spend essentially zero time teaching people to read code. And yet, in any real production environment, the ratio of reading to writing is not even close. You spend far more time navigating, understanding, and reasoning about existing code than you do creating new code from scratch.

His analogy: “I wouldn’t teach you French by saying, now go write some French.”

Reading code is hard for a few compounding reasons. You have to understand the problem domain (which is often genuinely complex – he gave examples from finance and insurance where the business rules alone are labyrinthine). You have to see the code through another person’s mental model. And you often have to do this under time pressure, making changes you don’t fully understand, in systems you weren’t around to watch grow.

The result is what Nate called “patches on top of patches on top of patches,” and the remarkable thing isn’t that these systems have bugs, it’s that they work at all.

There’s also the cognitive bias dimension. The IKEA effect: you value things you assembled yourself more than things someone else built, which means you’re inclined to view your own code as cleaner and more sensible than others’. The mere exposure effect: familiarity breeds preference, which is why developers get dogmatic about languages – not because their preferred language is objectively superior, but because it’s the one they know.

Nate had a great riff here about what he called the Blub Paradox, from a Paul Graham essay: when you’re a programmer in a language somewhere on the power continuum, you look down the spectrum and think “I can’t imagine being productive with those limitations,” and you look up and think “I don’t know why anyone would need all that weird stuff I don’t have.” The language you know well becomes your baseline for what’s normal. AI tools, interestingly, may be helping break this a bit. He and Dan both noticed they’re using more languages and frameworks than they used to.

The Lab: Reading an unfamiliar codebase without AI first

Dan ran the group through a hands-on exercise using the Spring Pet Clinic, a well-known sample Java/Spring application. The instructions were deliberately old-school: no AI tools yet. Just open the repo and start reading.

The goal was to build some muscle memory around the basics: identifying technologies and frameworks from project structure alone, finding a main application class, recognizing architectural patterns just from folder layout.

It’s a more sophisticated skill than it sounds. Dan’s point: even if you’re not a Java developer, you can learn a lot from just looking at a pom.xml. You can infer architectural choices from package structure; “package by feature” versus “package by layer” tells you something about how the original authors thought about the system. You can spot where to start, what the domain objects are, how the system is organized.
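That kind of inference can even be mechanized. Here’s a minimal sketch of the idea – the pom.xml fragment is hypothetical (not the actual Pet Clinic file), and the `infer_stack` helper and its artifact-to-technology hints are made up for illustration:

```python
import xml.etree.ElementTree as ET

# Hypothetical pom.xml fragment -- not the actual Pet Clinic file.
POM = """<project xmlns="http://maven.apache.org/POM/4.0.0">
  <parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
  </parent>
  <dependencies>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-thymeleaf</artifactId>
    </dependency>
  </dependencies>
</project>"""

# Maven POMs are namespaced, so findall needs a namespace map.
NS = {"m": "http://maven.apache.org/POM/4.0.0"}

def infer_stack(pom_xml: str) -> list[str]:
    """Guess technologies from Maven artifact IDs alone."""
    root = ET.fromstring(pom_xml)
    artifacts = [a.text for a in root.findall(".//m:artifactId", NS)]
    hints = {
        "spring-boot": "Spring Boot application",
        "data-jpa": "relational persistence via JPA/Hibernate",
        "thymeleaf": "server-side HTML templating (not a SPA)",
    }
    return sorted({desc for a in artifacts
                   for key, desc in hints.items() if key in a})

for finding in infer_stack(POM):
    print(finding)
```

The point isn’t the script; it’s that the signal really is sitting right there in the dependency file, readable before you open a single class.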

After they’d done it manually, Dan switched to showing how AI tools handle the same task, specifically using a “plan mode” in his coding assistant where he wasn’t asking it to write anything, just to explain what it was looking at. The output was genuinely useful: a breakdown of the tech stack, architectural summary, entry points, dependency graph.

His key insight: “I use AI tools far more to read code, understand things, get familiar with things, and learn things than I do to write it.”

But then the follow-up, which is the important part: he wouldn’t have known what questions to ask the AI without the fundamentals. Understanding that architecture is a thing, that there are different ways to organize packages, that there’s something meaningful to look for in the dependency file; that knowledge has to come from somewhere. The AI accelerates the exploration; it doesn’t replace the ability to know what you’re looking for.

AI can tell you what code is doing. It still can’t tell you if that’s right.

This is where the conversation got interesting. Nate made a distinction that I think is underappreciated:

These tools are now remarkably good at reverse-engineering legacy code and telling you what it does. Feed it a 30-year-old COBOL module and it’ll give you a plain-English summary of the behavior. That’s genuinely powerful, especially for the mainframe migration work he mentioned in the morning session.

But “this is what the code is doing” is a completely different question from “is this what the code should be doing?”

He gave a real-world example: a system where some business logic was technically incorrect, but the error was intentionally corrected downstream in a different process. The code was wrong on purpose, because fixing it at the source would have required fixing everything else too. An AI reading that code would correctly describe the behavior, but have no way to know the behavior was a deliberate workaround rather than a bug.

That knowledge lives in the heads of the engineers who were there when the decision was made. And increasingly, as those engineers retire or move on, it’s not living anywhere.

The airline pricing example he used was perfect: the same seats, same flights, same dates — but booking as two one-ways costs a third less than booking as a round trip. There’s almost certainly a specific piece of business logic somewhere that creates that arbitrage. An AI can describe that code. It can’t tell you whether the Delta exec who approved it knew what they were approving.
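To make “a third less” concrete, here’s the arithmetic with made-up fares (Nate didn’t cite specific numbers):

```python
# Hypothetical fares illustrating the arbitrage Nate described.
round_trip = 900.00
one_way_out = 300.00
one_way_back = 300.00

two_one_ways = one_way_out + one_way_back  # 600.00
savings = round_trip - two_one_ways        # 300.00 -- a third less
print(f"Round trip: ${round_trip:.0f}, two one-ways: ${two_one_ways:.0f} "
      f"({savings / round_trip:.0%} cheaper)")
```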

The sentinel knowledge problem, part two

Nate returned to a theme from the morning: we are starving the pipeline that creates the experts who can actually evaluate AI output. But in this session, he made it more concrete.

Senior engineers look at AI-generated code and immediately spot the issues: the approach that’ll work in a demo but fall over at scale, the pattern that was idiomatic three major versions ago, the security implication nobody mentioned. Junior engineers look at the same code and think it looks fine, because they don’t yet have the experience to know what “fine” looks like.

The concerning dynamic: juniors are increasingly using AI to learn, but learning by accepting AI output without the ability to critique it isn’t learning. It’s cargo cult programming. You’re learning to produce things that look like code without developing the underlying judgment about whether those things are good.

Nate’s line: “AI is the very eager junior developer, and you need to monitor their output closely.”

The economics sidebar: tokens, budgets, and the reality of scale

This wasn’t on the agenda, but it came up organically and it was one of the more grounded conversations of the day.

Nate described a real situation: an organization’s head of AI was approached by a developer who wanted the unlimited Claude Code tier. When asked how many tokens he needed, the answer was 60,000 a day. The response: show me that you’re generating not $300K of business value weekly, but a million dollars. Can you do that? No? Then no.

The scaling math is uncomfortable. A room full of developers (say, 5,000 at a larger company) each burning hundreds or thousands of dollars of tokens per week is a significant line item. And the current pricing reflects a subsidized market. When investors start demanding returns, those prices go up.
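A quick back-of-the-envelope version of that line item, with every figure assumed for illustration rather than taken from the talk:

```python
# Hypothetical figures -- assumed for illustration only.
developers = 5_000                     # headcount at a larger company
token_spend_per_dev_weekly = 500.00    # dollars of tokens per developer
weeks_per_year = 52

weekly_bill = developers * token_spend_per_dev_weekly
annual_bill = weekly_bill * weeks_per_year
print(f"Weekly:  ${weekly_bill:,.0f}")   # $2,500,000
print(f"Annual:  ${annual_bill:,.0f}")   # $130,000,000
```

Even at a modest per-developer spend, the annual figure lands in CFO-attention territory, and that’s at today’s subsidized prices.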

He drew an analogy to the Uber model: lose money for years, drive out competition, then raise prices. Except Uber’s “product” (a car ride) is a commodity. The switching costs for enterprise AI tooling embedded into CI/CD pipelines, developer workflows, and institutional processes are not trivial.

His read on Anthropic’s and OpenAI’s revenue vs. profit numbers: revenue is real. Profitability is not. People are seeing value in the product, but the product is priced below cost. That’s not a sustainable business model, and the reckoning will come.

On whether we’ve hit a plateau

Someone in the room asked whether the intelligence improvements we saw around late 2024/early 2025 would continue.

Nate’s take: we’re probably hitting a plateau on pure scaling. The exponential gains from “just make the model bigger” appear to be diminishing. Gary Marcus’s position – that we’re approaching the limits of what scaling alone can achieve – strikes him as reasonable.

The “Mythos is so dangerous we can’t release it yet” announcements that keep appearing? He’s skeptical. Follow the incentives: the companies making those claims need their valuations justified.

He was slightly more philosophical about the longer tail – the sci-fi scenarios, the alignment concerns, the “what if it’s already smarter than it’s letting on” thread. He takes it seriously without catastrophizing. The honest version of his view: we don’t know what the motivations of these systems are, because the people who built them don’t fully understand how they work either. That warrants humility, not panic, but also not dismissal.

Bottom line from this session and the previous one

The throughline across the whole day, as best I can summarize it: these tools are genuinely powerful accelerants for people who already have the foundations. They are not a replacement for the foundations. They are an amplifier, and what you get out depends heavily on what you put in.

The code-reading skills, the domain understanding, the architectural instincts, the ability to ask the right questions – all of that still has to come from somewhere. What’s changed is that once you have it, you can go faster, do more, and explore more territory than you could alone.

That’s good. The part that’s bad is that we’re making decisions right now (who to hire, what to teach, what to outsource) based on the assumption that the foundations don’t matter anymore.

They matter. Probably more than they used to.

Categories
Artificial Intelligence Conferences

Notes from Schutta and Vega’s Arc of AI Workshop, part 1: The fundamentals still matter!

I caught the Fundamentals of Software Engineering in the Age of AI workshop yesterday at the Arc of AI conference’s workshop day, led by Nathaniel Schutta (cloud architect at Thoughtworks, University of Minnesota instructor) and Dan Vega (Spring Developer Advocate at Broadcom, Java Champion).

Nate and Dan are the co-authors of a book on the subject, Fundamentals of Software Engineering, and they’re out here workshopping the ideas with developers who are living through the same AI-saturated moment we all are.

Fair warning: this post is long. The session was dense, the conversation was good, and I took a lot of notes.

Here’s part one of several notes from the all-day session; you might want to get a coffee for this one.


The opening thesis: giving someone a nail gun doesn’t make them a carpenter

Nate opened with a confession: he’s not handy. At all.

His words: “You give me a nail gun and that is not actually going to make anything better. The cat’s gonna have a nail in its tail.”

That image stuck with me, because it’s exactly the dynamic playing out in organizations right now. Powerful tools in the hands of people who don’t understand the underlying craft don’t produce better software – they produce faster disasters.

Both Nate and Dan were quick to acknowledge that yes, things changed. Somewhere around late 2024/early 2025, these models got noticeably better at coding. Neither of them is dismissing that. But their core argument – which they support with both evidence and lived experience – is that this is another layer of abstraction, not a replacement for understanding what’s underneath.

A brief history of “this will replace programmers”

Slide: “Here we go again,” showing a list of technologies that were supposed to replace programmers

Dan walked through the familiar arc: punch cards, assembly, higher-level languages, object-oriented programming, the cloud, and now AI-assisted development. Each step, someone announced the death of the programmer. Each step, the programmer survived and became more productive.

COBOL was going to let business people write their own programs. JavaBeans were going to eliminate business logic development. No-code platforms were going to replace developers entirely. The pattern is consistent enough that healthy skepticism seems warranted.

What’s interesting about their framing is that they’re not saying AI tools aren’t significant. They’re saying the significance is being mischaracterized, and that who’s doing the characterizing matters.

Consider the source

This is where the talk got sharp. Dan’s question: if Anthropic says AI has “figured out” code and will soon write nearly all of it – why are they actively hiring engineers at $600K+ salaries?

Their breakdown of who’s claiming AI replaces developers:

  • The tool makers (Anthropic, OpenAI, etc.) – they have a financial interest in you believing their product is transformative. Grain of salt.
  • Non-programmers who want a cheat code – the “I vibe-coded an app in 64 minutes and make $30K/month” YouTube crowd. Grain of salt the size of a boulder.
  • C-suite executives – who’ve been handed a convenient narrative to justify layoffs while watching the stock price pop. Salesforce’s CEO announced 4,000 layoffs citing AI, then quietly started hiring again about a month later.

Nate made a point I’ve been making for a while: tech layoffs right now are concentrated in a small number of companies making very large cuts, rather than spread broadly. The psychological effect is outsized. Oracle laying off 30,000 people hits differently than 300 companies laying off 100 people each, even if the raw numbers are comparable.

Vibe coding: fun for weekend projects, terrifying for payroll

Slide: Andrej Karpathy’s original vibe coding tweet

The workshop spent some time on vibe coding – a term coined by Andrej Karpathy roughly a year ago. Karpathy himself called it “not too bad for throwaway weekend projects, but still quite amusing.”

Nate and Dan’s framing: the stakes matter. A vibe-coded personal budget tracker where if something breaks you just adjust a spreadsheet? Great. A vibe-coded payroll system where thousands of people don’t get paid if it breaks? Categorically different situation.

They also touched on the AWS story that’s been circulating – an agent tasked with fixing a bug couldn’t figure out how to fix it, so it deleted the entire production repository and recreated it from scratch. Which is, in a very literal sense, a solution. Just not one any human with experience would have suggested. As Dan put it: “Systems have no feelings. They have no experience of ‘wait, that doesn’t seem like a good idea.'”

The expertise gap problem

This was the section that hit hardest, and it connects to something Dan wrote about in an article he mentioned: when he uses AI to generate Spring/Java code – a domain where he has deep expertise – he can immediately spot the issues. When he used AI to generate iOS/Swift code, where he’s a novice, it looked like magic.

The issue isn’t that the code quality was different. The issue is that his ability to evaluate it was different. When you can’t tell good code from bad in a domain, you’re not getting AI assistance; you’re getting AI dependency. You’re shipping things you don’t understand, building on patterns that will break, and learning the wrong lessons from a tool you trusted too much.

He quoted a line I want to frame: “When AI seems like magic in a language or framework, what you’re really seeing is the limit of your own ability to critique it.”

We’re choking off the pipeline that creates experts

Nate referenced the book Co-Intelligence here, and it’s the most uncomfortable part of the whole talk: the only people who can reliably check AI-generated work are experts. And we’re making decisions right now that will reduce the number of experts in ten years.

Companies are not hiring junior developers. Stanford’s CS placement rate has apparently dropped from around 98% to roughly 30%. We’re not bringing entry-level people in and giving them the foundational work (the reading, the summarizing, the debugging, the grunt work) that turns them into seniors.

He made the comparison to the early-2000s “don’t get into software engineering, those jobs are all going overseas” era, which produced a generation-level gap in senior developers and architects that companies felt painfully five to ten years later.

And we’re doing it again. On purpose, this time, with AI as the cover story.

The mainframe migration moment

This was a tangent, but a good one. Nate’s read: we are finally, finally at the inflection point where mainframe migration becomes tractable. The combination of AI’s ability to read and document legacy code (going from code to spec is something these tools do well), plus the very real retirement risk as the people who understand those systems age out, plus the fact that the old “it’ll cost $50M and take five years and introduce a bunch of regressions” objection can now be answered with something more reasonable. All of that is converging.

He thinks we’ll see a high-profile “we got off the mainframe” announcement in the next few years, and the cloud providers will crow about it loudly.

The economics of AI tools deserve scrutiny

Nate got pointed here, and I think he’s right to. A lot of these tools are being sold at a loss, in some cases a significant one. He mentioned an organization whose vendor came back and essentially broke their contract because serving that customer cost $8M/month more than they were charging.

The concern isn’t that AI goes away. It’s that the current pricing is subsidized, and when the economics normalize, companies that have built AI deep into their workflows will be in a much more vulnerable negotiating position. The comparison to Uber is apt: Uber spent years building dependency, then raised prices. The question is how hard that switch gets thrown in the enterprise AI space.

The actual bottom line

Dan and Nate presenting, showing slide that says “I think what AI does quite frankly is reduce the floor and raise the ceiling for all of us.” — Satya Nadella

Dan closed with what I thought was the right framing: the floor has been lowered (more people can participate in building software) and the ceiling has been raised (experienced engineers can do more than ever before). Both of those things are true and good.

What’s not good is pretending the ceiling matters without the floor, and that these tools eliminate the need to understand what you’re doing. They don’t. They amplify what you already know. If you don’t know anything, they amplify that too.

Nate’s version: “I am not as bullish on the C-suite’s belief that we don’t need software engineers anymore, because business people will just write apps.”

He’s been watching business people almost-write-apps since COBOL. They haven’t quite gotten there yet.

Categories
Picdump

Saturday picdump for Saturday, April 11

Happy Saturday, everyone! Here on Global Nerdy, Saturday means that it’s time for another “picdump” — the weekly assortment of amusing or interesting pictures, comics, and memes I found over the past week. Share and enjoy!


Categories
Current Events Meetups Tampa Bay

Tampa Bay tech, entrepreneur, and nerd events list (Monday, April 13 – Sunday, April 19)

Here’s what’s happening in the thriving tech scene in Tampa Bay and surrounding areas for the week of Monday, April 13 through Sunday, April 19!

This list includes both in-person and online events. Note that each item in the list includes:

✅ When the event will take place

✅ What the event is

✅ Where the event will take place

✅ Who is holding the event

This week’s events

Monday, April 13

Event name and location Group Time
Venice Area Toastmasters Club #5486
Online event
Toastmasters District 48 7:30 AM to 9:00 AM EDT
Online: Streaming Live Video with OBS
Online event
Orlando Video & Post Production Meetup 2:00 PM to 3:00 PM EDT
Create a Third-Person Game 10 parts Class Series – Part 6
Online event
Orlando Unity Developers Group 4:30 PM to 6:00 PM EDT
Unity: Create a Third-Person Game 10 parts Class Series – Part 6
Online event
Orlando Game Developers Meetup 4:30 PM to 6:00 PM EDT
Tea Tavern – Dungeons and Dragons
Tea Tavern Dungeons and Dragons Meetup Group – DMS WANTED 6:00 PM to 11:00 PM EDT
CorelDraw Academy
MakerSpace Pinellas
Makerspaces Pinellas Meetup Group 6:00 PM to 8:00 PM EDT
Prep for AI Vibe coding – prepping our database
Online event
Tampa AI Applications Meetup Group 6:00 PM to 7:00 PM EDT
TBDEG – Getting Started in Data Engineering: The Basics
Online event
Tampa Bay Data Engineering Group 6:00 PM to 7:00 PM EDT
Sarasota Blood on the Clocktower
Clocktower meetup
Board Games and Card Games in Sarasota & Bradenton 6:00 PM to 10:00 PM EDT
MTG: Commander Night
Critical Hit Games
Critical Hit Games 6:00 PM to 11:00 PM EDT
Food, Fun & Games!
Village Inn
Gulfside Gatherings 6:00 PM to 8:00 PM EDT
Toast of Lakewood Ranch Toastmasters Club
Lakewood Ranch Town Hall
Toastmasters District 48 6:30 PM to 7:30 PM EDT
North Port Toastmasters Meets Online!!
Online event
Toastmasters District 48 6:30 PM to 8:00 PM EDT
Adult Dungeons & Dragons One-Shot Campaigns at Conworlds Emporium
Conworlds Emporium
Tarpon Springs Community Fun & Games 6:30 PM to 10:00 PM EDT
Let’s Talk Toastmasters
Online event
Toastmasters Divisions C & D 7:00 PM to 8:30 PM EDT
DigiMondays
Sunshine Games | Magic the Gathering, Pokémon, Yu-Gi-Oh!
Sunshine Games 7:30 PM to 9:30 PM EDT
Weekly General Meetup
Online event
Beginning Web Development 8:00 PM to 9:00 PM EDT
Where is Bitcoin Going?
Online event
Bitcoiners of Southwest Florida 9:00 PM to 10:00 PM EDT
Return to the top of the list

Tuesday, April 14

Event name and location Group Time
Top Interview Questions: What They Mean & Why They’re Asking
Online event
Tampa Cybersecurity Training 10:00 AM to 11:00 AM EDT
Create a Third-Person Game 10 parts Class Series – Part 7
Online event
Orlando Unity Developers Group 4:30 PM to 6:00 PM EDT
Unity: Create a Third-Person Game 10 parts Class Series – Part 7
Online event
Orlando Game Developers Meetup 4:30 PM to 6:00 PM EDT
Build with AI: The “Pantry Pilot” — Vision-to-Action with Gemini 3
Online event
Gdg Ocala 6:00 PM to 7:30 PM EDT
Spanglish Toastmasters Club 7703731
Online event
Toastmasters Division G 6:00 PM to 7:30 PM EDT
Disney Lorcana Night
Critical Hit Games
Critical Hit Games 6:00 PM to 11:00 PM EDT
Hobby Night
Critical Hit Games
Critical Hit Games 6:00 PM to 11:00 PM EDT
April Critique Night
Tap Room at the Hollander Hotel
Creative Writers Support Group 6:00 PM to 8:00 PM EDT
Pinellas Writers and Authors Weekly Meeting (Online/Zoom)
Online event
Pinellas Writers Group 6:00 PM to 9:00 PM EDT
D&D @ Critical Hit Games (Full)
Critical Hit Games
RPG-Pinellas 6:30 PM to 11:00 PM EDT
Tuesday Night Trivia at Henderson’s Kitchen and Bar
Henderson’s Bar & Kitchen
Gen Geek 6:30 PM to 9:30 PM EDT
Let’s Meetup and Discuss “Local Woman Missing” by Mary Kubica
American Social Orlando
Central Florida Books and Brews 6:30 PM to 8:30 PM EDT
The Sarasota Creative Writers
Sarasota Alliance Church
The Sarasota Creative Writers Meetup Group 6:30 PM to 9:30 PM EDT
Virtual Poetry Write In
Online event
We Write Here Black and Women of Color Writing Group 6:30 PM to 8:30 PM EDT
AI Topics — What is machine learning? A high level overview.
Online event
The Infinite Loop Lounge 7:00 PM to 8:00 PM EDT
[Virtual] Tampa Bay Bitcoin Meetup: News, Markets, & Community
Online event
Tampa Bay Bitcoin 7:00 PM to 9:00 PM EDT
Winter Springs Toastmasters Club
Online event
Toastmasters Divisions C & D 7:00 PM to 8:15 PM EDT
St. Pete Beers ‘n Board Games Meetup for Young Adults
Pinellas Ale Works Brewery
St. Pete Beers ‘n Board Games for Young Adults 7:00 PM to 10:00 PM EDT
Trivia Nights @ Escape Brewing Company – Trinity
Escape Brewing Company
Tampa Bay Area Trivia Players 7:00 PM to 9:00 PM EDT
Yu-Gi-Oh Evening Tournament
Sunshine Games | Magic the Gathering, Pokémon, Yu-Gi-Oh!
Sunshine Games 7:00 PM to 11:00 PM EDT
Nic At Nite – Weekly Movie Night
Online event
Nerdbrew Events 7:30 PM to 9:30 PM EDT
Trading Tuesday
Online event
Bitcoiners of Southwest Florida 8:00 PM to 9:00 PM EDT
Return to the top of the list

Wednesday, April 15

Event name and location Group Time
Magic Pioneer Event
Sunshine Games 7:00 PM to 10:30 PM EDT
LinkedIn Local Tampa Bay
Tech Success Network 8:00 AM to 9:30 AM EDT
Computer Repair Clinic
2079 Range Rd
Tampa Bay Technology Center 8:30 AM to 12:30 PM EDT
Create a Third-Person Game 10 parts Class Series – Part 8
Online event
Orlando Unity Developers Group 4:30 PM to 6:00 PM EDT
Unity: Create a Third-Person Game 10 parts Class Series – Part 8
Online event
Orlando Game Developers Meetup 4:30 PM to 6:00 PM EDT
40k Escalation League
Battlebrush Games
Battlebrush Games: Paint Minis & Play Warhammer/Warmachine 5:00 PM to 9:00 PM EDT
CNC Wednesday’s
MakerSpace Pinellas
Makerspaces Pinellas Meetup Group 5:30 PM to 7:30 PM EDT
Chess Night at Conworlds Emporium Every Wednesday
Conworlds Emporium
Tarpon Springs Community Fun & Games 5:30 PM to 7:00 PM EDT
Orlando Chess Association
West Osceola Library
Greater Orlando Chess 5:30 PM to 8:30 PM EDT
Vibe Coding with Bolt.new
Hillsborough County ECC
Tampa AI Applications Meetup Group 6:00 PM to 8:00 PM EDT
3D Printing Orientation: Models and Slicers
Tampa Hackerspace 7:00 PM to 9:00 PM EDT
Casual Commander Wednesdays
Sunshine Games | Magic the Gathering, Pokémon, Yu-Gi-Oh!
Sunshine Games 6:00 PM to 11:00 PM EDT
Board Game Night
Critical Hit Games
Critical Hit Games 6:00 PM to 11:00 PM EDT
CigarCitySec Meetup
Cigar City Brewing
Central Florida CitySec 7:00 PM to 10:00 PM EDT
Apopka Foliage Toastmasters
Online event
Apopka Foliage Toastmasters 7:00 PM to 8:30 PM EDT
ONLINE / SPANISH: EPICTETO DISERTACIONES POR ARRIANO
Online event
Orlando Stoics 7:00 PM to 8:30 PM EDT
Cardfight Vanguard!! OverDress Weekly
Sunshine Games | Magic the Gathering, Pokémon, Yu-Gi-Oh!
Sunshine Games 7:30 PM to 9:30 PM EDT
Game Night!
magnanimous
Tampa 20’s and 30’s Social Crew 7:30 PM to 9:30 PM EDT
Return to the top of the list

Thursday, April 16

Event name and location Group Time
1 Million Cups St. Petersburg
The Greenhouse
1 Million Cups 9:00 AM
1 Million Cups Tampa
Entrepreneur Collaborative Center
1 Million Cups 9:00 AM
Online: Streamlabs Basics for YouTube Live
Online event
Orlando Video & Post Production Meetup 1:00 PM to 2:30 PM EDT
Sarasota Speakers Exchange Toastmasters
Online event
Toastmasters District 48 12:00 PM to 1:00 PM EDT
Online: Adobe Premiere Level 1
Online event
Orlando Video & Post Production Meetup 3:00 PM to 4:30 PM EDT
Create a Third-Person Game 10 parts Class Series – Part 9
Online event
Orlando Unity Developers Group 4:30 PM to 6:00 PM EDT
Unity: Create a Third-Person Game 10 parts Class Series – Part 9
Online event
Orlando Game Developers Meetup 4:30 PM to 6:00 PM EDT
Omni Toastmasters Club 6861
Online event
Toastmasters Divisions C & D 5:45 PM to 7:00 PM EDT
Tampa SEO and Digital Marketing Meetup with Steve Scott
Online event
Tampa SEO and Digital Marketing Meetup with Steve Scott 6:00 PM to 8:00 PM EDT
Board Game Night
Conworlds Emporium
Tarpon Springs Community Fun & Games 6:00 PM to 9:00 PM EDT
Warhammer Night
Critical Hit Games
Critical Hit Games 6:00 PM to 11:00 PM EDT
Lean Beer for All Things Agile (Tampa)
Wild Rover Brewery
Tampa Bay Agile 6:30 PM to 8:30 PM EDT
START YOUR OWN SIDE GIG! Small Business Thursdays!
MakerSpace Pinellas
Makerspaces Pinellas Meetup Group 6:30 PM to 8:30 PM EDT
April Discussion and Q&A
Online event
Bitcoin Orlando (and Worldwide) 6:30 PM to 8:00 PM EDT
Sip and Share: Poetry
Online event
We Write Here Black and Women of Color Writing Group 6:30 PM to 8:30 PM EDT
FABulous Thursdays
Sunshine Games | Magic the Gathering, Pokémon, Yu-Gi-Oh!
Sunshine Games 7:00 PM to 11:00 PM EDT
One Piece Thursdays
Sunshine Games | Magic the Gathering, Pokémon, Yu-Gi-Oh!
Sunshine Games 7:00 PM to 10:00 PM EDT
Pathfinder Society
Critical Hit Games
Critical Hit Games 7:00 PM to 10:00 PM EDT
Live streaming production and talent
124 S Ring Ave
Live streaming production and talent 7:00 PM to 9:00 PM EDT
Thursday Tacos & Tax Write Offs
Online event
Nerdbrew Events 7:30 PM to 10:30 PM EDT
Weekly Hacks
Online event
Hacktivate – Hackathon Meetup Group 8:00 PM to 9:00 PM EDT
Return to the top of the list

Friday, April 17

Event name and location Group Time
How did our nation end up in a civil war?
Pages and Plates Book Club 6:30 PM to 8:30 PM EDT
Osceola Toastmasters Club
Kissimmee Utility Authority (KUA)
Toastmasters Division E 7:30 AM to 9:00 AM EDT
Computer Repair Clinic
2079 Range Rd
Tampa Bay Technology Center 8:30 AM to 12:30 PM EDT
Create a Third-Person Game 10 parts Class Series – Part 10
Online event
Orlando Unity Developers Group 4:30 PM to 6:00 PM EDT
Unity: Create a Third-Person Game 10 parts Class Series – Part 10
Online event
Orlando Game Developers Meetup 4:30 PM to 6:00 PM EDT
Age of Sigmar: Escalation League
Battlebrush Games
Battlebrush Games: Paint Minis & Play Warhammer/Warmachine 5:00 PM to 9:00 PM EDT
Friday night games!
Cozy dragon Games
Cozy Dragon Meetups! 5:00 PM to 10:00 PM EDT
Friday Board Game Night
Bridge Club
Tampa Gaming Guild 5:30 PM to 11:00 PM EDT
Everyday AI: Stranger to Companion | Apr 17–19 | 5 Sessions
EveryDay AI Learning & Social Meetup Group 6:00 PM to 8:00 PM EDT
MTG: Commander FNM
Critical Hit Games
Critical Hit Games 6:00 PM to 11:00 PM EDT
“On Anger” – Seneca, Finishing Book 3 & Closing
The Skills Center
Tampa Stoics 6:30 PM to 8:30 PM EDT
Taps & Drafts | EDH/MtG Night
1Up Entertainment, Tampa
Nerdbrew Events 7:00 PM to 9:00 PM EDT
Modern FNM
Sunshine Games | Magic the Gathering, Pokémon, Yu-Gi-Oh!
Sunshine Games 7:00 PM to 10:30 PM EDT
The Practicing Stoic: A 13-Week Online Discussion Series
Online event
Orlando Stoics 7:00 PM to 8:30 PM EDT
Friday Pokemon Tournament
Sunshine Games | Magic the Gathering, Pokémon, Yu-Gi-Oh!
Sunshine Games 7:30 PM to 11:30 PM EDT
Return to the top of the list

Saturday, April 18

Event name and location Group Time
April Book Club
Tampa Bay Women’s Book Club Meetup Group 1:00 PM to 3:00 PM EDT
Hunters Creek Toastmasters
Hart Memorial Library 2nd Floor
Toastmasters Division E 9:30 AM to 11:00 AM EDT
EZ Stock (Stock, Options, Market)
2079 Range Rd
Tampa Bay Technology Center 10:00 AM to 12:00 PM EDT
Mini Con (TCG Card show and Kawaii market)
The Castle Hotel
Gen Geek 10:00 AM to 4:00 PM EDT
Come and Hang Out
Panera Bread
Windermere Writers Group 10:00 AM to 12:00 PM EDT
Torchbearer One-Shot: Dread Crypt
Emerald City Comics 4902 113th Ave N, Clearwater, Florida 33760
St Pete and Pinellas Tabletop RPG Group 11:30 AM to 3:30 PM EDT
Welding basics
Makerspaces Pinellas Meetup Group 12:00 PM to 2:00 PM EDT
Youth Dungeons & Dragons Saturdays (Ages 7–12) at Conworlds Emporium
Conworlds Emporium
Tarpon Springs Community Fun & Games 2:00 PM to 5:00 PM EDT
FREE Fab Lab Orientation
Faulhaber Fab Lab
Suncoast Makers 1:30 PM to 2:30 PM EDT
D&D (5e) @ Black Harbor Gaming (FULL)
Black Harbor Gaming
St Pete and Pinellas Tabletop RPG Group 1:30 PM to 5:30 PM EDT
Saturday Chess @ Cozy Kava St. Pete
Cozy Kava
Chess Republic 2:00 PM to 5:00 PM EDT
1776 by David McCullough
New World Tampa
Tampa Book Club – Award-Winning Books 3:00 PM to 5:00 PM EDT
Playing Nintendo Games (Nintendo Switch and Switch 2)
Online event
Nintendo Meetup Central Florida 3:25 PM to 5:25 PM EDT
Tech in Full Effect | Presented By Lite Technology Solutions x CiviWave
Tech in Full Effect 3:30 PM to 6:00 PM EDT
Game Project Therapy (Virtual)
Online event
Tampa Games Developer Guild 4:00 PM to 6:00 PM EDT
Warmachine Journeyman League
Battlebrush Games
Battlebrush Games: Paint Minis & Play Warhammer/Warmachine 5:00 PM to 9:00 PM EDT
Dave and Busters Game Night
Dave & Busters
Gen Geek 6:00 PM to 11:00 PM EDT
Yu-Gi-Oh Evening Tournament
Sunshine Games | Magic the Gathering, Pokémon, Yu-Gi-Oh!
Sunshine Games 7:00 PM to 11:00 PM EDT
From Zero to Crypto: Trading & Digital Business Meetup
Online event
Crypto Visionaries Meetup 9:00 PM to 11:00 PM EDT

Sunday, April 19

Event name and location Group Time
CorelDraw Academy
MakerSpace Pinellas
Makerspaces Pinellas Meetup Group 12:00 PM to 3:00 PM EDT
Sunday Gaming
Tampa Bay Bridge Center
Tampa Gaming Guild 1:00 PM to 11:00 PM EDT
Sunday Chess at Wholefoods in Midtown, Tampa
Whole Foods Market
Chess Republic 2:00 PM to 5:00 PM EDT
D&D Adventurers League
Critical Hit Games
Critical Hit Games 2:00 PM to 7:30 PM EDT
Traveller – Science Fiction Adventure RPG
Black Harbor Gaming
St Pete and Pinellas Tabletop RPG Group 3:00 PM to 6:00 PM EDT
Sunday Pokemon League
Sunshine Games | Magic the Gathering, Pokémon, Yu-Gi-Oh!
Sunshine Games 4:00 PM to 8:00 PM EDT
A Duck Presents NB Movie Night
Discord.io/Nerdbrew
Nerd Night Out 7:00 PM to 11:30 PM EDT

About this list

How do I put this list together?

It’s largely automated. I have a collection of Python scripts in a Jupyter Notebook that scrape Meetup and Eventbrite for events in categories that I consider to be “tech,” “entrepreneur,” and “nerd.” The result is a checklist that I review. I make judgment calls and uncheck any items that I don’t think fit on this list.

In addition to events that my scripts find, I also manually add events when their organizers contact me with their details.
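For the curious, the filtering step of a pipeline like this can be sketched in a few lines of Python. This is a hypothetical illustration, not the actual scripts: the scraping itself is omitted (a stub list stands in for the Meetup/Eventbrite results), and the keyword lists, field names, and function names are all my own invention.

```python
# Hypothetical sketch of the event-filtering step described above.
# The real scraping is omitted; `events` stands in for whatever
# the Meetup/Eventbrite scrapers collect.

# Illustrative keyword lists -- the real category definitions
# are not shown in the post.
KEYWORDS = {
    "tech": ["python", "devops", "cloud", "coding"],
    "entrepreneur": ["startup", "networking", "business"],
    "nerd": ["d&d", "pokemon", "board game", "sci-fi"],
}

def categorize(event_name):
    """Return the categories whose keywords appear in the event name."""
    name = event_name.lower()
    return [cat for cat, words in KEYWORDS.items()
            if any(w in name for w in words)]

def build_checklist(events):
    """Keep only events matching at least one category, pre-checked
    for a later manual review pass (judgment calls happen by hand)."""
    checklist = []
    for event in events:
        cats = categorize(event["name"])
        if cats:
            checklist.append({"name": event["name"],
                              "categories": cats,
                              "keep": True})
    return checklist

events = [
    {"name": "Friday Pokemon Tournament"},
    {"name": "Intro to Python for DevOps"},
    {"name": "Sunday Brunch Social"},  # no matching keywords; filtered out
]
print(build_checklist(events))
```

The manual review pass in the post maps onto the `keep` flag: everything starts checked, and items that don’t fit get unchecked by hand.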

What goes into this list?

I prefer to cast a wide net, so the list includes events that would be of interest to techies, nerds, and entrepreneurs. It includes (but isn’t limited to) events that fall under any of these categories:

    • Programming, DevOps, systems administration, and testing
    • Tech project management / agile processes
    • Video, board, and role-playing games
    • Book, philosophy, and discussion clubs
    • Tech, business, and entrepreneur networking events
    • Toastmasters and other events related to improving your presentation and public speaking skills, because nerds really need to up their presentation game
    • Sci-fi, fantasy, and other genre fandoms
    • Self-improvement, especially of the sort that appeals to techies
    • Anything I deem geeky
Categories
Artificial Intelligence Current Events Editorial

You’ve got 41 days before chip prices skyrocket

If you read my post from a few days ago, you know I’ve been sounding the alarm about how Operation Epic Fury and the closure of the Strait of Hormuz are going to wreck your tech budget. I talked about a “retail window” of about 3 to 6 weeks between the first missile strike that cut off supplies necessary for making advanced chips and the retail price hike that will follow.

Well, the clock just got a lot more specific.

Nate B. Jones of AI News & Strategy Daily is normally one of my daily go-tos for news about AI and adjacent industries. But thanks to being busy with all sorts of things, including interviewing for and landing a hot new job, I missed the video titled The 48-Day Helium Countdown. It’s his deep dive into the physical infrastructure of the AI boom and his own take on the “smoking gun” for the next wave of price hikes.

Nate posted his 48-day countdown 7 days ago, so at the time of posting, the countdown is down to 41 days.

By the way, this post is dated Monday, April 6, 2026. 41 days from now is Sunday, May 17th.
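If you want to double-check the countdown arithmetic yourself, here’s a quick sanity check using Python’s standard `datetime` module (this is just my own verification, not part of Nate’s analysis):

```python
from datetime import date, timedelta

# Nate's video gave a 48-day countdown; it was posted 7 days
# before this article, leaving 41 days on the clock.
days_left = 48 - 7

post_date = date(2026, 4, 6)  # this post's publication date
deadline = post_date + timedelta(days=days_left)

print(post_date.strftime("%A, %B %d, %Y"))  # Monday, April 06, 2026
print(deadline.strftime("%A, %B %d, %Y"))   # Sunday, May 17, 2026
```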

The Qatari connection

While the fighting is centered on Iran, there’s a “splash zone” in the surrounding area:

In response to the attacks by the U.S. and Israel, Iran hit Ras Laffan Industrial City in Qatar. Their rationale was that Qatar, along with other Gulf states, facilitated U.S./Israeli airstrikes on Iranian energy sites.

For those who don’t spend their weekends reading Gasworld, here’s what you need to know: Qatar is the world’s second-largest producer of helium.

As I wrote in my earlier post:
  • Helium on Earth is the result of radioactive decay.
  • As radioactive elements in the earth’s crust decay, they release alpha particles, which are made up of 2 protons and 2 neutrons. 
  • An alpha particle is a helium nucleus, and because it’s positively charged, it picks up stray electrons and becomes helium gas.
  • Helium gas gets trapped in the same rock structures that hold natural gas, and ends up mixed with it.

Helium is the “Unicorn Blood” of computing

Nate B. Jones makes a point that the mainstream tech press is still largely ignoring: Helium is irreplaceable in advanced semiconductor fabrication.

  1. Thermal Conductivity: Chips are made by using ultraviolet light to “draw” circuitry on silicon treated with a light-sensitive material.

    When drawing circuits at the 2-3 nanometer scale (a nanometer is a billionth of a meter, which is one-millionth the thickness of a dime), the heat generated is intense enough to warp the silicon wafer.

    That’s where the helium comes in. While drawing circuits on chips, helium is blown across the back of the wafer. Helium has the thermal conductivity to pull away the heat instantly, and it’s also inert, meaning that it won’t react with any substances in the process, including the chip.

    No helium = no chips, and this applies not only to processors like NVIDIA’s H100s or Apple’s M-series chips, but also to the high-end RAM that these systems use.

  2. The “Priority” Problem: Helium’s used for all sorts of things, and fortunately MRIs and chip fabs are at the top of the list for the current supply. But as Nate points out, “first in line” doesn’t matter if the warehouse is empty. China is currently sitting on a strategic helium reserve that the West simply doesn’t have, giving them a massive geopolitical advantage as the 41-day countdown ticks away.

41 days until the “ratchet”

According to Nate’s analysis of current global stockpiles and burn rates at major fabs (TSMC, Samsung, Intel), we have roughly 48 days (at the time he published his video; it’s 41 days as I publish this post) before the strategic reserves hit “critical low” levels.

When that happens, we aren’t just looking at expensive chips. We’re now looking at unavailable chips.

  • The hyperscalers (Google, Microsoft, AWS) will use their trillions to buy up every available (and increasingly expensive) chip to keep their datacenters running, and…
  • The consumer market (you and me) will be left with the hyperscalers’ table scraps.

The bottom line for nerds

If you’ve been vibe coding or running local models and are waiting for the next big release to upgrade your workstation, stop waiting. Your window of opportunity is closing faster than we thought.

Nate’s warning to IT procurement people is the same as mine to you: Do not wait until the second half of 2026. The structural costs are about to ratchet upward. Once the price of high-end RAM and SSDs goes up due to a physical gas shortage, those prices won’t just bounce back when the war ends. They’ll stay high while the supply chain slowly refills, which will take years, not months.

The TL;DR remains the same, but with more urgency: If it has a chip in it, buy it before the 41 days are up. After that, you’ll face the combo of paying a “war tax” on your gear and competing with everyone else for the same dwindling resources.

And remember, this helium shortage applies to more than datacenters; it affects anything with an advanced chip. That includes laptops and phones. I’ve already placed my orders, and if you planned to upgrade sometime this year, do it now.

Good luck out there.

Here’s Nate’s video, The 48-Day Helium Countdown. And remember, it’s 41 days now: