There are plenty of reasons to attend BSides Tampa, a cybersecurity conference that brings in 2,000+ attendees, including…
Great keynotes and presentations across seven tracks: keynotes, red team, blue team, cloud security, GRC and privacy, appsec, and AI and emerging
The exhibitor hall, where they don’t scan your badge, which means you won’t get spammed and they won’t sell your info
Interactive villages: malware, social engineering, IoT, network, lockpicking
A chance to meet the technology and cybersecurity professionals in the area, including these two…
But the most compelling reason I can think of to go is…
Let me repeat that:
80 percent of success is just showing up.
Let me illustrate with a story. Last May, techie-about-town Ammar Yusuf said he could hook me up with a free ticket to VueConf, which was taking place right here in Tampa.
I’d just come back from an expensive two-week trip, and I was still operating as an independent consultant. The spring and summer of 2025 were pretty slow; the well of clients was running dry.
I was strongly tempted to turn down the free ticket so I could devote more time and energy to finding my next job or client. Some might argue that it would be the smart thing to do.
But I decided to take the free ticket and go to VueConf instead, because I remembered all those times when showing up led to great things. Again, I remind you:
I ended up chatting with Pratik, who then offered both me and Anitra free tickets to the Dev/Nexus conference in Atlanta that would take place a couple of weeks later. It was short notice, and Atlanta’s a 7+ hour drive from Tampa. But we remembered the rule:
So we went, learned a lot, and had a great time:
And while we were at Dev/Nexus, I ran into Pratik, who was walking the exhibitor floor with Venkat Subramaniam, who knows me because I show up to his talks whenever he comes to town.
Here’s the “Bollywood Buddy Movie Poster” photo taken at the meetup where I met Venkat:
When I ran into Pratik and Venkat at Dev/Nexus, Pratik suggested to Venkat that I speak at the Arc of AI conference that would take place the following month. Venkat thought that would be a good idea, and asked me to submit a couple of talk proposals. So I did, even though I was knee-deep in contract work and a job search, because…
My submissions got accepted, and the result was my talk about writing documentation and example code for consumption by AI agents:
…and I met a lot of people:
And here’s the kicker: not only did I get to meet new people and attend (and speak) at conferences, but all this helped me land my current job at NetFoundry. The fact that I’d managed to land a speaker gig at Arc of AI was a key point in my job interviews. And I wouldn’t have the key point for that interview if…
I hadn’t spoken at Arc of AI, which wouldn’t have happened if
I hadn’t applied to speak at Arc of AI, which wouldn’t have happened if
I hadn’t gone to Dev/Nexus, which wouldn’t have happened if
I hadn’t gone to Pratik’s talk at the Tampa Java User Group meetup, which wouldn’t have happened if
I hadn’t gone to VueConf with the free ticket Ammar gave me.
The lesson here is simple:
So if you don’t have prior commitments and you can afford to do so and you’re in a tech/tech-adjacent/cybersecurity/cybersecurity-adjacent field — and especially if you’re looking for work — consider going to BSides Tampa tomorrow, because you know what showing up can do for you!
Once again, ticket prices are:
$45 for general admission
$30 for students and military
…and you can save 20% by using Tampa Devs’ discount code, TampaDevs20_BSIDESTAMPA_2026.
If you read my post from a few days ago, you know I’ve been sounding the alarm about how Operation Epic Fury and the closure of the Strait of Hormuz are going to wreck your tech budget. I talked about a “retail window” of about 3 to 6 weeks between the first missile strike that cut off supplies necessary for making advanced chips and the retail price hike that will follow.
Well, the clock just got a lot more specific.
Nate B. Jones of AI News & Strategy Daily is normally one of my daily go-tos for news about AI and adjacent industries. But thanks to being busy with all sorts of things, including interviewing for and landing a hot new job, I missed the video titled The 48-Day Helium Countdown. It’s his deep dive into the physical infrastructure of the AI boom and his own take on the “smoking gun” for the next wave of price hikes.
Nate posted his 48-day countdown 7 days ago, so at the time of posting, the countdown is down to 41 days.
By the way, this post is dated Monday, April 6, 2026. 41 days from now is Sunday, May 17th.
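If you want to double-check the countdown math yourself, a few lines of Python will do it. The 48-day figure is Nate’s; the 7-day offset is just the gap between his video and this post:

```python
from datetime import date, timedelta

# This post's date, minus the 7 days since Nate's video went up.
post_date = date(2026, 4, 6)
video_date = post_date - timedelta(days=7)

# Nate's countdown started at 48 days from his video.
deadline = video_date + timedelta(days=48)

print((deadline - post_date).days)        # days left as of this post
print(deadline.strftime("%A, %B %d"))     # the day the countdown hits zero
```

Run it and you get 41 days remaining, landing on Sunday, May 17th.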
The Qatari connection
While the fighting is centered on Iran, there’s a “splash zone” in the surrounding area:
In response to the attacks by the U.S. and Israel, Iran hit Ras Laffan Industrial City in Qatar. Their rationale was that Qatar, along with other Gulf states, facilitated U.S./Israeli airstrikes on Iranian energy sites.
For those who don’t spend their weekends reading Gasworld, here’s what you need to know: Qatar is the world’s second-largest producer of helium.
Helium on Earth is the result of radioactive decay.
As radioactive elements in the earth’s crust decay, they release alpha particles, which are made up of 2 protons and 2 neutrons.
An alpha particle is a helium nucleus, and because it’s positively charged, it picks up stray electrons and becomes helium gas.
Helium gas gets trapped in the same rock structures that hold natural gas, and ends up mixed with it.
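For the curious, the textbook example of this process is uranium-238, whose alpha decay produces exactly the helium nucleus described above:

```latex
^{238}_{92}\mathrm{U} \;\longrightarrow\; ^{234}_{90}\mathrm{Th} \;+\; ^{4}_{2}\mathrm{He}
```

That helium-4 on the right is the stuff that, over geological time, accumulates in natural gas deposits like Qatar’s.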
Helium is the “Unicorn Blood” of computing
Nate B. Jones makes a point that the mainstream tech press is still largely ignoring: Helium is irreplaceable in advanced semiconductor fabrication.
Thermal Conductivity: Chips are made by using ultraviolet light to “draw” circuitry on silicon treated with a light-sensitive material.
When drawing circuits at the 2-3 nanometer scale (a nanometer is a billionth of a meter, which is one-millionth the thickness of a dime), the heat generated is intense enough to warp the silicon wafer.
That’s where the helium comes in. While circuits are being drawn on chips, helium is blown across the back of the wafer. Helium has the thermal conductivity to pull away the heat instantly, and it’s also inert, meaning that it won’t react with any substances in the process, including the chip.
No helium = no chips, and this applies not only to processors like NVIDIA’s H100s or Apple’s M-series chips, but also to the high-end RAM that these systems use.
The “Priority” Problem: Helium’s used for all sorts of things, and fortunately MRIs and chip fabs are at the top of the list for the current supply. But as Nate points out, “first in line” doesn’t matter if the warehouse is empty. China is currently sitting on a strategic helium reserve that the West simply doesn’t have, giving them a massive geopolitical advantage as the 41-day countdown ticks away.
41 days until the “ratchet”
According to Nate’s analysis of current global stockpiles and burn rates at major fabs (TSMC, Samsung, Intel), we have roughly 48 days (at the time he published his video; it’s 41 days as I publish this post) before the strategic reserves hit “critical low” levels.
When that happens, we aren’t just looking at expensive chips. We’re now looking at unavailable chips.
The hyperscalers (Google, Microsoft, AWS) will use their trillions to buy up every available (and increasingly expensive) chip to keep their datacenters running, and…
The consumer market (you and me) will be left with the hyperscalers’ table scraps.
The bottom line for nerds
If you’ve been vibe coding or running local models and are waiting for the next big release to upgrade your workstation, stop waiting. Your window of opportunity is closing faster than we thought.
Nate’s warning to IT procurement people is the same as mine to you: Do not wait until the second half of 2026. The structural costs are about to ratchet upward. Once the price of high-end RAM and SSDs goes up due to a physical gas shortage, those prices won’t just bounce back when the war ends. They’ll stay high while the supply chain slowly refills, which will take years, not months.
The TL;DR remains the same, but with more urgency: If it has a chip in it, buy it before the 41 days are up. After that, you’ll face the combo of paying a “war tax” on your gear and competing with everyone else for the same dwindling resources.
And remember, this helium shortage applies not just to datacenters, but to anything with an advanced chip. That includes laptops and phones. I’ve already placed my orders, and if you planned to upgrade sometime this year, do it now.
Good luck out there.
Here’s Nate’s video, The 48-Day Helium Countdown. And remember, it’s 41 days now:
You’ve probably seen this tweet, written by none other than Andrej Karpathy, founding member of OpenAI, former director of AI at Tesla, and creator of the Zero to Hero video tutorial series on AI development from first principles:
I’ve never felt this much behind as a programmer. The profession is being dramatically refactored as the bits contributed by the programmer are increasingly sparse and between. I have a sense that I could be 10X more powerful if I just properly string together what has become available over the last ~year and a failure to claim the boost feels decidedly like skill issue. There’s a new programmable layer of abstraction to master (in addition to the usual layers below) involving agents, subagents, their prompts, contexts, memory, modes, permissions, tools, plugins, skills, hooks, MCP, LSP, slash commands, workflows, IDE integrations, and a need to build an all-encompassing mental model for strengths and pitfalls of fundamentally stochastic, fallible, unintelligible and changing entities suddenly intermingled with what used to be good old fashioned engineering. Clearly some powerful alien tool was handed around except it comes with no manual and everyone has to figure out how to hold it and operate it, while the resulting magnitude 9 earthquake is rocking the profession. Roll up your sleeves to not fall behind.
It would be perfectly natural to react to this tweet like so:
After all, this “falling behind” statement isn’t coming from just any programmer, but a programmer who’s been so far ahead of most of us for so long that he’s the one who coined the term vibe coding in the first place — and the term’s first anniversary isn’t until next month:
Karpathy’s post came at the end of 2025, so I thought I’d share my thoughts on it — in the form of a “battle plan” for how I’m going to approach AI in 2026.
1. Accept “falling behind” as the new normal. Learn to live with it and work around it.
Even before the current age of AI, tech was already moving at a pretty frantic pace, and it was pretty hard to keep up. That’s why we make jokes like the slide pictured above, or the Pokemon or Big Data? quiz.
As a result, many people make the choice between “going deep” and specializing in a few things or “going wide” and being a generalist with a little knowledge across many areas. While the approaches are quite different, they have one thing in common: they’re built on the acceptance that you can’t be skilled at everything.
With that in mind, let me present you with an idea that might seem uncomfortable to some of you: Accept “falling behind” as the new normal. Learn to live with it and work around it.
A lot of developers, myself included, accepted being “behind” as our normal state of affairs for years. We’re still in the stone ages of our field — the definition of “computable” won’t even be 100 years old until the next decade — so we should expect changes to continue to come at a fast and furious pace during our lifetimes.
Don’t think of “being behind” as being a personal failing, but as a sensible, sanity-preserving way of looking at the tech world.
It’s a sensible approach to the world that Karpathy describes, which is a firehose of agents, prompts, and stochastic systems, all of which lack established best practices, mature frameworks, or even documentation. If you’re feeling “current” and “on top of things” in this AI era, it means you don’t understand the situation.
That feeling of playing perpetual “catch-up”? That’s proof that you’re actively working on the new frontier, where the map is getting redrawn every day. It means you’ve got a mindset suited for the current Age of AI.
2. Understand that our sense of time has been altered by recent events.
I think some of Karpathy’s feeling comes from how the pandemic and lockdowns messed up our sense of time. Events from years ago feel like they just happened, and events from months in the past feel like a lifetime ago.
AI — and once again, I’m talking about AI after ChatGPT — sometimes feels like it’s been around for a long time, but it’s still a recent development.
“How recent?” you might ask.
Season 4 of Stranger Things was released in two parts — May/July of 2022 — while ChatGPT came out later, debuting on November 30 of that year.
Think of it this way: the non-D&D playing world was introduced to Vecna through season 4 of Stranger Things months before it was introduced to ChatGPT.
Need more perspective? Here are more things that “just happened” that also predate the present AI age:
Elon Musk made his $44 billion offer to buy Twitter in April 2022, tried to back out in July, and, after being forced by court intervention, bought it in October, which means all that drama happened before ChatGPT.
(Just recalling “the slap” made me think “Wow, that was a while back.” In fact, I get the feeling that the only person who remembers it as if it happened yesterday is Chris Rock.)
All these examples are pretty recent news, and they all happened before that fateful day — November 30, 2022 — when ChatGPT was unleashed on an unsuspecting world.
3. Forget “mastery.” Go for continuous, lightweight experimentation instead.
Archimedes’ discoveries come from small experiments. Comic by Thomas Leclercq. Click to see more!
Put your energy into the skill of learning and critically evaluating new tools and techniques that are subject to change, rather than into gathering static domain knowledge that you think will be timeless.
We’re emerging from an older era where code was scarce and expensive. It used to take time and effort (and as a result, money) to produce code, which is why a lot of software engineering is based on the concept of code reuse and why older-school OOP developers are hung up on the concept of inheritance. Now that AI can generate screens of code in a snap, we’re going to need to change the way we do development.
My 2026 developer strategy will roughly follow these steps:
Embracing ephemeral code: I’m adopting the mindset of “post code-scarcity” or “code abundance.” I’ll happily fire up $TOOL_OF_THE_MOMENT and have it generate lots of lines of code that I won’t mind deleting after I’ve gotten what I need out of it. The idea is to drive the “cost” of experimentation down to zero, which means I’ll do more experimenting.
Try new things, constantly, but not all at once: My plan is to dedicate a week or two to one thing and experiment with it. Examples:
First week or so: Prompts, especially going beyond basic instructions. Play with few-shot prompting, chain-of-thought, and providing context. Look at r/ChatGPTPromptGenius/ and similar places for ideas.
Following week or so: Agents. Build a simple agent using a guide or framework. Understand its core components: reasoning, tools, and memory.
Week or so after that: Tools and integrations. Give an agent the ability to search the web, call an API, or write to a file.
Learn by teaching and building: Active learning is the most efficient learning! Building something and then showing others how to build it is my go-to trick for getting good at some aspect of tech, and it’s why this blog exists, why the Tampa Bay Tech Events List exists, why the Global Nerdy YouTube channel exists, and why I co-organize the Tampa Bay AI Meetup and Tampa Bay Python.
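To make the first experiment on that list concrete, here’s a minimal sketch of few-shot prompting in Python. The classification task and the examples are made up purely for illustration; the point is the shape of the prompt — instructions first, then a few worked examples, then the new case for the model to complete:

```python
# Few-shot prompting: instead of just asking the model a question,
# prepend a few worked examples so it can infer the pattern.
# The task and examples below are hypothetical.

EXAMPLES = [
    ("The server returned 404.", "not found"),
    ("The server returned 500.", "server error"),
    ("The server returned 200.", "success"),
]

def build_few_shot_prompt(task: str, new_input: str) -> str:
    """Assemble the prompt: instructions, worked examples, then the new case."""
    lines = [task, ""]
    for text, label in EXAMPLES:
        lines.append(f"Input: {text}")
        lines.append(f"Output: {label}")
        lines.append("")
    lines.append(f"Input: {new_input}")
    lines.append("Output:")  # the model completes from here
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify each HTTP status message.",
    "The server returned 403.",
)
print(prompt)
```

You’d paste the resulting string (or send it via whatever chat API you’re using) and let the model fill in the final “Output:”. No framework required, which is exactly why it’s a good first-week experiment.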
4. Less coding, more developing.
I used to laugh at this scene from Star Trek: Voyager, but damn, it’s pretty close to what we can do now…
Your enduring value as a developer or techie is going to move “up the stack,” away from remembering the minutiae of API call parameter order and syntax, and toward “big picture” things like system design, architectural judgment, and the critical oversight and quality control required to pull together AI components that are stochastic and fundamentally unpredictable.
That “behind” feeling? That’s the necessary friction of keeping your footing while climbing the much taller ladder that development has become. Your expertise is less about for loops and design patterns, and more about system design, problem decomposition, and even taste. You’re more focused on solving the users’ problems (and therefore, operating closer to the user) and providing what the AI can’t.
Worry less about specific tools, and more about principles. Specific tools — LangChain and LlamaIndex, I’m lookin’ right at you — will change rapidly. They may be drastically different or even replaced by something else this time next year! (Maybe next month!) Focus on understanding the underlying principles of agentic reasoning, prompt engineering, and (ugh, this term drives me crazy, and I can’t put my finger on why) — “workflow orchestration.”
Programming isn’t being replaced. It’s being refactored. The developer’s role is becoming one of a conductor or integrator, writing sparse “glue code” to orchestrate powerful, alien AI components. (Come to think of it, that’s not all too different from what we were doing before; there’s just an additional layer of abstraction now.)
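Here’s a hedged sketch of what that “glue code” role can look like. The “AI component” below is a random stub, not a real model call; the point is the pattern — generate, validate against a human-defined check, retry, and escalate to a human when the stochastic part won’t cooperate:

```python
import random
from typing import Optional

def flaky_generator(rng: random.Random) -> str:
    """Stand-in for a stochastic AI component; sometimes returns bad output."""
    return rng.choice([
        "def add(a, b): return a + b",
        "I'm sorry, I can't do that.",
    ])

def validate(candidate: str) -> bool:
    """The human-defined acceptance check: here, 'does it even parse as Python?'"""
    try:
        compile(candidate, "<candidate>", "exec")
        return True
    except SyntaxError:
        return False

def generate_with_oversight(rng: random.Random, max_attempts: int = 5) -> Optional[str]:
    """Accept the first candidate that passes validation; give up after max_attempts."""
    for _ in range(max_attempts):
        candidate = flaky_generator(rng)
        if validate(candidate):
            return candidate
    return None  # escalate to a human instead of shipping garbage

result = generate_with_oversight(random.Random(42))
```

The conductor/integrator role lives in `validate` and the retry loop: the AI proposes, your code (and ultimately you) disposes.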
5. Yes, AI is an “alien tool,” but what if that alien tool is a droid instead of a probe?
Let’s take a closer look at the last two lines of Karpathy’s tweet:
Clearly some powerful alien tool was handed around except it comes with no manual and everyone has to figure out how to hold it and operate it, while the resulting magnitude 9 earthquake is rocking the profession. Roll up your sleeves to not fall behind.
First, let me say that I think Karpathy’s choice of “alien tool” as a metaphor is at least a little bit colored by Silicon Valley mentality of disruption — that tech is for doing things to fields, industries, or people instead of for them. There’s also the popular culture portrayal of alien tools, and I think he feels he’s been getting the “alien tools doing things to me” feeling lately:
But what if that “alien tool” is something that we can converse with? What if we reframe “alien tool” to…“droid?”
Unlike an interpreter or compiler that gives one of those classic cryptic error messages or a framework or library with fixed documentation, you can actively interrogate AI, just like you can interrogate a Star Wars droid (or even the Millennium Falcon’s computer, which is a collection of droids). When an AI generates code that you can’t make sense of, you’re not left to reverse-engineer its logic unassisted. You can demand an explanation in plain language. You can ask it to walk through its solution step by step, or challenge its choices (and better yet, do it Samuel L. Jackson style):
This transforms debugging and learning from a solitary puzzle into a dialogue. This is the real-life version of droids in Star Wars and computers in Star Trek! Ask whatever AI tool you’re using to refactor its code for readability, make it give you the “Explain it as if I’m a junior dev” walkthrough of a complex algorithm, or debate the trade-offs between two architectural approaches. Turn AI into a tireless, on-demand pair programmer and tutor!
By reframing AI not as an alien tool, but as a Star Wars droid (or if you prefer, Star Trek computer), you can change the pace at which you can understand and manage systems. Unfamiliar libraries and cryptic errors are no longer major show-stoppers, but speed bumps that you can overcome with a Socratic dialogue to build your understanding. The AI-as-droid approach allows you to rapidly decompose and reconstruct the AI’s own output, turning its stochastic suggestions into knowledge and understanding that you can carry forward.
In the end, you’re moving from merely accepting or rejecting its code to using conversation to get clarity, both in the AI’s output and in your own mental model. By treating AI not as an alien probe but as a droid, the “alien tool” becomes less alien through dialogue. The terra incognita of this new age won’t be navigated by a map, but by directed exploration with the assistance of a local guide you can question at every turn.
6. Climb the hype curve with expertise, not blind faith.
These ideas, taken together, are a call not to blindly climb the AI hype curve, but to develop a sophisticated, expert-level understanding of its limits. I hope to make them the basis of a structured way to conduct that research without falling into the “Mount Dumbass” of overconfidence.
Your tech / developer expertise is the guardrail. I believe that knowledge is the antidote to AI’s Dunning-Kruger effect. What you bring to the table in the Age of AI is critical evaluation, debugging, and architectural oversight. AI generates candidates; you approve or reject them, or, to quote Nick Fury…
Adopt a skeptical, experimental stance. Let’s follow Karpathy’s own method: try the new tools on non-critical projects. When they fail (as he said they did for nanochat), analyze why they failed. This hands-on experience with failure builds the accurate mental model he described as lacking.
Focus on understanding AI “psychology.” Understand the new stochastic layer provided by AI not to worship it, but to debug it (please stop treating AI like a god). Learn about prompts, context windows, and agent frameworks so you can diagnose why an AI produces bad code or an agent gets stuck. This turns a weakness into a diagnosable system.
Prioritize team and talent dynamics: You’ll hear and read lots of stories and articles warning of talent leaving as a result of AI. If you’re in a leadership or decision-making role, focus on creating an environment where critical thinking about tools is valued over blind adoption. Trust your team, and protect their “state of flow” and their deep work.
7. This may be the “new normal” for Karpathy, but it’s just the “same old normal” for my dumb ass.
Maybe it’s because he’s made some really amazing stuff that he’s surprised the wave of change it brought about has come back to bite him. For most of the rest of us — once again, that includes me — we’ve always been trying our level best to keep up, and doing what we can to manage our tiny corners of the tech world.
The take-away here is that if the guy who helped make vibe coding a reality and coined the term “vibe coding” is feeling a bit overwhelmed, we can take comfort that we’re not alone. Welcome to our club, Andrej!
8. The difference between an adventure and an ordeal is attitude.
Yes, it’s a lot. Yes, I’m overwhelmed. Yes, I’m trying to catch up.
But it’s also exciting. It’s a whole new world. It’s full of possibilities.
I’m outside my comfort zone, but that’s where the magic happens.