Happy Saturday, everyone! Here on Global Nerdy, Saturday means that it’s time for another “picdump” — the weekly assortment of amusing or interesting pictures, comics, and memes I found over the past week. Share and enjoy!
Here’s what’s happening in the thriving tech scene in Tampa Bay and surrounding areas for the week of Monday, March 9 through Sunday, March 15!
This list includes both in-person and online events. Note that each item in the list includes:
✅ When the event will take place
✅ What the event is
✅ Where the event will take place
✅ Who is holding the event

Wednesday at 6:00 p.m. at Steinbrenner Field (Tampa):
In partnership with the New York Yankees, the Tampa Devs community has been invited to attend Networking Night at Steinbrenner Field during Yankees Spring Training, where they’re playing against the Toronto Blue Jays!
Find out more and register here.
Wednesday at 6:00 p.m. at Entrepreneur Collaborative Center (Tampa): Dave Lindley, Chief Technology Officer at Stablein Solutions, will dive into what it takes to successfully scale AI systems within an enterprise. Covering model selection, fine-tuning, and architectural blueprints, he'll show what helped his team scale to millions of monthly workflow executions that large-scale businesses rely on for mission-critical operations.
Find out more and register here.
Thursday at 6:00 p.m. at Hays (Tampa): Tampa Bay AI Meetup presents a Vibe Coding Workshop!
If you’re curious about vibe coding and want to know how to get started, bring your laptop to Tampa Bay AI Meetup’s Vibe Coding Workshop, taking place on Thursday, March 12 at 6:00 p.m. at the Hays office (4350 W Cypress, Suite 1000, Tampa)! We’ll help you get started with a couple of vibe coding exercises, including building a Chrome plugin that you’ll find useful for vibe coding.
Find out more and register here.
| Event name and location | Group | Time |
|---|---|---|
| Computer Repair Clinic (2079 Range Rd) | Tampa Bay Technology Center | 8:30 AM to 12:30 PM EDT |
| Cognitive Security & the Technology Behind Protecting Human Decision-Making (online event) | Tech Success Network | 12:00 PM to 1:00 PM EDT |
| Friday Board Game Night (Bridge Club) | Tampa Gaming Guild | 5:30 PM to 11:00 PM EDT |
| Friday Night Magic at Conworlds Emporium (Conworlds Emporium) | Tarpon Springs Community Fun & Games | 5:30 PM to 9:00 PM EDT |
| MTG: Commander FNM (Critical Hit Games) | Critical Hit Games | 6:00 PM to 11:00 PM EDT |
| Taps & Drafts \| EDH/MtG Night (1Up Entertainment, Tampa) | Nerdbrew Events | 7:00 PM to 9:00 PM EDT |
| Modern FNM (Sunshine Games \| Magic the Gathering, Pokémon, Yu-Gi-Oh!) | Sunshine Games | 7:00 PM to 10:30 PM EDT |
| Friday Pokemon Tournament (Sunshine Games \| Magic the Gathering, Pokémon, Yu-Gi-Oh!) | Sunshine Games | 7:30 PM to 11:30 PM EDT |
| Return to the top of the list | | |

How do I put this list together?
It’s largely automated. I have a collection of Python scripts in a Jupyter Notebook that scrapes Meetup and Eventbrite for events in categories that I consider to be “tech,” “entrepreneur,” and “nerd.” The result is a checklist that I review. I make judgment calls and uncheck any items that I don’t think fit on this list.
In addition to events that my scripts find, I also manually add events when their organizers contact me with their details.
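To give a rough idea of the kind of filtering involved (this is an illustrative sketch, not my actual scripts, and the keyword lists are made up):

```python
# Illustrative sketch (not my actual scripts): keyword-based category
# filtering for scraped events, producing a pre-checked list for review.

KEYWORDS = {
    "tech": ["python", "javascript", "devops", "cloud"],
    "entrepreneur": ["startup", "founder", "networking"],
    "nerd": ["board game", "magic the gathering", "pokemon"],
}

def categorize(title: str, description: str = "") -> list[str]:
    """Return the categories whose keywords appear in the event text."""
    text = f"{title} {description}".lower()
    return [cat for cat, words in KEYWORDS.items()
            if any(word in text for word in words)]

def build_checklist(events: list[dict]) -> list[dict]:
    """Keep events that match at least one category, pre-checked so
    that non-fits can be manually unchecked during review."""
    checklist = []
    for ev in events:
        cats = categorize(ev.get("title", ""), ev.get("description", ""))
        if cats:
            checklist.append({**ev, "categories": cats, "checked": True})
    return checklist
```

The real version pulls from the Meetup and Eventbrite listings first; the point is just that the machine casts the wide net and I make the judgment calls.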
What goes into this list?
I prefer to cast a wide net, so the list includes events that would be of interest to techies, nerds, and entrepreneurs. It includes (but isn’t limited to) events that fall under any of these categories:
Dev/Nexus 2026 starts today and continues on Friday. Located in Atlanta, founded in 2004, and with 1,500+ attendees expected, it’s a huge, long-running conference with an international reputation, and it’s also a fantastic networking opportunity!

Anitra and I are here, and we’re just two of the many, many people you can meet. But meeting people requires a skill called “working the room.”
Fortunately for you, my work as a developer advocate requires me to work the room regularly, and I’m sharing all my tricks in this article. There are a lot of them — feel free to scan this article, find the tips that work for you, and put them into practice!
Review the schedule, speaker bios, and sponsors (who’ll probably have a table in the exhibitor hall), so that you can determine:
Decide what you want to achieve at Dev/Nexus, which can include any of the following:

A one-line self-introduction is simply a single-sentence way of introducing yourself to people you meet at a conference. It’s more than likely that you won’t know more than a handful of attendees, which means you’ll be introducing yourself over and over again during the conference, as well as at its post-session party events. It’s a trick that Susan RoAne, room-working expert and author of How to Work a Room: The Ultimate Guide to Making Lasting Connections In-Person and Online, teaches, and it works. It’s pretty simple:
My intro at Dev/Nexus will be something along the lines of “I’m a rock and roll accordion player, but in my spare time, I do developer relations and I’m currently doing a developer contract optimizing an MCP server for Hammerspace!”
Pocket stories are short, engaging, and easy-to-tell anecdotes you keep ready for networking situations. They should be:
Open-ended, so listeners can respond or share their own experiences.
Here’s a tech-related pocket story:
“Last year I tried to refactor a core service during a two-week sprint. Halfway through, we realized we’d basically reinvented a library that already existed. The best part? We ended up contributing to that library instead, and now it’s in production at three other companies.”
“Local flavor” pocket stories are often a good conversation starter:
“This is my first time in Kansas City, and yesterday I went looking for barbecue. I asked a local for the ‘best’ spot… and ended up in a half-hour debate between two strangers about burnt ends. I still don’t know who won, but I definitely left full.”
We’re nerds! We love interesting gadgets, amusing tchotchkes, and funny techie T-shirts. They’re often interesting conversation-starters, and Dev/Nexus is the perfect environment for bringing them out!

Me? I’m bringing the accordion (of course).

Here’s the exercise: Before you leave to go to Dev/Nexus, find some text and read it out loud for three minutes. If for some reason you can’t find some text to read, use this article. You’ll find that it’s a self-confidence booster!
Even after Dev/Nexus has come and gone, do this exercise daily. Like any skill, frequent low-pressure practice builds familiarity, and if you read aloud regularly, you’ll find yourself more comfortable when talking with strangers at networking events.
Choose something different to read out loud every day, and try emphasizing key parts of the text. If you’re reading something with dialogue, try expressing the emotion in that dialogue. If you listen to audiobooks or podcasts, try emulating the way audiobook narrators narrate their material.
Reading out loud boosts your confidence because:

Inigo Montoya from The Princess Bride had the perfect self-introduction. Use his technique for yourself!
Example: “Hi! I’m Joey de Villa. I’m giving the fun Python “choose your own adventure” game talk on Friday. How are you doing?”

Having good posture is generally good for all sorts of health reasons, but at a conference, it has the additional benefit of showing confidence, competence, and alertness. And because the body is a self-feedback system, you’ll find yourself feeling more confident, competent, and alert.
The general guidance for standing up straight is to imagine a string pulling you gently upward from the crown of your head. Keep your spine straight, knees soft, and feet shoulder-width apart.
When you do this, people will be more likely to approach you because you appear open and self-assured instead of reluctant and uncertain.
The general advice is to put your shoulders back, but not too far back, and to keep them down, away from your ears. Drawing your shoulders back just slightly opens up your chest, which is body language for “Hello. My name is Inigo Montoya. I’m killin’ it here. Prepare to converse.” You’ll appear more engaged and ready to interact.
That’s so much better than the forward, rounded-shoulders look, which says “I don’t want to be here, and I definitely don’t want to talk to you.” It makes you look defensive or distracted.
You might find it helpful to roll your shoulders up, back, and down, just enough to relax your chest.
Here’s a WikiHow exercise to help you stand up straight.
Eye contact — it’s a tricky thing, especially among nerdy types, but is one of the strongest ways to build trust quickly. What better place to brush up on your eye contact technique than Dev/Nexus?
Here’s how you do it: when you meet someone, make eye contact by looking at them right at their eyes for a “one thousand one, one thousand two” count. That’s long enough to acknowledge them but not so long that it feels as though you’re staring them down.
If looking someone in the eyes isn’t your thing, try looking at some part of their face near their eyes, such as their forehead or cheek.
Done right, eye contact gives others a sense of warmth and attentiveness. It makes other people feel seen, which is crucial in noisy, crowded conference environments.
Find out more about eye contact here.
Allistic people (that is, people who aren’t autistic) should be aware that many autistic people find eye contact challenging. If the person you’re talking to seems uncomfortable with eye contact, look at their face, but not directly at their eyes (basically, use the trick I mentioned earlier).
You’ll probably see a group of people already engaged in a conversation. If this is your nightmare…

…here’s how you handle it:
Feel free to join me at any conversational circle I’m in! I always keep an eye on the periphery for people who want to join in, and I’ll invite them.
In her book How to Work a Room, Susan RoAne talks about a conversation tool she refers to as “Observe, Ask, Reveal” or “OAR,” which is a way to make interactions feel more natural and engaging. It’s made up of three steps:
Observe. Notice something about the person you’re talking to, their surroundings, or the situation. This could be as simple as their choice of drink, something they’re carrying, or something happening in the room.
Ask. Follow your observation with a genuine, open-ended question. This invites the other person to share and keeps the conversation flowing.
Reveal. Share a little about yourself related to the topic, which helps build rapport and makes the exchange feel balanced rather than like an interrogation.
⚠️ Don’t overshare! TMI often backfires. Also, don’t overdo it with the questions — it should feel like a conversation, not an interrogation.
The idea behind OAR is to create an easy rhythm between listening and contributing to the conversation.
No, you don’t have to worry about scheduling or whether the coffee urns are full. By “being a host,” I mean doing some of the things that hosts do, such as introducing people, saying “hello” to wallflowers, and generally making people feel more comfortable.
Being gracious to everyone is not only good karma, but it’s also a good way to promote yourself. It worked out really well for me: when I first moved to Tampa, I simply attended events and helped out where I could, lending a hand at meetups. I gained a reputation for being helpful and knowledgeable, which led to invitations to speak at events, and I also wound up inheriting a couple of meetups as well!
Follow the Dev/Nexus hashtag — the official one is #devnexus — to find out what’s going on, and to find and connect with attendees online.
Lunch at Dev/Nexus is a great opportunity to meet people! Here are some tips for lunch…
1. Choose your table with intention
Arrive early if possible. This gives you more freedom to choose your spot.
Look for tables with a mix of people already seated and empty chairs. It’s easier to integrate into an existing conversation than to start from scratch with a fully empty table.
2. Use OAR (“observe, ask, reveal”) to break the ice
Follow the “observe, ask, reveal” conversational framework I wrote about earlier to talk to people at the table.
Example: “I see you got the Dev/Nexus hoodie — did you brave the merch line this morning?”
3. Introduce yourself to your immediate neighbors first
Turn to the people on your left and right, give your name, where you’re from, and a quick “pocket story” or conference-related detail.
Then, when there’s a pause in the group’s conversation, introduce yourself to the whole table. This makes you seem approachable, and you’re not barging into the conversation.
4. Keep the conversation inclusive
If you notice someone at the table isn’t speaking much, pull them in by looping back to them with a related question.
Avoid overly niche technical deep dives unless everyone’s into it.
5. Have a graceful exit
When lunch is wrapping up, thank the table for the conversation.
Swap contact details or LinkedIn with anyone you clicked with.
Mention to people at the table that you might see them in another session. If you know what sessions you’re attending after lunch, let them know!
Try these out at Thursday’s attendee party, as well as at Dev/Nexus’ other social events, including the karaoke event (taking place Thursday at 9:00 p.m. in the back room on the ground floor of the AC Hotel):
1. Organize your contacts soon after the conference
Review any business cards, LinkedIn connections, or conference app contacts you collected. Strike while the iron is hot — do this by the end of the following week!
Tag or note:
How you met
What you talked about
Any action items (e.g., “Send them article on API security”)
This makes your outreach to people feel more personal and less generic and spammy.
2. Send a brief, specific follow-up
Timing: ideally within 3 days of the conference.
Keep it short, but reference something from your conversation to jog their memory.
Example: “Great chatting with you at the Dev/Nexus lunch table about AI security. Here’s that GitHub repo I mentioned.”
3. Continue the conversation
Share a useful resource, article, or code snippet related to what you discussed.
Offer help or collaboration, even if it’s small. This shifts you from a “one-time meet” to a peer in their network.
4. Connect on the right channels
LinkedIn for professional connections and ongoing career updates.
GitHub for technical/code collaboration.
Twitter/X or Mastodon if you connected over shared interests in tech culture, events, or industry news.
5. Keep the relationship warm
Interact with their posts, star or fork their repos, or comment thoughtfully on something they’ve shared.
When you come across a relevant opportunity, event, or resource, send it their way with a short note.
6. Build a “conference alumni” list
Keep a lightweight spreadsheet or note with names, contact info, and event details.
Before your next Dev/Nexus (or other conference), skim this list so you can reconnect with past contacts.
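If you’d like that list to be more than a vague intention, a few lines of Python produce a lightweight CSV you can open as a spreadsheet. The field names here are just my suggestions:

```python
import csv
import io

# Suggested fields for a "conference alumni" contact list; adjust to taste.
FIELDS = ["name", "event", "how_we_met", "topics", "action_item"]

def save_contacts(contacts: list[dict], fileobj) -> None:
    """Write contact records to a CSV file-like object, one row per person."""
    writer = csv.DictWriter(fileobj, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(contacts)

# Usage: with open("alumni.csv", "w", newline="") as f: save_contacts(rows, f)
```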
Here’s what’s happening in the thriving tech scene in Tampa Bay and surrounding areas for the week of Monday, March 2 through Sunday, March 8!

Monday at 5:30 p.m. at TEKsystems (Tampa): Tampa Devs is holding a “Meet and Greet” event at the TEKsystems office in Westshore Tampa!
This is an opportunity to meet like-minded individuals in tech and mingle with the great folks over at TEKsystems. The best time to look for a job is before you’re laid off, and in this economy, that’s NOW.
TEKsystems is one of the world’s largest IT staffing and recruiting firms. They specialize in connecting companies with high-demand tech talent, spanning cloud, data, security, and digital transformation, through contract, contract-to-hire, and direct placement services.
Find out more and register here.
Monday at 6:30 p.m. online: Saint Petersburg AI Collaborative Intelligence Group presents Building multi-agent workflow with Claude Code – create agents and skills.
This will be a step by step journey and discussion into Claude Code skills and agent files. The objective is to build and deploy a project with a specialized workforce.
Find out more and register here.
Tuesday at 9:00 a.m. at Entrepreneur Collaborative Center (Tampa): Entrepreneurs Learning and Growth Hub presents Using AI to Improve Customer Experience.
This session focuses on how entrepreneurs can use AI to create more consistent, responsive, and personalized customer experiences without overwhelming their teams. They’ll explore how AI can support customer interactions across the entire journey—from first contact to ongoing communication—while maintaining authenticity and trust.
Find out more and register here.
Tuesday at 6:00 p.m. online: GDG Tampa Bay presents LingoWise — Building an Immersive Language Tutor with Google Cloud & Web AI.
Do you enjoy traveling to new places and exploring the culture? Are you frustrated that years of language apps still haven’t gotten you speaking? If so, then this talk is for you.
LingoWise is an AI-powered language tutor that actually gets you speaking. In this immersive demo, Liz walks through the full app experience: capturing vocabulary from the wild, building flashcards, practicing pronunciation, and having real conversations with an AI tutor. You’ll see how Google Cloud, Web AI in the browser, and Gemini come together to make it all work for beginners and more proficient speakers alike.
Find out more and register here.
Wednesday at 6:00 p.m. at Dracula’s Legacy (Tampa): Strive Networking presents General Business Networking.
Strive welcomes you to ‘General Business Networking’, Tampa’s hottest networking event! We are set in the perfect central location for all to join.
Are you interested in business? Want to elevate your career? Meet Consultants, Analysts of all types, Bankers, Software Engineers, Entrepreneurs, and more who are active in the space or are eager to get started.
**This will be a multi-vertical event**
There will be professionals with backgrounds in the arts, finance, tech, real estate, healthcare, and more. Come with an open mind, and leave with connections that wouldn’t be possible otherwise!
Pack some business cards and come with an open mind!
Find out more and register here.
Thursday at 5:30 p.m. at Tampa Bay Brewing Company (Oldsmar): Whether you’re in tech, are interested in IT, or just love connecting with amazing people, this is your chance to expand your network in a relaxed, fun setting. Bring your curiosity, your business cards, and your best stories!
Find out more and register here.
Friday at 10:00 a.m. at Shortwave Coffee (Tampa): Tampa Bay Designers presents Designer Cowork!
Are you sometimes designing remotely? Want to work among other designers for a day? Join other designers to find a place to sit and work together!
Find out more and register here.
Friday at 5:30 p.m. at The Canopy (St. Pete): Join Tampa Bay New-In-Tech for a casual “New in Tech” hangout and networking meetup. No formal presentations, no pressure — just good conversations, real connections, and a chance to meet others who are navigating the tech world.
Whether you’re just starting out, transitioning into tech, or already working in the industry, this is a laid-back space to connect, share experiences, and expand your circle.
Come grab a drink, relax, and meet like-minded people in tech. Sometimes the best opportunities start with casual conversations.
Find out more and register here.
| Event name and location | Group | Time |
|---|---|---|
| Computer Repair Clinic (2079 Range Rd) | Tampa Bay Technology Center | 8:30 AM to 12:30 PM EST |
| Designer Cowork @ Shortwave Coffee (Channelside) | Tampa Bay Designers (Formerly Tampa Bay UX) | 10:00 AM to 1:00 PM EST |
| New in Tech Meetup – Canopy, St Pete (The Canopy) | Tampa Bay New-In-Tech | 5:30 PM to 7:30 PM EST |
| Friday Board Game Night (Bridge Club) | Tampa Gaming Guild | 5:30 PM to 11:00 PM EST |
| MTG: Commander FNM (Critical Hit Games) | Critical Hit Games | 6:00 PM to 11:00 PM EST |
| Taps & Drafts \| EDH/MtG Night (1Up Entertainment, Tampa) | Nerdbrew Events | 7:00 PM to 9:00 PM EST |
| Modern FNM (Sunshine Games \| Magic the Gathering, Pokémon, Yu-Gi-Oh!) | Sunshine Games | 7:00 PM to 10:30 PM EST |
| DIFFERENT LOCATION! “On Anger” – Seneca, Books 1 & 2 (USF Tampa College of Education) | Tampa Stoics | 7:00 PM to 9:00 PM EST |
| Friday Pokemon Tournament (Sunshine Games \| Magic the Gathering, Pokémon, Yu-Gi-Oh!) | Sunshine Games | 7:30 PM to 11:30 PM EST |
| Return to the top of the list | | |

Last Thursday, February 19th, Tampa Java User Group welcomed Pratik Patel, Java Champion and Director of Developer Relations at Azul Systems, to give his AI Native Architecture talk at Kforce headquarters. Tampa Bay AI Meetup was happy to partner with Tampa JUG, and we thank Ammar Yusuf for the invite!

We had a pretty full room…
…followed by an accordion number…
…followed by Pratik’s presentation.
Here are my notes from the presentation:
Here’s a fun “icebreaker” game to try at your next tech gathering: ask the room to name the three fundamental types of AI, and watch what happens.
When I tried it on the crowd at last Thursday’s joint Tampa Java User Group / Tampa Bay AI Meetup event, a lot of people called out “generative AI,” which was hardly a surprise.
We came close to, but didn’t directly name, the second kind: predictive analysis. It’s the kind of AI that’s been quietly running inside every credit card transaction you’ve made for the past decade. It saved me a lot of headache last year when someone used my credit card number to buy enough gas to fill an F-250 in rural Georgia while I was having a poke bowl in St. Pete. A neural network detected the mismatch between the gas-guzzler purchase and my usual spending and location patterns, which led to a text from the credit card company, and my immediate “That wasn’t me” response.
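As a toy illustration of the idea (nothing like the real neural network a card issuer runs, and all the data shapes here are made up), a predictive check might look like this:

```python
def flag_transaction(amount: float, location: str, history: list[dict]) -> bool:
    """Flag a transaction if its amount is far outside the user's usual
    range AND the location doesn't match recent activity. A toy stand-in
    for the learned scoring models real card issuers use."""
    amounts = [t["amount"] for t in history]
    mean = sum(amounts) / len(amounts)
    # Simple spread measure (standard deviation); real systems learn this.
    spread = (sum((a - mean) ** 2 for a in amounts) / len(amounts)) ** 0.5
    unusual_amount = abs(amount - mean) > 3 * max(spread, 1.0)
    unusual_location = location not in {t["location"] for t in history[-20:]}
    return unusual_amount and unusual_location
```

A history full of St. Pete poke bowls plus a sudden large rural-Georgia gas purchase trips both conditions, which is exactly the mismatch that got me that text message.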
None of us got the third one: time-series AI. It’s the branch that looks at data across time to spot trends and make forecasts. Not “Will Joey buy 50 gallons of gas in rural Georgia?” but “What has Joey been buying every Friday evening for the past two years, and what does that predict about next Friday?”
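A minimal sketch of the time-series idea, assuming nothing fancier than a trailing average (real time-series models, like ARIMA or temporal neural networks, also handle trend and seasonality):

```python
def moving_average_forecast(history: list[float], window: int = 4) -> float:
    """Forecast the next value as the mean of the last `window` observations.
    The core time-series idea: past values across time inform a
    prediction about the future."""
    if len(history) < window:
        window = len(history)
    recent = history[-window:]
    return sum(recent) / len(recent)

# Hypothetical Friday-evening spending over the past six weeks:
friday_spend = [42.0, 38.5, 41.0, 40.5, 39.0, 43.5]
print(moving_average_forecast(friday_spend))  # → 41.0
```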
Pratik kicked off his talk on AI-native architecture with this. By the time he was done, we’d gotten a serious rethink of not just what kinds of AI exist, but what it actually means to build an application with AI at its core, as opposed to just bolting AI onto the side and hoping for a stock price bump.
One of the central arguments Pratik made is that data is what separates a defensible business from one that can be replicated by a developer with a generous cloud credit and a free afternoon.
He used Penske Truck Leasing as his example. Anyone can, theoretically, buy a bunch of trucks and stand up a website. What you can’t easily replicate is a decade of auction data, bidding history, customer behavioral patterns, and operational intelligence. That data is what lets Penske do something like: identify a customer who bid on a truck but didn’t win the auction, then automatically reach out to offer them a similar vehicle. The data made it obvious, and a system acted on it.
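Here’s a hypothetical sketch of that kind of data-driven follow-up, with made-up record shapes; Penske’s actual system is certainly far more sophisticated:

```python
# Hypothetical data shapes (not Penske's system): find bidders who lost
# an auction, then match them to similar still-available trucks so an
# outreach system can follow up automatically.

def losing_bidders(auction: dict) -> list[str]:
    """Everyone who bid but didn't win."""
    return [b for b in auction["bidders"] if b != auction["winner"]]

def similar_trucks(truck: dict, inventory: list[dict]) -> list[dict]:
    """Same model class, within 20% of the auctioned truck's price."""
    return [t for t in inventory
            if t["model_class"] == truck["model_class"]
            and abs(t["price"] - truck["price"]) <= 0.2 * truck["price"]]
```

The logic is trivial; the moat is the decade of auction and bidding history that feeds it.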
This is why the old saying “data is the new oil” is more apt than it sounds. Raw oil isn’t useful until it’s refined. Raw data sitting in an S3 bucket isn’t useful either, until it’s refined too: cleaned, structured, and used to power an application that your competitors simply don’t have the history to replicate. This kind of competitive advantage is referred to as a moat.
In this new world, where anyone can vibe-code a decent SaaS clone in an afternoon using AI tools, your proprietary data may be that moat protecting you from someone in their mom’s basement with good taste and ambition.
Pratik laid out a three-layer view of what an AI application architecture actually looks like in the real world. It was a helpful map for the “who does what” question that comes up whenever engineering teams start building this stuff.
On the left side is data acquisition and preprocessing. This comprises tools like Apache Kafka for event streaming, Apache Iceberg as a data layer that lets multiple teams share the same underlying datasets without tripping over each other, and Spark for processing data at scale. This is where collection, cleaning, and transformation happen. It’s also where most AI projects quietly die, because the data turns out to be messier than anyone admitted during planning.
In the middle is model building and fine-tuning. Pratik was direct here: your company is almost certainly not going to train its own large language model from scratch. Estimates for what it cost to train GPT-5 range from $100 million to over a billion dollars in GPU time. Unless “Uncle Larry” is personally funding your AI initiative, you’re going to use an off-the-shelf model from OpenAI, Google (Gemini), or Anthropic (Claude), or one of the increasingly capable open-weight models like DeepSeek or Alibaba’s Qwen3. The Python ecosystem owns this tier for now, thanks to its long history in data science and extensive libraries, though Java options like Deeplearning4j are maturing.
On the right is inference and integration, which is where most application developers will actually spend their time. This is the code you write to orchestrate models, retrieve relevant context, handle the results, and deliver a useful experience to users. This is also where AI-native thinking diverges sharply from “AI bolted on,” which Pratik spent considerable time on.
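The left-hand tier’s cleaning and transformation work can be illustrated framework-free (the record shape is invented; at real scale you’d do this in Spark, not plain Python):

```python
def clean_records(raw: list[dict]) -> list[dict]:
    """Drop incomplete records, normalize types, and deduplicate:
    a miniature version of the preprocessing tier's job."""
    seen = set()
    cleaned = []
    for rec in raw:
        if not rec.get("user_id") or rec.get("amount") is None:
            continue  # incomplete record: discard
        key = (rec["user_id"], rec.get("timestamp"))
        if key in seen:
            continue  # duplicate event: discard
        seen.add(key)
        cleaned.append({
            "user_id": str(rec["user_id"]),
            "amount": float(rec["amount"]),
            "timestamp": rec.get("timestamp"),
        })
    return cleaned
```

Multiply this by thousands of fields and billions of events and you can see why so many AI projects die in this tier.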
Here it is: LLMs are non-deterministic, and that changes everything about how you build software.
Traditional software is built on deterministic foundations. If you write a database query that asks for a specific user profile, you will get the exact same answer every time: that user’s profile. The result is deterministic, and it’s reliable in a way that software developers have spent decades taking for granted.
LLMs don’t work that way. Ask the same question twice and you may get meaningfully different answers. That’s just a fundamental property of how token prediction and the attention mechanism work. The model doesn’t do the deterministic thing and look up an answer. Instead, it generates an answer based on probabilistic similarity to everything it has ever been trained on.
When the generated answer is wrong, we call it hallucination. But the more accurate framing is that hallucination is the shadow side of the same capability that makes these models useful at all.
(Joey’s note: I like to say “All LLM responses are hallucinations. It’s just that some hallucinations are useful.”)
For casual applications, such as “Find me a bar with karaoke near downtown Tampa,” we can put up with a certain amount of “wrongness.” You go there, find out there’s no karaoke, drink anyway, call it a night. However, for a system that’s analyzing medical imaging and flagging potential tumors, our tolerance for wrongness is zero, and “the model felt pretty confident” is not an acceptable answer.
The emerging approaches to this are interesting: evaluation frameworks built into tools like Spring AI and LangChain that let you run suites of tests against model outputs; and something called “LLM as a judge,” where you use a second model to evaluate the outputs of the first. Ask OpenAI a question, get an answer, hand both the question and the answer to Gemini and say: “Does this look right?” It’s new, it’s imperfect, and it’s the current state of the art.
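The “LLM as a judge” loop has a simple shape. In this sketch, the `answerer` and `judge` callables are stand-ins for real model API calls, which I’m deliberately leaving abstract:

```python
from typing import Callable

def judged_answer(question: str,
                  answerer: Callable[[str], str],
                  judge: Callable[[str], str],
                  max_retries: int = 2) -> tuple[str, bool]:
    """Ask one model for an answer, then ask a second model to grade it.
    Returns (answer, approved); re-asks a few times on a failing grade."""
    answer = ""
    for _ in range(max_retries + 1):
        answer = answerer(question)
        verdict = judge(
            f"Question: {question}\nAnswer: {answer}\n"
            "Does this answer look correct? Reply PASS or FAIL."
        )
        if "PASS" in verdict.upper():
            return answer, True
    return answer, False
```

Frameworks like Spring AI and LangChain wrap patterns like this in proper evaluation suites, but the two-model structure is the same.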
The good news, as Pratik put it: everyone is early. You are not behind.
Don’t let the $20/month subscription price fool you into thinking AI inference is cheap at scale.
Pratik made the case that inference costs are not going to come down dramatically anytime soon, and offered some uncomfortable data points in support. Moore’s Law, Intel cofounder Gordon Moore’s observation that transistor density on chips doubles roughly every two years, is effectively dead. We’re at single-digit-nanometer chip fabrication nodes, and at that level of miniaturization, you’re really starting to fight the laws of physics.
GPU prices have gone in the opposite direction of what you might hope: Nvidia’s RTX 5090, the top consumer-grade card, has gone from roughly $2,000 at launch to $4,000 on the secondary market. RAM prices have spiked because every data center on Earth is buying it for AI workloads. When Pratik noticed RAM prices shooting up, he moved money into Western Digital and Seagate stock. He may be onto something.
The practical upshot for developers building applications: if you’re running hundreds of evaluation tests per hour during development (which is what you should be doing, given the non-determinism problem described above), burning frontier-model tokens for all of that is going to get expensive fast.
Pratik’s solution is to do the bulk of development testing against locally run open-weight models via Ollama. His current recommendations: qwen3-coder for coding-adjacent tasks (and it legitimately does not phone home; I’ve run Wireshark to confirm) and nemotron from Nvidia for more general work. Then switch to the frontier model for production and final evaluation. Your laptop handles the iteration, and the cloud handles the deployment.
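One way to wire up that laptop-vs-cloud split is a small routing function. The local model names are the ones Pratik recommended; the environment-variable convention and the frontier-model placeholder are my own invention:

```python
import os

# Local open-weight models (run via Ollama) for cheap iteration;
# a frontier model for production and final evaluation runs.
LOCAL_MODELS = {"coding": "qwen3-coder", "general": "nemotron"}
FRONTIER_MODEL = "your-frontier-model-here"  # placeholder: pick your own

def pick_model(task: str = "general") -> str:
    """Route to a local model unless we're explicitly in production."""
    if os.environ.get("APP_ENV") == "production":
        return FRONTIER_MODEL
    return LOCAL_MODELS.get(task, LOCAL_MODELS["general"])
```

During development, every evaluation run hits the free local model; flipping one environment variable switches the same code path to the paid frontier model.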
You’ve heard this story before, even if you don’t immediately recognize it.
Pratik brought up an old term: sneakernet. That’s from the era when all software was executables running on your machine, and deploying software meant physically walking to a user’s desk with a floppy disk. Then came the cloud, and suddenly continuous deployment became a thing, and anyone still doing quarterly releases felt like a relic.
But here’s what’s easy to forget: cloud native wasn’t just about faster deploys. It forced a complete rethink of how applications are designed, how they’re operated, and how they fail. The servers went from being pets (named, tended, mourned when they died) to being cattle (anonymous, disposable, replaced without ceremony). This called for a different approach.
Pratik’s central argument is that we’re at exactly that same inflection point with AI, and that most companies are going to blow it, at least initially.
When your boss comes in and says “put some AI in the product so our stock price goes up” (Pratik confirmed this is a real conversation people are having in real offices, not a joke), the tempting response is to bolt on a RAG endpoint, add “AI-powered” to the marketing copy, and call it a day. Retrieve some relevant documents. Stuff them into a prompt. Return a plausible-sounding answer. Ship it!
That’s not AI-native. That’s sneakernet with an LLM duct-taped to it.
An AI-native system learns, adapts, and acts autonomously. Not when a user presses a button. Proactively, in response to new data, with judgment that improves over time.
Pratik described the evolution of his own download analytics system as a concrete example. It started as “AI bolted on,” with a natural language interface that let people query a Spark cluster without writing SQL. Useful. Not native.
Over the past year and a half, he rebuilt it into something different: a system that monitors weekly data feeds, detects when something has changed (for example, a spike in Java 17 downloads), connects that to relevant context from an internal knowledge base (there was a critical security patch), and proactively sends him a synthesized briefing before he even thinks to ask. He still reviews it. But the thinking now happens without him.
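The shape of that pipeline — detect a change, attach context, brief the human before they ask — is worth seeing in miniature. This is a toy sketch of the pattern, not Pratik’s actual system; all the function names, thresholds, and data are hypothetical:

```python
# Toy sketch of a proactive "AI-native" pipeline: flag week-over-week
# anomalies in a feed, pair them with knowledge-base context, and emit a
# briefing unprompted. Everything here is illustrative.

def detect_spikes(this_week: dict, last_week: dict, threshold: float = 1.5) -> list:
    """Flag any metric that grew by more than `threshold`x week over week."""
    return [k for k, v in this_week.items()
            if last_week.get(k, 0) and v / last_week[k] > threshold]

def build_briefing(spikes: list, knowledge_base: dict) -> str:
    """Pair each spike with whatever context the knowledge base has for it."""
    lines = [f"{k}: up sharply. Context: {knowledge_base.get(k, 'none on file')}"
             for k in spikes]
    return "\n".join(lines) if lines else "No anomalies this week."
```

In a real AI-native system, the change detection and the synthesis steps would be where the model does its work; the point of the sketch is the control flow — the system initiates, the human reviews.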
The hotel booking example he used to illustrate the idea is even more vivid. Pratik has a specific, consistent set of hotel preferences: he wants to be within walking distance of wherever he’s speaking, the gym needs to be a real gym (not a treadmill and a motivational poster — Hotel 5 in Seattle, I’m lookin’ right at you), and he always searches by exact address rather than city name. He does this exact sequence of clicks every single time he books a hotel. An AI-native Marriott system would see this behavioral pattern, learn from it, and surface the right three options without him having to do any of that manual filtering. Not because someone programmed “Pratik likes gyms” into a rule engine, but because the system observed his behaviors, inferred some patterns, and generalized.
Could you do all of this algorithmically? Technically, yes. But think about it: you’d be writing bespoke preference logic for millions of users with different, compounded, evolving preferences, and you’d be doing it forever. The whole point of using an LLM here is that you’re borrowing its capacity for generalization instead of hand-coding every case yourself.
Pratik offered a measured take on the current agentic AI frenzy. Agents can act, but do they actually learn from what they’ve done? That’s the gap between today’s agentic frameworks and a genuinely AI-native system. Agents are probably not going away because they’re real and useful, but the framing will shift again in six months (that’s just how this space works). The best approach is to build the fundamentals, not the hype.
On fine-tuning: if you need a model that’s deeply specialized for a domain, you don’t have to build an LLM from scratch. Low-Rank Adaptation (LoRA) lets you take an existing large model and attach a domain-specific adapter that shifts its weights toward your area of expertise. OpenAI’s recently released finance-specific model, built in collaboration with Goldman Sachs and trained on a large corpus of financial data, is exactly this. The base model does the heavy lifting. The adapter makes it fluent in corn futures.
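The arithmetic behind LoRA is surprisingly small. Instead of retraining a full weight matrix W, you train two skinny matrices A and B whose product nudges W toward the new domain. Here’s the core idea with toy-sized NumPy matrices (real models apply this per attention layer, at far larger dimensions):

```python
# LoRA in miniature: the frozen weights W stay put; only the low-rank
# matrices A and B are trained. Dimensions are toy-sized for illustration.
import numpy as np

d, k, r = 512, 512, 8                    # layer dims and the low rank
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))          # frozen pretrained weights
A = rng.standard_normal((r, k)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection (starts at zero)
alpha = 16                               # scaling factor from the LoRA paper

W_adapted = W + (alpha / r) * (B @ A)    # effective weights at inference time

full_params = W.size                     # 262,144 params to fine-tune naively
lora_params = A.size + B.size            # 8,192 trainable params instead
```

Because B starts at zero, the adapter initially changes nothing; training moves only those 8,192 numbers instead of the quarter-million in W, which is why LoRA fine-tuning fits on hardware that full fine-tuning never would.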
On RAG: retrieval-augmented generation is essentially fancy-pants prompt stuffing. You find the documents most relevant to a user’s query, pull them in, and let the model reason over them. It’s the right approach for a lot of use cases, but it’s not magic, and it works best when your underlying data is actually clean and well-structured. Remember the greybeard saying: “Garbage in, garbage out,” a principle that the age of AI has managed to make both more important and more dangerous, since we can now generate garbage at industrial scale.
If you walked away from Pratik’s talk with one thing, it should probably be this: the fundamental shift AI requires isn’t technical. It’s conceptual. Just like cloud native forced you to stop thinking about servers as permanent fixtures and start thinking about them as fungible infrastructure, AI native requires you to stop thinking about AI as a feature you add to an application and start thinking about it as the substrate the application is built on.
The application that learns. The application that adapts. The application that wakes up when new data arrives and starts thinking before you ask it to.
That’s the goal. We’re early. The tools are changing fast. But the direction is clear, and the developers who internalize that shift now, rather than bolting features on and hoping for a stock price bump, are going to be the ones building the interesting stuff.
If you’d like to dive deeper into what Pratik was talking about, he has companion sample apps. The details are in this picture: