The event will be an opportunity to mingle with fellow and aspiring technologists, enjoy refreshments and an engaging lightning talk on the latest in Artificial Intelligence.
The venue: Thrive DTSP
Thrive DTSP from the outside.
Connect with Tampa Bay professionals, share ideas, and explore the future of AI in the vibrant community setting of the coworking space known as Thrive DTSP!
Inside Thrive DTSP.
The lightning talk: James Gress
James Gress speaking at Civo Navigate North America 2024, Austin, Texas.
In addition to networking, there’ll also be a quick talk on AI, delivered by James Gress, Director at Accenture for Leading Emerging Technologies!
The participating tech meetup groups
This will be a meetup of meetups, and the participating meetups will be:
It’ll be an opportunity-rich environment for networking with like-minded individuals and connecting with recruiters and professionals who can help advance your career or support your transition into a new field!
When I say that manufacturers are trying to put AI into everything, I mean everything. Case in point, here’s an ad for AI condoms by Manforce, a condom brand in India…
…and of course, there’s also an app, which you connect to the condom by scanning a QR code.
I’ve been seeing a concerning trend over the past couple of months, and perhaps you have too, where people are becoming increasingly reliant on AI for coding, and it might not be working out well for most of them.
Disclamer, I’m a moron who worked on the same project without thinking about the risk that Cursor could break everything. Yesterday, Cursor (even though I only asked it to feed a view on my UI) destroyed months of development.
My question: How do you back up your projects/versions to ensure that the next action on cursor is reversible? Ops!
Also, I know that while I’m the concern, cursor isn’t the only culprit, it’s also Claude (while good overall) still has some flaws
Don’t take the misspellings and strange grammar as a sign of a lack of smarts — there are “tells” such as the pluralization of “work” that suggest that the author’s first language isn’t English. And in a follow-up comment, they wrote:
I’m not a dev or engineers at all (just a geek working in Finance)
So what I see is someone with the mental capacity to master another language, seeing a problem in their area of expertise that could be solved by an application, and then setting out to build that application with the assistance of AI, even though programming isn’t something they’re familiar with.
First, I think we should celebrate that kind of go-getter attitude.
Second, those of you who are programmers have already seen the post’s author’s rookie mistake. It’s in this question:
My question: How do you back up your projects/versions to ensure that the next action on cursor is reversible?
You probably thought: Of course, they don’t know version control exists!
At the moment, even the best LLM will simply focus on answering the user’s questions and not stray too far to make helpful asides or ask clarifying questions, such as “Have you heard of Git?”
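In case you haven’t: the fix for that post’s author is version control. Here’s a minimal sketch of using Git as a safety net before letting an AI coding tool loose on a project (the commit message and workflow here are just illustrations; adapt them to your own setup):

# One-time setup: turn the project folder into a Git repository
git init

# Before letting the AI touch anything, snapshot the current state
git add .
git commit -m "Checkpoint before AI-assisted changes"

# If the AI makes a mess of your working files, roll back to the snapshot
git restore .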
Something’s been bugging me about how new devs learn and I need to talk about it.
We’re at this weird inflection point in software development. Every junior dev I talk to has Copilot or Claude or GPT running 24/7. They’re shipping code faster than ever. But when I dig deeper into their understanding of what they’re shipping? That’s where things get concerning.
Sure, the code works, but ask why it works that way instead of another way? Crickets. Ask about edge cases? Blank stares.
The foundational knowledge that used to come from struggling through problems is just… missing.
We’re trading deep understanding for quick fixes, and while it feels great in the moment, we’re going to pay for this later.
The first line in the following section shouldn’t really be shocking, but it still feels shocking:
I recently realized that there’s a whole generation of new programmers who don’t even know what StackOverflow is.
With AI, these junior developers gain speed of delivery, but at the cost of understanding what the code they deliver actually does. Which means that they can’t maintain or modify what they built — at least, not without even more AI assistance. Over time, what they build becomes a collection of quick fixes arranged together without any consideration of the system as a whole. That’s a whole lot of tech debt.
Forrest Knight shares more thoughts on this article in this video:
In case you’re not familiar with the name, Andrej Karpathy has forgotten more about computer science and AI than most of us will ever learn. He was the director of artificial intelligence and Autopilot Vision at Tesla, and also worked at OpenAI, where he specialized in deep learning and computer vision. He also has a YouTube channel that’s worth checking out if you really want to boost your AI/ML skills.
Here’s the text of his tweet:
There’s a new kind of coding I call “vibe coding”, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It’s possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like “decrease the padding on the sidebar by half” because I’m too lazy to find it. I “Accept All” always, I don’t read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I’d have to really read through it for a while. Sometimes the LLMs can’t fix a bug so I just work around it or ask for random changes until it goes away. It’s not too bad for throwaway weekend projects, but still quite amusing. I’m building a project or webapp, but it’s not really coding – I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.
This is great for Karpathy, but I’ve already talked with developers who’ve fully embraced the first part of the tweet, where Karpathy throws a lot of work to the AI. The problem is that they’re ignoring these key points from the second part:
The code grows beyond my usual comprehension, I’d have to really read through it for a while.
Sometimes the LLMs can’t fix a bug so I just work around it or ask for random changes until it goes away.
It’s not too bad for throwaway weekend projects, but still quite amusing.
And let’s not forget the last three words of his tweet: it mostly works.
Karpathy is very, very good at coding and has lots of experience. He’s internalized a lot of best practices, developed an instinct for programming, and can spot “code smells” a mile away.
The people who’ve been talking to me about getting into “vibe coding” are not Karpathy, and some of them have mentioned that they have that increasingly common problem where they say “I know how to use my programming language and framework, but I don’t know how to apply what I know to build an application from the ground up.”
They’re not ready for vibe coding, but they’re doing it anyway. If your main gig involves working with code — and especially working with other people’s code — you’d better prepare for some interesting times over the next few years.
Wouldn’t it be nice if there were a way to get some help customizing your resume for every job application you have to fill out?
With Anitra and me leading you through the steps, you’re going to build just that on Monday, March 17th at the Tampa Bay AI Meetup at Embarc Collective. Along with us, you’ll code up an AI-powered application that takes two inputs…
✅ Your resume
✅ The job description of a job you’re applying for
…and produces a version of your resume that’s been fine-tuned to:
Present you in the best possible light
Make changes so that your experiences, talents, and achievements show that you’re an excellent fit for the job
Tune your experience and skills to better match the job requirements
Update your resume to use key words and phrases from the job description
Correct spelling and grammar mistakes
Fix phrasing to be more clear and concise
Improve sentence structure and use action verbs
Bring your laptop! We’ll provide you with a “starter” project and access to an AI account, and we’ll walk you through the process of writing a Python app in Jupyter Notebook that does what we described above. In the process, you’ll learn:
About Jupyter Notebook, one of the preferred tools for AI and data science
How to make calls to an AI API
How to build an application based on an AI API
You will leave the meetup with a working Python app that does what we’ve described above, helping you fine-tune your resume for specific job applications!
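If you’ve never made a call to an AI API before, it may help to know that under the hood, it’s just an HTTP request. Here’s a rough sketch using curl and OpenAI’s chat completions endpoint; the provider, model, and prompt here are my own placeholders, and the meetup’s starter project wraps the same idea in Python:

# A minimal AI API call (assumes an OpenAI account and an API key in OPENAI_API_KEY)
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {"role": "system", "content": "You are a resume-tailoring assistant."},
      {"role": "user", "content": "Tailor this resume to this job description: ..."}
    ]
  }'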
For the longest time, I’d been using iTerm2 as my terminal app on the Mac, but switched to Warp a couple of years ago. Its smart completions and IDE-like editing are incredibly useful features, but the thing that really makes it stand apart is the AI assistance.
Here’s a recent example. I’m more familiar with mobile and Python frameworks than I am with JavaScript and TypeScript ones, so I never know how to launch a JS/TS application when the developer hasn’t provided a README.
Normally, I’d start poking around the filesystem, looking at config files and other artifacts and doing some Googling to see which framework they belonged to. But this time, I decided “Why not let the AI do the scut work?” So that’s what I did.
I opened a new tab in Warp and entered:
Okay, Warp, I'm in a directory of an application written in
JavaScript (or possibly TypeScript), and need to run it.
But I don't know which framework it uses, and there are more
of them than there are Pokemon. Help me run whatever this
thing is, please.
It replied with:
First, let's check for common configuration files that will
tell us what kind of JavaScript/TypeScript application
this is. Looking for package.json, angular.json,
vue.config.js, or other framework-specific files.
It then asked for permission to execute the command ls -la, which I deemed harmless (a directory listing in long format, including all files).
Warp responded with:
I can see this is a Next.js application (due to the presence
of .next directory, next.config.mjs, and next-env.d.ts).
It's using Yarn as the package manager (yarn.lock present).
Let me check the package.json to see what scripts are
available to run the application.
It asked for permission to do something again — this time, to execute cat package.json (print out, or conCATenate, the contents of the file package.json) so that it could read it. I gave my approval, and it responded with the steps necessary to run the application:
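For a typical Next.js project that uses Yarn, those steps usually boil down to something like this (the exact scripts depend on the project’s package.json):

# Install the dependencies listed in package.json
yarn install

# Run the project's "dev" script, which starts the Next.js development
# server (usually at http://localhost:3000)
yarn dev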
Features like this are why I use Warp and other AI tools!
Right now, Warp is available for macOS, as well as Debian, Fedora, Red Hat, SUSE, and Ubuntu flavors of Linux. They’re working on a Windows version too.
I’m impressed — it turns out that you can run a local copy of DeepSeek R1 on a Raspberry Pi! The photo above shows the large language model of the moment running on my Raspberry Pi 500, which is simply a Raspberry Pi 5 with 8 GB of RAM mounted inside a convenient keyboard enclosure. DeepSeek R1 runs slowly on this machine, but it does run!
Fortunately, the installation process is pretty straightforward…
Ensure the Raspberry Pi’s system is up to date by running the following commands in the terminal:
sudo apt update
sudo apt upgrade -y
Download and run the Ollama installer:
curl -fsSL https://ollama.com/install.sh | sh
Download and start the DeepSeek model of your choice. I decided to see if the Raspberry Pi 500 could handle the 8b model, and it can:
ollama run deepseek-r1:8b
The ollama run command first checks whether you already have the model specified after the command — in this case, the “8b” version of the DeepSeek R1 model, which is specified by deepseek-r1:8b.
If you don’t have it, it downloads the model and supporting files first. If you already have the model, it runs it.
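You’re not limited to the interactive prompt, either. Ollama also serves a local HTTP API on port 11434, so you can send prompts to DeepSeek R1 from scripts and other programs. For example:

# Send a prompt to the locally running model via Ollama's HTTP API
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:8b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'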