
The brutally honest AI career playbook: Insights from a Stanford CS230 AI class

If you watch just one AI video before Christmas, make it lecture 9 from AI pioneer Andrew Ng’s CS230 class at Stanford, which is a brutally honest playbook for navigating a career in Artificial Intelligence.

You can watch the video of the lecture on YouTube.

The class starts with Ng sharing some of his thoughts about the AI job market before handing the reins over to guest speaker Laurence Moroney, Director of AI at Arm, who offers the students a grounded, strategic view of the shifting landscape, the commoditization of coding, and the bifurcation of the AI industry.

Here are my notes from the video. They’re a good guide, but the video is so packed with info that you really should watch it to get the most from it!

The golden age of the “product engineer”

Ng opened the session with optimism, saying that this current moment is the “best time ever” to build with AI. He cited research suggesting that every 7 months, the complexity of tasks AI can handle doubles. He also argued that the barrier to entry for building powerful software has collapsed.

Speed is the new currency! The velocity at which software can be written has increased dramatically, largely due to AI coding assistants. Ng admitted that keeping up with these tools is exhausting (his “favorite tool” changes every three to six months), but doing so is non-negotiable. He noted that being even “half a generation behind” on these tools results in a significant productivity drop. The modern AI developer needs to be hyper-adaptive, constantly relearning their workflow to maintain speed.

The bottleneck has shifted to what to build. As writing code becomes cheaper and faster, the bottleneck in software development shifts from implementation to specification.

Ng highlighted a rising trend in Silicon Valley: the collapse of the Engineer and Product Manager (PM) roles. Traditionally, companies operated with a ratio of one PM to every 4–8 engineers. Now, Ng sees teams trending toward 1:1 or even collapsing the roles entirely. Engineers who can talk to users, empathize with their needs, and decide what to build are becoming the most valuable assets in the industry. The ability to write code is no longer enough; you must also possess the product instinct to direct that code toward solving real problems.

The company you keep: Ng’s final piece of advice focused on network effects. He argued that your rate of learning is predicted heavily by the five people you interact with most. He warned against the allure of “hot logos” and joining a “company of the moment” just for the brand name and prestige-by-association. He shared a cautionary tale of a top student who joined a “hot AI brand” only to be assigned to a backend Java payment processing team for a year. Instead, Ng advised optimizing for the team rather than the company. A smaller, less famous company with a brilliant, supportive team will often accelerate your career faster than being a cog in a prestigious machine.

Surviving the market correction

Ng handed over the stage to Moroney, who started by presenting the harsh realities of the job market. He characterized the current era (2024–2025) as “The Great Adjustment,” following the over-hiring frenzy of the post-pandemic boom.

The three pillars of success

To survive in a market where “entry-level positions feel scarce,” Moroney outlined three non-negotiable pillars for candidates:

  • Understanding in depth: You can’t just rely on high-level APIs. You need academic depth combined with a “finger on the pulse” of what is actually working in the industry versus what is hype.
  • Business focus: This is the most critical shift. The era of “coolness for coolness’ sake” is over. Companies are ruthlessly focused on the bottom line. Moroney put a spin on the classic advice, “Dress for the job you want, not the job you have,” suggesting that job-seekers “not let your output be for the job you have, but for the job you want.” He based this on his own experience of landing a role at Google not by preparing to answer brain teasers, but by building a stock prediction app on their cloud infrastructure before the interview.
  • Bias towards delivery: Ideas are cheap; execution is everything. In a world of “vibe coding” (a term he doesn’t like — he prefers something more like “prompting code into existence” or “prompt coding”), what will set you apart is the ability to actually ship reliable, production-grade software.

The trap of “vibe coding” and technical debt: Moroney addressed the phenomenon of using LLMs to generate entire applications. The approach may be powerful, but he warned that it also creates massive “technical debt.”

The four realities of modern AI work

Moroney outlined four harsh realities that define the current workplace, warning that the “coolness for coolness’ sake” era is over. These realities represent a shift in what companies now demand from engineers.

1. Business focus is non-negotiable. Moroney noted a significant cultural “pendulum swing” in Silicon Valley. For years, companies over-indexed on allowing employees to bring their “whole selves” to work, which often prioritized internal activism over business goals. That era is ending. Today, the focus is strictly on the bottom line. He warned that while supporting causes is important, in the professional sphere, “business focus has become non-negotiable.” Engineers must align their output directly with business value to survive.

2. Risk mitigation is the job. When interviewing, the number one skill to demonstrate is not just coding, but the ability to identify and manage the risks of deploying AI. Moroney described the transition from heuristic computing (traditional code) to intelligent computing (AI) as inherently risky. Companies are looking for “Trusted Advisors” who can articulate the dangers of a model (hallucinations, security flaws, or brand damage) and offer concrete strategies to mitigate them.

3. Responsibility is evolving. “Responsible AI” has moved from abstract social ideals to hardline brand protection. Moroney shared a candid behind-the-scenes look at the Google Gemini image generation controversy (where the model refused to generate images of Caucasian people due to over-tuned safety filters). He argued that responsibility is no longer just about “fairness” in a fluffy sense; it is about preventing catastrophic reputational damage. A “responsible” engineer now ensures the model doesn’t just avoid bias, but actually works as intended without embarrassing the company.

4. Learning from mistakes is constant. Because the industry is moving so fast, mistakes are inevitable. Moroney emphasized that the ability to “learn from mistakes” and, crucially, to “give grace” to colleagues when they fail is a requirement. In an environment where even the biggest tech giants stumble publicly (as seen with the Gemini launch), the ability to iterate quickly after a failure is more valuable than trying to be perfect on the first try.

Technical debt

Just like a mortgage, technical debt isn’t inherently bad, but you must be able to service it. He defined the new role of the senior engineer as a “trusted advisor.” If a VP prompts an app into existence over a weekend, it is the senior engineer’s job to understand the security risks, maintainability, and hidden bugs within that spaghetti code. You must be the one who understands the implications of the generated code, not just the one who generated it.

The dot-com parallel: Moroney drew a sharp parallel between the current AI frenzy and the dot-com bubble of the late 1990s. While he acknowledged that we are undoubtedly in a financial bubble, with venture capital pouring billions into startups with zero revenue, he emphasized that this does not mean the technology itself is a sham.

Just as the internet fundamentally changed the world despite the 2000 crash wiping out “tourist” companies, AI is a genuine technological shift that is here to stay. He warned students to distinguish between the valuation bubble (which will burst) and the utility curve (which will keep rising), advising them to ignore the stock prices and focus entirely on the tangible value the technology provides.

The bursting of this bubble, which Moroney terms “The Great Adjustment,” marks the end of the “growth at all costs” era. He argues that the days of raising millions on a “cool demo” or “vibes” are over. The market is violently correcting toward unit economics, meaning AI companies must now prove they can make more money than they burn on compute costs. For engineers, this signals a critical shift in career strategy: job security no longer comes from working on the flashiest new model, but from building unglamorous, profitable applications that survive the coming purge of unprofitable startups.

Future-proofing: “Big AI” vs. “Small AI”

Perhaps the most strategic insight from the lecture was Moroney’s prediction of a coming “bifurcation” in the AI industry over the next five years.

The industry is splitting into two distinct paths:

  • “Big AI”: The AI made by massive, centralized players such as OpenAI, Google, and Anthropic, who are chasing after AGI. This relies on ever-larger models hosted in the cloud.
  • “Small AI”: AI built on open-weight models (he prefers “open-weight” to “open source” when describing AI models) that are self-hosted or run on-device. Moroney also calls this “self-hosted AI.”

Moroney is bullish on “Small AI.” He explained that many industries, such as movie and television studios and law firms, are very protective of their intellectual property. These businesses will never send that intellectual property to a centralized model like GPT-4, due to privacy and IP concerns. This creates a massive, underserved market for engineers who can fine-tune small models to run locally on a device or private server.

Moroney urged the class to diversify their skills. Don’t just learn how to call an API; learn how to optimize a 7-billion-parameter model to run on a laptop CPU. That is where the uncrowded opportunity lies.
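
As a concrete taste of what “Small AI” work looks like, here’s a minimal Java sketch that sends a prompt to a self-hosted, open-weight model through a local Ollama server’s REST API. This is my illustration, not something from the lecture; it assumes you have Ollama running on its default port with a small model already pulled, and the model name is a placeholder.

// Java

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SmallAIDemo {
    public static void main(String[] args) throws Exception {
        // Ollama listens on port 11434 by default; "llama3" is a placeholder
        // for whatever small open-weight model you've pulled locally.
        String body = """
                {"model": "llama3",
                 "prompt": "Summarize this contract clause in one sentence.",
                 "stream": false}
                """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:11434/api/generate"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        // The prompt (and your data) never leaves your machine,
        // which is the whole point for IP-sensitive industries.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.body());
    }
}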

Agentic workflows: the “how” of future engineering

Moroney’s advice was to stop thinking of agents as magic and start treating them as a rigorous engineering workflow consisting of four steps:

  1. Intent: Understanding exactly what the user wants.
  2. Planning: Breaking that intent down into steps.
  3. Tools: Giving the model access to specific capabilities (search, code execution).
  4. Reflection: Checking if the result met the intent.

He shared a demo of a movie-making tool where simply adding this agentic loop transformed a hallucinated, glitchy video into a coherent scene with emotional depth.
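
Here’s a minimal sketch of what that four-step loop might look like in code. This is my own illustration rather than Moroney’s: the Model interface is a hypothetical stand-in for whatever LLM client you use, and the single retry is the simplest possible form of reflection.

// Java

import java.util.List;

public class AgentLoop {
    // Hypothetical stand-in for an LLM client; any chat/completion API would do.
    interface Model {
        String complete(String prompt);
    }

    static String runAgent(Model model, String userRequest, List<String> tools) {
        // 1. Intent: pin down exactly what the user wants.
        String intent = model.complete("Restate this request as a precise goal: " + userRequest);

        // 2. Planning: break the intent down into steps.
        String plan = model.complete("Break this goal into numbered steps: " + intent);

        // 3. Tools: give the model access to specific capabilities.
        String result = model.complete(
                "Carry out this plan. Available tools: " + tools + "\nPlan: " + plan);

        // 4. Reflection: check whether the result met the intent; retry once if not.
        String verdict = model.complete(
                "Answer PASS or FAIL: does this result satisfy the goal?\nGoal: "
                        + intent + "\nResult: " + result);
        if (verdict.contains("FAIL")) {
            result = model.complete("Revise the result so it satisfies the goal: " + intent);
        }

        return result;
    }
}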

Conclusion: Work hard

I’ll close this set of notes with what Ng said at the end of his introduction to the lecture, advice he described as “politically incorrect”: work hard.

While he acknowledged that not everyone is in a situation where they can do so, he pointed out that among his most successful PhD students, the common denominator was an incredible work ethic: nights, weekends, and the “2 AM hyperparameter tuning.”

In a world drowning in hype, Ng’s and Moroney’s “brutally honest” playbook is actually quite simple:

  • Use the best tools to move fast.
  • Understand the business problem you’re trying to solve, and understand it deeply.
  • Ignore the noise of social media and the trends being hyped there. Build things that actually work.
  • And finally, to quote Ng: “Between watching some dumb TV show versus finding your agentic coder on a weekend to try something… I’m going to choose the latter almost every time.”

Notes from Venkat Subramaniam’s presentation on finding and fixing issues in code with AI (Monday, December 8, 2025)

It’s always a treat to see one of Dr. Venkat Subramaniam’s presentations, and Monday evening’s session, Identifying and fixing Issues in Code using AI-based tools, was no exception!

On behalf of the Tampa Bay Artificial Intelligence Meetup, Anitra and I would like to thank Ammar Yusuf, Tampa Java User Group, and Tampa Devs for inviting us to participate in this meetup, and to thank Venkat for an excellent lecture.

Here are my notes and photos…

Part 1: What AI actually is (and isn’t)

Think of AI as “Accelerated Inference”

  • The reality check: The term “Artificial Intelligence” is misleading. It suggests that an application has sentience or wisdom. Venkat suggests a more accurate definition for AI: Accelerated Inference.
  • Inference vs. intelligence:
    • If you see a purple chair and then another purple chair, you infer that chairs are purple. That isn’t necessarily true, but it is a logical conclusion based on available data.
    • AI does this on a massive scale. It doesn’t “know” the answer; it infers the most statistically probable answer based on the massive volume of data it was fed.
  • Speed vs. accuracy: Machines are “wicked fast,” but they are also error-prone. Humans are slow and error-prone. AI allows us to make mistakes at a much higher velocity if we aren’t careful.

Karma

  • Garbage in, garbage out: AI models are trained on billions of lines of code, most of it written by humans (at least for now).
  • The problem: Humans write bugs. We write security vulnerabilities. We write bad variable names.
  • The consequence: Because AI learns from human code, it learns our bad habits. Venkat says this is karma. When we complain about AI writing bad code, we’re really complaining about our own collective history of programming mistakes coming back to haunt us.
  • The takeaway: Don’t assume AI output is “production-ready.” Treat AI-generated code with the same skepticism you would treat code copied from a random forum post in 2010.

The “novice vs. expert” paradox

Venkat described a specific phenomenon regarding how we perceive AI’s competence:

  • The novice view: When you ask an AI to do something you know nothing about (e.g., writing a poem in a language you don’t speak), the result looks amazing. You find it awesome because you lack the expertise to judge it.
  • The expert view: When you ask AI to do something you are an expert in (e.g., writing high-performance Java code), you often find the result “awful.” You can spot the subtle bugs, the global variables, and the inefficiencies immediately.
  • The danger zone: As a developer, you are often in the middle. You know enough to be dangerous. Be careful not to be dazzled by the “novice view” when generating code for a new framework or language.


Part 2: Strategies for using AI effectively

1. Use AI for ideas instead of solutions

  • Don’t ask for the answer immediately. If you treat AI as a maker of solutions, you bypass the critical thinking process required to be a good engineer.
  • Ask for approaches. Instead of “Write this function,” ask: “I need to solve X problem. What are three different design patterns I could use?”
  • Love the weirdness: AI is great at throwing out random, sometimes hallucinated ideas. Use these as inspirations or starting points for brainstorming. “Accept weird ideas, but reject strange solutions,” Venkat said.

2. Managing cognitive load

  • The human limit: We struggle to keep massive amounts of context in our heads. We get tired. We get “analysis paralysis.”
  • AI’s strong suit: AI doesn’t get tired. It can read a 7,000-line legacy function with terrible variable names without getting a headache or becoming confused.
  • The “Translator” technique:
    • Venkat used the analogy of translating a foreign language into your “mother tongue” to understand it emotionally and logically.
    • Try this: Paste a complex, confusing block of legacy code into an AI tool and ask, “Explain this to me in plain English.” This helps you understand intent without getting bogged down in syntax.

3. The Δt (“delta t”) approach

  • Don’t “one-shot” it: Just as numerical analysis (calculus) requires taking small steps (Δt) to get an accurate curve, working with AI requires small iterations.
  • Workflow:
    1. Present the AI with the problem and ask it for possible approaches.
    2. Review its replies. Chances are that at least some of them (or maybe all of them) will be wrong, buggy, or not the answer you’re looking for.
    3. Don’t give up. Instead, provide feedback: “This code isn’t thread-safe,” or “This variable is null.”
    4. The AI will often correct itself. This back-and-forth “dance” is where the actual development happens.

Part 3: Code examples

Venkat demonstrated several scenarios where code looked correct but had problems that weren’t immediately apparent, and showed how AI helped (or didn’t).

Case study: Fruit

The first case study was a version of a problem presented to Venkat by a client. He couldn’t present the actual code without violating the client’s NDA, so he presented a simplified version that still captured the general idea of the problem with the code.

Here’s the first version of the code:

// Java

import java.util.*;

public class Sample {
    public static List<String> stringsOfLength5InUpperCase(List<String> strings) {
        List<String> result = new ArrayList<>();

        strings.stream()
            .map(String::toUpperCase)
            .filter(string -> string.length() == 5)
            .forEach(result::add);

        return result;
    }

    public static void main(String[] args) {
        var fruits = List.of("Apple", "Banana", "Orange", "Grape", "Guava", "Kiwi",
                "Mango", "Nance", "Papaya", "Peach", "Lime", "Lemon");

        var result = stringsOfLength5InUpperCase(fruits);

        System.out.println(result);
    }
}

This version of the code works as expected, printing the 7 fruit names in the list that are 5 characters long.

Right now, it’s single-threaded, and it could be so much more efficient! A quick change from .stream() to .parallelStream() should do the trick, and the resulting code becomes:

// Java

import java.util.*;

public class Sample {
    public static List<String> stringsOfLength5InUpperCase(List<String> strings) {
        List<String> result = new ArrayList<>();

        //  Here's the change
        strings.parallelStream()
            .map(String::toUpperCase)
            .filter(string -> string.length() == 5)
            .forEach(result::add);

        return result;
    }

    public static void main(String[] args) {
        var fruits = List.of("Apple", "Banana", "Orange", "Grape", "Guava", "Kiwi",
                "Mango", "Nance", "Papaya", "Peach", "Lime", "Lemon");

        var result = stringsOfLength5InUpperCase(fruits);

        System.out.println(result);
    }
}

The code appears to work — until you run it several times and notice that it will occasionally produce a list of fewer than 7 fruit names.

Why did this happen? Because Java’s ArrayList isn’t thread-safe, and writing to a shared variable from inside a parallel stream causes race conditions. But this is the kind of bug that’s hard to spot.

Venkat fed the code to Claude and asked what was wrong with it, and after a couple of tries (because AI responses aren’t consistent), it identified the problem: creating a side effect in a stream and relying on its value. It suggested using a thread-safe collector like toList() to capture the 5-character fruit names.

Claude also suggested applying the filter before converting the list values to uppercase, so as not to perform work on values that would be filtered out.
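
Here’s roughly what the corrected version looks like with both suggestions applied (my reconstruction; Venkat’s final code may have differed slightly):

// Java

import java.util.*;

public class Sample {
    public static List<String> stringsOfLength5InUpperCase(List<String> strings) {
        // Filter first so we only uppercase the values we're keeping,
        // and let the stream build the result list itself instead of
        // mutating a shared ArrayList from multiple threads.
        return strings.parallelStream()
            .filter(string -> string.length() == 5)
            .map(String::toUpperCase)
            .toList();
    }

    public static void main(String[] args) {
        var fruits = List.of("Apple", "Banana", "Orange", "Grape", "Guava", "Kiwi",
                "Mango", "Nance", "Papaya", "Peach", "Lime", "Lemon");

        System.out.println(stringsOfLength5InUpperCase(fruits));
    }
}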

The takeaway: AI is excellent at spotting errors that we humans often miss because we’re so focused on the business logic.

Case study: Parameters

I didn’t get a photo of this code example, but it featured a function that looked like this:

public String doSomething(String someValue) {

    // Some code here

    someValue = doSomethingElse(someValue);

    // More code here

    return someValue;
}

I’m particularly proud of the fact that I spotted the mistake and was the first one to point it out: mutating a parameter.

Venkat fed the code to Claude, and it dutifully reported the same error.

It was easy for me to spot such an error in a lone function. But spotting errors like this in an entire project of files? I’d rather let AI do that.
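
For completeness, here’s the obvious fix, sketched against the placeholder code above: assign the transformed value to a new local variable and leave the parameter untouched.

// Java

public String doSomething(String someValue) {

    // Some code here

    // A new local variable preserves the original argument,
    // which makes the method easier to reason about and debug.
    var transformedValue = doSomethingElse(someValue);

    // More code here

    return transformedValue;
}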

Case study: Currency converter

I didn’t get a photo of this one, but it featured base class CurrencyConverter with a method convert(float amount). A subclass NokConverter attempted to override it to handle Norwegian Krone.

The problem was that NokConverter’s conversion method’s signature was convert(int amount), which meant that it overloaded the base method instead of overriding it. As a result, polymorphism was lost, and client code ended up calling the base class method instead of the subclass method. But that’s pretty easy to miss — after all, the code appears to work properly.
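
Since I don’t have a photo of the code, here’s a minimal reconstruction of the kind of mismatch Venkat showed; the class names come from his example, but the method bodies and the conversion rate are my placeholders.

// Java

class CurrencyConverter {
    public double convert(float amount) {
        return amount;  // base class: identity conversion
    }
}

class NokConverter extends CurrencyConverter {
    // BUG: the parameter is int, not float, so this OVERLOADS
    // convert(float) rather than overriding it.
    public double convert(int amount) {
        return amount * 10.5;  // placeholder NOK rate
    }
}

public class ConverterDemo {
    public static void main(String[] args) {
        CurrencyConverter converter = new NokConverter();

        // Dynamic dispatch finds no override for convert(float),
        // so this silently calls the BASE method and prints 100.0.
        System.out.println(converter.convert(100f));
    }
}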

A quick check with the AI pointed out that the method was not actually overriding, and it also suggested adding the @Override annotation, which is meant to prevent this kind of subtle error.

Remember: don’t just let AI fix it; understand why the fix works. In this case, it was about strictly enforcing contract hierarchy.

Case study: Wordle

Venkat asked Claude to write a Wordle clone, and it did so in seconds.

But the logic for calculating the yellow and green squares was slightly off in edge cases.

AI sometimes implements logic that looks like the rules but fails on specific boundary conditions. It’s a good idea to write unit tests for AI-generated logic. Never trust that the algorithmic logic is sound just because the syntax is correct.


Part 4: The testing gap

Missing test suites

  • Venkat noted a disturbing trend: he sees very few test cases accompanying AI-generated code.
  • Developers tend to generate the solution and manually verify it once (“It works on my machine”), then ship it.
  • The Risk: AI code is brittle. If you ask it to refactor later, it might break the logic. Without a regression test suite (which the AI didn’t write for you), you won’t know.

How to use AI for testing

  • Invert the flow! Instead of asking AI to write the code, write the code yourself (or design it), and ask AI to:
    • “Generate 10 unit tests for this function, including edge cases.”
    • “Find input values that would cause this function to crash.”
  • AI is often better at playing “Devil’s Advocate” (breaking code) than being the Architect (building code).
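
For example, applying that inverted flow to the fruit function from the first case study might yield tests like these (a JUnit 5 sketch of my own; the edge cases are the kind an AI plausibly suggests):

// Java

import static org.junit.jupiter.api.Assertions.*;

import java.util.List;
import org.junit.jupiter.api.Test;

class SampleTest {
    @Test
    void keepsOnlyFiveCharacterNamesInUpperCase() {
        var result = Sample.stringsOfLength5InUpperCase(List.of("Apple", "Kiwi", "Banana"));
        assertEquals(List.of("APPLE"), result);
    }

    @Test
    void emptyInputYieldsEmptyOutput() {
        assertTrue(Sample.stringsOfLength5InUpperCase(List.of()).isEmpty());
    }

    @Test
    void nullInputCrashes() {
        // The kind of "find input values that crash this function" case
        // a devil's-advocate AI should surface.
        assertThrows(NullPointerException.class,
                () -> Sample.stringsOfLength5InUpperCase(null));
    }
}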

Part 5: Takeaways

Job security in the age of AI

  • The Fear: “Will I lose my job to AI?”
  • The Reality: You will not lose your job to AI. You will lose your job to another programmer who knows how to use AI better than you do.
  • The “Code Monkey” extinction: If your primary skill is just typing syntax (converting a thought into Java/Python syntax), you are replaceable. AI does that better.
  • The value-add: Your value is now as a problem solver and solution reviewer. You’re paid to understand the business requirements and ensure the machine-generated code actually meets them.

Adaptation is key!

  • Venkat used a quote commonly attributed to Charles Darwin (see here for more): “It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is most adaptable to change.”
  • Action Plan:
    • Don’t fight the tool
    • Don’t blindly trust the tool
    • Learn to verify the tool
    • Shift your focus from “How do we write a loop?” to “Why are we writing this loop?”

Empathy and code review

  • When AI generates bad code, we analyze it dispassionately. When humans write bad code, we get angry or judgmental.
  • The Shift: We need to extend the “AI Review” mindset to human code reviews. Be objective. Find the fault in the logic, not the person.
  • AI has shown us that everyone (including the machine trained on everyone’s code) writes bad code. It’s the universal developer experience.

O’Reilly’s AI Codecon — free and online, Thursday, May 8!

On Thursday, May 8th from 11:00 a.m. to 3:00 p.m. Eastern, O’Reilly Media will host a free online conference called AI Codecon. “Join us to explore the future of AI-enabled development,” the tagline reads, and their description of the event starts with their belief that AI’s advance does NOT mean the end of programming as a career, but a transition.

Here’s what I plan to do with this event:

  • Register for the event
  • Log in when it starts and fire up a screen recorder
  • Watch the event in the background while working
  • Generate a transcript from the recording and feed it into a couple of LLMs
  • Have the LLMs answer any questions I may have and generate summaries and “going forward” game plans based on the content and my future plans

Interested? Register here.

The agenda for AI Codecon

Here’s the schedule for AI Codecon, which is still being finalized as I write this:

  1. Introduction, with Tim O’Reilly (10 minutes)
  2. Gergely “Pragmatic Engineer” Orosz and Addy Osmani Fireside Chat (20 minutes)
    Join Gergely Orosz and Addy Osmani for an insightful discussion on the evolving role of AI in software engineering and how it’s paving the way for a new era of agentic, “AI-first” development.

  3. Vibe Coding: More Experiments, More Care – Kent Beck (15 minutes)
    Augmented coding deprecates formerly leveraged skills such as language expertise, and amplifies vision, strategy, task breakdown, and feedback loops. Kent Beck, creator of Extreme Programming, tells you what he’s doing and the principles guiding his choices.
  4. Junior Developers and Generative AI – Camille Fournier, Avi Flombaum, and Maxi Ferreira (15 minutes)
    Is bypassing junior engineers a recipe for short-term gain but long-term instability? Or is it a necessary evolution in a high-efficiency world? Hear three experts discuss the trade-offs in team composition, mentorship, and organizational health in an AI-augmented industry.

  5. My LLM Codegen Workflow at the Moment – Harper Reed (15 minutes)
    Technologist Harper Reed takes you through his LLM-based code generation workflow and shows how to integrate various tools like Claude and Aider, gaining insights into optimizing LLMs for real-world development scenarios, leading to faster and more reliable code production.
  6. Jay Parikh and Gergely Orosz Fireside Chat (15 minutes)
    Jay Parikh, executive vice president at Microsoft, and Gergely Orosz, author of The Pragmatic Engineer, discuss AI’s role as the “third runtime,” the lessons from past technological shifts, and why software development isn’t disappearing—it’s evolving.
  7. The Role of Developer Skills in Today’s AI-Assisted World – Birgitta Böckeler (15 minutes)
    Birgitta Böckeler, global lead for AI-assisted software delivery at Thoughtworks, highlights instances where human intervention remains essential, based on firsthand experiences. These examples can inform how far we are from “hands-free” AI-generated software and the skills that remain essential, even with AI in the copilot seat.
  8. Modern Day Mashups: How AI Agents Are Reviving the Programmable Web – Angie Jones (5 minutes)
    Angie Jones, global vice president of developer relations at Block, explores how AI agents are bringing fun and creativity back to software development and giving new life to the “programmable web.”
  9. Tipping AI Code Generation on its Side – Craig McLuckie (5 minutes)
    The current wave of AI code generation tools consists of closed, vertically integrated solutions. The next wave will be open, horizontally aligned systems. Craig McLuckie explores this transformation, why it needs to happen, and how it will be led by the community.
  10. Prompt Engineering as a Core Dev Skill: Techniques for Getting High-Quality Code from LLMs – Patty O’Callaghan (5 minutes)
    Patty O’Callaghan highlights practical techniques to help teams generate high-quality code with AI tools, including an “architecture-first” prompting method that ensures AI-generated code aligns with existing systems, contextual scaffolding techniques to help LLMs work with complex codebases, and the use of task-specific prompts for coding, debugging, and refactoring.
  11. Chip Huyen and swyx Fireside Chat (20 minutes)
    Chip Huyen will delve [Aha! An AI wrote this! — Joey] into the practical challenges and emerging best practices for building real-world AI applications, with a focus on how foundation models are enabling a new era of autonomous agents.

  12. Bridging the AI Learning Gap: Teaching Developers to Think with AI – Andrew Stellman (15 minutes)
    Andrew Stellman, software developer and author of Head First C#, shares lessons from Sens-AI, a learning path built specifically for early-career developers, and offers insights into the gap between junior and senior engineers.
  13. Lessons Learned Vibe Coding and Vibe Debugging a Chrome Extension with Windsurf – Iyanuoluwa Ajao (5 minutes)
    Software and AI engineer Iyanuoluwa Ajao explores the quirks of extension development and how to vibe code one from scratch. You’ll learn how chrome extensions work under the hood, how to vibe code an extension by thinking in flows and files, and how to vibe debug using dependency mapping and other techniques.
  14. Designing Intelligent AI for Autonomous Action – Nikola Balic (5 minutes)
    Nikola Balic, head of growth at VC-funded startup Daytona, will show, through case studies like AI-powered code generation and autonomous coding, key patterns for balancing speed, safety, and strategic decision-making, and offer a road map for catapulting legacy systems into agent-driven platforms.
  15. Secure the AI: Protect the Electric Sheep – Brett Smith (5 minutes)
    Distinguished software architect, engineer, and developer Brett Smith discusses AI security risks to the software supply chain, covering attack vectors, how they relate to the OWASP Top 10 for LLMs, and how they tie into scenarios in CI/CD pipelines. You’ll learn techniques for closing the attack vectors and protecting your pipelines, software, and customers.
  16. How Does GenAI Affect Developer Productivity? – Chelsea Troy (15 minutes)
    The advent of consumer-facing generative models in 2021 catalyzed a massive experiment in production on our technical landscape. A few years in, we’re starting to see published research on the results of that experiment. Join Chelsea Troy, leader of Mozilla’s MLOps team, for a tour through the current findings and a few summative thoughts about the future.
  17. Eval Engineering: The End of Machine Learning Engineering as We Know It – Lili Jiang (15 minutes)
    Lili Jiang, former Waymo evaluation leader, reveals how LLMs are transforming ML engineering. Discover why evaluation is becoming the new frontier of ML expertise, how eval metrics are evolving into sophisticated algorithms, and why measuring deltas instead of absolute performance creates powerful development flywheels.
  18. Closing Remarks – Tim O’Reilly (10 minutes)

Interested? Register here.


Video of my BSides Tampa 2024 presentation, “xz made EZ”

Here it is — the video of my presentation, xz made EZ, which I gave at BSides Tampa 2024 on April 6th. It covers the security incident involving the xz utils utility found on Unix-y systems:

If you’d like them, here are the Google slides from the presentation.

Questions and answers

How did I land this presentation?

The details of the xz vulnerability were made public mere days before the BSides Tampa 2024 cybersecurity conference, and on a whim, I emailed the organizers and asked if I could do a lightning talk on the topic.

They quickly got back to me and let me know that they’d had a last-minute speaker cancellation and gave me a full slot in which to do my presentation.

The moral of the story? It never hurts to ask, and it can lead to opportunities!

What’s this xz thing, anyway?

Let me answer with this slide from my presentation:

xz is short for xz Utils, a compression utility that you’ll find in Unix-y operating systems, including:

  • Linux distributions
  • macOS

It’s a favorite of Unix greybeards, who generally use it in combination with tar.

What happened with xz?

xz was one of those open source projects that had a vulnerability best illustrated by this xkcd comic:

xz was like the project pointed out in the comic, except that the “random person” doing the maintaining was Lasse Collin, a developer based in Finland who was experiencing burnout. As a result, xz was languishing.

In what appeared to be a stroke of good fortune, a developer who went by the handle of “Jia Tan” on GitHub came to the rescue and started submitting patches to xz.

At about the same time, there were a number of complaints about xz’s lack of apparent maintenance. In hindsight, it looks like a clever two-pronged campaign:

  1. A group of people loudly clamoring for someone else to take the reins of the xz project, and
  2. A friendly developer who swoops in at the right time, making patches to the xz project…

…all while a burned-out Lasse Collin was facing a lot of stress.

On November 30, 2022, Lasse changed the email address for xz bug reports to an alias that redirected to both his email address as well as Jia Tan’s. At that point, Jia Tan, the apparently helpful developer who appeared at just the right time, was now an official co-maintainer.

Not long after, Lasse released his last version of xz, and soon after, Jia Tan, now the sole maintainer of the project, released their own version.

With full control of the project, Jia Tan started making changes — all the while carefully disguising them — that created a “back door” within the xz application.

On any system that had Jia Tan’s tainted version of xz installed, an unauthorized user with the right private key could SSH into that system with root-level access. By becoming the maintainer of a trusted application used by many Linux distributions, Jia Tan created the vulnerability behind what could have been one of the most devastating supply-chain attacks ever.


Join my online hands-on AI session with Austin Forum next week!

Next Tuesday, April 2nd at 6:15 p.m. Central / 7:15 p.m. Eastern / 23:15 UTC, I’ll lead an online introductory session for people who want to dive into AI, titled AI: How to Jump In Right Away.

ℹ️ Click here to register for the presentation.

My session is part of Austin Forum on Technology and Society’s third annual AI April, a month of presentations, events, and podcasts dedicated to AI capabilities, applications, future impacts, challenges, and more.

My presentation will start with a brief history of AI, as well as the general principles of how “old school” AI works versus “new school” AI…

…but we’ll quickly dive into building Sweater or No, a quick little AI application that tells you if you should wear a sweater, based on your current location. Here’s a screenshot of some of the code we’ll build:

This is a FREE online session, so you don’t have to be in Austin to participate. I’m in Tampa Bay, not Austin, and you can join in from anywhere!

You need to register to participate — here’s the registration page. I hope to see you there!


Scenes (and full video!) from my “How Computers Work Under the Hood” presentation


Back in June, I posed a question on this blog: Would you like to know how computers REALLY work “under the hood?” Tampa Devs, a very active nonprofit with a mission to support the local developer community, thought this would be a good presentation topic. On Wednesday, I gave that presentation to this crowd:

I started by telling the attendees that while knowing about microprocessors and assembly language isn’t absolutely necessary to function in a lot of developer and tech jobs today, there’s value in that knowledge:

Photo by Richard Schmid.

I talked about transistors…

…made note of the fact that it was the 52nd birthday of the commercial microprocessor…

…introduced the 6502…

…got deeper into its inner workings…

…and then we dove into 6502 assembly language programming!

Tampa Devs recorded the entire thing, and you can watch it here:

All the material from the presentation is available online:

My thanks to:

  • Tampa Devs for inviting me to speak at their meetup — it’s always an honor and a pleasure to work with a group that contributes so much to the Tampa Bay tech scene!
  • Kforce for providing the venue, which I like to say has “the comfiest meetup chairs in Tampa Bay.”
  • Civo for sponsoring the pizza, sodas, and water for the attendees, and taking such an interest in supporting the Tampa Bay tech scene.

Slides from “Centaurs vs. Minotaurs,” my presentation at SocialCode x Tampa

Thanks to everyone who came to The SocialCode x Tampa: Embracing the AI Evolution event last Thursday (September 7, 2023) for an evening of presentations and discussion about AI! As promised, here’s a link to the slides for my presentation, Centaurs vs. Minotaurs: