Categories
Artificial Intelligence Hardware Programming What I’m Up To

One last endorsement for the ZGX Nano AI workstation

Today’s my last day in my role as the developer advocate for HP’s GB10-powered AI workstation, the ZGX Nano. As I’ve written before, I’m grateful to have had the opportunity to talk about this amazing little machine.

Of course, you could expect me to talk about how good the ZGX Nano is; after all, I’m paid to do so — at least until 5 p.m. Eastern today. But what if a notable AI expert also sang its praises?

That notable expert is Sebastian Raschka (pictured above), author of Build a Large Language Model (from Scratch), a book I’m working my way through right now; it’s quite good. He’s also working on a follow-up book, Build a Reasoning Model (from Scratch).

Sebastian has been experimenting with NVIDIA’s DGX Spark, which has the same specs as the ZGX Nano (and as a few other similar small desktop computers built around NVIDIA’s GB10 “superchip”), and he’s published his observations on his blog in a post titled DGX Spark and Mac Mini for Local PyTorch Development. He ran some AI benchmark programs comparing it against his Mac Mini M4 (a fine developer platform, by the bye) and the NVIDIA H100 GPU (or NVIDIA’s A100 when an H100 wasn’t available), pictured below:

Keep in mind that the version of the H100 that comes with 80GB of VRAM sells for about $30,000, which is why most people don’t buy one, but instead rent time on it from server farms, typically at about $2/hour.

Let me begin from the end of Raschka’s article, where he writes his conclusions:

Overall, the DGX Spark seems to be a neat little workstation that can sit quietly next to a Mac Mini. It has a similarly small form factor, but with more GPU memory and of course (and importantly!) CUDA support.

I previously had a Lambda workstation with 4 GTX 1080Ti GPUs in 2018. I needed the machine for my research, but the noise and heat in my office was intolerable, which is why I had to eventually move the machine to a dedicated server room at UW-Madison. After that, I didn’t consider buying another GPU workstation but solely relied on cloud GPUs. (I would perhaps only consider it again if I moved into a house with a big basement and a walled-off spare room.) The DGX Spark, in contrast, is definitely quiet enough for office use. Even under full load it’s barely audible.

It also ships with software that makes remote use seamless and you can connect directly from a Mac without extra peripherals or SSH tunneling. That’s a huge plus for quick experiments throughout the day.

But, of course, it’s not a replacement for A100 or H100 GPUs when it comes to large-scale training.
I see it more as a development and prototyping system, which lets me offload experiments without overheating my Mac. I consider it as an in-between machine that I can use for smaller runs, and testing models in CUDA, before running them on cloud GPUs.

In short: If you don’t expect miracles or full A100/H100-level performance, the DGX Spark is a nice machine for local inference and small-scale fine-tuning at home.

You might as well replace “DGX Spark” in his article with “ZGX Nano” — the hardware specs are the same. The ZGX Nano shines with HP’s exclusive ZGX Toolkit, a Visual Studio Code extension that lets you configure, manage, and deploy to the ZGX Nano. This lets you use your favorite development machine and coding environment to write code, and then use the ZGX Nano as a companion device / on-premises server.

The article features graphs showing his benchmarking results…

In his first set of benchmarks, he took a home-built 600 million parameter LLM — the kind that you learn how to build in his book, Build a Large Language Model (from Scratch) — and ran it on his Mac Mini M4, the ZGX Nano’s twin cousin, and an H100 from a cloud provider. From his observations, you can conclude that:

  • With smaller models, the ZGX Nano can match a Mac Mini M4: both crunch about 45 tokens per second on models of this size.
  • The ZGX Nano has the advantage of coming with 128GB of VRAM, meaning that it can handle larger models than the Mac Mini, which is limited by its memory.

Raschka’s second set of benchmarks tested how the Mac Mini, the ZGX Nano’s twin cousin, and the H100 handle two variants of a model that have been presented with MATH-500, a collection of 500 mathematical word problems:

  • The base variant, which was a standard LLM that gives short, direct answers
  • The reasoning variant, which was a version of the base model that was modified to “think out loud” through problems step-by-step

He ran two versions of this benchmark. The first was the sequential test, where the model was presented one MATH-500 question at a time. From the results, you can expect the ZGX Nano to perform almost as well as the H100, but at a small fraction of the cost! It also runs circles around the Mac Mini.

In the second version of the benchmark, the batch test, the model was served 128 questions at the same time, to simulate serving multiple users at once and to test memory bandwidth and parallel processing.
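
In case the distinction between the two setups isn’t clear, here’s a rough, hypothetical sketch of each using the Hugging Face transformers API. This isn’t Raschka’s actual benchmark code, and the model name and questions are placeholders:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "your-model-here"  # placeholder; not the model from the benchmarks
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # padding is needed for batching

questions = ["What is 7 * 8?", "Solve for x: x + 3 = 10."]  # stand-ins for MATH-500

# Sequential test: one question at a time; measures single-user latency
for question in questions:
    inputs = tokenizer(question, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)

# Batch test: all questions at once; stresses memory bandwidth and parallelism
inputs = tokenizer(questions, return_tensors="pt", padding=True)
outputs = model.generate(**inputs, max_new_tokens=64)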

This is a situation where the H100 would vastly outperform the ZGX Nano thanks to the H100’s much better memory bandwidth. However, the ZGX Nano isn’t for doing inference at production scale; it’s for developers to try out their ideas on a system that’s powerful enough to get a better sense of how they’d operate in the real world, and do so affordably.

Finally, with the third benchmark, Raschka trained and fine-tuned a model. Note that this time, the data center GPU was the A100 instead of the H100 due to availability.

This benchmark tests training and fine-tuning performance. It compares how fast you can modify and improve an AI model on the Mac Mini M4 vs. the ZGX Nano’s twin vs. an A100 GPU. He presents three scenarios in training and fine-tuning a 355 million parameter model:

  1. Pre-training (3a in the graphs above): Training a model from scratch on raw text
  2. SFT, or Supervised fine-tuning (3b): Teaching an existing model to follow instructions
  3. DPO (direct preference optimization), or preference tuning (3c): Teaching the model which responses are “better” using preference data; a rough sketch of the idea follows this list
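
If you’re curious what that third scenario looks like in code, here’s a minimal, hypothetical PyTorch sketch of the DPO loss. It’s not Raschka’s code; the function name and beta value are my own placeholders, and it assumes you’ve already computed log-probabilities for the “chosen” and “rejected” responses under both the policy model and a frozen reference model:

import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # How much more (in log space) the policy favors each response
    # than the frozen reference model does
    chosen_rewards = policy_chosen_logps - ref_chosen_logps
    rejected_rewards = policy_rejected_logps - ref_rejected_logps
    # Push the margin between "chosen" and "rejected" upward;
    # beta controls how hard the model gets pushed
    return -F.logsigmoid(beta * (chosen_rewards - rejected_rewards)).mean()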

All these benchmarks say what I’ve been saying: the ZGX Nano lets you do real model training locally and economically. You get a lot of bang for your ZGX Nano buck.

It’s like a lot of development workflows, where there’s a small development database and a full-scale production database: you don’t need production scale for every experiment. The ZGX Nano gives you a working local training environment that isn’t glacially slow or massively expensive.

Want to know more? Go straight to the source and check out Raschka’s article, DGX Spark and Mac Mini for Local PyTorch Development.

And with this article, I end my stint as the “spokesmodel” for the ZGX Nano. It’s not the end of my work in AI; just the end of this particular phase.

Keep watching this blog, as well as the Global Nerdy YouTube channel, for more!

Categories
Artificial Intelligence Editorial Programming

My AI technique, captured in a single social media post (with one exception)

Screen capture: A tweet by Yuchen Jin (@Yuchenj_UW), which reads “I saw a guy coding today. Tab 1 ChatGPT. Tab 2 Gemini. Tab 3 Claude. Tab 4 Grok. Tab 5 DeepSeek. He asked every AI the same exact question. Patiently waited, then pasted each response into 5 different Python files. Hit run on all five. Pick the best one. Like a psychopath. It's me.” My additional caption below reads “I do this, too — but with the exception of Grok, which I think of as ‘ChatKKK.’”

Here’s Yuchen Jin’s original tweet.

Categories
Conferences Programming What I’m Up To

Slides from my upcoming Ren’Py presentation for KCDC 2025

A couple of months back, I wrote that one of my proposed talks was accepted for this year’s edition of KCDC — Kansas City Developer Conference, which takes place from August 13th through 15th (Wednesday, August 13 is the workshop day, while the conference days are Thursday and Friday, August 14 and 15).

KCDC draws 2000+ attendees each year and features tracks for the following topics:

  • Architecture
  • AI and Data Science
  • Cloud
  • Data
  • DevOps
  • Human Skills
  • Java
  • JavaScript
  • Methodologies and Process Management
  • .NET
  • Other Technologies
  • Security
  • Testing and QA
  • UI/UX and Design

My talk’s title is The Best, Most Fun Python Platform You’ve Never Heard Of. It’s a programmer’s introduction to the powerful, fun, and all-too-often-ignored Ren’Py. While Ren’Py is called a “visual novel engine,” I prefer to think of it as the fastest, most fun way to create Python applications.

I’ve been working on my talk for the past little while, and I thought I’d share the first couple of slides from my presentation. I may update them between now and mid-August, but if you’re curious, it should give you a sense of what my presentation will be like.

Here’s the description for my talk:

Python’s occupied the number one spot on the TIOBE Programming Community Index for the past couple of years, and it’s the preferred programming language for AI and data science. Perhaps you’ve been thinking about learning it, but the thought of having to do another set of “Hello World” style exercises is filling you with dread. Is there a more fun way to get up to speed with Python?

Yes, there is, and it’s called Ren’Py. It’s billed as a visual novel engine and often used for writing dating simulation games, but it’s so much more than that. It’s a platform that lets you code in Python (and more) and deploy to desktop, web, and even mobile platforms, with a fraction of the effort required by React, Vue, or Angular. It’s a fun framework that’s been used to produce games you can find on Steam, but it’s got applications well beyond amusement.

In this session, we’ll look not just at the basics of Ren’Py development, but also at building a dating game based on KFC’s official game, “I Love You Colonel Sanders,” a simple turn-based combat game starring Florida Man, and mobile apps, in a way that’s less frustrating than usual.

I’m thinking about doing a dry run of my presentation here in Tampa — and for free! — at a Tampa Bay Python meetup in early August. Watch this space (or my LinkedIn account) for the announcement.

Would you like to get the anime template that I used for my slides? You can get it from SlidesGo for free — it’s called Lovely Chibis Anime Characters for Marketing!

Categories
Artificial Intelligence Meetups Programming

The code for the Tampa Bay AI Meetup’s “AI job application helper app” is now online!

Back in mid-March, the Tampa Bay AI Meetup hosted a session at Embarc Collective called Build an AI job application helper app, where we did just that: we built an application that helps you apply for jobs.

I’ve since refined the app and posted it on GitHub, where you can download it and try it out for yourself.

According to the Silicon Valley-based career guidance service Pathrise, job seekers who sent 20+ job applications every week got more interviews and landed a job sooner.

That’s a lot of work, especially since the general advice is to customize your résumé for every job application and write a cover letter too. This app makes that big task easier: it takes your résumé and the description of the job you’re applying for, then uses an LLM to generate a résumé and cover letter optimized for that job application.
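
The core pattern is simple enough to sketch. What follows is a hypothetical distillation, not the actual notebook code; the model name, prompt wording, and function name are my own stand-ins, and it assumes the openai package with an OPENAI_API_KEY environment variable set:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

def tailor_application(resume: str, job_description: str) -> str:
    # Hand the LLM both documents and ask for a tailored resume and cover letter
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a career coach. Rewrite the given resume to "
                        "emphasize the skills in the job description, then "
                        "write a matching cover letter."},
            {"role": "user",
             "content": f"RESUME:\n{resume}\n\nJOB DESCRIPTION:\n{job_description}"},
        ],
    )
    return response.choices[0].message.content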

The app is written in Python and runs as a Jupyter Notebook. I wrote it as a notebook rather than a regular command-line Python app because notebooks let me mix explanatory documentation with my code:

Give it a try! You can either clone the GitHub repo for this application, or simply download a .zip file containing all its files.

Categories
Artificial Intelligence Meetups Programming

Wednesday, June 18: Intro to AI agents with MCP!

At 6 p.m. on Wednesday, June 18 at Embarc Collective, I’ll be presenting Intro to AI Agents with MCP, which will be a joint gathering of Tampa Bay Artificial Intelligence Meetup and Tampa Bay Python!

LLMs like ChatGPT, Claude, Gemini, DeepSeek, and others are great for taking in natural language questions and producing complex but useful natural language replies.

However, they can’t do anything except output text without the help of additional code. Until recently, the way to connect LLMs to other systems was to develop custom code for each combination of LLM and online system, which is an “m by n” problem.

One solution is MCP, short for “Model Context Protocol.” Introduced to the world by Anthropic not too long ago, MCP is an open source, open standard that gives AI models a way to connect to external data sources and tools. You could describe MCP as “a universal API for AIs to interact with online services.”

We’ll explain what MCP is, why it was created, and how it came about, and then show you actual working code by building a simple MCP server connected to an LLM. Joey will demonstrate a couple of examples of MCP in action, and they’ll be examples that you can take home and experiment with!
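
To give you a taste of the kind of code we’ll build, here’s a minimal MCP server sketch using the FastMCP helper from Anthropic’s official Python SDK. It’s a generic example, not necessarily the demo we’ll show at the meetup:

from mcp.server.fastmcp import FastMCP

# Create a named MCP server
mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""  # the docstring becomes the tool's description
    return a + b

if __name__ == "__main__":
    # Serve over stdio so an MCP client (such as Claude Desktop) can connect
    mcp.run()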

Bring your laptop — you’ll have the opportunity to try out the MCP demos for yourself!

And yes, there’ll be food, and water and soft drinks will be provided. If that doesn’t work for you, feel free to bring your own.

Want to register (it’s free)? You can do so at either of the following Meetup pages:

 

Categories
Artificial Intelligence Hardware Programming

A Raspberry Pi battery life/vibe coding experiment

For the next little while, I’m going to share stories about my current coding projects. This first one is about a quick evening project I did the other night that features a Raspberry Pi, a dollar-store USB power pack, Python, and a little vibe coding.

The inspiration

Original cover of Andy Weir’s novel, “The Martian.”

My fictional engineer hero is Mark Watney, the protagonist of Andy Weir’s self-published sci-fi novel turned big hit, The Martian. A good chunk of the story is about how Watney, an astronaut stranded on Mars, would overcome problems by cobbling together some tool or device, using only the gear available at the Mars habitat, his technical know-how, and creative thinking. My love for the way Watney does things is so strong that I play the audiobook version as “background noise” whenever I’m working on a problem that seems intractable.

While the movie version adds a line that captures what Watney does throughout The Martian (“I’m gonna have to science the shit out of this”), it condenses a lot of what he has to do, and covers only a small fraction of the clever guerrilla engineering he does in the novel on which it was based.

The book version has Watney narrating the problems he faces and how he uses the available equipment and material to overcome them, which often involves adapting gear for purposes it wasn’t made for, such as attaching solar cells meant for the habitat to the Mars Rover to extend its driving range.

The idea

Raspberry Pi 3B board, beside a U.S. $20 bill for scale. The Raspberry Pi is about two-thirds the size of the bill.

I’d been meaning to do some projects where I’d attach sensors to my old Raspberry Pi 3B and set it up somewhere outside. Of course, you can’t count on having an electrical outlet nearby in the great outdoors. However, the Raspberry Pi 3 series takes its power via a micro USB port.

This led to a couple of questions:

  • Could I power it with one of those USB power packs that are typically used to recharge phones while on the go?
  • If so, how long could a USB power pack keep my Raspberry Pi running?

It was time to answer these questions empirically!

Question 1: Can you run a Raspberry Pi from a USB power pack?

The short answer: yes. I plugged in my least-powerful power pack, one of those cheap ones that you’d expect to find at Dollar General or Five Below:

I don’t recall buying it. It’s probably something that a friend or family member gave me. People often give me their electronics cast-offs, and I either find a use for them or recycle them.

Fortunately, no matter how cheap these units are, they usually have their specs printed somewhere on their body:

The power pack outputs 800 milliamps (mA), which is enough to run a Raspberry Pi 3B, especially if I remove any USB devices and don’t connect it to a monitor. The Pi draws about 250 mA when idle, which I figured would give me plenty of “headroom” for when the Pi would be running my little program.

According to the specs printed on the power pack, its battery capacity is 2200 milliamp-hours (mAh). That means it should be able to run a device that consumes 2200 milliamps for an hour, or a device that consumes 1100 milliamps for two hours, or a device that consumes 550 milliamps for four hours.
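
Here’s that back-of-the-envelope math as a quick sketch, using the idle draw figure from above (real-world draw under load will be higher):

# Rough runtime estimate: battery capacity divided by current draw.
# Real-world draw (and a worn-out battery) will shrink this number.
capacity_mah = 2200  # printed on the power pack
idle_draw_ma = 250   # approximate Raspberry Pi 3B idle draw

print(f"Estimated runtime: {capacity_mah / idle_draw_ma:.1f} hours")  # about 8.8 hours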

Of course, this was an old power pack of unknown provenance that had been through an unknown number of power cycles. Its capacity was probably far less than advertised.

Here’s the Raspberry Pi, with some parts labeled:

I plugged the power pack into the Raspberry Pi, and it booted. The OS displayed a “low voltage warning” message on the desktop as soon as it had finished booting…

…but it worked!

Question 2: How long will the Raspberry Pi run on this power pack, and how do you find out?

Before I give you my answer to the second question, give it some thought. How would you test how long a programmable device can run on a power pack?

I wrote this Python script:

# uptime.py

import os
import time

with open("uptime.txt", "a") as f:
    while True:
        time.sleep(5)
        display_time = time.ctime(time.time())
        f.write(f"{display_time}\n")
        # Flush Python's buffer and force the OS to commit the write,
        # so a sudden power loss can't eat the buffered timestamps
        f.flush()
        os.fsync(f.fileno())
        print(display_time)

Here’s what the script does:

  • It opens a file named uptime.txt in append mode, creating the file if it doesn’t already exist.
  • It starts an infinite loop that appends the current time to the file every five seconds, flushing each write to disk so it isn’t stranded in a buffer if the power dies.

Note that the code appends the current time to the file, resulting in a file full of timestamps that looks like this…

Wed May 28 14:16:28 2025
Wed May 28 14:16:33 2025
Wed May 28 14:16:38 2025

…instead of a file containing the most recent timestamp.

There’s a reason for this: if I simply had the application overwrite the file so that it contained only the most recent timestamp, the power might go out in the middle of a write, the program would fail to record the current time, and I’d end up with a blank file.

By adding the latest timestamp to the end of a file full of timestamps every 5 seconds, I get around the problem of the power going out in the middle of the file operation. At worst, I’ll have a timestamp from 5 seconds before the Raspberry Pi lost power.

The test

I fired up the Raspberry Pi while it was connected to its regular power adapter, entered the script and saved it, powered it down, and then connected it to the power pack:

I then turned it on, hooked it up to a keyboard and monitor just long enough to start the script, then disconnected the keyboard and monitor. I then left the Raspberry Pi alone until its power light went out, which indicated that it had run the battery dry.

I ran this test a couple of times, and on average got 57 minutes of uptime.
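
To turn the log file into that number, all you need are the first and last timestamps. Here’s one way to compute it (a quick sketch, assuming the ctime-style format shown earlier):

from datetime import datetime

with open("uptime.txt") as f:
    timestamps = [line.strip() for line in f if line.strip()]

ctime_format = "%a %b %d %H:%M:%S %Y"  # matches time.ctime() output
start = datetime.strptime(timestamps[0], ctime_format)
end = datetime.strptime(timestamps[-1], ctime_format)
print(f"Uptime: {(end - start).total_seconds() / 60:.1f} minutes")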

Question 3: Would I get more uptime if I wrote the program in C instead of Python?

CPUs are really large collections of transistors:

In the case of the Raspberry Pi 3B’s ARM Cortex-A53 CPU, there are about 260 million of these transistors, each one being a tiny switch that uses a tiny bit of electricity. The harder you work them, the more power they use. Some of that power is released as heat, which is why your computer gets warm, especially if you’ve got it doing something that requires a lot of computation (such as videogames, mining cryptocurrency, or training and running artificial intelligence models).

You’ve probably seen this chart that’s been floating around the internet since its publication in the 2017 paper Energy Efficiency across Programming Languages. It shows how much energy, time, and memory a number of programming languages use to perform roughly the same tasks, and it appears that Python does poorly for energy efficiency:

In order to be an easier language to read and program in, with more expressiveness and flexibility than C, Python goes easier on the programmer by putting more of the work on the processor. That’s why it’s a power hog. C, on the other hand, is close to assembly language, and it sits at the top of the power efficiency list.

So my thought was: what if I wrote the uptime program in C and ran it — would I get more uptime?

The C program, version 1

I was about to start coding in C when it occurred to me that while I knew how to write to a file in C, I couldn’t remember how to get the current time.

I was about to look it up, when I thought: why not have Claude do it?

So I opened a browser to Claude and entered:

I need some C code that will append the current date and time to a file named “uptime.txt” every 5 seconds.

It provided me with the following code:

// uptime.c

#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <signal.h>
#include <stdlib.h>

volatile int running = 1;

void signal_handler(int sig) {
    running = 0;
    printf("\nReceived signal %d, shutting down gracefully...\n", sig);
}

int main() {
    FILE *file;
    time_t raw_time;
    struct tm *time_info;
    char time_string[100];
    
    // Set up signal handler for graceful shutdown
    signal(SIGINT, signal_handler);
    signal(SIGTERM, signal_handler);
    
    printf("Starting uptime logger. Writing to uptime.txt every 5 seconds.\n");
    printf("Press Ctrl+C to stop.\n\n");
    
    while (running) {
        // Get current time
        time(&raw_time);
        time_info = localtime(&raw_time);
        
        // Format time string
        strftime(time_string, sizeof(time_string), "%Y-%m-%d %H:%M:%S", time_info);
        
        // Open file in append mode
        file = fopen("uptime.txt", "a");
        if (file == NULL) {
            perror("Error opening file");
            return 1;
        }
        
        // Write timestamp to file
        fprintf(file, "%s\n", time_string);
        fclose(file);
        
        printf("Logged: %s\n", time_string);
        
        // Wait 5 seconds
        sleep(5);
    }
    
    printf("Uptime logger stopped.\n");
    return 0;
}

The C program, version 2

After looking over the code, I noticed something and decided to ask Claude about it:

What part of this code requires stdlib.h?

Claude’s response:

Good catch! Looking at the code, stdlib.h is only needed for the exit() function, but I’m not actually using exit() anywhere in the program.

The return statements use the standard return mechanism, and the signal handler just sets the running flag to 0 rather than calling exit(). So stdlib.h can be removed from the includes.

Let this be a lesson: always double-check the code an AI gives you.

It then provided this code:

#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <signal.h>

volatile int running = 1;

void signal_handler(int sig) {
    running = 0;
    printf("\nReceived signal %d, shutting down gracefully...\n", sig);
}

int main() {
    FILE *file;
    time_t raw_time;
    struct tm *time_info;
    char time_string[100];
    
    // Set up signal handler for graceful shutdown
    signal(SIGINT, signal_handler);
    signal(SIGTERM, signal_handler);
    
    printf("Starting uptime logger. Writing to uptime.txt every 5 seconds.\n");
    printf("Press Ctrl+C to stop.\n\n");
    
    while (running) {
        // Get current time
        time(&raw_time);
        time_info = localtime(&raw_time);
        
        // Format time string
        strftime(time_string, sizeof(time_string), "%Y-%m-%d %H:%M:%S", time_info);
        
        // Open file in append mode
        file = fopen("uptime.txt", "a");
        if (file == NULL) {
            perror("Error opening file");
            return 1;
        }
        
        // Write timestamp to file
        fprintf(file, "%s\n", time_string);
        fclose(file);
        
        printf("Logged: %s\n", time_string);
        
        // Wait 5 seconds
        sleep(5);
    }
    
    printf("Uptime logger stopped.\n");
    return 0;
}

The C program, version 3

This was an improvement, but there was still a little too much “gold plating” for my needs. I entered the following into Claude:

I don’t need the graceful exit, because this program is expected to run until the computer runs out of power.

Here’s the code it provided in its response:

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main() {
    FILE *file;
    time_t raw_time;
    struct tm *time_info;
    char time_string[100];
    
    printf("Starting uptime logger. Writing to uptime.txt every 5 seconds.\n\n");
    
    while (1) {
        // Get current time
        time(&raw_time);
        time_info = localtime(&raw_time);
        
        // Format time string
        strftime(time_string, sizeof(time_string), "%Y-%m-%d %H:%M:%S", time_info);
        
        // Open file in append mode
        file = fopen("uptime.txt", "a");
        if (file == NULL) {
            perror("Error opening file");
            return 1;
        }
        
        // Write timestamp to file
        fprintf(file, "%s\n", time_string);
        fclose(file);
   
        printf("Logged: %s\n", time_string);
        
        // Wait 5 seconds
        sleep(5);
    }
    
    return 0;
}

Much better.

Once I got past this regularly recurring feeling…

…I entered these lines in the terminal…

gcc -o uptime uptime.c
./uptime

…to compile and run it.

I ran this experiment a couple of times, and on average got 89 minutes of uptime. So yes, writing — or better yet, vibe coding — the “uptime” program in C got me more power efficiency and uptime!

Question 4: Did Claude “know” what I was trying to do?

I figured there was more than enough context for Claude to “know” what the “uptime” program was for, so I asked it:

Based on the information I’ve provided in this conversation, what do you think this program is for?

Here’s its complete reply:

Based on the context clues, this appears to be a simple system uptime monitor or power outage detector.

The key indicators are:

  • You mentioned it’s “expected to run until the computer runs out of power”
  • It logs timestamps every 5 seconds to track when the system is running
  • You don’t want graceful exit handling since power loss would be abrupt
  • The file is named “uptime.txt”

This suggests you want to track how long your system stays powered on, possibly to:

  • Monitor power reliability/outages in your area
  • Track system uptime for reliability analysis
  • Have a record of when power failures occur (gaps in the timestamps would indicate outages)
  • Monitor an unattended system or server

When power is restored and the system boots back up, you could restart this program and it would continue appending to the same file, giving you a complete timeline of system availability with clear gaps showing when outages occurred.

Not bad! In fact, the use case in that final paragraph was one I hadn’t thought of, which illustrates a primary reason why I like using AIs for my work:

No matter how smart, inventive, or creative you are (or think you are), you will never be able to write a list of ideas that you would never think of.

Lessons learned

  1. Yes, you can power a Raspberry Pi — at least a Raspberry Pi 3 — with a USB power pack, even a cheap one. I knew that you could power smaller boards this way, but you have to remember that a Raspberry Pi 3 is on par with a baseline 2010-era laptop, and those had much bigger batteries.
  2. Claude’s C code generation is pretty good, but you still need to be a programmer to correct it, and while it will show you how to compile and run a C program, it’s a good idea to know how to do so.
  3. Claude can figure out what a program is for, given enough context.

Watch this space — I’ll be posting more articles like this!

Categories
Conferences Programming Tampa Bay What I’m Up To

I’ll be at VueConf US 2025!

I’m going to be at VueConf US on Tuesday and Wednesday, which very conveniently takes place here in Tampa!

Between my recent trip to Greece, being handed the leadership of Tampa Bay Python, prepping and giving a talk at BSides Tampa, and working with clients (primarily on mobile app and Python development), Vue.js, and by extension VueConf, has been off my radar.

However, I’m overdue to get up to speed on Vue, and I’ve been invited to attend VueConf. (Thanks to Tampa Java User Group’s Ammar Yusuf for connecting me and Vincent Mayers for the invitation!)

VueConf will take place at USF’s CAMLS center, where CAMLS is short for Center for Advanced Medical Learning and Simulation. CAMLS isn’t located on USF’s main campus, but in downtown Tampa. It’s a pretty new building, and I’ve never set foot in it; I have heard that it has a very nice lecture hall:

While Vue doesn’t have React’s userbase, it’s got a nicer learning curve, better DOM manipulation performance, and two-way data binding, and it doesn’t require you to import everything including the kitchen sink. It also doesn’t drive me anywhere near as crazy as React does.

It also did well in Theo’s JavaScript framework tier list — I’ve posted the final results above, and the video below:

Anyhow, I’ll write about my experiences at this conference, with the occasional update on my LinkedIn. Watch this space!