
My interviews (so far) about AI on Fox 13 News Tampa

I’ve made three appearances on Fox 13 News Tampa this year so far. If they call on me to answer more questions or explain some aspect of artificial intelligence, I’ll gladly do so!

My most recent appearance was on June 14, when the topic was all the noise about AI possibly being an existential threat to humanity. This is the one where I reminded the audience that The Terminator was NOT a documentary:

I made an earlier appearance on April 10th, where the topics were ChatGPT and AI’s upsides and downsides:

And here’s where it all started: a more in-depth news story on AI and ChatGPT featuring a number of Tampa Bay people:


ChatGPT fails to answer a classic math problem with a twist

ChatGPT’s source of strength — its ability to create sentences that make sense by using the previous set of words to predict the next one — is also a weakness. You can see this weakness by giving it the following prompt:

I have a ten-liter bucket, a five-liter bucket, and a 50-milliliter shot glass. How do I use these items to measure a 50-milliliter shot of bourbon?

The obvious answer — obvious to us humans anyway — is to ignore the buckets and simply fill the shot glass. Problem solved.

ChatGPT’s answer wasn’t just wrong and convoluted, but also a waste of money and bourbon:


To measure a 50-milliliter shot of bourbon using a 10-liter bucket, a 5-liter bucket, and a 50-milliliter shot glass, you can follow these steps:

1. Fill the 5-liter bucket to its maximum capacity.
2. Pour the water from the 5-liter bucket into the 10-liter bucket. This leaves you with 5 liters of water in the 10-liter bucket and an empty 5-liter bucket.
3. Fill the shot glass with bourbon until it is completely full.
4. Pour the bourbon from the shot glass into the 5-liter bucket.
5. Repeat steps 3 and 4 nine more times, pouring 10 shots of bourbon (50 milliliters each) into the 5-liter bucket.
6. Once you have poured all 10 shots of bourbon into the 5-liter bucket, you will have a total of 500 milliliters (10 shots x 50 milliliters) of bourbon in the 5-liter bucket.
7. Now, carefully pour the 500 milliliters of bourbon from the 5-liter bucket into the 10-liter bucket. The 10-liter bucket will now contain the desired 50 milliliters of bourbon.

By following these steps, you can measure exactly 50 milliliters of bourbon using the given buckets and shot glass.

Note that in step 2, the liquid isn’t bourbon, but water.

ChatGPT has the solution in step 3, but it gamely continues with an additional four steps.

Steps 4, 5, and 6 are a transfer of a half-liter (a little more than a pint, or two-thirds of a standard whiskey bottle) of bourbon into the five-liter bucket.

In step 7, the final step in this drawn-out process, you pour the bourbon from the five-liter bucket into the ten-liter bucket, which currently contains five liters of water. Contrary to what ChatGPT tells you, you do not have exactly 50 milliliters of bourbon measured out in the ten-liter bucket; you have 5.5 liters of a liquid that, depending on your point of view, could be called:

  • Criminally diluted bourbon
  • Tainted water
  • Uncarbonated bourbon-flavored White Claw

ChatGPT gets the answer to this question wrong because it’s been largely trained on content published on the internet, and some of that content includes math problems of the form “You have a bucket of size x, and another bucket of size y. How would you measure a quantity of size z?” In these problems, you’re usually asked to measure out a quantity of water, and there usually isn’t a bucket that’s the same size as the quantity you’re trying to measure.
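For the curious, the classic version of this puzzle is a nice little exercise in state-space search. Here’s a minimal Python sketch (my own illustration, nothing to do with ChatGPT’s innards) that finds the shortest sequence of fill, empty, and pour moves with a breadth-first search:

# A sketch of the classic puzzle form ChatGPT seems to be pattern-matching:
# given jugs of sizes x and y, measure exactly z liters by filling, emptying,
# and pouring. Breadth-first search over (jug1, jug2) states finds the
# shortest sequence of moves.
from collections import deque

def solve_jugs(x, y, z):
    """Return the shortest list of (jug1, jug2) states that reaches z liters."""
    start = (0, 0)
    paths = {start: [start]}
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        if a == z or b == z:
            return paths[(a, b)]
        moves = [
            (x, b), (a, y),  # fill one jug
            (0, b), (a, 0),  # empty one jug
            (max(0, a - (y - b)), min(y, b + a)),  # pour jug 1 into jug 2
            (min(x, a + b), max(0, b - (x - a))),  # pour jug 2 into jug 1
        ]
        for state in moves:
            if state not in paths:
                paths[state] = paths[(a, b)] + [state]
                queue.append(state)
    return None  # no sequence of moves measures z

# The famous Die Hard with a Vengeance variant: 3- and 5-liter jugs, measure 4.
print(solve_jugs(3, 5, 4))
# [(0, 0), (0, 5), (3, 2), (0, 2), (2, 0), (2, 5), (3, 4)]

Note that the solver is never asked anything like my prompt, because my prompt isn’t actually this kind of puzzle.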

ChatGPT has no actual understanding of the problem. It’s simply spitting out words to follow a pattern of text that’s part of the data it was trained on.
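If you want to see “continue the pattern” in its most stripped-down form, here’s a toy next-word predictor in Python. It’s a bigram model, laughably cruder than the transformer behind ChatGPT, but the basic move is the same: pick a likely next word given what came before, with zero understanding of any of it.

# A toy illustration of next-word prediction: a bigram model that always
# picks the most frequent follower of the previous word.
from collections import Counter, defaultdict

training_text = (
    "you have a bucket of size x and another bucket of size y "
    "how would you measure a quantity of size z using the buckets"
)

followers = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the training text."""
    return followers[word].most_common(1)[0][0] if followers[word] else None

# Generate text by repeatedly predicting the next word.
word = "you"
output = [word]
for _ in range(8):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # "you have a bucket of size x and another"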

Try this problem — or your own variation of it — on ChatGPT and see what kind of results you get!
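If you’d rather run the experiment from code than from the chat window, here’s a rough sketch using OpenAI’s Python library. It assumes the 2023-era 0.x version of the openai package, an API key in the OPENAI_API_KEY environment variable, and gpt-3.5-turbo as the model (newer versions of the library use a different interface):

# A minimal sketch for posing the bucket problem to the OpenAI API.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = (
    "I have a ten-liter bucket, a five-liter bucket, and a 50-milliliter "
    "shot glass. How do I use these items to measure a 50-milliliter shot "
    "of bourbon?"
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model behind the free ChatGPT at the time
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)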


Yesterday’s AI interview on FOX 13 News Tampa

Chris Cato and Joey deVilla during a live interview on FOX 13 News Tampa. The caption in the “lower third” reads “Risks and benefits of artificial intelligence.”
FOX 13 News anchor Chris Cato and me.

Here it is — the recording of my interview on the 4:00 p.m. news on FOX 13 Tampa with anchor Chris Cato, where I answered more questions about artificial intelligence:

In this quick interview, we discussed:

  • The “existential threat to humanity” that AI potentially poses: My take is that a lot of big-name AI people who fear that sort of thing are eccentrics who hold what AI ethicist Timnit Gebru calls the TESCREAL (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism) mindset. They’re ignoring a lot of closer-to-home, closer-to-now issues raised by AI because they’re too busy investing in having their heads frozen for future revival, and other weird ideas of the sort favored by people with too much money who live in their own bubble.
  • My favorite sound bite: “The Terminator is not a documentary.”
  • AI regulation: “Any new technology that has great power for good and bad should actually be regulated, just as we do with nuclear power, pharma, cars and airplanes, and just about anything like that. AI is the next really big thing to change our lives — yes, it should be regulated.” There’s more to my take, but there’s only so much you can squeeze into a two-and-a-half minute segment.
  • Cool things AI is doing right now: I named these…
    • Shel Israel (who now lives in Tampa Bay) is using AI to help him with his writing as he works on his new book.
    • I’m using it in my writing both for humans (articles for Global Nerdy as well as the blog that pays the bills, the Auth0 Developer Blog) and for machines (writing code with the assistance of Studio Bot for Android Studio and GitHub Copilot for iOS and Python development).
    • Preventing unauthorized access to systems with machine learning-powered adaptive MFA, which is a feature offered by Okta, where I work.
  • My “every 13 years” thesis: We did a quick run-through of something I wrote about a month ago — that since “The Mother of All Demos” in 1968, there’s been a paradigm-changing tech leap every 13 years, and the generative AI boom is the latest one:
Poster: “Every 13 years, an innovation changes computing forever.”
Tap to view at full size.
  • And finally, a plug for Global Nerdy! This blog has been mentioned before, back in my former life in Canada, but this is the first time it’s been mentioned on American television.

I’ll close with a couple of photos that I took while there:

Joey deVilla in FOX 13 News Tampa’s green room.
In the green room, waiting to go on.
Tap to view at full size.
A view of the Fox 13 News Tampa studio, as seen from the interview table.
The view from the interview table, looking toward the anchor desk.
Tap to view at full size.
Interviewee’s-eye view of the cameras, teleprompters, and monitors at the Fox 13 News Tampa studio, as seen from the interview table.
The cameras, teleprompters, and monitors.
Tap to view at full size.

Once again, I’d like to thank producer Melissa Behling, anchor Chris Cato, and the entire Fox 13 Tampa Bay studio team! It’s always a pleasure to work with them and be on their show.


USF’s information session on their AI graduate certificate program – Friday, June 9th

On Friday, June 9th at 1:00 p.m., the University of South Florida (USF) is hosting an online information session about their newly launched Online Artificial Intelligence Graduate Certificate. USF’s been doing quite well recently, what with joining the Association of American Universities (an invitation-only organization of elite research universities) and being involved in some record-breaking deep-sea research.

The idea behind the program is to boost your computing skillset, which can help you land a tech role or be a step towards an MS in Data Intelligence (MSDI). USF’s certificate is designed so you can easily learn to build AI applications.

Features of the certificate program include:

  • Affordable tuition.
  • Fast completion, with a short 12-credit program.
  • No GRE required.
  • Highly awarded doctoral faculty to help students build and apply AI skills.
  • Fully online asynchronous courses with synchronous exams.
  • Four-course sequence (12 credits), with 16-week courses in:
    • Data Mining
    • Deep Learning
    • Computer Vision
    • AI

This program is covered for Veterans under the GI Bill. (The WEAMS posting is pending, but it will be in place by the application deadline of August 1st, 2023.)

Interested in finding out more? Register for the event — and yes, it’s online via Microsoft Teams — and catch the information session on Friday, June 9th at 1:00 p.m.!


Andrew Ng’s “ChatGPT Prompt Engineering for Developers” is free for a limited time!

Screenshot from “ChatGPT Prompt Engineering for Developers,” showing the screen’s three parts — table of contents, Jupyter Notebook, and video.
A screenshot from ChatGPT Prompt Engineering for Developers.

Here’s something much better and more useful than anything you’ll find in the endless stream of “Chat Prompts You Must Know”-style articles: ChatGPT Prompt Engineering for Developers. This online tutorial shows you how to use API calls to OpenAI to summarize, infer, transform, and expand text, either to add new features to your applications or to form the basis of new ones.

Isa Fulford and Andrew Ng.
Isa Fulford and Andrew Ng.

It’s a short course from DeepLearning.AI, and it’s free for a limited time. It’s taught by Isa Fulford of OpenAI’s technical staff and all-round AI expert Andrew Ng (CEO and founder of Landing AI, Chairman and co-founder of Coursera, General Partner at AI Fund, and an Adjunct Professor in the computer science department at Stanford University).

The course is impressive for a few reasons:

  1. Its format is so useful for developers. Most of it takes place on a page divided into three columns:
    • A table of contents column on the left.
    • A Jupyter Notebook column in the center, which contains the code for the current exercise. You can select and copy text from it, as well as edit and run the code.
    • A video/transcript column on the right.
  2. It’s set up very well, with these major sections:
    1. Introduction and guidelines
    2. Iterative prompt development
    3. Summarizing text with GPT
    4. Inferring — getting an understanding of the text, sentiment analysis, and extracting information
    5. Transforming — converting text from one format to another, or even one language to another
    6. Expanding — given a small amount of information, expanding on it to create a body of text
    7. Chatbot — applying the techniques covered to create a custom chatbot
    8. Conclusion
  3. And finally, it’s an Andrew Ng course. He’s just good at this.

The course is pretty self-contained, but you’ll find it helpful to have Jupyter Notebook installed on your system, and as you might expect, you should be familiar with Python.
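To give you a feel for the exercises, here’s a sketch of the helper-plus-prompt pattern the course builds everything on. This is my paraphrase rather than the course’s exact code, and it assumes the 2023-era 0.x openai package with an API key in the OPENAI_API_KEY environment variable:

# A sketch of the helper-plus-prompt pattern used throughout the course
# (my paraphrase, not the course's exact code).
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def get_completion(prompt, model="gpt-3.5-turbo"):
    """Send a single-message prompt and return the model's reply."""
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the output (mostly) repeatable for exercises
    )
    return response.choices[0].message.content

product_review = """
I bought this accordion for my daughter's birthday. The bellows are sturdy,
the keys are responsive, and it arrived two days early. A bit pricey, though.
"""

# Summarizing, one of the course's core tasks: the prompt states the task,
# the constraints, and clearly delimits the text to operate on.
prompt = f"""
Summarize the review below, delimited by <review> tags, in at most 20 words.

<review>{product_review}</review>
"""

print(get_completion(prompt))

The same helper gets reused for the inferring, transforming, and expanding exercises; only the prompts change.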

I’m going to take the course for a test run over the next few days, and I’ll report my observations here. Watch this space!


Tomorrow: Build Eliza, the original chatbot from 1964!

The Tampa Bay Artificial Intelligence Meetup is gathering tomorrow at Computer Coach, where we’ll build Eliza, the original chatbot!

(You can still register for the meetup, but space is limited!)

Eliza was created by computer scientist Joseph Weizenbaum at MIT’s Artificial Intelligence Lab over a two-year period from 1964 to 1966. It simulated a psychotherapist who reflected what patients said back at them or got them to talk about what they had just said.

Here’s a quick video clip about Eliza:

Although Eliza was written for the IBM 7094, a room-sized computer whose operator console is pictured below…

IBM 7094 operator console. Photo by Arnold Reinhold.
Tap to view at full size.

…it later became a popular program on home computers in the 1980s under the name “Eliza” or “Doctor”:

The computers I grew up on all had some version of Eliza.

Here’s Eliza running on the TRS-80 Color Computer — the “CoCo” — an underappreciated computer from the 1980s:

There’s even a scene from the TV series Young Sheldon, which takes place in the late 1980s/early 1990s, where the titular character has a chat with Eliza:

Eliza’s responses in the scene are pretty accurate; the synthesized voice, though, is an embellishment, since the original Eliza communicated only through text.

If you’re really curious, you can try out ELIZA online! Be warned: it won’t be as impressive as ChatGPT.

There’s no way we could code ChatGPT in a single meetup, but tomorrow at the Tampa Bay Artificial Intelligence Meetup, we will build a complete working version of Eliza! It’s also a great way to sharpen your skills in Python, which is very popular in AI circles.

At the meetup, I’ll provide a “starter” project, and you’ll code along with me until you have a working version of Eliza that you can tweak into your own chatbot.
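To give you a taste of what we’ll build, here’s a stripped-down sketch of Eliza’s core trick: match the input against a list of patterns, swap first- and second-person words, and drop the captured text into a canned response. (The meetup version will have far more rules, plus a proper input loop.)

# A stripped-down sketch of Eliza's pattern-match-and-reflect trick.
# The real Eliza (and our meetup version) has many more rules; this is
# just the skeleton.
import random
import re

REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "i", "your": "my", "yours": "mine", "are": "am",
}

RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would getting {0} really help you?"]),
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"(.*) mother(.*)", ["Tell me more about your mother."]),
    (r"(.*)", ["Please go on.", "I see. Can you elaborate?"]),  # catch-all
]

def reflect(fragment):
    """Swap first- and second-person words so the reply makes sense."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(user_input):
    for pattern, responses in RULES:
        match = re.match(pattern, user_input.lower())
        if match:
            reply = random.choice(responses)
            return reply.format(*(reflect(group) for group in match.groups()))

# A quick demo exchange:
print(respond("I feel anxious about my presentation"))
# e.g. "Why do you feel anxious about your presentation?"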

You won’t need the latest and greatest computer to do it, either! A laptop from 2010 (and remember, that’s 13 years ago now!) or later is all you’ll need.

There are still a few spaces available for tomorrow’s meetup. If you’re interested, register now!


Maciej Ceglowski’s reassuring arguments for why an AI superintelligence might not be a threat to humanity

Yesterday on the OpenAI blog, CEO Sam Altman, President Greg Brockman, and Chief Scientist Ilya Sutskever posted an article titled Governance of superintelligence, with the subtitle “Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI.”

Although it’s a good idea to think about this sort of thing, there’s also the possibility that all this fuss over superintelligence may be for nothing. In his talk, Superintelligence: The Idea That Eats Smart People, which he gave at Web Camp Zagreb in 2016, developer Maciej Ceglowski, whom I personally know from another life back in the 2000s, lists some arguments against the idea of an evil superintelligence that is a threat to humanity:

Here are just a few of Maciej’s “inside perspective” arguments, which you can also find in his companion essay:

  • The Argument From Wooly Definitions: “With no way to define intelligence (except just pointing to ourselves), we don’t even know if it’s a quantity that can be maximized. For all we know, human-level intelligence could be a tradeoff. Maybe any entity significantly smarter than a human being would be crippled by existential despair, or spend all its time in Buddha-like contemplation.”
  • The Argument From Stephen Hawking’s Cat: “Stephen Hawking is one of the most brilliant people alive [He was alive at the time Maciej wrote this], but say he wants to get his cat into the cat carrier. How’s he going to do it? He can model the cat’s behavior in his mind and figure out ways to persuade it. He knows a lot about feline behavior. But ultimately, if the cat doesn’t want to get in the carrier, there’s nothing Hawking can do about it despite his overpowering advantage in intelligence.”
  • The Argument From Einstein’s Cat: “There’s a stronger version of this argument, using Einstein’s cat. Not many people know that Einstein was a burly, muscular fellow. But if Einstein tried to get a cat in a carrier, and the cat didn’t want to go, you know what would happen to Einstein.”
  • The Argument From Emus: “In the 1930’s, Australians decided to massacre their native emu population to help struggling farmers. They deployed motorized units of Australian army troops in what we would now call technicals—fast-moving pickup trucks with machine guns mounted on the back. The emus responded by adopting basic guerrilla tactics: they avoided pitched battles, dispersed, and melted into the landscape, humiliating and demoralizing the enemy. And they won the Emu War, from which Australia has never recovered.”
  • The Argument From Slavic Pessimism: “We can’t build anything right. We can’t even build a secure webcam. So how are we supposed to solve ethics and code a moral fixed point for a recursively self-improving intelligence without fucking it up, in a situation where the proponents argue we only get one chance?”
  • The Argument From Complex Motivations: “Complex minds are likely to have complex motivations; that may be part of what it even means to be intelligent. There’s a wonderful moment in Rick and Morty where Rick builds a butter-fetching robot, and the first thing his creation does is look at him and ask ‘what is my purpose?’. When Rick explains that it’s meant to pass butter, the robot stares at its hands in existential despair.”
  • The Argument From Actual AI: “When we look at where AI is actually succeeding, it’s not in complex, recursively self-improving algorithms. It’s the result of pouring absolutely massive amounts of data into relatively simple neural networks. The breakthroughs being made in practical AI research hinge on the availability of these data collections, rather than radical advances in algorithms.”
  • The Argument From Maciej’s Roommate: “My roommate was the smartest person I ever met in my life. He was incredibly brilliant, and all he did was lie around and play World of Warcraft between bong rips. The assumption that any intelligent agent will want to recursively self-improve, let alone conquer the galaxy, to better achieve its goals makes unwarranted assumptions about the nature of motivation.”

There are also his “outside perspective” arguments, which look at what it means to believe in the threat of an AI superintelligence. That includes becoming an AI weenie like the dorks pictured below:

The dork on the left is none other than Marc Andreessen, browser pioneer, who’s more of a south-pointing compass these days, and an even bigger AI weenie, if tweets like this are any indication:

But more importantly, the belief in a future superintelligence feels like a religion for people who think they’re too smart to fall for religion.

As Maciej puts it:

[The Singularity is] a clever hack, because instead of believing in God at the outset, you imagine yourself building an entity that is functionally identical with God. This way even committed atheists can rationalize their way into the comforts of faith. The AI has all the attributes of God: it’s omnipotent, omniscient, and either benevolent (if you did your array bounds-checking right), or it is the Devil and you are at its mercy.

Like in any religion, there’s even a feeling of urgency. You have to act now! The fate of the world is in the balance!

And of course, they need money!

Because these arguments appeal to religious instincts, once they take hold they are hard to uproot.

Or, as this tweet summarizes it:

In case you need context:

  • Roko’s Basilisk is a thought experiment posted on the “rational discourse” site LessWrong (which should be your first warning) about a potential superintelligent, super-capable AI in the future. This AI would supposedly have the incentive to create a virtual reality simulation to torture anyone who knew of its potential existence but didn’t tirelessly and wholeheartedly work towards making that AI a reality.

    It gets its name from Roko, the LessWrong member who came up with this harebrained idea, and “basilisk,” a mythical creature that can kill with a single look.
  • Pascal’s Wager is philosopher Blaise Pascal’s idea that you should live virtuously and act as if there is a God. If God exists, you win a prize of infinite value: you go to Heaven forever and avoid eternal damnation in Hell. If God doesn’t exist, you lose a finite amount: some pleasures and luxuries during your limited lifespan.