Join us for an immersive live webinar where we’ll explore the transformative potential of generative AI in today’s digital landscape. Attendees will delve into the dynamic world of generative AI, gaining insights into its wide-ranging applications across industries. Our guest speaker, Saroop Bharwani, will introduce practical use cases and success stories that demonstrate how this groundbreaking technology is revolutionizing the way we work. Don’t miss the opportunity to catch up on the latest trends and applications!
What You’ll Learn:
• AI principles and basic concepts
• The elements of a prompt
• Common generative AI use cases
Featured speakers:
Saroop Bharwani – Founder at Senso.ai: Saroop is a seasoned tech entrepreneur, AI thought leader, and educator. He has been actively sharing his knowledge and insights with executives, MBA students, and startups for the past 10 years. He’s the co-founder of Senso.ai and FirstPrinciplesAI, both platforms that leverage AI for business transformation. He has also lectured on AI at prestigious universities, including the University of Toronto, Harvard, and Queen’s University.
The widespread use of artificial intelligence tools has raised significant legal and ethical concerns on many fronts. Notably, it seems likely that many uses of artificial intelligence involve rampant copyright infringement. Use of these tools can also give rise to confidentiality, privacy, and security concerns.
Should governments regulate AI? Should companies enact policies regarding employee use of AI? What are the risks for people just trying to keep up with the latest technology?
Who the f**k knows, but this webinar will try to figure it all out … and give you the tools to navigate the rocky AI landscape in your business.
The Tampa Bay Technology Leadership Association (TBTLA) held an “AI fireside chat” yesterday at the St. Petersburg Shuffleboard Club, and it was a well-attended event with a lot of discussion about the opportunities and challenges that generative AI presents. I was invited to be one of the panelists and enjoyed the get-together at one of St. Pete’s best (and underappreciated) venues.
Here are some photos from the start of the event, with everyone mingling:
They were followed by Chad Hage, Senior Cloud Solution Architect at Microsoft, and Amit Bharwani, Product Manager at PwC Labs and the AI Lab at PwC, with moderator Benjamin Allen:
Photo by Joey deVilla.
Then came Yours Truly (Joey deVilla, Senior Developer Advocate at Okta) and Rahul Khanna, Cybersecurity Expert at Propensic Solutions. Once again Benjamin Allen moderated:
Photo by Anitra Pavka.
And finally, Keith Sartain, Director of IT at ConnectWise and Josh Nelson, Director of Technology Experience at Power Design, with Craig Laue moderating:
Photo by Anitra Pavka.
With the talks done, it was then time to take advantage of the venue and the lovely weather and play some shuffleboard!
Photo by Joey deVilla.
My thanks to TBTLA for putting on a great event, and to my fellow speakers for providing solid insights and opinions!
For those of you who are planning to attend — or if you’re just curious — here are some questions and answers that should give you some useful background information for the fireside chat.
What is AI?
AI is short for artificial intelligence, a catch-all term for giving machines the ability to perform tasks that require intelligence — itself a catch-all term referring to the abilities of problem-solving, discernment, and judgment.
What is generative AI?
Generative AI is a broad category of AI whose purpose is to generate new content, which could be in the form of text, images, audio, video, code, and anything else we would classify as a “creative work” if it were made by a human. Here are a few examples:
If there is generative AI, is there also non-generative AI?
Yes, and it’s used in more places than you might think. This kind of AI is often called discriminative AI, because it can discriminate between similar things. Some examples include:
Face recognition, often used in border and retail security
We started working backwards. AI has been a branch of computer science from the very beginning. Programming the early versions of AI meant coming up with all the rules of whatever field an AI program was working in, with the goal of having the AI apply those rules to come up with answers. The problem was that it’s very hard to come up with all the rules for a real-world situation unless you’re working in a very narrow field or dealing with a very limited set of situations.
These days, the tendency is to program AI backwards using machine learning. We provide AI programs with a lot of real-world examples of answers, with the goal of having the AI analyze those answers to come up with the rules for finding future answers.
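To make the “programming backwards” idea concrete, here’s a toy Python sketch. The spam scenario and its numbers are invented for illustration, and the “rule” being learned is just a single numeric threshold, but the shape is the same: the program is given labeled examples and derives the rule itself.

```python
# A toy illustration of "programming backwards": instead of hand-writing
# a rule, we give the program labeled examples and let it derive the rule,
# which here is a single numeric threshold. (Hypothetical spam scenario;
# assumes the "yes" and "no" examples are cleanly separable.)

def learn_threshold(examples):
    """Given (value, label) pairs, put the decision boundary halfway
    between the highest "no" value and the lowest "yes" value."""
    yes_values = [v for v, label in examples if label == "yes"]
    no_values = [v for v, label in examples if label == "no"]
    return (max(no_values) + min(yes_values)) / 2

# Labeled examples: (exclamation marks in a message, is it spam?)
training_data = [(0, "no"), (1, "no"), (2, "no"), (6, "yes"), (8, "yes")]

threshold = learn_threshold(training_data)
print(threshold)  # 4.0 -- the learned rule: more than 4 "!" means spam
```

Nobody wrote the “more than 4 exclamation marks” rule; it fell out of the examples, which is the essential trick behind machine learning.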
The explosion in machine-readable data. Over the past 30 years, the amount of machine-readable data in the world has exploded thanks to the internet and the digitization of everything. This readily-available trove of information about anything and everything gives us the necessary data to train machine learning algorithms.
In fact, the two things listed above — machine learning and readily-available data to feed machine learning — are the primary drivers behind the current AI explosion.
Faster computers. If you were around in the 1980s and were into computers, you may remember the world’s most powerful computers of the time: the Cray supercomputers, like the one pictured above. The Cray-2 could perform 1.9 billion floating-point operations per second, which makes it about as powerful as… an iPad 2. Today’s faster computers let us perform more machine learning on more data in less time.
The cloud. A limiting factor in high-powered computing is the cost of purchasing and, even more importantly, maintaining a lot of computer hardware. Cloud computing lowers the barrier to accessing the kind of computing power needed for the big data processing that present-day machine learning requires.
What’s a “model,” when speaking in terms of AI? I see this term all the time.
In AI, a model is the computer equivalent of a mental model you’d have in your mind — an understanding of some topic that allows you to find patterns and make predictions.
If you’ve lived in Florida for a little while, you’ve probably built a mental model of the weather. If you notice that the sky is cloudy and greenish-grey, the wind is gusty and picking up speed, and the birds have suddenly gone silent, your mental model will suggest that it’s time to seek shelter very, very soon. That’s because you’ve seen these signs before and have noticed that they generally appear shortly before a hurricane.
AI models are similar in that they allow computers to make predictions and choose courses of action based on the available evidence. An AI model for weather would compare the current conditions to the historical weather record that it was trained on, note the sudden change in air pressure, temperature, and other factors, and say that there is a high probability that there will be a hurricane.
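Here’s a minimal sketch of that idea in Python. The “historical record” and its readings are invented for illustration, and the “model” simply predicts whatever happened on the most similar past day (a nearest-neighbor approach), which is far cruder than any real forecasting model:

```python
import math

# A toy "weather model": it predicts the outcome of the most similar day
# in the historical record it was "trained" on. All readings and labels
# below are invented for illustration.
historical_record = [
    # (air pressure in hPa, wind speed in km/h) -> what happened that day
    ((1015, 10), "clear"),
    ((1012, 20), "rain"),
    ((990, 75), "hurricane"),
    ((985, 90), "hurricane"),
]

def predict(conditions):
    """Return the outcome of the most similar past day (nearest neighbor).

    A real model would normalize the features and use far more of them;
    this just measures straight-line distance between condition pairs.
    """
    closest, outcome = min(
        historical_record,
        key=lambda entry: math.dist(conditions, entry[0]),
    )
    return outcome

print(predict((988, 80)))  # the record's closest match is a hurricane day
```

The point is the shape of the thing: the model is nothing but a compressed summary of past data, consulted to make a prediction about new conditions.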
Here’s another term I see all the time: application. What’s that?
It’s a computer program that uses one or more models as its underlying “engine.” For example, ChatGPT is an application, and it’s based on the GPT-3.5 or GPT-4 model, depending on which version you’re using.
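As a rough sketch of that split (the model function below is a stand-in, not a real language model): the application supplies everything around the model, such as conversation history and presentation, while the model generates the replies.

```python
# A sketch of the model/application split. toy_model is a stand-in for a
# real model's completion call; ChatApplication is the "chrome" around it.

def toy_model(prompt):
    """Stand-in for a real language model: takes a prompt, returns text."""
    return f"[model reply to: {prompt!r}]"

class ChatApplication:
    """The application layer: it keeps conversation history and hands
    each message to the underlying model, which does the generating."""

    def __init__(self, model):
        self.model = model
        self.history = []  # applications add features like memory and UI

    def send(self, user_message):
        self.history.append(("user", user_message))
        reply = self.model(user_message)
        self.history.append(("assistant", reply))
        return reply

app = ChatApplication(toy_model)
print(app.send("Hello!"))
```

Swap `toy_model` for a call to a real model’s API and `ChatApplication` starts to resemble how chat products are actually structured: the model does the “thinking,” the application does everything else.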
If your curiosity about artificial intelligence goes beyond bookmarking those incessant “10 ChatGPT prompts you need to know” posts that are all over LinkedIn, you should set aside some time to read Douglas Hofstadter’s Gödel, Escher, Bach: An Eternal Golden Braid and watch his new interview.
Gödel, Escher, Bach
I might never have read it, if not for Dr. David Alex Lamb’s software engineering course at Queen’s University, whose curriculum included reading a book from a predetermined list and writing a report on it. I’ll admit that I first rolled my eyes at having to write a book report, but then noticed that one of the books had both “Escher” and “Bach” in the title. I had no idea who “Gödel” was, but I figured they were in good company, so I signed up to write the report on the book I would later come to know as “GEB.”
I’ll write more about why I think the book is important later. In the meantime, you should just know that it:
Helped me get a better understanding of a lot of underlying principles of mathematics and its not-too-distant relative, computer science, especially the concepts of loops and recursion
Advanced my thinking about how art, science, math, and music are intertwined, and inspired one of my favorite sayings: “Music is math you can feel”
Gave me my favorite explanations of regular expressions and the halting problem
Taught me that even the deepest, densest subject matter can be explained with whimsy
Provided me with my first serious introduction to ideas in cognitive science and artificial intelligence
Yes, this is one of those books that many people buy, read a chapter or two, and then put on their bookshelf, never to touch again. Do not make that mistake. This book will reward your patience and perseverance by either exposing you to some great ideas or validating concepts that you may have already internalized.
At the very least, if you want to understand “classical” AI — that is, AI based on symbol manipulation instead of the connectionist, “algebra, calculus, and stats in a trench coat” model of modern AI — you should read Gödel, Escher, Bach.
A new Hofstadter interview!
Posted a mere three days ago at the time of writing, the video above is a conversation between Douglas Hofstadter and Amy Jo Kim. It’s worth watching, not only for Hofstadter’s stories about how GEB came to be, but also for his take on current-era large language models and other generative AI as well as the fact that he’s being interviewed by game designer Amy Jo Kim. Among other things, Kim was a systems designer on the team that made the game Rock Band and worked on the in-game social systems for The Sims.
If you want to learn Python, machine learning, data science, and a few other related topics AND you have $25 handy, The Complete Python Mega Bundle has you covered, as you can see from the list of tutorials below:
Kishen Sridharan: Technology Partnerships Executive, Office of Technology Innovation at Raymond James
The overall topic of discussion will be “How generative AI is changing industries,” and there will be other related subtopics.
After the fireside chat, we’ll take advantage of the location — the oldest and largest shuffleboard club in the world — and play some shuffleboard! (And yes, there’ll be an instructor present.)