How AI Has Changed My Life
Geoffrey Hinton, co-winner of the 2024 Nobel Prize in Physics for his foundational work on neural networks dating to the mid-1980s, appeared recently on The Weekly Show, the podcast hosted by comedian Jon Stewart.
The topic was Artificial Intelligence. The most popular applications of AI today are the ubiquitous chatbots, such as ChatGPT, Microsoft Copilot, Claude, and Gemini, all of which owe their existence to neural networks and Machine Learning.
Stewart, the former host of The Daily Show and a liberal who stepped down in 2015 and tried to re-insert himself into progressive politics during last year’s elections, was surprisingly entertaining throughout the one-hour-and-twenty-minute podcast.
Stewart used his interviewing skills to draw as much as possible out of a deep academic researcher like Hinton, getting him to explain complex concepts in simple terms. At one point, Hinton even complimented Stewart, saying he was like a front-bencher in class, asking all the right questions.
Hinton’s description of image recognition, a technology frequently featured in Hollywood plots, was relatively easy to grasp.
I was immediately reminded of the James Bond thriller For Your Eyes Only, in which Roger Moore’s Bond visits Q’s experimental lab to use face-recognition technology to identify a villain from a sketch drawn by a police artist. The movie was released in 1981, when Dr. Hinton and his colleagues were beginning their research into machine learning algorithms that would enable machines to teach themselves.
Although Stewart tried to keep the conversation at a fourth-grader’s level of understanding, I found it hard to follow Hinton about thirty minutes into the podcast.
I understood well that the ability of modern chatbots to learn from the vast amounts of data available on the internet over the last thirty years has been a crucial factor in AI’s success.
I also understood that the power of modern computers and the speed of the internet have been another driver of progress, since computers can sift through vast volumes of data in milliseconds.
Still, the conversation was becoming increasingly challenging to comprehend despite Stewart’s theatrics and nervous laughter when Hinton said something complicated.
It was not that Hinton was being deliberately professorial. Either his patience was running out, or his ability to stoop to a fourth-grader’s level had reached its limits. As the topic shifted from computer vision to large language models, I found my understanding no better than when I had first started the podcast.
It was then that I concluded, for the hundredth time, that I don’t need to know how something works in order to use it. As a trained mechanical engineer, I know how to drive a modern car; however, I would be hard-pressed to explain how the torque converter in an automatic gearbox works or how front-wheel drive behaves in practice. To drive my car, I don’t need to know.
Like most people who write on a keyboard, I’ve been using some form of AI since Microsoft Word introduced real-time spell checking (with the red underlines feature) in 1995.
It was not AI in today’s sense, because the intelligence of the software resided entirely on my computer once I installed it. The product could not continue to learn because it was disconnected from the outside world.
Google changed this model. Users who typed their documents in Google Docs benefited from real-time improvements to its grammar and spelling checks. With tens of millions of people using Docs daily, Google could deploy neural networks that let these features learn and improve on their own.
AI began changing my life about ten years ago. An impressive example of a writing assistant, widely used by journalists and by organizations small and large, is the Grammarly editor. Since 2015, I have edited all of my writing in Grammarly, which draws on more than two and a half billion pages of written text to tell me whether I’m making a grammar, syntax, or spelling error. I stopped using Word or Google Docs to compose.
The power of data and the speed of processing, both on my computer and across the internet, made working with Grammarly a breeze, the very point Hinton was making.
How, then, do I use chatbots in my work?
The truth is that I don’t use ChatGPT to write articles for me. I could have asked Grok, the chatbot on X, which learns from the latest posts and data created by X users, to compose this entire op-ed. Grok could do it in less than thirty seconds.
However, I would never be pleased with the outcome. ChatGPT or Claude would still examine the billions of pages available on the topic and present the most statistically likely text, word by word. But it would not be me writing. Why would I use a tool that changes me and my voice?
In August, when writing for another publication, I used the words “illegal alien,” a term found throughout federal government documents. When I asked ChatGPT to fact-check the article, it changed the words to “undocumented immigrant.”
When I challenged it as to why it had moderated my words, it said that the term “illegal alien” is dehumanizing and that, following the established media resources on which it had been trained, such as the Associated Press Stylebook and the New York Times, it had decided on its own to moderate my language.
So I added another instruction: “Do not moderate the language in my original text.” This time, the output kept the words “illegal alien.” The bot was obeying me.
The next day, I logged out of the tool and signed back in, thinking it would remember my word preferences. I fed it the same original paragraph with the additional instruction not to moderate my language. Surprise! The words “illegal alien” were gone again, replaced by “undocumented immigrant.”
However, chatbots can be immensely helpful when there’s no outward creative expression involved. In the last year or so, I have been regularly using Microsoft Copilot in my role as a small business owner. In the past, if I had to improve a piece of code on our website, I would spend hours watching various YouTube do-it-yourself videos to try to figure out the solution by myself.
These days, I ask Copilot in plain English what to do. My conversations are always 100% friendly and respectful. Copilot never shows an attitude and is ever ready to help with the next iteration of suggestions. Customer service representatives need to learn from Copilot.
But more significantly, Copilot can give me the actual code that I can copy and paste into a sandbox for testing. If the code gives me an error message or doesn’t do what I wanted, I return to Copilot and describe what happened.
Always encouraging in its style and almost empathetic in its approach, Copilot assures me that the problem can be fixed and guides me through the steps to resolve it. I begin implementing its suggestions, and the code progressively gets better.
But I’m still not there. So I continue to switch between the Edge browser, where I have a Copilot conversation going, and the Chrome browser, where my code is being tested in a sandbox, making incremental improvements.
And finally, it works!
Yesterday, I attempted to automate a notification routine. It would trigger an email reminder for our accounting team to follow up whenever payment on an invoice was more than three days overdue, as logged by our invoice tracker. Previously, this was a manual process.
I worked with Copilot for about an hour. At the end, the code was working perfectly. I downloaded the entire conversation into a PDF file. The document was seventy-four pages long, filled with my questions, Copilot’s responses, and the real code, which I still do not understand. But I don’t have to understand.
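In plain terms, the routine reads the invoice tracker, finds unpaid invoices more than three days past due, and emails the accounting team a list. A minimal sketch of that idea in Python, assuming a hypothetical CSV export named invoices.csv and a local mail server rather than our actual setup, and not the code Copilot produced for me, might look like this:

```python
# A rough sketch of an overdue-invoice reminder, not the code Copilot wrote for me.
# Assumptions (hypothetical): the invoice tracker can export a CSV named invoices.csv
# with columns invoice_id, due_date (YYYY-MM-DD), and paid ("yes"/"no"), and an SMTP
# server is available on localhost for sending mail.

import csv
import smtplib
from datetime import date, datetime
from email.message import EmailMessage

ACCOUNTING_EMAIL = "accounting@example.com"  # placeholder address
REMINDER_AFTER_DAYS = 3

def overdue_invoices(csv_path="invoices.csv"):
    """Return (invoice_id, days_late) for unpaid invoices past the reminder threshold."""
    overdue = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["paid"].strip().lower() == "yes":
                continue
            due = datetime.strptime(row["due_date"], "%Y-%m-%d").date()
            days_late = (date.today() - due).days
            if days_late > REMINDER_AFTER_DAYS:
                overdue.append((row["invoice_id"], days_late))
    return overdue

def send_reminder(invoices):
    """Email the accounting team a list of invoices that need follow-up."""
    if not invoices:
        return
    body = "\n".join(f"Invoice {inv_id}: {days} days past due" for inv_id, days in invoices)
    msg = EmailMessage()
    msg["Subject"] = "Overdue invoice follow-up"
    msg["From"] = "noreply@example.com"
    msg["To"] = ACCOUNTING_EMAIL
    msg.set_content(body)
    with smtplib.SMTP("localhost") as server:
        server.send_message(msg)

if __name__ == "__main__":
    send_reminder(overdue_invoices())
```

Scheduled to run once a day, even a small script like this replaces the manual check.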
I have used Copilot to help automate numerous parts of our small business in this fashion. As an organization, we have become significantly more efficient, and the extra time this automation provides me with can be used to either enhance customer service or increase sales by expanding our customer base.
Demis Hassabis, a winner of the 2024 Chemistry Nobel who, like Hinton, worked at Google, became famous for using AI to predict protein structures, work that promises new classes of drugs and faster drug trials. In a 60 Minutes interview, Hassabis said that he envisions a day when the world will have no disease because there will be a cure for every disease.
I don’t have to think that big. The most comforting takeaway is that if I can use Copilot to make life easier, anyone can. If I can improve our business, anyone can.
And what makes it all worthwhile is that I didn’t have to spend a single cent. It is this power that AI offers to everyone that makes it so transformational. And the fact that I am among the most novice of users only amplifies that feeling by orders of magnitude.