Should We Fear AI? A Conversation with Dr. Maria Karam
By Bill King
I, like most, am in the dark about the far-reaching implications, possibilities, and incursions of advancing technology. In the media narrative, AI is either a saviour or an invasive species beyond human management. I've experimented with ChatGPT, and while it appears impressive, it has its limitations. Don't expect enlightenment or originality. I wrote a short piece and asked ChatGPT to improve my writing. What it spat out was a sanitized, soulless rewrite. I'm cool with that. I wasn't expecting a free ride.
Where I have had greater success is with Topaz AI for photo restoration and enhancement: three tools, Gigapixel AI, DeNoise AI, and Sharpen AI. A low-res photo has a new friend. It's not perfect yet, but with the right applications, one can better preserve a moment in time. With further care, and not in a moment of desperation, details can be meticulously refined and upgraded.
Since joining the house team at the Redwood Theatre as artistic director, I've come to know the force behind the renewal and resurrection of the classic venue, originally a vaudeville theatre opened in 1914. Thanks to the duo of Robert Indrigo and Dr. Maria Karam, the Redwood is a breathing entity these days, with a full schedule of shows and events. What makes the pairing unique is that both are scientists. It also explains the lightning-quick brain power that zaps anyone within range. It's the idea zone: where there's a theory, there's a kernel of innovation buried in each conversation.
Dr. Maria Karam is a program manager at Meta in Toronto; founder and director of The Inventors Nest, Toronto; and a project manager at Jaguar Land Rover in Warwick, England, developing haptic communication systems for autonomous and electric vehicles as a hardware, software, and interaction development specialist. She holds a Ph.D. in Human-Computer Interaction from the School of Electronics and Computer Science at the University of Southampton, UK. Karam is also a science editor for the Bill King Show at CIUT 89.5 FM at the University of Toronto. On the current show/podcast, Dr. Karam sits in with Jesse King and me for a moment of clarity. Just what is AI, and how does it, or will it, change our lives? The good, the bad, the probable. Here's that conversation.
Bill King: A person on the front lines of the actors and writer strike stated, "Those people whose data is used to train AI are not being compensated." I asked Dr. Maria Karam her thoughts.
Dr. Karam: Not necessarily. The AI code does not infringe on copyright, as it does not copy the text but rather identifies patterns in the use of words, phrasing, and other linguistic features, and attempts to replicate those patterns to recreate the style of the text in a particular author's body of work. AI programs do not explicitly use the data as written. For example, an AI program does not work by saying, "I'm going to look at Bill's text and use those words to imitate his writing style." It works on a different level, leveraging the linguistic elements of the data that represent the text to recreate a similar style, based entirely on the 1's and 0's used to represent that text. AI doesn't copy any texts or images directly; it analyzes the data at the level of the patterns that exist between the code-based elements representing the text or images it is trained on.
From the developers' side, the AI uses language models that let it work with linguistic elements like sentence structures and word categories, and it trains those into the model. Then, the AI can insert words from the dictionary, or graphic elements, into the text or image it is attempting to replicate. For text, it can recognize verbs, nouns, adjectives, etc., and uses that structure to formulate sentences. For images, models that allow the system to identify features like noses, eyes, and hair are all the system needs to generate images with stylistic elements similar to those it has been trained on. It is no different from someone trying to write in the style of an author, or a painter who seeks to create works that feel like the artist's. This is still a touchy and slippery subject, principally based on ignorance about the computer methods that form AI, which makes it a slippery slope down into a deep, deep well of bits and bytes.
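For readers curious about the mechanics, the pattern-learning idea Dr. Karam describes can be shown in miniature. The toy sketch below is not how modern language models are actually built; it is a simple bigram (word-pair) model that records which words tend to follow which in a sample text, then generates new text by replaying those patterns rather than copying sentences:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which words tend to follow which - a toy stand-in
    for the pattern analysis a real language model performs."""
    words = text.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, length=8, seed=0):
    """Generate text by replaying learned word-pair patterns.
    The result imitates the style without copying whole passages."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Made-up sample text for illustration only.
sample = ("the sea was calm and the sky was grey "
          "and the old man watched the sea")
model = train_bigrams(sample)
print(generate(model, "the"))
```

The generator stores no sentences from the source, only statistics about adjacent words, which is the distinction Dr. Karam draws between copying a text and learning its patterns.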
B.K: I asked for two paragraphs written in the style of Ernest Hemingway on a specific topic. The AI churns away and creates Hemingway-like prose. Where does that leave us?
Dr. Karam: When someone tags Hemingway, it will analyze how he puts these words together.
B.K: Analyze his text.
Dr. Karam: Indeed, yes.
B.K: Should Hemingway be compensated?
Dr. Karam: No, he shouldn't. But this is the argument. If I write a book inspired by Hemingway, should I pay him for being an influence on my style? If the content is online, available, and accessible, then anyone, including AI, can use it to influence their work. Plagiarism is something different, but if you can prove that the AI in fact plagiarized the work, you would have to sue the person who wrote the code, because AI is not a real person...at least not yet.
B.K: At one time, you had to show eight scripted melodic notes in succession to claim copyright infringement. You could be at fault for plagiarizing the song, right? We're in unfamiliar territory because the arrangement now comes into play.
Jesse King: It's a signature rhythm, like the Marvin Gaye estate's case over Blurred Lines, where the entire song was much the same. There are other times when you witness Ed Sheeran go through the entire process, and you're like, I can hear it, but it doesn't sound the same, and as you said, there's only X number of notes. I think it's interesting because we're seeing some authors fight this now, because it's the same issue when people can write like them, or want to write like Margaret Atwood and write a book. You're saying that even if it sounds like Margaret Atwood, it's not Margaret Atwood. You can't label it Margaret Atwood, and it's most likely not quality, either. It's a mechanical replication.
Dr. Karam: Along the same (blurred) lines, what about the electronic music D.J. samples that are almost always based on direct sampling of existing music?
B.K: If it shows up and is identified, you could get sued. If they can locate the infringement. If you haven't twisted it until it's unrecognizable.
J.K: They are pretty good at detecting it, too. SoundCloud put something out years ago that could tell when you load a track up. YouTube is fantastic at that. They'll pick it up, but rather than remove your stuff, they ensure the copyright owner gets compensated. Which is a fantastic idea. They're doing that with machine-learning algorithms, or AI, which is a big blanket mess right now.
B.K: It's interesting to have this conversation because you have a depth of scientific knowledge about what is happening outside the media and fear-mongering.
Dr. Karam: They've over-hyped it. We should fear the humans developing AI apps. The AI itself, on its own, is a combination of computer techniques: processing, machine learning, data crunching, bits and bytes, and comparisons. That's all it is. AI does nothing on its own. Someone must tell it what to do, press a button, and say go. It's not that AI will ever be a problem on its own; it is only a reflection of the people programming it. To that point, Geoffrey Hinton, a professor at the University of Toronto known as the godfather of AI, recently came out against AI. But it seems more like a call-out against the large corporations already digging their FANGs into the technology...Facebook, Amazon, Netflix, and Google, or rather Meta, which I guess would suggest we change the term to MANG...
B.K: MANG, Fang?
Dr. Karam: We should formally change that. But I think it is the use of AI by corporations interested in furthering their marketing control over the population that we need to be more afraid of. Recently, the European Union set up the AI Act to impose some rules and guidelines to help mitigate the potential dangers that we are seeing develop via large corporations. This is a great step forward, along with GDPR rules that Europe implemented, to help deal with nefarious issues arising out of Internet abuse.
B.K: Dr. Karam, recently, you chaired a conference on AI.
Dr. Karam: I was session chair for the International Human Computer Interaction Conference (HCII2023) taking place in Copenhagen this week.
B.K: What's the gist of this? What are you focusing on?
Dr. Karam: AI is basically a branch of computer science. It is a very computation-heavy discipline, where extreme coders and mathematicians are driving the development. Unfortunately, with coders, you don't often get folks who are well-versed in psychology, philosophy, ethics, or sociology. This has motivated the development of the HCI community, which works to ensure that code is also user-friendly and safe for humans. Now that AI is making its way into the public domain, we HCI researchers and practitioners are here to guide this technology in directions that aren't going to be damaging or dangerous to humanity. Without this oversight, coders would not concern themselves with what is good or bad for humans; they would stay focused on the code, and may not have the background to understand the impact these technologies could have on the world. When a big company says, "Write me an algorithm that scans all the data collected about our users and determines who the undecided voters are, or who are the most easily swayed people we can manipulate into spending money," the developer usually doesn't ask why, or what the impact will be on those people. There are also other factors that need to be checked, like biases that reflect those of the humans training AI programs to decide on their behalf. Race, gender, and socioeconomic status often act as biases that influence decisions like who gets the high-paying jobs or who gets out on parole. These are the main issues that the HCI and AI communities are discussing at the conference.
B.K: Recently, actor Jamie Foxx resurfaced to inform his fans that he's been battling catastrophic health issues. Then, basketball superstar LeBron James' son Bronny collapsed from cardiac arrest during a morning basketball practice session. I wanted to know more about the situation and get an update on his son. What I got was Elon Musk. I got all the anti-vaxxers. It was about how vaccines caused myocarditis, an inflammation of the heart. The conspiracies raced across the platform: speedy, well-orchestrated, and masked behind accounts with fake medical credentials. There has been no declaration of whether Foxx or James was vaccinated, or of any consequences.
Dr. Karam: That's horrible. The natural mistake is to associate correlation with causation. Because a hundred people had a heart issue, and they also had a vaccine, that is not causation: that's a correlation. And people who aren't aware of how logic works make illogical jumps and end up believing in things that have no grounding in fact or reality. That's the problem. I'm going to say it. AI, computers, these are not the problem. Humans are the problem.
The Bill King Show airs Saturdays at 1 PM and Tuesdays at 9 PM on CIUT 89.5 FM. podpage.com/the-bill-king-show/