News

Believe it… or not

Be wary, or AI may lead you astray.

Karen Moltenbrey

Like the story of the frog in a pot of water brought to a boil slowly, AI has seeped into our daily life. In this age of instant communication and information, AI apps and tools are on call 24/7 to instantly keep us up to date on topics ranging from the most relevant and timely to the more obscure. But how trustworthy is that information you are being given? Why would AI lie to me?


Don’t believe everything you read—especially from cyberspace.

As the Internet became part of our lives—once we moved on from that all-too-recognizable squelch of dial-up—we were warned repeatedly not to trust the information we found there. After all, you didn’t really know how reliable the source of that information was. After all those warnings, I envisioned a monkey typing away on a keyboard.

As we grow older, we are supposed to grow wiser. But at some point, many of us let our guard down and began accepting much of that digital information as fact. Wikipedia became the Encyclopedia Britannica of the digital age. (After all, it has -pedia in its name—that must mean something, right?) But along with that came a plethora of other, less reliable sources of information.

For the most part, those of us who have been around the block (meaning we have celebrated quite a few birthdays) do not turn to cyberspace or social media as our main source of information. You might be chuckling and shaking your head at that naiveté. And yet, you may be following in those very footsteps, led there—or lulled there—by AI.

In less than two years, AI tools such as ChatGPT, Bing, and Bard have become an accepted, and often beneficial, part of our daily routine. According to Statista, the number of people using AI tools globally surpassed 250 million in 2023, with that number expected to grow to more than 700 million by the end of the decade. According to other recent surveys, about 45% of people use AI tools in their daily life, while nearly 40% of schools incorporate AI for personalized learning and student support. But then again, those numbers are garnered from online citations noted to have been created by AI. So, who’s to say how accurate those numbers really are? I guess it depends on how skeptical or careful you are about the information you accept as fact.

While AI seems like a fairly new invention, it isn’t, and in all likelihood, you have been using it for personal matters for years. Surely you are on a “Hey”-name basis with the likes of Google Assistant, Siri, Alexa, Bixby, etc. And despite hearing horror stories concerning directions gone wrong, many of us have followed driving instructions from that little voice emanating from our device instead of the one in our head indicating that something is not right. Yup, often we believe that virtual voice over our better instincts.

More recently, AI assistance has proliferated. And evolved. But have we? Just about every application either has AI assistance or will have it very, very soon. There are AI apps for travel assistance, financial assistance, work assistance, writing assistance, meeting assistance, and even some that operate as human assistants (I am talking about you, Clara). For those using Microsoft Windows—and who doesn’t?—Microsoft’s Copilot will be hard to ignore. And Apple Intelligence, added to the next generation of iPhones, will be ready and able to assist users in a variety of ways. Of course, this is just the tip of the iceberg.

So, with AI everywhere and difficult to ignore, the question is, how accurate is it? Can we rely on the information it provides, or will it lead us astray? We’ve all heard the term hallucination, referring to inaccuracies in the information generated by AI tools. These can vary from minor inconsistencies to information that is totally fabricated. Call it what you will—hallucinations, confabulations, delusions—but it signifies a wrong answer. If these artificial intelligence apps are so smart, why the inaccuracies? Mostly, it goes back to information found on the Internet. Many AI models are trained using oodles of info found in cyberspace, and as we know (or should know), not all of it is 100% accurate. Of course, there are other factors that also affect accuracy, such as model size and bias, but that’s another story.

Nevertheless, the information delivered by AI apps, whether correct or not, will likely sound plausible. One such example that received a good deal of attention involved Google’s Bard, which incorrectly claimed, when prompted, that the James Webb Space Telescope took the first pictures of an exoplanet outside our solar system. In actuality, that accomplishment belongs to the European Southern Observatory’s Very Large Telescope and occurred in 2004—17 years before the Webb Telescope was launched. But, if you do not geek out on the subject of space, the AI answer would sound reasonable enough to be accepted as fact. At the other end of the spectrum is Google’s AI recommending glue to make cheese stick to pizza. Oops. While it seems amusing, you know there is someone out there thinking, “What a great idea. I heard about it on the Internet.”

There’s even a new term popping up, “slop.” Akin to spam, slop describes the digital trough that’s filled with AI-generated content that’s mostly useless. (Suddenly, that vision of a monkey at the computer keyboard has morphed into a bot.) Think of it as the new version of old-fashioned postal and email junk mail, often containing provocative clickbait or those keywords that catch the digital attention of search engines and push the so-called story to the top of the results, whether you are searching in Chrome, Safari, Firefox, or another browser. Let’s face it, many folks only look at the first few results. So, if it is the second story in the search results, then it must be valid, right?

Even with all the well-publicized mishaps of AI, its personal assistance has been overwhelmingly positive. It can help a person communicate by aiding them in writing clear, concise sentences and instructions. It can help a person become more organized. It can make them more productive. On a deeper level, it can help solve complex problems, minimize human errors, provide a vast degree of information quickly, cull large amounts of data and present it in manageable bits, assist with research and analysis, and much more. The question really becomes: What can’t it do?

AI continues to prove itself and improve itself. There will be Luddites who reject all things AI, but avoiding it entirely seems like an impossibility. It may not be front and center in your life, but you can bet it is working hard in the background, whether you like it or not. Health care, automotive, public safety, environment, communication, customer service, science… AI is there, chugging away.

AI has a lot to offer. We just need to use some human-driven caution and common sense. Check the so-called facts, the source of the information, the depth of the information, and so on. Oh, and avoid asking AI for pizza recipes.