AI is just a tool

The free version of ChatGPT shows that AI may not understand math word problems.

When I taught basic computer literacy to high school students, before there were smartphones and Nintendo Switches, I’d tell them that the computer is only a tool. You wouldn’t choose a hammer to cut a board; a hammer has specific purposes. Similarly, a computer program can have specific purposes, such as word processing and spreadsheets, and hopefully it serves those purposes well.

Is artificial intelligence (AI) something a computer does well? What is the task AI is trying to do?

ChatGPT was not the beginning of AI

Artificial intelligence programs have been around for years. The idea is nothing new.

IBM Watson

IBM Watson played Jeopardy in 2011, beating two previous champions over three matches (only two were broadcast). As good as Watson was, it still answered “Toronto” to a Final Jeopardy clue in the category “U.S. Cities.” Our new computer overlords apparently don’t have to be perfect.

Watson has evolved from that initial incarnation into uses such as healthcare and customer service.

IBM Deep Blue

Before Watson, there was Deep Blue. Rather than Jeopardy, Deep Blue played only chess. The research started in 1985 at Carnegie Mellon University as a doctoral student’s project and moved to IBM after he graduated. In 1996, Deep Blue beat Garry Kasparov in one game but lost the match. That win was a first, however.

Deep Blue won a match against Kasparov in 1997. This happened after Deep Blue made a random valid move to break itself free when stuck in a loop. Kasparov took that random, pointless move as a sign of “superior intelligence,” and it unnerved him.

Kasparov never got the rematch he wanted.

Many AI tools are used daily

Business and industry use AI programs for many functions. This can be a good thing.

Credit card fraud

Financial institutions use AI systems to check whether credit or debit purchases are legitimate. Many years ago, my spouse received a call regarding transactions made on his card: the transactions were made in Las Vegas, but we were at home in Washington. We were thankful a computer flagged the purchases.
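As a minimal sketch of that kind of geographic check (the function, fields, and rule here are my own illustration, not any bank’s actual system), a fraud filter might compare where a purchase happens against where the cardholder has recently been:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    card_id: str
    amount: float
    city: str
    state: str

def is_suspicious(txn: Transaction, home_state: str, recent_states: set[str]) -> bool:
    """Flag a purchase made far from the cardholder's usual locations.

    A real system weighs many more signals (merchant type, spending
    velocity, device fingerprints); this sketch checks geography only.
    """
    # Out-of-state purchase with no recent activity in that state: flag it.
    return txn.state != home_state and txn.state not in recent_states

# The Las Vegas charges would be flagged while we were home in Washington.
txn = Transaction(card_id="x1234", amount=250.00, city="Las Vegas", state="NV")
if is_suspicious(txn, home_state="WA", recent_states={"WA", "OR"}):
    print("Hold the transaction and call the cardholder to verify.")
```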

Drug interactions

Computers using AI have flagged prescriptions made for me by doctors at the clinic and at the pharmacy.

I am allergic to fluticasone sprayed in my nose for allergies. However, my pulmonologist wanted to try inhaled fluticasone propionate to solve some issues with my asthma. We discussed the allergy recorded in my medical records and what might happen if I used fluticasone propionate. After those discussions, we decided to try the drug for my asthma.

The electronic medical records system used by the clinic flagged fluticasone propionate when it was prescribed, requiring an override by the doctor. The computer system at the pharmacy flagged it as well, requiring the pharmacist to ask me some questions.

This is exactly the way I want things to happen: flag potential problems, but allow a human to override.
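Here is a minimal sketch of that flag-then-override pattern (the drug names come from my story; the function and data structures are hypothetical, not any real medical records or pharmacy system):

```python
from typing import Optional

# Hypothetical allergy records keyed by patient ID.
RECORDED_ALLERGIES = {"patient-42": {"fluticasone"}}

def check_prescription(patient_id: str, drug: str,
                       overridden_by: Optional[str] = None) -> str:
    """Flag a drug matching a recorded allergy, but let a human override."""
    allergies = RECORDED_ALLERGIES.get(patient_id, set())
    # Simple substring match: "fluticasone propionate" matches "fluticasone".
    conflict = any(allergen in drug for allergen in allergies)
    if not conflict:
        return "filled"
    if overridden_by:
        return f"filled (allergy flag overridden by {overridden_by})"
    return "held: allergy flag requires a human override"

print(check_prescription("patient-42", "fluticasone propionate"))
print(check_prescription("patient-42", "fluticasone propionate",
                         overridden_by="pulmonologist"))
```

The flag never silently disappears and never silently blocks; a named human makes the final call either way.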

For the record, using fluticasone propionate has drastically cut the need for my rescue inhaler. I still won’t spray fluticasone in my nose.

Also, I very intentionally will not transfer my prescriptions to my health insurance carrier’s mail-in service for maintenance drugs. I want my local (chain) pharmacy to be able to check any new short-term prescription (say, for a nasal infection) against my long-term prescriptions. The system won’t be as fail-resistant if my maintenance prescriptions are filled by a different pharmacy (in another state) than the short-term prescriptions filled at my local pharmacy. Let’s give AI a fair chance to work where appropriate.

Vehicle safety

I recently drove a 2023 vehicle with some advanced systems in it. I did not know most of those systems were in the vehicle when I started driving it.

The collision avoidance system was one such unknown, and it fascinated me when it worked. A car ahead of me suddenly stopped to make a right turn at an intersection even though the traffic light was green. Suddenly I saw flashing red lights in the heads-up display on the windshield, and the vehicle braked while I was still moving my foot to the brake. I was impressed.

Of course, being the curious type, I started looking for other Easter eggs.

An indicator warned me if I strayed from my lane. It only worked when there was a clear lane marking on both my left and right sides. On a city street without a right fog line, the indicator did not work. On a two-lane highway in the mountains, the software would sometimes warn me unnecessarily during right turns.

While collision avoidance worked very well on a city street, the monitor for following time behind the vehicle in front of me did not work so well. I tried the optional setting to display “following time” on the dash.

There was no warning when I followed only 1.0 to 1.5 seconds behind a car. The display did show that I am good at holding 2.0 to 2.5 seconds at freeway speeds, from years of following an older rule. However, the current recommendation is to follow at least 3 seconds behind, or 4 seconds in a larger SUV or van. This system failed; it did not warn at gaps that are too close even for a sedan.
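The arithmetic behind that failure is simple. Here is a sketch (my own illustration, certainly not the vehicle’s actual firmware) using the recommendations above:

```python
def following_time_s(gap_m: float, speed_mps: float) -> float:
    """Seconds until you reach the point the lead vehicle occupies now."""
    return gap_m / speed_mps

def should_warn(gap_m: float, speed_mps: float, large_vehicle: bool = False) -> bool:
    """Warn below 3 seconds for a sedan, 4 seconds for a larger SUV or van."""
    threshold = 4.0 if large_vehicle else 3.0
    return following_time_s(gap_m, speed_mps) < threshold

# At roughly 65 mph (about 29 m/s), a 40 m gap is only ~1.4 seconds;
# the system I drove stayed silent there, which is the failure described above.
speed = 29.0
for gap in (40, 90):
    t = following_time_s(gap, speed)
    print(f"{gap} m gap = {t:.1f} s -> warn: {should_warn(gap, speed)}")
```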

AI can be a problem

A lawyer can use an AI chatbot to create a list of legal citations on a specific subject, then verify each citation for relevance and existence while writing a brief. One lawyer found out the hard way that ChatGPT hallucinates when he asked it to write the brief itself. And it is not only ChatGPT: large language models can make up false answers without any warning.
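A sketch of that verify-before-use workflow (the lookup table is a stand-in for a real legal database, and the fabricated case below is invented for this example):

```python
# "Trust but verify" applied to AI-generated citations. KNOWN_CASES is a
# stand-in for a real legal database; no real research tool works this way.
KNOWN_CASES = {
    "Brown v. Board of Education, 347 U.S. 483 (1954)",
}

def verify_citations(generated: list[str]) -> list[str]:
    """Keep only citations that actually exist; treat the rest as suspect."""
    verified = []
    for citation in generated:
        if citation in KNOWN_CASES:
            verified.append(citation)
        else:
            print(f"Possible hallucination, do not cite: {citation}")
    return verified

# A chatbot can mix real and invented cases in the same answer.
drafted = [
    "Brown v. Board of Education, 347 U.S. 483 (1954)",
    "Smith v. Acme Shipping Co., 999 F.3d 1234 (9th Cir. 2020)",  # fabricated
]
print("Safe to use:", verify_citations(drafted))
```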

Predictive policing sounds great: assign resources to watch AI-generated lists of places or people. However, if the training data is “derived from or influenced by corrupt, biased, and unlawful practices,” the machine predictions carry the same biases as human predictions. Tech-washing does not lead to better policing predictions; it emphasizes crime in some neighborhoods while ignoring the exact same crime in others.

AI is useful and dangerous

Have predictive policing programs been trained with accurate, science-based data? Do people check for hallucinations from large language models?

What is the actual task AI is trying to do for us? Is that what AI is doing, or is it something else?

AI can be programmed with bad information. Garbage in. Garbage out.

Don’t let our computer overlords misplace or damage us. Trust but verify.

Postscript

Did you notice that I linked to my sources? Unlike regular ChatGPT.

Cover graphic from “I tried Bing’s AI chatbot, and it solved my biggest problems with ChatGPT.” The paid version of ChatGPT answered the question accurately, unlike the free version.