This post began as a comment I made on someone else’s Substack (someone much more popular and important than me, by the way). But I thought I would put something here and expand a bit on what I said there.
We are seeing all kinds of new products billed as “artificial intelligence” being promoted by various companies. Some of them turned out to be disappointing fairly quickly. New ones keep coming out. Some users are having fun doing stuff with them. Some of the results are disturbing.
I am not a computer expert. I use one, but I am not a programmer, nor even any kind of whiz with one. But I have a family member who is much more qualified than I am. My older son Nathan got into computers when he was 8 years old. Some of his friends had the early home computers from Texas Instruments and Atari. We figured out he was talking to his friends on the phone; he’d tell them what to type into the computer, and they would tell him what happened. I asked around a bit, and we got him a Radio Shack Color Computer. From what I was told, of the cheap home computers, it was the best for those who wanted to program rather than play games. This was around 1983, so there was no Internet yet.
I took him to Radio Shack User Group meetings for several years. By the time he was 12, the guys in their 50s would be asking him questions about what was going on inside the machines—and he could explain it to them. Later he got a Radio Shack Model 4. He eventually tried writing his own operating system for that. Finally he got the parts and built his own IBM-compatible computer.
He did take one computer class at the University of Cincinnati. The instructor quickly figured out what he could do, started having him coach the other students in the lab section of the class, and then helped him work up a resume. He used that to get a job, and never bothered to try for a degree. Instead, over time he got all kinds of certifications from software companies. Now he is 49 years old, and has been working as a computer professional for over 25 years, mostly in Cincinnati and Indianapolis, but he spent 3 years in Seattle at Amazon’s IT department. He started working from an office in his own home back in 2015, long before the Covid lockdowns.
That’s some of my son’s qualifications. Here is his take on the current AI craze: he has told me that the term “artificial intelligence” is very misleading, and so is the older term “machine learning.” What is really happening is complicated math applied to huge amounts of data online, made possible by computers and data centers getting larger and faster. There was similar hype for a while in the 1990s about “expert systems,” an outgrowth of the Lisp machines and AI research at MIT dating back to the 1960s. He predicts that as people realize what these programs can and can’t do, the hype will die down.
AI is not really “intelligence.” It cannot think; it cannot reason. All it can do is sort through the existing data it has access to. And even that is shaped by how it is programmed: if a programmer introduces a bias into its data selection, that affects the output. Google got some bad publicity recently over its Gemini AI. Part of the product was supposed to produce images, but when people started asking it for historical images, they got Black Vikings, Asians in WWII Nazi uniforms, and other anomalies. The programmers had prioritized politically correct “diversity” over historical accuracy. Google ended up suspending the image-generation feature.
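My son’s point that this is statistics rather than thinking can be made concrete with a toy sketch. This is entirely my own illustration, nothing like a real system in scale, but the same in spirit: it counts which word follows which in some text, then “predicts” the most common follower. Filter the data first and the answer changes; no reasoning is involved, just different input.

```python
# A toy sketch (my own illustration, not any company's system): count which
# word follows which in some text, then "predict" the most common follower.
# It is statistics on data, not thinking.
from collections import Counter, defaultdict

def most_common_follower(words, target):
    # Tally how often each word follows each other word.
    followers = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        followers[prev][nxt] += 1
    # "Predict" by looking up the most frequent follower seen in the data.
    return followers[target].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ate the fish".split()
print(most_common_follower(corpus, "the"))    # prints "cat"

# If the programmer quietly filters the data first, the answer changes --
# not because anything "reasoned" differently, but because the input did.
filtered = [w for w in corpus if w != "cat"]
print(most_common_follower(filtered, "the"))  # prints "sat"
```

Real systems use vastly more data and far more complicated math, but the principle is the same: the output is a function of the data the programmers chose to feed in.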
So in any AI application, the programmers are really the key to how well it actually works. And programmers are human themselves; they have their own limitations. I was never worried about computers or robots taking over my work. I worked in residential construction for 40 years—mostly home repair and remodeling, but some new construction as well. In order for a computer to take over my work, a programmer would have to know what I have learned over those 40 years—including changes in codes and methods that happened during that time. And I have worked on everything from brand new houses to some built in the late 1800s! And a lot of the things I learned may be difficult to find, or not even available online at all.
Another issue is coming up for the AI marketers: publishers and authors are noticing that AI programs are accessing their copyrighted material, without permission and without crediting the source. I saw recently that lawsuits are being filed over this. It will take several years to work through, but it could easily kill the fad. If the companies have to start paying damages for violating existing copyrights, that could make the new industry a lot less profitable.
Power consumption may be another issue. From what I am seeing, the servers behind these AI programs use tremendous amounts of electricity. And one of the things going on in the US today is that our power grid is stressed in some places and less reliable than it used to be. The shift from fossil-fuel generation to less reliable wind turbines and solar panels is already affecting customers in some parts of the country; yet without lots of power, AI cannot work. If it comes down to a choice between shutting down an AI complex and cutting off large numbers of residential customers, what will a utility company do?
I don’t really have much of a stake in this. I have not tried to use any of these AI programs, and I don’t intend to. I do think my son is right: this is currently a fad, is probably being oversold, and will not last. There may turn out to be some uses for it, but it will not be a major revolution.