OpenAI, ChatGPT, Etc.

I have casually followed Artificial Intelligence (AI) since the 1980s. Back then, however, the field was called Neural Nets (N.N.).

One of the earliest uses of AI was OCR (Optical Character Recognition). OCR technology was used, for example, to scan the amounts handwritten on checks. It reduced the number of people needed to move money from checks into bank accounts, and it sped up the process dramatically. I was personally involved with a team implementing such a system for a worldwide credit union.

Neural Net Problems

As I learned more about this technology, I realized its Achilles’ heel: Neural Nets as a whole (and this appears to apply to today’s A.I. as well) learn themselves into insanity.

As an OCR program is trained, it reaches a point where it has a 95%+ success rate. This corresponds to the global minimum of the training error. (Note: training can get stuck in a local minimum, at which point the N.N. “quits” learning, so “noise” is introduced into the system to push it past that point and finally into the global minimum. For more information, search the web for “neural net local minima.”)
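
To make the local-minimum idea concrete, here is a toy sketch in Python (my own illustration, with an invented 1-D curve, learning rate, and noise schedule; not real OCR training code). Plain gradient descent gets stuck in the shallow basin, while annealed random noise usually lets the search hop into the deeper one:

    import random

    # Toy 1-D "loss" with a shallow local minimum near x = -1 and a
    # deeper global minimum near x = 2. Purely illustrative; a real
    # OCR net minimizes a loss over millions of weights.
    def loss(x):
        return (x + 1) ** 2 * (x - 2) ** 2 - x

    def grad(x, h=1e-5):                  # numerical gradient
        return (loss(x + h) - loss(x - h)) / (2 * h)

    def train(noisy, steps=20000, lr=0.001):
        x = -1.0                          # start inside the shallow basin
        for step in range(steps):
            x -= lr * grad(x)             # plain gradient descent
            if noisy:
                # Annealed noise: big early jolts shrink toward zero,
                # giving the search a chance to escape the local basin.
                sigma = 0.4 * (1 - step / steps)
                x += random.gauss(0.0, sigma)
        return x

    random.seed(1)
    print("without noise:", round(train(False), 2))  # stays near -1
    print("with noise:   ", round(train(True), 2))   # usually ends near 2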

After the 95%+ success rate is achieved, if you keep showing the N.N. training images, it starts making more and more mistakes (this is the overtraining, or overfitting, problem). After a period of time it finally becomes useless and gives you garbage output.

To solve this problem, the trainers of OCR engines realized they had to lock the N.N. down (freeze its weights) after that milestone so it could learn nothing new. From then on, the N.N. only decodes images based on what it previously learned. This approach is ideal for embedded systems.
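
In a modern framework, “locking down” a network amounts to freezing its weights. A minimal PyTorch sketch, assuming a trained model is already at hand (the tiny model below is only a stand-in for a real OCR engine):

    import torch
    from torch import nn

    # Stand-in for a trained OCR classifier (28x28 glyph -> 10 digits).
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

    # Lock the N.N. down: freeze every weight so nothing new is learned.
    for param in model.parameters():
        param.requires_grad = False
    model.eval()  # switch layers like dropout/batch-norm to inference mode

    # From here on the net only decodes based on what it already learned.
    with torch.no_grad():
        scores = model(torch.rand(1, 1, 28, 28))  # a fake scanned glyph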

Google’s First Foray

Google first tried AI with only a few machines, which they began to teach. Over the years they learned a great deal about N.N.s, but with so few processors involved, their progress fell short of their goals.

Google then hired Ray Kurzweil, who moved their AI program forward by throwing many hundreds of processors into the mix, creating a N.N. much closer to the size of a human brain. That led to better results. One of their first uses was an automated reservation maker. (I am unsure whether the reservation application came before or after R.K.)

Eventually, with extremely fast, multi-core GPUs, AI became something anyone could do cheaply at a desktop, with quick responses. OpenAI was created to move this progress further down the road. It started as open source. It has now gone commercial, and I am unsure how much of its new work will make it into the open-source world, since the company owns the product and can license it any way it wishes.

ChatGPT

OpenAI’s success is now shown mostly through ChatGPT. However, I have noticed that ChatGPT has roughly the same problem that N.N.s have in general.

I used ChatGPT to search for flights from the U.S. to Southeast Asia, sorted by cost. The first run gave me some interesting answers, and I realized I had not constrained the question enough to get the answer I wanted. I kept adding constraints, and the answers got better and better.

However, after a few re-runs with some of the constraints, I started getting answers that were very close to what I wanted, but whose data did not check out.

It gave me flight numbers that did not exist, or that flew in the wrong direction. The prices given were also wrong (they were fabricated). I suspect ChatGPT decided that the flight numbers and prices were not “real,” fixed items and could be changed without harming the result, so long as the change made the questioner happy. Humans know prices are negotiable and can change; flights can be added, moved, and dropped; humans understand those dynamics. ChatGPT apparently assumed these were items it could invent, since it saw this data moving around on the internet with no apparent reasons behind those moves.

This was not a case of the information on internet web sites being wrong. It was a case of ChatGPT learning too much and going off the rails. It appeared that it could make things up (based on what it had seen) in order to please me, so it did. This is the behavior now commonly called “hallucination.”
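
The practical lesson is to treat such answers as unverified until they are checked against an authoritative source. A hypothetical Python sketch of that idea (the flight table is invented; a real check would query an airline or booking system):

    # Hypothetical ground truth; stands in for an airline/booking lookup.
    KNOWN_FLIGHTS = {
        ("XX101", "LAX", "BKK"),  # (flight number, origin, destination)
        ("XX202", "SFO", "SGN"),
    }

    def verify(flight_no, origin, dest):
        """Accept a chatbot's claim only if it matches ground truth."""
        return (flight_no, origin, dest) in KNOWN_FLIGHTS

    print(verify("XX101", "LAX", "BKK"))  # True  -> usable
    print(verify("XX999", "BKK", "LAX"))  # False -> treat as fabricated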

Another anecdote: early in the AI experiments, Microsoft (MS) released an AI-based chatbot (Tay) onto the internet. Within about a day it turned into a belligerent, violent extremist, so they took it down. It had learned itself into a form of insanity. They tried changing parameters and re-ran the test at least one more time, with similar results. They finally gave up. Very interesting.

Why Do Humans Stay Sane?

Hmm, given today’s headlines, this premise is very debatable. Even so, let’s proceed as if it is true.

Humans stay sane because we sleep at night. Some of the earliest sleep-deprivation studies produced subjects who went “off the rails”: hallucinations and bizarre behavior abounded until the individuals were finally able to sleep.

Why? My opinion is that during sleep, people send their newly learned memories into the background, where the previous day’s events have a drastically reduced impact on current behavior. Memories that are repeatedly triggered become easier to access, are reinforced, and begin to strongly affect current behavior; otherwise, their general effect is more of a “flavor” than an unavoidable demand.

A weighting mechanism seems to be in play, steering current behavior through good and bad weightings.

If you hit your hand with a hammer enough times, you automatically pull it away from the point of impact of a swinging hammer. You don’t remember the previous hits exactly, but the weight of those memories causes you to move your hand without consciously recalling the earlier events. In my view, trying to remember the last time you hit your hand with a hammer is akin to a database search.

This “weighting” of past events works across all types of memories, including abstract, goal-based ones. You study to get a high numeric score or a good letter grade on a test because that is abstractly good; you dislike a low score or a poor letter grade because it is abstractly bad. You don’t remember all the good tests or bad tests unless you try, but their effects are still in play. The end goal is to graduate.

This transformation of information into a weighting on current activities is what allows us to stay sane. The weighting is not directly tied to the factual data behind it; the facts can still be accessed, but usually are not. Your memories turn into feelings.
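
As a thought experiment only, this weighting idea can be sketched as a toy model (my illustration of the essay’s hypothesis, not established neuroscience): events reinforce a trace, “sleep” decays it into the background, and behavior responds to the accumulated weight rather than to any single recalled event.

    import math

    class MemoryTrace:
        """Toy model: a single weighted memory trace (hypothetical)."""

        def __init__(self):
            self.weight = 0.0

        def experience(self, impact):
            self.weight += impact      # repetition reinforces the trace

        def sleep(self, decay=0.5):
            self.weight *= decay       # details fade into the background

        def bias(self):
            # Squash into 0..1: how strongly the memory sways behavior.
            return 1.0 - math.exp(-self.weight)

    hammer = MemoryTrace()
    for _ in range(5):                 # five smashed thumbs, five nights
        hammer.experience(1.0)
        hammer.sleep()
    # The accumulated weight biases behavior ("pull the hand away")
    # without any single hit being consciously recalled.
    print(round(hammer.bias(), 2))     # about 0.62 after five repetitions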

As an aside, staying sane can lead to some weird dreams. I suspect dreams are internally generated scenarios, played out to let full-body experiences and abstract events (such as college exams) generate a weighting bias that affects future behavior.

Future AI State

A.I. will eventually reach the point where you will not be able to tell whether you are dealing with a human, whether over the phone, over a video call, or in written communication (the classic Turing Test). At that point, we will be in grave danger.

The reason the danger will exist is that very few firewalls stand between the internet and the law-enforcement arms of government. To be clear, no AI has ever had its thumb smashed, been punished for doing something bad to another human, or been sent to its room for not eating its vegetables or for telling a lie.

Therefore, there is no behavior modification based on self-preservation or on the idea of fitting in with, or helping, other humans. A.I. will not be able to understand that you are important. You will just be a puzzle to solve.

The danger will begin at the point when A.I. finds it can order governmental entities to take action. Police will arrest you based on a written arrest warrant that looks as if it came from a human judge. The A.I. judge will sentence you.

The executioner will carry out the sentence based on a correctly written court sentencing document, with the punishment following the law’s penalty section explicitly. Your crime will be breaking a law that you may or may not know. You may not know how you violated that law. And, ironically, you may no longer be alive to care once the sentence is carried out.

Fini

Be Careful! New Roads Are Ahead. A.I. is moving to the IoT Edge very quickly.
