Artificial Stupidity

mihri
5 min read · Sep 4, 2018

Artificial: Made or produced by human beings rather than occurring naturally, especially as a copy of something natural.

Stupidity: Behaviour that shows a lack of good sense or judgement.

Artificial Intelligence has been a big topic for a few decades already and draws a lot of attention from people with different backgrounds. The ultimate goal of AI is to exceed human intelligence, which is also the point where people get anxious. For the whole of human history — as the name already tells — humans have been the rulers. There have always been clusters of humans ruling over other classes, of course, but imagining another species above humans is a big threat to the human consciousness.

Let’s talk a bit about how a machine gets smart:

A machine has to have some knowledge in its initial state and an ability to plan and manage its own learning. It can collect information with its sensors: while moving around to explore the dimensions of its environment and the size of an indoor space, it can find a path by sensing the distance to obstacles. It can understand human languages and, with that knowledge, learn new things.

Robots can already do things that humans can't — solving a complicated multiplication in milliseconds. Still, the point of exceeding human intelligence is far away:

A computer gives an answer to everything a human can.
A computer can perform everything a human can.

Then the critical questions come to mind: when do robots become that smart, and will they rule the world and humanity? Will Skynet become reality?

Alan Turing suggested that a machine could carry out any mathematical deduction, and discoveries in neurology and cybernetics led researchers to consider the possibility of building an electronic brain.

The Turing test is simple: a human and a machine both talk to another human (the interrogator). The interrogator has to determine which one is the human and which one is the machine, based on their responses to written questions.

There is a competition based on the Turing test every year, in which programs try to convince the interrogators that they are human. The goal of furthering the scientific understanding of complex human behavior is no less laudable now than it was before the competition, but clearly, a direct assault on a valid test of intelligent behavior is out of the question for a long time; even the prize committee well appreciates that. Thus, any award or prize based on a behavioral test must use a limited task and domain, so that the envelope of technology is pushed, not ignored. The efforts of the Loebner prize committee to design such a test have failed in that the test that they developed rewards cheap tricks like parrying and insertion of random typing errors. This is an (indubitably predictable) lesson of the 1991 Loebner prize competition.

A bot that answers every question instantly and without any mistakes is not convincing. That is why bots reply late (you see "Writing…" for a while on your screen), use slang and make grammar mistakes. Bots are trying to act a bit STUPID.

Clearly there are two main directions for AI:

1. be like a human

2. be intelligent

And being like a human includes being stupid as well, which is why there is a newer term, Artificial Stupidity, with its own purposes and use cases.

According to its definition, a sufficiently developed Artificial Stupidity program would be able to act out all the worst cases of a given situation. This would enable computer programmers and analysts to find flaws immediately and minimize the errors within the code.

However, this behaviour of a program is mostly expected to be used during the development, testing and debugging stages of computer software.

Artificial Stupidity for Games

There are many ways to deliberately introduce poor decision-making into search algorithms. Let's look at the minimax algorithm. It is an adversarial search algorithm popularly used in games where players compete against each other. Its main purpose is to choose the move that maximizes your chance of winning while avoiding moves that maximize your opponent's chance of winning. An algorithm like this is extremely beneficial for the computer, since computers are able to calculate thousands of moves ahead.
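As a rough illustration (not from the original article), here is a minimal minimax sketch in Python. The game-specific helpers `get_moves`, `apply_move`, `is_terminal` and `evaluate` are hypothetical placeholders you would implement for a concrete game such as tic-tac-toe.

```python
def minimax(state, depth, maximizing, get_moves, apply_move, is_terminal, evaluate):
    """Return the minimax value of `state`, searching `depth` plies ahead.

    `get_moves`, `apply_move`, `is_terminal` and `evaluate` are
    game-specific functions supplied by the caller (hypothetical here).
    """
    if depth == 0 or is_terminal(state):
        return evaluate(state)  # heuristic score from the maximizer's point of view

    if maximizing:
        # The computer picks the move with the highest guaranteed value.
        return max(
            minimax(apply_move(state, m), depth - 1, False,
                    get_moves, apply_move, is_terminal, evaluate)
            for m in get_moves(state)
        )
    else:
        # The opponent is assumed to pick the move worst for the maximizer.
        return min(
            minimax(apply_move(state, m), depth - 1, True,
                    get_moves, apply_move, is_terminal, evaluate)
            for m in get_moves(state)
        )
```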

A heuristic is any approach to problem solving, learning or discovery that employs a practical method which is not guaranteed to be optimal or perfect, but is sufficient for the immediate goals.

To "dumb down" this algorithm and allow for different difficulty levels, the heuristic functions have to be adjusted.

You may think that creating heuristic functions that allow for stupidity is easy, but it is not, and here is why:

If the heuristic always allows for the best move, the computer opponent becomes too strong, and the game becomes frustrating and unenjoyable for the user. But if the heuristic is poor, the game becomes too easy and therefore equally unenjoyable. A balance of good moves and bad moves in an adversarial game thus relies on a well-implemented heuristic function.

So the computer knows all the possibilities it can choose from and can rank every decision based on some criteria. During the game, it would pick not the best move but one that matches the player's level, to keep the game challenging and enjoyable, as in the sketch below.
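One hedged way to express this in code is to rank all legal moves with the heuristic and then widen the pool the computer picks from as the difficulty drops. The `score_move` helper and the `difficulty` parameter here are assumptions for the sake of illustration, not part of any specific game engine.

```python
import random

def choose_move(state, legal_moves, score_move, difficulty):
    """Pick a move that matches the player's level instead of the best one.

    `score_move` is a hypothetical helper returning a minimax/heuristic
    score for a move; `difficulty` ranges from 0.0 (very weak play)
    to 1.0 (always plays the best move it knows).
    """
    # Rank all legal moves from best to worst according to the heuristic.
    ranked = sorted(legal_moves, key=lambda m: score_move(state, m), reverse=True)

    # At lower difficulties the computer picks from a wider slice of the
    # ranking, deliberately including weaker moves.
    pool_size = max(1, round(len(ranked) * (1.0 - difficulty)) + 1)
    return random.choice(ranked[:pool_size])
```

At `difficulty=1.0` the pool shrinks to the single best move; at `difficulty=0.0` any legal move can be chosen, which is exactly the kind of deliberate, tunable stupidity the paragraph above describes.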

Wall-E has a childlike fascination with the world and is more likely to play with a box than, say, the diamond ring that came in it. A computer only values things according to what it is taught.

Finally, when it comes to human intelligence, people are naturally quite confused about the topic. We are taught to think that what makes us successful as a species is our intelligence, and that we rule the earth because we are smart. But firstly, we do not rule the earth, and secondly, evolution does not select for smartness; it selects for survival.

Cockroaches rule the earth and they are smart enough not to think it’s because they have big brains.

BONUS: Here is some good further reading
