The term “artificial intelligence” belongs to the same class of concepts as “people’s democracy.” The adjective changes everything. “People’s democracy” denoted an essentially totalitarian system, the very opposite of what the encyclopedic definition of democracy describes. Similarly, the term “artificial intelligence” is applied to essentially automatic, programmed systems, and therefore it sits closer to notions such as “unreflective” or “instinctive” – the opposite of what we expect from human intelligence. Yet human intelligence was the root of the concept of AI, and expectations of achieving a human-like AI are still being formulated.
Let us start with the fundamentals. What is a computer and how does it work? As an example, we will use a toy for 4-year-olds: a cuboid with a (partly) transparent casing, “drawers” on the sides, and a hole for balls on the top. Depending on which drawers are pulled out and which are pushed in, a ball dropped in at the top travels through the toy along different paths and comes out through one of several holes at the bottom. For a 4-year-old it is great fun – watching how the ball’s course changes with the setting of the drawers (switches). For us, it is an ideal picture of how a processor works. That is, in principle, how every CPU works. The processor is our cuboid; the balls are electrical impulses “running into” it through some of the pins and leaving it through others, much like our balls thrown in through one hole and falling out through another. The transistors of which the processor is built serve as the drawers (switches): each can be in or out (switched to one state or the other), changing the course of the electrical impulse (our ball) inside the processor.
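To make the picture concrete, here is a minimal sketch in Python of the toy itself. The function name and the routing rule are invented purely for illustration; this is not a model of any real processor, only of the toy’s deterministic, mechanical behaviour:

```python
# A minimal sketch of the toy described above (assumed names, invented
# routing rule - an illustration, not a model of any real CPU).
# Each "drawer" is a switch: pulled out (True) or pushed in (False).

def drop_ball(drawers, entry=0):
    """Route one ball through the toy and return the exit hole index."""
    position = entry
    for pulled_out in drawers:
        # A pulled-out drawer deflects the ball one slot sideways;
        # a pushed-in drawer lets it fall straight down. Nothing else happens.
        if pulled_out:
            position += 1
    return position

# The same drawer settings and the same entry hole always give the same exit:
print(drop_ball([True, False, True]))    # 2
print(drop_ball([False, False, False]))  # 0
```

However many drawers and balls we add, the mapping from switch settings to exit holes remains a fixed, mechanical rule.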
So the processor, as to its principle of operation, is nothing more than a simple toy for 4-year-olds. It is just that we throw in not one ball at a time but several dozen, and we repeat this action billions of times per second. And we have not four or six drawers but a few billion. Does anyone sane really believe that if we put billions of balls into a plastic cuboid with billions of drawers, then at some moment this cuboid, or these balls, or the two together, or perhaps the mere movement of these balls, will become consciousness? And will want to watch the sunset or talk about Shakespeare’s poetry? If so, then self-consciousness should be expected from the planet Earth or its oceans.
Could even 100 trillion plastic balls, running along the most complicated paths through a huge plastic cuboid with trillions of movable drawers whose positions change as the balls move, cause a qualitative leap and produce the “digital singularity” that wise professors describe as self-awareness? And this pomposity... We stand at the threshold of the “Big Change,” after which nothing will be the same, our world will change completely, and so on, and so on – in short, the typical apocalyptic visions present in every era for centuries. Nihil novi sub sole.
I read about ideas like “If we add up many specialized (intelligent) systems, we will get a ‘general intelligence’ as a result.” It is like saying, “If we add up many modern specialized garden tools, we will get a gardener as a result.” No, we will not. You cannot add an electric hedge trimmer to a garden irrigation system, just as you cannot add a quantitative system based on partial differential equations to an advanced search engine.
And let us not be confused by wise-sounding words like “quantum effects” or even “quantum microprocessors.” They do not change the essence of things, just as phosphorescent or faster-than-sound balls would not change the way our toy works. The funniest thing is that this very idea was popularized by a famous sci-fi movie of the ’80s: Skynet from “The Terminator” rests on this concept – the belief that quantity will turn into quality in a natural, spontaneous way. The same way of thinking, in the pre-electronic era, produced the belief that a thinking machine is just a matter of a sufficient number of gearwheels. In fact, we are not that far away from that thought, with our CPUs working the same way as a primitive children’s toy.
It is even easier to see against the historical background. This kind of thinking has repeated itself for centuries. Mary Shelley’s “Frankenstein,” written at the dawn of the electric era, is a good example: a strong faith that becoming gods – able to create life, intelligence, and new beings – is within our reach. Each new revolution – mechanical, electrical, or the contemporary IT one – is assumed to propel us across this threshold. This is a very strong belief, a part of human nature. But should we rely on beliefs and deep faith where logical, reasonable thinking is enough? Anyway, whoever wants to believe may believe. That is the principle of free will – something very hard to instill in machines.
Anyway, the question behind AI is:
“Do we believe that we can turn a plastic toy for 4-year-olds into a thinking being?”