Academic research in Artificial Intelligence is focused on creating machines able to simulate human intelligence. To achieve this goal, research often concentrates on studying the human brain and trying to recreate its neurological pathways in software.
At the current state of the art, our knowledge of the brain is very limited. There is no shared, common definition or understanding of human intelligence, and we know very little about how the brain processes information, how our memory works, and how consciousness arises.
Many countries have launched national projects to study the human brain. For example, the European Union launched the Human Brain Project, an H2020 FET Flagship Project which “strives to accelerate the fields of neuroscience, computing and brain-related medicine”. The results of this project are important for understanding how the brain works in order to recreate it. The problem is that, according to many sources, its achievements are still at an early stage and lack real contributions. (The Human Brain Project has been funded with 1 billion euros in order to achieve an Artificial General Intelligence based on the structure of the human brain. Researchers are finding out that it could be a little more complicated than they thought.)
In the private sector, constrained by financial targets, researchers can’t wait for a deep understanding of the human brain, so they have shifted the original objective of Artificial Intelligence from “create a machine that behaves like a human” to the more pragmatic “create a machine able to set and pursue a goal by itself”.
A difference in definition makes a deep difference in execution. Machines are required to accomplish their jobs as well as possible. It really doesn’t matter how they “think”, even if this opens questions such as: “Why and how does it achieve the goal?”. That is to say, researchers can’t always understand how a machine’s mind works.
It’s a change of the rules of the game. We are no longer trying to clone our mind; we are going to create a new form of mind. From this point of view, what we will face in the near future is the rise of “machine intelligence”: an intelligence that will not follow the rules of our mind but will be able to perform even better than the human one.
Experiments in recent years have shown us some examples. Facebook’s Artificial Intelligence agents invented their own language in order to communicate with each other. It was an unexpected behaviour that alarmed the researchers and led them to shut down the system. In another test at Google, an AI created its own language to facilitate translation from Japanese to Korean. Another experiment at Google showed that AI can use “imagination” to evaluate actions before pursuing them.
Can we say that this is not a kind of intelligence? From a philosophical point of view, it’s hard to deny. Indeed, even philosophy, cognitive science, and neuroscience can’t explain what intelligence is. Are these systems stupid just because they don’t understand in the way we do? Can we consider them “stupid” just because they develop skills for only a narrow goal?
An AI algorithm analyses the input data and tries to find relations among them. The relations are often statistical ones, even when an image or a sound is analysed. The analysis lets the machine find the “features” characterizing that image or sound, and the machine creates a function that describes the correlation. When the function is found, it’s as if the machine were saying: “Hey, I can recognize your face among millions out there because you have tons of pimples”. But in its own terms, the machine is describing something like this: f(x) = w1a1.
Every time the machine finds that f(x) = w1a1, it knows it is seeing you. It recognizes you. How do we recognize faces? Because they have some specific features. Machines do much the same, but they describe it in a different way.
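The weighted-feature idea above can be sketched as a toy linear scorer. This is a minimal illustration, not any real face-recognition system: the feature names, weights, and threshold are all hypothetical.

```python
# Toy sketch of the weighted-feature function f(x) = w1*a1 + w2*a2 + ...
# All feature names, weights, and the threshold are invented for illustration.

def f(features, weights):
    """Linear score: the sum of weight * feature value for each feature."""
    return sum(weights[name] * value for name, value in features.items())

# Hypothetical "features" the machine might have extracted from faces.
weights = {"pimples": 0.9, "eye_distance": 0.3}
your_face = {"pimples": 1.0, "eye_distance": 0.5}
other_face = {"pimples": 0.1, "eye_distance": 0.4}

THRESHOLD = 0.8  # arbitrary recognition threshold

print(f(your_face, weights) > THRESHOLD)   # True: score 1.05, recognized
print(f(other_face, weights) > THRESHOLD)  # False: score 0.21, not recognized
```

The machine never “sees” pimples the way we do; it only checks whether the weighted sum of numbers crosses a threshold, which is the point the text is making.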
Can we say they don’t understand?
Probably machines will just “think different”.