
Artificial Intelligence Is Living in a Filter Bubble

In his article in The New York Times, Gary Marcus, professor of psychology and neural science at New York University, argued that Artificial General Intelligence is far from coming to life. According to Prof. Marcus, this is due to both technological and organizational limits.

From the technological point of view, Prof. Marcus states that today’s technologies are far from “understanding” the real world. They are great at some specific tasks, but they treat data as input per se, without being able to grasp the “meaning” that the data have in the real world. For example, says Marcus, a machine can’t distinguish the reflection of an object in a ball of water from the real object. “There is no difference between the reflection and the real thing,” writes Prof. Marcus, “because the system lacks a theory of the world and how it works.”


From the organizational point of view, he explains that the main problem is that university labs are too small to work on complex tasks, due to a lack of funds and computational resources. On the other side, the private sector and giants like Google, Facebook, IBM, Amazon, etc. are too tied to the quarterly report, and “they tend to concentrate on narrow problems like optimizing advertisement placement or automatically screening videos for offensive content”. They have the resources, Marcus says, but they lack a general purpose.

I agree with Marcus about the organizational problem, but I don’t totally agree with his technological point of view.

First of all, it’s not entirely true that machines have no understanding of the real world. Many experiments have shown that, when well trained, algorithms perform very well, sometimes better than humans. Algorithms build models starting from data: models are mathematical representations of the “world” underlying the data they receive, and those models are the machines’ own way of understanding it. Once the world is modeled, they can act with extreme accuracy. So, if there is a problem in understanding the real world, the problem lies in the data fed to the machine more than in the efficiency of the algorithm.
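To make this concrete, here is a minimal sketch, assuming Python with NumPy and scikit-learn available (the task, features, and numbers are invented for illustration, not taken from Marcus or from any real experiment). A classifier is trained on data containing a “cue” that tracks the labels only inside its own dataset; the algorithm never changes, yet its accuracy collapses as soon as the world stops matching the data it was fed:

```python
# Illustrative sketch: a model's "understanding" is bounded by its data.
# Assumes NumPy and scikit-learn; all distributions here are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, in_bubble=True):
    """Synthetic task: a noisy 'signal' genuinely reflects the label,
    while a clean 'cue' tracks the label only inside this dataset."""
    y = rng.integers(0, 2, size=n)
    signal = y + rng.normal(0.0, 1.0, size=n)   # weak but real evidence
    if in_bubble:
        cue = y + rng.normal(0.0, 0.1, size=n)  # dataset artifact: near-perfect
    else:
        cue = rng.normal(0.5, 0.1, size=n)      # in the wider world, pure noise
    return np.column_stack([signal, cue]), y

X_train, y_train = sample(5000, in_bubble=True)
model = LogisticRegression().fit(X_train, y_train)

# Inside the data bubble the model looks almost perfect...
X_in, y_in = sample(5000, in_bubble=True)
print("accuracy inside the bubble: ", model.score(X_in, y_in))    # ~0.99+

# ...outside it, the same model degrades sharply: the algorithm did not
# change, only the match between its data and the world did.
X_out, y_out = sample(5000, in_bubble=False)
print("accuracy outside the bubble:", model.score(X_out, y_out))  # markedly lower
```

Nothing in the training procedure is broken here; the gap comes entirely from data that describe only a slice of the world.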

Secondly, there is an über-humanisation of the meaning of “understanding”. As human beings, we receive input through different “input devices” that we call senses. The signals we receive are processed by our brain, which adapts and associates them to create meaning, that is, a “model” of the world. What if our senses were limited? Our brain would change, and the world we perceive would change too.

We can see this in many situations. For example, blind people perceive the world through sound and touch, and those senses create a representation of the world that is not the same as that of sighted people. A similar scenario happens with filter bubbles: the news we are fed changes the way we perceive and read reality. For our brains, the filtered world is THE world.

So what I argue is that Artificial Intelligence is probably living in a sort of Filter Bubble created by its narrow development. Its understanding is not the same as the human one simply because the world it lives in and the “senses” it has are limited. Its world is made of data, and the machine perceives the world’s features through data. If the data representing the machine’s world don’t match the real world closely enough, machines certainly can’t understand the “real” world.

So if “Machine Intelligence” researchers want machines to understand the real world, they probably have to focus on the test environment more than on learning.
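As a sketch of what that could mean in practice (again synthetic and purely illustrative, assuming NumPy and scikit-learn): a random held-out split of the bubble’s own data says the model is excellent, while a test set drawn from the environment where the machine will actually operate exposes the failure.

```python
# Illustrative sketch: evaluate in the deployment environment, not only on
# a random split of the training data. Everything here is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def sample_world(lo, hi, n=500):
    """The real world follows a quadratic law; the dataset may cover only
    a slice of it where the law happens to look almost linear."""
    x = rng.uniform(lo, hi, size=(n, 1))
    y = x[:, 0] ** 2 + rng.normal(0.0, 0.01, size=n)
    return x, y

# The bubble: data only from x in [0, 1].
x_bubble, y_bubble = sample_world(0.0, 1.0)
X_train, X_val, y_train, y_val = train_test_split(
    x_bubble, y_bubble, test_size=0.2, random_state=1
)
model = LinearRegression().fit(X_train, y_train)

# A held-out split of bubble data reports a nearly perfect fit...
print("R^2 on a random split of the bubble:", model.score(X_val, y_val))  # ~0.94

# ...while a test environment matching deployment (x in [2, 3]) shows the
# learned "theory of the world" does not hold there at all.
x_world, y_world = sample_world(2.0, 3.0)
print("R^2 in the wider world:", model.score(x_world, y_world))  # strongly negative
```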



What do you think about it?