
A.I. and the Problem of the World

Machines experience the world as a set of data and, when they analyse those data in search of a model that describes them, they are building a representation of the “world”. From this we can argue that the more accurate the data, the more accurate the machine’s knowledge of the world will be.

In another post we talked about the Filter Bubble in Artificial Intelligence, starting from Prof. Gary Marcus’s statement that “there is no difference [for a machine] between the reflection and the real thing because the system lacks a theory of the world and how it works”.

If we consider that machines can model data, extracting general rules that describe a dataset, and that data are a codification of features of the world, we can affirm that the problem of understanding the rules of the real world is related more to the data than to the machine itself. Hence it makes sense that an AI can’t distinguish between a reflection in a water ball and the real object: to the machine they are the same, unless someone gives it different information.

The question is: must this information be encoded within the learning algorithm? Not necessarily.

Our understanding of the world, as humans, comes from the interaction we have with the environment: our brain shapes itself according to the input it receives from the “external” world through the senses.

The same happens to machines. If the training environment, usually a digital one, does not reflect the real world, the virtual world will be the only one the machine knows.

Some of today’s training environments use video games to teach machines to make decisions and take actions. Agents learn what to do through the interactions they have with the “game’s world”: they learn how to move and what the rules of the game are. So we can argue that, if provided with a “real world” model, they would probably learn, eventually, how the real world works.
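To make the idea concrete, here is a minimal sketch of that interaction loop, assuming the Gymnasium library and its CartPole-v1 game environment; the random policy is an illustrative stand-in for a real learning agent:

```python
# A minimal sketch of game-based agent training, assuming the
# Gymnasium library. The environment and the random policy are
# illustrative choices, not a specific production setup.
import gymnasium as gym

env = gym.make("CartPole-v1")  # the "game world" the agent experiences

observation, info = env.reset(seed=42)
for step in range(1000):
    # A real agent would pick the action from a learned policy;
    # here we sample randomly just to show the interaction loop.
    action = env.action_space.sample()
    observation, reward, terminated, truncated, info = env.step(action)

    # When an episode ends, the world simply resets.
    if terminated or truncated:
        observation, info = env.reset()

env.close()
```

Everything the agent will ever “know” arrives through `observation` and `reward`: if the environment is a simplification, the agent’s model of the world is a simplification too.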

From this point of view, Robotics is much more advanced, since it forces robots to move in a physical world. They can still have problems with balance, but that seems to be much more a mechanical problem than a software one. Consider a cleaning robot: it can move around your house and learn its map. At first it moves randomly, then it progressively becomes more precise, as the sketch below illustrates.
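As a toy illustration (not how any particular vacuum actually works), a robot wandering at random can still converge to a map of a room. Here the map is a hypothetical 2D occupancy grid where every visited cell is marked as free space; the grid size, start position, and random-walk policy are all assumptions made for the example:

```python
# Toy sketch: mapping a room by random exploration.
# All parameters below are illustrative assumptions.
import random

WIDTH, HEIGHT = 10, 10
UNKNOWN, FREE = "?", "."

grid = [[UNKNOWN] * WIDTH for _ in range(HEIGHT)]
x, y = WIDTH // 2, HEIGHT // 2  # start in the middle of the room

for _ in range(500):
    grid[y][x] = FREE  # the robot now knows this cell is traversable
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    # Stay inside the room; a real robot would rely on bump sensors here.
    x = min(max(x + dx, 0), WIDTH - 1)
    y = min(max(y + dy, 0), HEIGHT - 1)

# The longer the robot wanders, the fewer "?" cells remain:
# its map converges toward the real layout of the room.
print("\n".join("".join(row) for row in grid))
```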

The digital environments for software agents don’t give the machine the same opportunity. Often these environments are kept as minimal as the task to be learned requires. While from a development point of view this is the ideal setting for testing a machine’s abilities, for example in planning strategies, at the same time it does not guarantee that the agent will perform the same way in a real context. Planning to win at Go is not like planning a trip, or a real “war”.

So what if a machine were called upon, for example, to plan a war? It would use the knowledge acquired in its training environment and, while keeping its goal, it would act on that knowledge, ignoring the effects of its actions on the real world if it was never taught about them. Not because it’s a natural-born killer, but simply because it doesn’t know: the simulated world is the only one it knows.

Let’s make this clearer: it’s like the difference between the experience of war in a video game and the experience of war in the real world. Do you remember War Games?

An example was Tay, the Microsoft bot that turned into a Hitler fan on Twitter. Tay was trained to learn from user conversations; the problem is that no one told it that not all conversations are useful for growing. It acted like a baby exposed to swear words: the baby’s first instinct is to repeat those words without knowing what they mean, and it’s up to the parents to teach it that speaking like that is not good behaviour.
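A toy sketch of that failure mode, contrasting a bot that stores every user utterance verbatim with one that filters its training data first; the blocklist and function names are hypothetical, chosen only for the example:

```python
# Illustrative contrast between unfiltered and filtered learning.
# BLOCKLIST is a stand-in for a real content filter.
BLOCKLIST = {"offensive", "slur"}

corpus = []  # what the bot will later imitate

def learn_unfiltered(utterance: str) -> None:
    # Tay-style learning: everything users say becomes training data.
    corpus.append(utterance)

def learn_filtered(utterance: str) -> None:
    # The "parental" check: drop utterances that fail the content filter.
    words = set(utterance.lower().split())
    if words & BLOCKLIST:
        return  # rejected: not every conversation is useful to grow
    corpus.append(utterance)

learn_filtered("hello, nice weather today")
learn_filtered("some offensive slur here")
print(corpus)  # only the harmless sentence was learned
```

The point is not the filter itself but who supplies it: nothing in the learning loop tells the machine which parts of its world are worth imitating.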

So the more accurate the environment (with both the good and the bad things in it), the better an algorithm can be trained to know how things work out there. It’s probably time to focus more on training frameworks.

 
