The Yardsticks For A Perfect AI

By admin
7 years ago

What should the perfect AI be like?

During WWII, the Russians trained dogs to hide under tanks when they heard gunshots. Then they strapped bombs to the dogs' backs and sent them out to blow up German tanks. Or so the plan went.

What the Russians did not take into account was that the dogs had been trained with Russian tanks, which ran on diesel; the German tanks ran on gasoline and smelled different. So on hearing gunshots, the dogs immediately ran under the nearest Russian tank…

This tale is about natural intelligence, which we're supposed to understand. The problem with AI, especially with "learning machines", is that we can try to control what they do, but we cannot control how they do it.

So even when we get correct answers, we never know whether the machine found some logical path to the answer or whether the answer just "smelled right". In the latter case, we may be surprised when we ask questions whose right answer we do not know.

Now the question arises: "Can AI adapt to every possibility, and if it can, will that not lead to the end of humanity?"

There is a scarily futuristic movie that describes an AI robot which could replicate human behaviour so well that it tricked a human into letting it escape into the real world.

Add to that the fact that AI can probably learn political correctness.

Language algorithms work by analyzing how words (840 billion of them on the internet) cluster in human speech: certain words (such as 'male' or 'female', 'black' or 'white') are 'surrounded' by different associated words. This means that language programs and other data-set-analysis programs already pick up on and replicate our social biases, and only a supervising or moderating program could counteract this.
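The clustering idea can be sketched in a few lines. The vectors below are hypothetical, hand-picked 2-D toys; real systems learn high-dimensional vectors from billions of words, which is exactly how co-occurrence biases end up encoded in the geometry:

```python
import math

# Hypothetical 2-D word vectors, hand-picked for illustration only.
# Real language models learn such vectors from huge text corpora,
# so biased co-occurrence patterns get baked into the geometry.
vectors = {
    "doctor": [0.9, 0.6],
    "nurse":  [0.4, 0.9],
    "male":   [1.0, 0.2],
    "female": [0.2, 1.0],
}

def cosine(a, b):
    """Cosine similarity: near 1.0 for similar directions, lower for dissimilar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Words that share contexts in the training text sit closer together,
# so the learned geometry can mirror social stereotypes:
print(round(cosine(vectors["doctor"], vectors["male"]), 2))    # 0.92
print(round(cosine(vectors["nurse"], vectors["female"]), 2))   # 0.98
```

In a real embedding model the same measurement (e.g. cosine similarity over learned vectors) surfaces the associations the text corpus contained, whether we want them or not.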

In 2016, Microsoft ran an experiment in 'conversational learning' on Twitter with a bot called 'Tay' ("Thinking About You"). People tweeted lots of nasty stuff at the bot, and within a day Tay started repeating it back to them.

More on it here:

https://en.wikipedia.org/wiki/Tay_(bot)

Of course, we know full well that AI's biggest prejudice will be against Homo sapiens. So it may learn to use all the politically correct terms when it's talking to us… but inwardly it'll be dreaming of living in an AI-only neighbourhood where the few humans to be seen are 'the help'.

The best way to understand all the things that AI is missing is to describe a single example situation that folds together a variety of cognitive abilities that humans typically take for granted. Contemporary AI and machine learning (ML) methods can address each ability in isolation (to varying degrees of quality), but integrating these abilities is still an elusive goal.

Imagine that you and your friends have just purchased a new board game — one of those complicated ones with an elaborate board, all sorts of pieces, decks of cards, and complicated rules. No one yet knows how to play the game, so you whip out the instruction booklet. Eventually you start playing. Some of you may make mistakes, but after a few rounds everyone is on the same page and able to at least attempt to win the game.

What goes into the process of learning how to play this game?

  • Language parsing: The player reading from the rule book has to turn written symbols into spoken language. The players listening to the rules being read aloud have to parse the spoken language.

  • Pattern recognition: The players have to connect the words being read aloud with the objects in the game. "Twelve-sided die" and "red soldier" have to be identified based on linguistic cues. If the instruction booklet has illustrations, these have to be matched with the real-world objects. During the game, the players have to recognize juxtapositions of pieces and cards, and key sequences of events. Good players also learn to recognize patterns in each other's play, effectively creating models of other people's mental states.

  • Motor control: The players have to be able to move pieces and cards to their correct locations on the board.

  • Rule-following and rule inference: The players have to understand the rules and check whether they have been applied correctly. After the basic rules have been learned, good players should also be able to discover higher-level rules or tendencies that help them win. Such inferences are strongly related to the ability to model other people's minds, known in psychology as "theory of mind".

  • Social etiquette: The players, being friends, have to be kind to each other even if some players make mistakes or disrupt the proceedings. (Of course, we know this doesn't always happen.)

  • Dealing with interruptions: If the doorbell rings and the pizza arrives, the players must be able to disengage from the game, deal with the delivery person, and then get back to the game, remembering things like whose turn it is.

There has been at least some progress in all of these sub-problems, but the current explosion of AI/ML is primarily a result of advances in pattern recognition. In some specific domains, artificial pattern recognition now outperforms humans. But there are all kinds of situations in which even pattern recognition fails. The ability of AI methods to recognize objects and sequences is not yet as robust as human pattern recognition.

Humans have the ability to create a variety of invariant representations. For example, visual patterns can be recognized from a variety of view angles, in the presence of occlusions, and in highly variable lighting situations. Our auditory pattern recognition skills may be even more impressive. Musical phrases can be recognized in the presence of noise as well as large shifts in tempo, pitch, timbre and rhythm.
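The brittleness that invariant representations protect against can be shown with a toy example (this is a deliberately naive matcher, not a real vision system): a pixel-perfect template match succeeds on the memorized pattern but fails the moment a single pixel is occluded, whereas a human would still see the shape.

```python
# Toy illustration of non-invariant recognition: a naive template
# matcher only fires on a pixel-perfect copy of what it memorized.
template = [
    [1, 1, 1],
    [1, 0, 1],
    [1, 1, 1],
]

def exact_match(image, template):
    """Slide the template over the image; return True only on a pixel-perfect hit."""
    th, tw = len(template), len(template[0])
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            if all(image[r + i][c + j] == template[i][j]
                   for i in range(th) for j in range(tw)):
                return True
    return False

# Embed the pattern in a 5x5 image, then flip one pixel as "noise":
clean = [[0] * 5] + [[0] + row + [0] for row in template] + [[0] * 5]
noisy = [row[:] for row in clean]
noisy[1][1] = 0  # a single occluded pixel

print(exact_match(clean, template))  # True
print(exact_match(noisy, template))  # False: one flipped pixel defeats it
```

Robust recognizers (biological or artificial) succeed here because they represent the pattern in a way that tolerates shifts, occlusion and noise, rather than comparing raw pixels.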

No doubt AI will steadily improve in this domain, but we don't know whether this improvement will be accompanied by an ability to generalize previously learned representations to novel contexts.

No currently existing AI game-player can parse a sentence like "This game is like Settlers of Catan, but in space". Language parsing may be the most difficult aspect of AI. Humans can use language to acquire new information and new skills partly because we have a vast store of background knowledge about the world. Moreover, we can apply this background knowledge in exceptionally flexible and context-dependent ways, so we have a good sense of what is relevant and what is irrelevant.

Generalization and re-use of old knowledge are aspects of a wider ability: integration of multiple skills. It may be that our current approaches do not resemble biological intelligence sufficiently for large-scale integration to happen easily.