NLU, ambiguity and a park sign

A lot has been said and discussed about the challenges of Natural Language Understanding (NLU) in AI. Sometimes called Natural Language Processing (NLP), or considered a subtopic of it, NLU deals with machine comprehension of complex human language.

There is certainly enough processing power to deal with natural language understanding nowadays, and many algorithms and products claim, with good reason, to understand humans.

With NLP, the structure of human language is disassembled, parsed and analyzed in such a way that a sentence's entities, syntax and intent can be properly identified. All of this is absolutely doable with our current technology.
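To make that concrete, here is a minimal sketch of such an analysis in Python using the spaCy library (the en_core_web_sm model name is an assumption; any English pipeline with a tagger and parser would work the same way):

```python
# A minimal sketch of the entity / syntax analysis described above,
# using spaCy (assumes en_core_web_sm was installed via
# `python -m spacy download en_core_web_sm`).
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Please excuse the inconvenience. Another park improvement is underway.")

# Syntax: part-of-speech tag and dependency role for each token.
for token in doc:
    print(token.text, token.pos_, token.dep_)

# Candidate entities: the noun phrases the parser finds.
for chunk in doc.noun_chunks:
    print("noun phrase:", chunk.text)
```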

But there is an important issue, known as ambiguity.

Even with an appropriate analysis, the real meaning of a sentence may vary according to context. Humans usually speak (and write) in such a way that the meaning of what has been said depends on context. Sometimes, and not rarely, the meaning of a sentence may depend on human peculiarities such as sarcasm and mockery. All of this may lead to ambiguity, which occurs when a sentence does not have a clear meaning, or its meaning is doubtful.

As an example to illustrate ambiguity, I had an insight when I was jogging in a park and saw a sign like the one in the picture (well, it was not this exact sign, but a pretty similar one).

To be on the same page with me, imagine yourself as a robot programmed with an excellent NLP algorithm. If you read that sign, what would your understanding be?

Hint: try to read the sign without any preconceptions; just read and interpret the sentence, as if you were a robot with no preconceptions loaded into your memory.

Well, the sign reads: “Please excuse the inconvenience. Another park improvement is underway.”

In your robotic, non-human analysis, you would probably identify two entities, ‘inconvenience’ and ‘park improvement’. Regarding syntax, you would identify nouns, verbs, adjectives, and so on. And what is the intent? To apologize for something. Your sentiment analysis would probably yield two opposite results: negative for ‘inconvenience’ and positive for ‘improvement’.
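As a sketch of that sentiment step, the following uses NLTK's VADER analyzer (assuming the vader_lexicon resource has been downloaded); the exact scores depend on the lexicon, but the two phrases should score in opposite directions:

```python
# A sketch of the sentiment step using NLTK's VADER analyzer
# (assumes nltk.download("vader_lexicon") has been run once).
from nltk.sentiment import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()

for phrase in ["the inconvenience", "another park improvement"]:
    scores = sia.polarity_scores(phrase)
    print(phrase, "->", scores["compound"])

# Expected direction (if both words are in VADER's lexicon): a
# negative compound score for "inconvenience" and a positive one
# for "improvement".
```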

Do you agree?

Well, what is the problem then? Think carefully about the meaning of the sentence…

The way it is written, doesn’t it give you the sense that the park improvement itself is the inconvenience? Someone is apologizing because a park improvement is underway, after all!

Again, you have to think as a robot, maybe one that has just been built, programmed and released, so you don’t have previous experience or knowledge about the real world.

This is the point: proper understanding of this sentence depends on reasoning, and this is definitely the greatest challenge for robots trying to understand humans. Although reasoning is not so easy to define and explain, in this case it would depend on knowledge and previous experience of the context. A human being would probably relate an improvement in a park to something positive, and then infer that the apology is not for the improvement itself, but for the temporary nuisance caused by the construction work (at least, that is how I understood it).
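Just to sketch what that missing reasoning step might look like, here is a deliberately toy example; the world-knowledge entries and the disambiguation rule below are hypothetical illustrations, not a real commonsense engine:

```python
# A deliberately toy sketch of the missing reasoning step. The
# WORLD_KNOWLEDGE entries and the rule in apology_target() are
# hypothetical illustrations, not a real commonsense engine.
WORLD_KNOWLEDGE = {
    "park improvement": "positive",   # humans know improvements are good
    "construction work": "negative",  # ...and that building them is disruptive
}

def apology_target(mentioned_entity: str) -> str:
    """Guess what an apology placed next to `mentioned_entity` is really about."""
    if WORLD_KNOWLEDGE.get(mentioned_entity) == "positive":
        # Apologizing for something positive is implausible, so infer
        # that the apology refers to a side effect of it instead.
        return "a temporary side effect (e.g. the construction work)"
    return mentioned_entity

print(apology_target("park improvement"))
# -> a temporary side effect (e.g. the construction work)
```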

There are algorithms and solutions available that claim to deal with reasoning, but it remains one of the greatest challenges in AI. I found this simple example from daily life to be a good illustration of it.