An NLU review, with the help of a Hollywood movie

First of all, let me tell you that I really like sci-fi movies, and obviously I’m interested in AI, NLU and chatbots. There are lots of movies that cover AI and robots at some level.

For the first time, however, as far as I recall, I watched a movie where NLU is so prominent that it really caught my attention. The movie is ‘Passengers’, released in 2016, starring Jennifer Lawrence, Chris Pratt, Michael Sheen, and Laurence Fishburne. For those who haven’t watched it yet, I promise to be careful not to give any spoilers. The plot is about a spaceship traveling to a distant planet with thousands of people hibernating on board. The trip is supposed to take 120 years, but a malfunction in the sleep chambers results in two passengers being awakened 90 years early.

Don’t worry, this is how the movie is publicly promoted, so no spoilers so far.

At a given moment of the movie, Jim, the first passenger to wake up, finds an AI robot and chats with it.

To my surprise, this dialogue turned out to be an excellent example of a conversation between a human and a robot in which a broken dialogue comes up. When we train chatbots, this is an issue we always try to avoid.

Let’s go to the dialogue, which I transcribe below. Soon after waking up alone in the spaceship, Jim goes to a bar where the bartender is an AI robot.

Jim: ‘How much do you know about this ship?’

Robot: ‘I don’t know. I know some things’

Jim: ‘What do I do for a hibernation pod malfunction?’

Robot: ‘Hibernation pods are failsafe. They never malfunction’

Jim: ‘I woke up early’

Robot: ‘It can’t happen’

Jim: ‘How long until we get to Homestead 2?’

Robot: ‘About 90 years or so’

Jim: ‘And when are all the passengers supposed to wake up?’

Robot: ‘Not until the last four months’ [of the trip]

Jim: ‘How is it that I’m sitting here with you? With 90 years to go…’

At this point the robot glitches, emits a beep, then says:

‘It is not possible for you to be here’

Jim: ‘But I am’

Author’s note: I found a video with this dialogue, if you’re interested. It starts at 0:40.

https://www.youtube.com/watch?v=TVvGEj84y98&feature=youtu.be

When I watched this, I started wondering what could have gone wrong with the robot’s training. All right, I may have waited for the movie to end, but I kept thinking about it.

Obviously the robot correctly understood what the human said and his intent, but it was not able to handle the situation because it was not trained for it. The situation had never happened before, which is probably why it was not included in the AI’s training.

I’d like to let you know that there is another situation in the movie where NLU is prominent, this time related to context understanding. The robot correctly understands something a human says and, acting on this understanding, something very relevant to the plot happens — actually it is a plot twist, so I can’t reveal it, for the sake of those who are going to watch. I assure you that it is very interesting and very relevant to NLU.

Back to our reality of training chatbots, the question that emerges is: how can we prepare for situations that are unexpected? Even when we plan for lots of situations, some may arise that were not anticipated. Well, unfortunately this is how things are. The only way is to constantly monitor the dialogues and unanswered questions, updating the training corpus and improving the bots’ training.
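To make this concrete, here is a minimal sketch of that monitoring loop — a toy keyword-based intent matcher that logs any utterance it cannot classify, so those utterances can later be reviewed and folded back into the training corpus. The intents, keywords, and function names are all hypothetical, just for illustration:

```python
# Toy intent matcher with a fallback that logs unhandled utterances.
# Intents and keywords are made up for this example.

INTENTS = {
    "ship_info": ["ship", "homestead"],
    "pod_malfunction": ["pod", "hibernation", "malfunction"],
}

unanswered_log = []  # utterances to review when updating the corpus


def classify(utterance):
    """Return the first intent whose keywords appear in the utterance, else None."""
    text = utterance.lower()
    for intent, keywords in INTENTS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return None


def handle(utterance):
    intent = classify(utterance)
    if intent is None:
        # Unexpected input: log it so a human can triage it later
        # and extend the training data accordingly.
        unanswered_log.append(utterance)
        return "Sorry, I don't know how to answer that yet."
    return f"(answer for intent '{intent}')"


print(handle("What do I do for a hibernation pod malfunction?"))
print(handle("I woke up early"))  # not covered by any intent, so it gets logged
print(unanswered_log)
```

In a real bot the matcher would be a trained NLU model with a confidence threshold instead of keywords, but the principle is the same: anything below the threshold goes to the review log, and that log drives the next round of training.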

Regarding context and reasoning, as depicted in the other part of the movie I mentioned, well, maybe our current technology is not yet prepared to handle it.

You need to watch the movie to know what I’m talking about.