NLP AI order word understand does not

Published: 19 January 2021

Artificial intelligence systems using natural language processing to understand language fail to notice changes as small as a shuffled word order in sample text, researchers have shown.

Research from Alabama's Auburn University, in conjunction with publishing heavyweight Adobe, looked at AIs built on Google's BERT language model. The researchers discovered that the AIs would regard sentences containing the same words, but in a different order, as identical.


NLP AI is seen as a step toward true AI-driven content creation, predicted as a potentially game-changing development in publishing - good or bad, depending on your take. 

Using the 'GLUE' benchmark - the General Language Understanding Evaluation, a standard set of tasks designed to test language comprehension - the researchers randomly shuffled the words in a given sentence and noted that the AIs would still report the shuffled and original sentences as duplicates.
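The perturbation itself is simple to picture. A minimal sketch (not the paper's code, and the `bag_of_words` comparison is an illustrative stand-in for a model that ignores order) shows why an order-insensitive system cannot tell the two sentences apart:

```python
import random

def shuffle_words(sentence, seed=0):
    """Randomly reorder the words in a sentence - the paper's perturbation."""
    words = sentence.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

def bag_of_words(sentence):
    """Order-insensitive view: just the multiset of words used."""
    return sorted(sentence.lower().split())

original = "the cat chased the dog"
shuffled = shuffle_words(original)
print(shuffled)  # same words, scrambled order

# Any system that only looks at WHICH words appear sees no difference:
print(bag_of_words(original) == bag_of_words(shuffled))  # True
```

A model whose decision rests mainly on which key words appear, rather than how they are arranged, will behave like `bag_of_words` here and judge the pair to be duplicates.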


While such tolerance is useful in some situations, such as when a human mistypes a query, the research team noted that this "superficial" behaviour is no help in reaching the finer, deeper understanding of language that AI teams are pushing hard for.

Previous research has shown that changing the word order in a sentence or conversation with advanced chatbots did not alter the response the chatbots gave.

A recommendation from this latest research is that NLP AIs be trained more extensively on tasks where word order matters, such as grammar tasks, to reduce the "short cut" that NLP AIs take by focusing on a few key words in any given dataset.

This short cut stems from the "self-attention" mechanism popular in such NLP AIs, which trains the AI to focus on the "important" parts of any given series of words.
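The order-blindness has a structural root. In a toy single-head self-attention layer with no positional encodings (a simplified sketch, not BERT itself, which does add learned position embeddings), permuting the input tokens merely permutes the output rows, so any pooled summary of the sequence is exactly the same:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 8  # toy sizes: 5 "tokens", 8-dim embeddings
X = rng.normal(size=(n, d))                       # token embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))  # random projections

def softmax(s):
    e = np.exp(s - s.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X):
    # Scaled dot-product attention over the whole sequence.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    return softmax(Q @ K.T / np.sqrt(d)) @ V

# Shuffle the token order, then mean-pool both outputs.
perm = rng.permutation(n)
pooled_orig = self_attention(X).mean(axis=0)
pooled_shuf = self_attention(X[perm]).mean(axis=0)
print(np.allclose(pooled_orig, pooled_shuf))  # True: pooled output ignores order
```

Without position information, self-attention is permutation-equivariant, so the pooled representation a classifier sees is identical for the shuffled and original sequences. BERT's position embeddings break this symmetry in principle, but the research suggests the trained models end up relying on the order-insensitive word cues anyway.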

According to the team: "It is important to design benchmarks that truly test machine capability in understanding and reasoning. Our work also revealed how self-attention, a key building block in modern NLP, is being used to extract superficial cues to solve sequence-pair GLUE tasks even when words are out of order."