NLU pipelines are well honed and allow highly precise tuning of intents and entities, at low cost and with rapid iteration cycles.

I’m currently the Chief Evangelist @ HumanFirst. I explore and write about all things at the intersection of AI and language: NLU design, evaluation & optimisation, data-centric prompt tuning, and LLM observability, evaluation and fine-tuning.


The utilisation of Large Language Models (LLMs) has become increasingly commonplace in current Conversational AI Frameworks (CAIFs). These LLMs are recognised for their generative and predictive strengths, and many CAIFs have either already implemented LLMs or plan to do so in the near future.

Currently, the primary use of LLMs is on the generative side, rather than on the predictive side. Natural Language Understanding (NLU) is the main approach employed in chatbot frameworks to predict user intent and classify utterances, as well as to identify entities such as named entities and domain-specific fine-tuned entities.

Below are a few points to consider about the current and future importance of NLU, and the fact that NLU can be used independently for offline conversational data processing.

Efficient Open-Sourced NLU Pipelines

Many CAIFs feature generic internal NLU pipelines, which are usually developed with open-source software and come with no licensing requirements or third-party obligations. For instance, Rasa is a powerful open-source NLU API that supports structured intents and different entity types.

It has a configurable pipeline that does not require a significant amount of training data or computing power, making training time quite fast. It also offers several lightweight local installation options. If you are interested in learning more about how Rasa's BytePairFeaturizer supports minority human languages, please read more here.
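To make the "configurable pipeline" point concrete, here is a minimal sketch of what a Rasa `config.yml` can look like. The component names follow the Rasa 3.x default recipe; the exact epochs and n-gram settings shown are illustrative choices, not recommendations from Rasa.

```yaml
# config.yml — a minimal, lightweight Rasa NLU pipeline (illustrative)
recipe: default.v1
language: en

pipeline:
  - name: WhitespaceTokenizer
  - name: CountVectorsFeaturizer          # word-level bag-of-words features
  - name: CountVectorsFeaturizer          # character n-grams help with typos
    analyzer: char_wb
    min_ngram: 1
    max_ngram: 4
  - name: DIETClassifier                  # joint intent + entity model
    epochs: 100
```

Because every component is declared in one file, swapping featurizers or adjusting training epochs is a one-line change, which is a large part of why iteration on such a pipeline is fast.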

Built-In Efficiencies For Intents & Entities

Intents and entities have been structured and made more efficient over time. CAIFs leading in the Gartner rankings have implemented nested intents or sub-intents, which can be split or merged using a drag-and-drop UI.

Each intent is associated with certain entities; this coupling between intent and entity requires two checks before the chatbot can respond. Structure in entities includes Compound Contextual Entities, Entity Decomposition, entity groups, roles, etc.
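As an illustration of this structure, the fragment below uses Rasa's training-data format, where entity roles decompose a single `city` entity into departure and destination. The intent name and examples are hypothetical; the annotation syntax itself is Rasa's.

```yaml
# nlu.yml — intent examples with entity roles (Rasa training-data format)
version: "3.1"
nlu:
  - intent: book_flight
    examples: |
      - fly from [Berlin]{"entity": "city", "role": "departure"} to [Lisbon]{"entity": "city", "role": "destination"}
      - I need a ticket to [Oslo]{"entity": "city", "role": "destination"}
```

The same entity type is detected in both positions, but the role tells the chatbot which slot has actually been filled, so it only prompts for what is still missing.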

Accurate entity detection is key to fulfilling a user's request successfully and to avoiding prompts for information the user has already given.

Training Time & No-Code Environments

Data formatting and transformation can be tricky and time-consuming when using LLMs, and is usually done in a pro-code environment.

Natural Language Understanding (NLU) requires only a few training examples, and is usually managed through a no-code studio.

Recently, frameworks such as Rasa and Cognigy have enabled incremental training, and IBM Watson Assistant has drastically decreased NLU training time.

Comparable Classification Results Between LLMs & NLU

In situations where the strengths of an LLM are properly utilised and NLU is optimised for creating classification models on a large set of data, the results generated by NLU and LLMs are often comparable; however, the NLU results tend to be more consistent and reliable.

Consistency with NLU

When testing different Large Language Models (LLMs) from a zero to few shot learning perspective, it appears that OpenAI yields the best results, followed by AI21 and Cohere.

Unfortunately, it has been difficult to generate consistent and accurate content with LLMs like Goose AI and Bloom.

NLU, however, has consistently produced low to no variation in results when the same data is submitted.
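One simple way to quantify the consistency claim above is to submit the same utterance repeatedly and measure how often the predicted label agrees with the majority label. The sketch below is generic: `classify` is any callable mapping text to an intent label, and the stub used in the usage line is a stand-in for a real NLU or LLM endpoint (no vendor API is assumed).

```python
from collections import Counter

def consistency(classify, utterance, runs=10):
    """Submit the same utterance `runs` times and measure label agreement.

    `classify` is any callable mapping text -> intent label; swap in a
    call to your NLU model or LLM endpoint. Returns the majority label
    and the share of runs that agree with it (1.0 = fully deterministic).
    """
    labels = [classify(utterance) for _ in range(runs)]
    top_label, top_count = Counter(labels).most_common(1)[0]
    return top_label, top_count / len(labels)

# A deterministic stub behaves like a trained NLU model: same input,
# same output, so agreement is 1.0 every time.
label, agreement = consistency(lambda text: "check_balance", "what is my balance?")
```

Running this against a trained NLU model should yield agreement at or very near 1.0, while a sampling-based LLM classifier may score lower unless its temperature is pinned down.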


NLU and LLMs should currently be viewed as separate technologies, as I mentioned in the heading. However, I anticipate this will change over time, with LLMs taking over a larger portion of NLU's domain.

An example of this is Cohere's new no-code Dashboard, which allows users to upload data and train intents using their LLM technology, as well as access other features.

This no-code environment is beginning to look similar to the no-code interfaces usually associated with NLU.