ChatGPT’s 2022 release catapulted artificial intelligence into everyday discourse. One side effect is that the term “AI” now commonly refers to chatbots, eclipsing the broader field. This is the equivalent of using “transportation” to refer to a Porsche 911: not totally inaccurate, but it misses trains, ships, aircraft, unicycles, rockets, and more.
Large language models (LLMs) are a single application of generative AI, which is one branch of machine learning, which is in turn one approach within artificial intelligence. The field encompasses technologies that resemble an LLM about as much as a Falcon 9 rocket resembles a sports coupe.
The core problem with LLM-everything
LLMs are bound by the conventions of human language, but as Dr. Alexandra Pasi, CEO of Utah-based Lucidity Sciences (a Talbot West partner), says, “human language is optimized for people, and is not an efficient substrate for machine computation.”
Imagine asking a self-driving car to compose a Shakespearean sonnet about everything it senses before it can react. The car would crash. Similarly, forcing machines to process everything through human language burns compute cycles, racks up API costs, and delivers results that are slower and less accurate than purpose-built alternatives.
In domains involving human language, LLMs can synthesize documents, brainstorm, and draft content. But they’re optimized for plausible-sounding output, not accurate output: they function as the ultimate yes-men and get extremely wobbly with large information sets.
Even when tasks appear to be natural fits for LLMs, alternatives sometimes perform better. A client recently asked Talbot West to architect an LLM solution to automate decision-making on English-language inputs. We steered them to an inference system that achieved similar results while eliminating third-party API calls, cutting compute costs by over 90%, and delivering much lower latency.
The future belongs to AI strategists
Much of the AI universe operates on numbers, matrices, tensors and graph structures. These native computational substrates are orders of magnitude more efficient for pattern recognition, prediction and optimization.
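To make the substrate argument concrete, here is a toy sketch (with entirely hypothetical vectors and labels) of pattern recognition done as pure arithmetic: matching an input against known patterns via cosine similarity, with no language round-trip anywhere in the loop.

```python
import math

# Hypothetical feature vectors (e.g., learned embeddings) for known patterns.
patterns = {
    "fraud":  [0.9, 0.1, 0.8],
    "normal": [0.1, 0.9, 0.2],
}

def cosine(a, b):
    """Cosine similarity: pattern matching reduced to multiply-and-add."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def classify(vector):
    """Return the label whose pattern vector is most similar to the input."""
    return max(patterns, key=lambda label: cosine(patterns[label], vector))

print(classify([0.85, 0.2, 0.7]))  # nearest pattern: "fraud"
```

A comparison like this runs in microseconds per input on commodity hardware; routing the same decision through a language model means tokenizing, generating, and parsing text around what is ultimately a numeric computation.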
For example, Lucidity Sciences’ Lumawarp engine constructs custom mathematical representations directly from training data, outperforming best-in-class competitors while running 300 times faster. It chews through massive datasets with high accuracy and low latency, and can run on a laptop.
LLMs can often serve as a valuable translation layer between humans and machines. This keeps human language at the interface while letting machines compute in their native substrates underneath. Frameworks such as Talbot West’s Cognitive Hive AI (CHAI) decompose complex tasks and assign the right tool to each.
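The translation-layer pattern can be sketched as follows. This is a hypothetical illustration, not CHAI's actual design: the LLM step is stubbed out (a real system would call a model with a schema-constrained prompt), and the task names and engines are invented for the example.

```python
# Hypothetical "translation layer" pattern: a language model converts the
# human request into a structured intent; a purpose-built engine computes.

def parse_intent(user_request: str) -> dict:
    """Stub for the LLM step: natural language -> structured intent.
    A real implementation would prompt an LLM to emit this JSON."""
    return {"task": "forecast", "series": "sales", "horizon": 30}

ENGINES = {
    # Each task routes to a native computational engine, not back to an LLM.
    "forecast": lambda intent: f"running time-series model for {intent['series']}",
    "optimize": lambda intent: "running a linear-programming solver",
}

def handle(user_request: str) -> str:
    """Keep language at the interface; compute in the native substrate."""
    intent = parse_intent(user_request)
    return ENGINES[intent["task"]](intent)

print(handle("What will sales look like over the next month?"))
```

The design point is that the LLM's only job is interpretation; once the request is structured, everything downstream runs on the cheapest, fastest tool for that task.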
Organizations that approach AI intelligently will outpace those that treat every problem as a nail for the LLM hammer. The winners in the next decade won’t be the companies that adopted AI first. They’ll be the companies that adopted the right AI, in the right places, for the right reasons.
