Definition: inference engine


The part of an AI system that generates answers. An inference engine comprises the hardware and software that provide analyses, make predictions or generate unique content. In other words, it is the reason for the AI in the first place. See AI training vs. inference.

Human Rules Were the First AI
The first inference engines were the "expert systems" of decades past, which relied entirely on rules hand-crafted by humans. However, the capabilities of today's neural networks and GPT architectures are light years ahead of expert systems.

Inference vs. Training
The inference engine is the processing, or runtime, component of an AI system, in contrast to the fact-gathering, or learning, side, which develops the models the inference engine uses. An inference engine can use any model that conforms to its required formats.

Large language models that take in billions or trillions of data examples can take weeks or months to be fully trained. After the training phase is over, the inference engine does the work for the user. The inference side requires less compute power than training, and it is considerably faster. However, if a million people are using an inference engine such as a chatbot to get answers, considerable computing power is still required. See AI training vs. inference.
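The training/inference split above can be sketched in miniature. The toy code below (an illustrative assumption, not any real AI framework) "trains" a one-weight model with gradient descent, then hands the learned weight to an inference step that is just a single, cheap forward pass:

```python
# Toy illustration of training vs. inference (hypothetical names).
# Training: many iterative passes over examples to learn a parameter.
# Inference: one fast application of the already-learned parameter.

def train(examples, epochs=200, lr=0.05):
    """Training phase: fit a weight w so that y is approximately w * x."""
    w = 0.0
    for _ in range(epochs):
        for x, y in examples:
            error = w * x - y
            w -= lr * error * x  # gradient step on squared error
    return w  # the "model" the inference engine will use


def infer(model_w, x):
    """Inference phase: apply the trained model; no learning occurs."""
    return model_w * x


# Train once (slow, repeated passes); the data follow y = 2x.
w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])

# Serve many queries cheaply with the trained model.
answer = infer(w, 5.0)  # close to 10.0, since w has converged near 2
```

Real systems differ enormously in scale, but the shape is the same: the expensive loop happens once during training, while each user query triggers only the lightweight `infer` step.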

A Term With Wiggle Room
The English word "inference" implies assumption and conjecture. Apparently, in the early days of AI, "inferring" an answer seemed a safer bet than "generating" the answer, which implies a degree of accuracy. Thus, even today, an AI does not generate a result; it "infers" one. Perhaps that term will provide some wiggle room in a future lawsuit! See AI training vs. inference, AI types, AI training, neural network, GPT, deep learning and expert system.