An artificial intelligence (AI) architecture based loosely on the behavior of neurons in the human brain. The architecture comprises simple processing nodes ("neurons") linked by numeric parameters (weights) in a network-like pattern (see below). Such networks may be called "artificial neural networks" to distinguish them from the human brain. See neuroinformatics.
Neural networks are used in image, language and speech recognition, text-to-speech conversion, robotics, medical diagnosis, forecasting and generative AI, which creates novel output. Unlike conventional applications, which are programmed with explicit rules to deliver precise results (if-then-else), neural networks are "trained" with millions, billions or even trillions of examples of media (text, images, etc.). After the training phase, while doing the processing they were designed for (predicting, generating), some neural networks can continue to adapt and improve themselves (see liquid neural network). See generative AI and if-then-else.
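As a rough illustration of the contrast, and not part of the original entry, the following Python sketch compares a hand-written if-then-else rule with a single artificial neuron whose numeric weights are adjusted from labeled examples; the task, data and names are invented for the demonstration.

```python
# Hypothetical illustration: a hand-coded rule vs. a "trained" neuron.
# The task, data and names below are invented for clarity.

def rule_based_spam_check(subject: str) -> bool:
    # Conventional programming: the programmer spells out precise conditions.
    if "free money" in subject.lower():
        return True
    elif "winner" in subject.lower():
        return True
    else:
        return False

def train_neuron(examples, labels, epochs=100, lr=0.1):
    # A single artificial neuron: its behavior comes from numeric weights
    # that are adjusted from examples rather than written by hand.
    weights = [0.0] * len(examples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            # Forward pass: weighted sum of the inputs, then a threshold.
            pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            # Perceptron update: nudge the weights toward the correct answer.
            error = y - pred
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

print(rule_based_spam_check("You are a WINNER"))  # True: the rule was written by hand

# Toy "training data": two numeric features per example, invented for the demo.
X = [[1.0, 0.0], [0.9, 0.2], [0.1, 0.8], [0.0, 1.0]]
y = [1, 1, 0, 0]
w, b = train_neuron(X, y)
print(w, b)  # learned parameters, not rules written by a programmer
```

The learned weights and bias play the role of the "precise conditions" in the rule-based version, but they are discovered from the examples rather than specified in advance.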
Not At All Like Regular Programming
As the Network Expands
A neural network can contain millions or even billions of neurons; this diagram shows only two. As more neurons are connected, the number of mathematical computations (multiply and add) grows enormously, roughly with the product of the sizes of each pair of connected layers.
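The arithmetic behind each neuron is a weighted sum. The sketch below, with invented weights and layer sizes, shows the multiply-and-add work of a three-input, two-neuron hidden layer plus one output neuron, and estimates how the operation count scales with the sizes of fully connected layers.

```python
# Minimal sketch of the multiply-and-add work inside a network.
# All values below are invented for illustration.
import math

def neuron(inputs, weights, bias):
    # Each neuron multiplies every input by a weight, adds the results and a
    # bias, then applies an activation function (here, a sigmoid).
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

inputs = [0.5, 0.8, 0.1]                               # three input values
hidden = [neuron(inputs, [0.2, -0.4, 0.7], 0.1),
          neuron(inputs, [0.6, 0.3, -0.5], -0.2)]      # two hidden neurons
output = neuron(hidden, [1.1, -0.9], 0.05)             # one output neuron
print(output)

# For fully connected layers, the multiply-add count is roughly the product
# of the sizes of each pair of adjacent layers:
layer_sizes = [1000, 4096, 4096, 10]
ops = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
print(f"{ops:,} multiply-adds per forward pass")       # ~20.9 million
```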
There Are Plenty of Network Designs
The following diagrams from the Asimov Institute in the Netherlands illustrate the variety of neural network architectures that have been conceived. The yellow cells are the inputs, and the brown cells are the outputs. All other cells are the hidden layers and neurons in between. A network with many layers between input and output is called a "deep" neural network, and training it is known as "deep learning" (as sketched below).
Neural Network Architectures
Neural networks are among the most researched areas of computing in the 21st century. (Images courtesy of Fjodor van Veen and Stefan Leijnen, "The Neural Network Zoo," 2019.)
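A "deep" network simply stacks many layers between the input cells and the output cells. The following minimal sketch, with invented layer sizes and randomly initialized, untrained weights, shows what a forward pass through a stack of fully connected layers looks like in code.

```python
# Illustrative only: layer sizes are invented and the weights are random,
# i.e. the network is untrained.
import random

def dense_layer(n_inputs, n_outputs):
    # One fully connected layer: a weight for every input-output pair, plus biases.
    return {
        "weights": [[random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
                    for _ in range(n_outputs)],
        "biases": [0.0] * n_outputs,
    }

def forward(layer, inputs):
    # Multiply-and-add for every neuron in the layer, then a ReLU activation.
    return [max(0.0, sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(layer["weights"], layer["biases"])]

# "Deep" simply means many layers between input and output.
sizes = [4, 16, 16, 16, 3]           # input, three hidden layers, output
layers = [dense_layer(a, b) for a, b in zip(sizes, sizes[1:])]

activations = [0.2, 0.7, 0.1, 0.9]   # invented input values
for layer in layers:
    activations = forward(layer, activations)
print(activations)                    # output of the final layer
```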