What is a Neural Network?
First of all, when we talk about a neural network, we should more properly say “artificial neural network” (ANN), because that is what we mean most of the time. Biological neural networks are much more complicated than the mathematical models we use for ANNs, but it is customary to be lazy and drop the “A” or the “artificial”.

An ANN is an information processing paradigm inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system: a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons; in an ANN, learning likewise amounts to adjusting the weights of the connections between nodes.
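To make “highly interconnected processing elements” concrete, here is a minimal sketch of a single artificial neuron in Python. Everything in it, the neuron function name, the hand-picked weights and bias, and the choice of a sigmoid activation, is an illustrative assumption rather than a reference to any particular library:

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of its inputs,
    squashed by a nonlinear "activation" into the range (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Three inputs with hand-picked, purely illustrative weights.
# "Learning" would mean adjusting these weights and the bias
# until the neuron's outputs match known examples.
print(neuron([0.5, 0.1, 0.9], weights=[0.4, -0.6, 0.2], bias=0.1))
```

A full network is nothing more than many such units wired together, with the output of one neuron serving as an input to others.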
In a typical computer, built according to what is called a Von Neumann architecture, memory banks live in an isolated module. There is only one processor, which handles instructions and memory rewrites one at a time, in a serial fashion. A neural network is a different approach to computing. Made up of thousands or even millions of individual “neurons” or “nodes,” a neural network does all of its processing in a highly parallel and distributed way, and “memories” are stored within the complex interconnections and weightings between nodes. Neural networking is the type of computing architecture used by animal brains in nature. This is not necessarily because the neural network is a mode of processing inherently superior to serial computing, but because a brain that relied on serial computing would be much more difficult to evolve incrementally. Neural networks also tend to handle “noisy data” better than serial computers. In a feedforward neural network, an “input layer” filled with specialized nodes receives the data and passes it forward through one or more “hidden layers” to an “output layer,” with information flowing in only one direction.
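As a sketch of that feedforward arrangement, here is a toy forward pass in Python with NumPy, assuming three inputs, four hidden nodes, and two outputs; the randomly initialized weight matrices and the sigmoid activation are assumptions chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative shapes: 3 inputs -> 4 hidden nodes -> 2 outputs.
# The "memories" of the network live in these weights and biases.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)

def forward(x):
    """Feedforward pass: information flows input -> hidden -> output,
    in one direction only, with every node in a layer computed at once."""
    hidden = sigmoid(x @ W1 + b1)       # input layer feeds the hidden layer
    output = sigmoid(hidden @ W2 + b2)  # hidden layer feeds the output layer
    return output

print(forward(np.array([0.5, 0.1, 0.9])))
```

The matrices W1 and W2 here play the role of the “interconnections and weightings between nodes” described above; training such a network means adjusting their entries from example data.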