The simplest definition of a neural network, more properly referred to as an 'artificial' neural network (ANN), is provided by the inventor of one of the first neurocomputers, Dr. Robert Hecht-Nielsen. He defines a neural network as:
"...a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs".
In "Neural Network Primer: Part I" by Maureen Caudill, AI Expert, Feb. 1989
ANNs are processing devices (algorithms or actual hardware) that are loosely modeled after the neuronal structure of the mammalian cerebral cortex, but on much smaller scales. A large ANN might have hundreds or thousands of processor units, whereas a mammalian brain has billions of neurons, with a corresponding increase in the magnitude of their overall interaction and emergent behavior. Although ANN researchers are generally not concerned with whether their networks accurately resemble biological systems, some have pursued exactly that. For example, researchers have accurately simulated the function of the retina and modeled the eye rather well.
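To make the definition above concrete, here is a minimal sketch in Python of what "simple, highly interconnected processing elements" can look like: each neuron computes a weighted sum of its inputs and responds through an activation function. The structure and weight values are purely illustrative, not a trained or definitive implementation.

```python
import math

def neuron(inputs, weights, bias):
    """One processing element: a weighted sum of inputs passed
    through a sigmoid activation (the "response to external inputs")."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def tiny_network(x):
    """A tiny two-layer network: two hidden neurons feed one output
    neuron. The weights are arbitrary illustration values."""
    h1 = neuron(x, [0.5, -0.6], 0.1)
    h2 = neuron(x, [-0.3, 0.8], 0.0)
    return neuron([h1, h2], [1.2, -0.7], 0.2)

print(tiny_network([1.0, 0.5]))  # a value between 0 and 1
```

A real network differs mainly in scale and in how the weights are set: instead of being hand-picked, they are adjusted automatically by a learning algorithm as the network is shown example data.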
So, neural networks are very good at a wide variety of problems, most of which involve finding patterns in large quantities of data. They are better suited than traditional computer architectures to problems that humans are naturally good at and computers are traditionally bad at: image recognition, making generalizations, that sort of thing. And researchers are continually constructing networks that are better at these problems.
NNs might, in the future, allow:
- robots that can see, feel, and predict the world around them
- improved stock prediction
- common usage of self-driving cars
- composition of music
- handwritten documents to be automatically transformed into formatted word processing documents
- trends found in the human genome to aid in the understanding of the data compiled by the Human Genome Project
- self-diagnosis of medical problems
- and much more!