Deep Learning
Deep learning is a branch of machine learning based on artificial neural networks, algorithms loosely inspired by the structure and function of the brain. These networks enable machines to perform tasks such as pattern recognition, classification, and prediction, and they have proven effective in areas such as image recognition, speech recognition, and natural language processing. In recent years, deep learning has gained popularity due to its success in these and other tasks. However, the field is still evolving, and much research remains to be done to fully understand its potential.
Deep learning is a subset of machine learning concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks. Neural networks are composed of layers of interconnected nodes, or neurons, that can learn to recognize patterns in input data. The term "deep" in deep learning refers to the number of layers in the neural network: deep learning networks have more than one hidden layer.
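The idea of stacked layers of neurons can be sketched in a few lines of plain Python. This is a minimal, illustrative forward pass only (no training); the layer sizes, random weights, and ReLU activation are assumptions chosen for the example, not details from the text.

```python
import random

def relu(values):
    # Rectified linear unit: a common nonlinearity between layers.
    return [max(0.0, v) for v in values]

def dense(inputs, weights, biases):
    # One fully connected layer: each output neuron is a weighted
    # sum of all inputs plus a bias term.
    return [
        sum(w * x for w, x in zip(neuron_weights, inputs)) + b
        for neuron_weights, b in zip(weights, biases)
    ]

def forward(x, layers):
    # Pass the input through each (weights, biases) pair in turn,
    # applying ReLU after every layer.
    for weights, biases in layers:
        x = relu(dense(x, weights, biases))
    return x

def init_layer(n_in, n_out):
    weights = [[random.uniform(-1, 1) for _ in range(n_in)]
               for _ in range(n_out)]
    biases = [0.0] * n_out
    return weights, biases

random.seed(0)
# A small "deep" network: 3 inputs -> two hidden layers of 4 -> 2 outputs.
network = [init_layer(3, 4), init_layer(4, 4), init_layer(4, 2)]
output = forward([0.5, -0.2, 0.1], network)
print(output)
```

Each call to `dense` is one layer of the network; adding more entries to `network` makes the model "deeper" in exactly the sense described above.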
Deep learning algorithms have been able to achieve state-of-the-art results in many cognitive tasks, such as image classification, natural language processing, and recommender systems. Deep learning is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Learning can be supervised, semi-supervised, or unsupervised.
In 2016, Ian Goodfellow, Yoshua Bengio, and Aaron Courville published a widely used textbook on deep learning. Neural networks are composed of layers that act as low- and high-level feature detectors, filtering the input data (e.g., images). New data is processed through these layers to produce output (e.g., an object label). The number of layers required varies depending on the task being performed. The term "deep" in deep learning refers to the depth of this stack of layers: a network with a single hidden layer is considered "shallow", while one with many hidden layers is considered "deep". In this sense, deep learning is largely a rebranding of artificial neural networks with deeper architectures.
Modern deep learning traces back to 2006, when Geoffrey Hinton and collaborators, including Ruslan Salakhutdinov, showed how deep networks could be trained effectively, but the decisive advance came in 2012, when Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton at the University of Toronto introduced AlexNet, a deep convolutional neural network. AlexNet achieved much higher accuracy than previous approaches on the ImageNet image-recognition benchmark and sparked the current wave of interest in deep learning. As the technology continues to develop, deep learning is likely to become increasingly important in a variety of fields, including but not limited to autonomous vehicles, medical diagnosis, fraud detection, consumer behavior analysis and prediction, product recommendations, speech recognition, machine translation, and robotics.
Frameworks such as TensorFlow can be used to build custom neural networks for tasks such as image recognition or natural language processing.
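As a rough sketch of what this looks like in practice, the snippet below defines a small classifier with the TensorFlow Keras API. The layer sizes, the 28x28 input shape, and the 10-class output are illustrative assumptions (loosely modeled on a digit-recognition setup), not details from the text, and the model is only defined, not trained.

```python
import tensorflow as tf

# A minimal sketch of a small image classifier in TensorFlow/Keras.
# Input shape and layer sizes are assumptions for illustration.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # e.g. 28x28 grayscale images
    tf.keras.layers.Dense(128, activation="relu"),    # one hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),  # scores for 10 classes
])

# Configure the training procedure; fitting would require a labeled dataset.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Training would then be a single call to `model.fit(...)` on labeled examples; deeper architectures are built by stacking additional layers in the same list.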