Self-driving cars, flying drones, robotic rovers, wildlife trackers, game-playing computers, 3D protein-folding programs, and human-like digital personal assistants. Sounds like the future, right? These are technologies of the present.
The field of artificial intelligence (AI) has been around since 1956. In 1965, Gordon Moore predicted that computing would exponentially increase in power and decrease in cost over time. So far, Moore’s law has held strong, leading AI specialists to expect much more progress in the field of AI.
AI and Deep Learning Overview
Artificial intelligence describes a discipline and related technologies focused on designing computers to mimic human behaviors and complete human tasks. Artificially intelligent technologies use learning methods like machine learning and deep learning. They also use learning models (e.g. neural networks) and high-capacity computing platforms (e.g. cloud computing).
Machine Learning: Computers Learn on Their Own
Machine learning refers to the design, implementation, and operation of artificially intelligent computers with algorithms that learn and improve on their own. To do machine learning, specialists train AI computers with sample data so the computers can learn to make useful predictions from new data.
Classical machine learning algorithms require human intervention, such as dataset labeling, to learn (aka supervised learning). Imagine a computer using machine learning to distinguish stars, comets, and planets: it requires training data in which each example is correctly labeled as one of the three types of astronomical objects.
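To make the idea of labeled training data concrete, here is a minimal sketch of supervised learning: a one-nearest-neighbor classifier that assigns a query object the label of the closest labeled example. The feature values (brightness, apparent motion) and the `nearest_neighbor` helper are invented purely for illustration, not any particular library’s API.

```python
def nearest_neighbor(train, query):
    """Return the label of the training example closest to `query`."""
    def dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist(ex[0], query))[1]

# Each training example pairs a feature vector with a human-assigned label;
# supplying these labels is the "human intervention" of supervised learning.
training_data = [
    ((0.9, 0.0), "star"),    # bright, essentially no apparent motion
    ((0.4, 0.8), "comet"),   # dimmer, fast-moving
    ((0.6, 0.2), "planet"),  # moderately bright, slow-moving
]

print(nearest_neighbor(training_data, (0.85, 0.05)))  # → star
```

A real system would use far more examples and richer features, but the pattern is the same: labeled data in, predictions out.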
Neural Networks: Learning Models for AI
In artificial intelligence and its focal areas of machine learning and deep learning, computers use learning models known as artificial neural networks (ANNs) to process information. The ANNs roughly resemble biological brains and comprise many interconnected units (“nodes” or “artificial neurons”) that communicate signals to each other while processing information.
An artificial neural network generally has an input layer, one to many “hidden” layers, and an output layer. All layers have one or more neurons.
While learning with a neural network model:
- A computer maps an artificial neural network’s neurons and assigns numerical weights (parameters representing the relative influence neurons have over one another) to the connections linking them together (“synapses”).
- The computer feeds input data into the network’s input layer.
- At each hidden and output layer, every neuron computes a weighted sum of the outputs it receives from the previous layer across the weighted synapses.
- Each neuron passes its weighted sum through an activation function, which determines whether (and how strongly) the neuron sends output on to the next layer.
- Repeating this layer-by-layer process, the network eventually computes its final output values.
During training, neural networks use cost functions to measure errors in the predictions they make. They calculate errors by comparing the networks’ predicted values with the actual expected values. The computer then repeatedly adjusts the networks’ weights in the direction that reduces the cost, until the weights reach values that minimize the error (a process known as gradient descent).
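Here is a minimal sketch of gradient descent on a mean-squared-error cost, reduced to a single weight so the mechanics are visible. The data follow y = 2x, so repeatedly stepping the weight against the gradient should drive it toward 2. The data points and learning rate are illustrative assumptions.

```python
# Toy dataset generated from y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

def cost(w):
    # Mean squared error between predictions w*x and targets y.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w, learning_rate = 0.0, 0.01
for _ in range(500):
    # Analytic gradient of the cost with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    # Step the weight a small amount in the direction that lowers the cost.
    w -= learning_rate * grad

print(round(w, 3))  # → 2.0
```

Training a full network works the same way, except the gradient is computed for every weight at once (via backpropagation) rather than for one.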
The neural network design for information processing helps AI developers effectively manage ever-greater amounts of data. Artificially intelligent computers use neural networks to learn from their own internal methods for information analysis and feedback signaling.
Deep Learning: Amped-up Machine Learning
Deep learning is essentially machine learning in hyperdrive. “Deep” refers to the number of layers inside the neural networks that AI computers use to learn. Deep-learning ANNs contain more than three layers (including the input and output layers). Early hidden layers learn simple, surface-level features of a concept (akin to a person’s first impressions of it), while deeper hidden layers and the output layer build up a more abstract understanding.
In contrast to classical machine learning models, deep learning models don’t necessarily require such interventions (e.g. labeled datasets) to learn. They can use unstructured, unlabeled data to train themselves (aka unsupervised learning). Because of this difference, deep learning models often require larger amounts (and more varied types) of input data than classical machine learning models to learn accurately and improve over time.
Deep neural networks (DNNs) are ANNs programmed to use deep learning methods. Recurrent neural networks (RNNs) and convolutional neural networks (CNNs) are deep neural network architectures with specific characteristics.
In RNNs, data typically flows between connected nodes along a temporal sequence: the network retains a memory of earlier inputs and applies that temporal context to later ones. This type of network is well suited to time series analysis and language modeling (e.g. natural language processing, language translation).
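The recurrence can be sketched as a single update rule: the hidden state carries information from earlier items in a sequence forward in time. The weights below are arbitrary illustrative values, and the `rnn_step` function is a simplification of what trained RNN cells compute, not a library API.

```python
import math

def rnn_step(x, h, w_in=0.5, w_rec=0.9, bias=0.0):
    # The new hidden state mixes the current input (x) with the previous
    # hidden state (h), so the network "remembers" earlier sequence items.
    return math.tanh(w_in * x + w_rec * h + bias)

h = 0.0                           # initial hidden state: no memory yet
for x in [1.0, 0.5, -0.3]:        # a short input sequence
    h = rnn_step(x, h)
    print(round(h, 3))            # state reflects the whole history so far
```

Feeding the same inputs in a different order produces a different final state, which is exactly the temporal sensitivity that makes RNNs useful for sequences.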
CNNs contain special types of layers like:
- Convolutional layers: layers that convolve input (slide small filters across it) and pass the results onward to other layers
- Pooling layers: layers that downsample data by summarizing groups of neighboring outputs (e.g. taking their maximum or average) before passing them on to another layer
- Fully connected layers: layers in which every neuron in one layer is connected to each neuron in the next layer
The features a convolutional network detects grow more complex at each successive layer. This network design is ideal for processes like image, speech, and audio signal processing (e.g. computer vision, object recognition).
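The two core CNN operations can be sketched on a tiny 4x4 “image”: a 2x2 filter slides across the input to produce a feature map (convolution), and 2x2 max pooling downsamples by keeping only the largest value in each patch. The image and filter values are invented for illustration.

```python
image = [[1, 2, 0, 1],
         [3, 1, 1, 0],
         [0, 2, 4, 1],
         [1, 0, 2, 3]]

def convolve(img, kernel):
    # Slide the kernel over every valid position and sum the
    # elementwise products to build a feature map.
    k = len(kernel)
    n = len(img) - k + 1
    return [[sum(kernel[i][j] * img[r + i][c + j]
                 for i in range(k) for j in range(k))
             for c in range(n)] for r in range(n)]

def max_pool(img, size=2):
    # Keep only the largest value in each non-overlapping size x size patch.
    return [[max(img[r + i][c + j] for i in range(size) for j in range(size))
             for c in range(0, len(img[0]), size)]
            for r in range(0, len(img), size)]

feature_map = convolve(image, [[1, 0], [0, -1]])  # 4x4 input -> 3x3 map
pooled = max_pool(image)                          # 4x4 input -> 2x2 summary
print(feature_map)
print(pooled)   # → [[3, 1], [2, 4]]
```

Production CNNs stack many such layers (with learned filters and activations in between), but each layer performs these same sliding-window computations.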
Cloud Computing Enables AI
Cloud computing refers to the on-demand availability of computer resources like data storage and computing power. Cloud computing resources typically derive from central computer servers located in data centers distributed around the world.
Cloud computing systems are helpful for machine learning and deep learning. Via cloud computing, high-capacity computer networks with fast servers and large data storage volumes are widely available to AI developers and data users. As a result, people can tackle projects that would be impossible or impractical without cloud computing, such as processing big data, running deep neural networks, and operating autonomous vehicles.
Artificial Intelligence and Deep Learning in Action
AlphaGo, AlphaZero, MuZero, and AlphaStar are AI computers that learned how to play complex games like chess, shogi, Go, Atari games, and StarCraft II using deep learning. AlphaGo beat master Go player Lee Sedol in 2016. As recently as 2020, MuZero established new successes in the field by excelling at gameplay without first being told the rules.
Wildlife biologists and conservationists can use artificially intelligent computers with computer vision to review camera trap photos. When intelligent computers “see” species of interest in camera trap images, they flag the images containing those species (e.g. threatened or endangered animals).
Computer vision is also integral to the development and operation of self-driving cars. With complex sensor systems including high-resolution cameras, self-driving cars generate visually rich digital representations of their driving environments that help them safely navigate.
Understand AI and Deep Learning With Udacity
Every day, specialists across disciplines are continuing to advance the field of AI. If you want to learn more about artificial intelligence, consider upskilling with Udacity.
Udacity offers programs to help you learn and develop real-world skills in artificial intelligence and deep learning.
Compelling projects abound in AI and deep learning — future-proof career areas. Learn more by registering for one of Udacity’s many interesting Nanodegree programs in the School of Artificial Intelligence today!