Automation spurred by artificial intelligence (AI) is making our lives easier than ever. It helps with everything from simple tasks, such as correcting grammar and pronunciation, to more complex ones, such as translating text from one language to another. Advanced AI can even learn about our personalities: Based on our viewing history, an algorithm recommends movies to match our taste. However, some AI systems go beyond simple convenience and have the power to transform the way we work and live.

In this article, we’ll cover examples of some of the most advanced AI of our day.

What is Advanced AI?

Advanced artificial intelligence is an elusive term. The moment we accomplish something once believed too difficult to realize, we stop considering it advanced or intelligent. Long gone are the days when chess-playing computers were considered the highest form of AI.

The term “advanced AI” today perhaps evokes a highly anthropomorphic image of a conversational robot such as Sophia the Robot. However, in reality, much of AI is only used to complement human intelligence. What follows are a few examples of how state-of-the-art AI techniques can be used to address the world’s pressing issues.

Artificial Intelligence and the Coronavirus Pandemic

The COVID-19 pandemic has presented us with many difficult challenges that have profoundly reshaped how we live. As we try to curb the spread of the virus, AI has proven to be an extremely valuable tool. Whether it’s detecting early warning signs, predicting future outbreaks or discovering treatments, we can use advanced AI to streamline our efforts. 

AI for Predicting Outbreaks

One such example is BlueDot, a company that models and locates global infectious disease threats. In 2016, BlueDot successfully detected the outbreak of the Zika virus in the U.S.

More recently, they detected and flagged a cluster of pneumonia cases in the Hubei area — the documented origin of COVID-19 — nine days before the WHO confirmed the emergence of the virus! Additionally, they correctly predicted the future epicenters and the initial geographical trajectory of the spread of COVID-19.

BlueDot owes its success to big data and natural language processing (NLP). Their algorithm first combs through the internet looking for useful information. For example, news reports and discussion boards may be used for mapping disease outbreak areas, whereas data from global airline ticketing may reveal the cities affected by these areas and predict how the outbreak might spread in the future.
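To make this concrete, here is a deliberately simplified sketch of the kind of signal-scanning such a pipeline performs: scan text snippets for outbreak-related keywords and tally mentions per location. The keywords, snippets, and scoring rule below are illustrative assumptions, not BlueDot's actual method.

```python
# Toy NLP-style outbreak-signal scan: count keyword hits per location.
from collections import Counter

OUTBREAK_TERMS = {"pneumonia", "fever", "outbreak", "cluster", "respiratory"}

def outbreak_signal(snippets):
    """Tally outbreak-related keyword hits per reported location."""
    signal = Counter()
    for location, text in snippets:
        hits = sum(1 for word in text.lower().split()
                   if word.strip(".,") in OUTBREAK_TERMS)
        signal[location] += hits
    return signal

reports = [
    ("Wuhan", "Hospitals report a cluster of pneumonia cases of unknown cause."),
    ("Wuhan", "More patients admitted with fever and respiratory symptoms."),
    ("Paris", "City marathon draws record crowds this weekend."),
]
print(outbreak_signal(reports).most_common(1))  # Wuhan scores highest
```

A real system would, of course, use far richer language models and many more data sources, but the core idea of turning unstructured text into a ranked geographic signal is the same.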

These advanced AI-powered findings are then sent out to epidemiologists who perform validation and report their assessments of the dangers of the potential outbreak.

AI-driven Drug Discovery

In the race for the most effective COVID-19 vaccine, some pharmaceutical companies have turned to AI for help with vaccine development. Thanks to AI, some vaccines, such as Moderna's, entered clinical trials just a few months after the outbreak began.

A successful candidate vaccine exposes the body to a weakened version of the virus, allowing the body to build an immunity to the virus without getting sick. Designing a vaccine requires an understanding of the virus’s structure and finding which of its subcomponents are key to triggering an immune response. 

This is no easy task, since there are tens of thousands of candidate subcomponents for any given virus. With the help of machine learning, immunologists can predict which candidates are most likely to trigger an immune response. This drastically reduces the search space and lets researchers arrive at effective designs much more quickly.
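The narrowing step can be illustrated with a toy ranking scheme (not an actual immunology pipeline): score each candidate subcomponent by its similarity to peptides already known to trigger an immune response, then keep only the top-ranked candidates for lab testing. The sequences and the k-mer similarity score below are made-up examples.

```python
# Toy candidate ranking: score peptides by k-mer overlap with known epitopes.
def kmers(seq, k=3):
    """All length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def score(candidate, known_immunogenic, k=3):
    """Fraction of the candidate's k-mers seen in known immunogenic peptides."""
    known = set().union(*(kmers(p, k) for p in known_immunogenic))
    cand = kmers(candidate, k)
    return len(cand & known) / len(cand)

known = ["ACDEFGH", "GHIKLMN"]                 # hypothetical known epitopes
candidates = ["ACDEFGX", "QQQQQQQ", "GHIKLAA"]  # hypothetical candidates
ranked = sorted(candidates, key=lambda c: score(c, known), reverse=True)
print(ranked[0])  # the candidate most similar to known epitopes
```

Real pipelines use trained predictive models rather than raw sequence overlap, but the payoff is the same: a short list worth testing instead of tens of thousands of possibilities.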

One notable effort in AI-driven drug discovery is COVID Moonshot, a collaborative non-profit that aims to develop an antiviral therapy for people already infected with the virus. Using advanced AI based on the transformer architecture (a neural network architecture first used for machine translation), the project took a single weekend to map routes for drug synthesis. A group of human chemists might have taken weeks to accomplish the same!

Protein Sequencing

One of the biggest scientific breakthroughs of 2020 arguably belonged to DeepMind’s AlphaFold, a model that’s able to predict a protein’s structure from its amino-acid sequence. The problem of predicting the structures that proteins fold into has boggled biologists’ minds for decades. 

A protein’s function is determined by its structure. If we know the structure of a protein, then we can reason about its biological function on the basis of structural similarity with other proteins. However, predicting a protein’s structure remained a challenging problem for a long time since the number of possible folds for any given protein is astronomical.

DeepMind solved this problem with a solution that was orders of magnitude faster and cheaper than the best pre-existing one. AlphaFold represents the problem of protein folding in the form of a spatial graph. Using a public dataset of about 170,000 protein structures in combination with other datasets of protein sequences with unknown structures, DeepMind trained a deep, attention-based neural network to interpret the structure of these graphs.
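The spatial-graph view can be sketched in a few lines: residues are nodes, and an edge connects any two residues closer than a distance cutoff. The coordinates and cutoff below are toy values chosen for illustration; the real model predicts these distances rather than reading them in.

```python
# Minimal "spatial graph" of a protein: nodes are residues, edges connect
# residue pairs within a distance cutoff (in angstroms).
import math

def spatial_graph(coords, cutoff=5.0):
    """Return edges (i, j) for residue pairs within `cutoff` of each other."""
    edges = []
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            if math.dist(coords[i], coords[j]) <= cutoff:
                edges.append((i, j))
    return edges

# Toy 3D coordinates for a 4-residue chain
residues = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (7.6, 0.0, 0.0), (3.8, 3.8, 0.0)]
print(spatial_graph(residues))  # [(0, 1), (1, 2), (1, 3)]
```

AlphaFold's network reasons over pairwise relationships like these, refining its estimate of the distances and angles between residues until a consistent 3D structure emerges.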

AlphaFold is yet another example of the profound impact advanced AI can have on our lives. Its novel approach can greatly advance research in bioengineering, allowing researchers to iterate through experiments much more quickly and economically than before.

Creating Images from Textual Descriptions

You might have already heard stories about language models, such as the one about a student who ran a successful blog with posts fully written by an advanced AI language model. There’s also the well-known AI Dungeon, an AI-narrated story-telling game. 

Language models are deep learning-based systems used to produce human-sounding text. They are trained to predict missing or upcoming words from the surrounding context, a deceptively simple task that yields powerful, generalizable models. One of the most capable is GPT-3, which exhibits surprising linguistic competence and generates mostly coherent and grammatical text when given a text prompt.
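The training objective is easy to illustrate at a miniature scale: predict the next word from the words before it. In this sketch a tiny bigram model, counting which word follows which, stands in for a transformer with billions of parameters; the corpus is a made-up example.

```python
# Bare-bones next-word prediction: a bigram model built from word counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Most frequent word observed after `word` in the corpus."""
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- seen twice after "the"
```

GPT-3 does the same thing in spirit, but conditions on long stretches of preceding text and learns its statistics from hundreds of billions of words.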

OpenAI, the team behind GPT-3, recently released a model that's able to create images from textual descriptions. The model, named DALL-E after the artist Salvador Dalí and Pixar's WALL-E, uses a smaller version of GPT-3 for its language understanding capabilities to guide the image generation process. DALL-E can create plausible images from a variety of text inputs, such as an illustration of "a baby daikon radish in a tutu walking a dog" or an image of "an armchair in the shape of an avocado."


DALL-E is indicative of the recent deep learning trend of creating single models to learn visual and textual representations together, rather than multiple models that each learn different input streams on their own. Humans perceive the world through multiple stimuli, so exposing models to multimodal training data such as text, images and sounds will hopefully help create smarter and more advanced AI that learns more accurate representations of the world.

Start Your Artificial Intelligence Journey

Our examples illustrate that there’s a lot of untapped potential in artificial intelligence. But before you go out and build your own advanced AI, you’ll need to master the fundamentals. 

Check out our expert-taught Artificial Intelligence Nanodegree and learn how to use AI to solve various real-life problems such as searching, optimization, planning, pattern recognition and more.

Start Learning