
Udacity Artificial Intelligence Nanodegree program

Now that we’re preparing to close the inaugural round of applications for our new Artificial Intelligence Nanodegree programs, we are quietly but excitedly settling down to the work of rolling out this incredible curriculum for our incoming students. In this post, we’re going to look at the origin story of this program’s curriculum. The first step in preparing a comprehensive AI program is to understand the relevance and trajectory of the field itself.

AI Today, AI Tomorrow

One thing is unequivocally clear: There has never been a better time to study Artificial Intelligence. Demand for smarter solutions to problems big and small—combined with increasing access to high performance computing and an abundance of rich data sources—means AI is poised to seismically impact the world of computing for the better.

Critically, innovation in AI is no longer limited to prestigious academic labs. For example, we are witnessing unprecedented growth in private research enterprises like OpenAI and the Allen Institute for AI. Artificial Intelligence is also making a significant difference in how we approach societal challenges in fields like healthcare, education, economics and transportation. At the same time, virtual assistants like Apple’s Siri, Google Now, and Microsoft Cortana are shaping the way we expect technology to interact with us on a personal level—friendly, reliable and efficient. And with cloud services such as Amazon Alexa and If-This-Then-That (IFTTT), developers are bringing this powerful combination of voice-based interfaces and distributed computing to homes, cars and even wristwatches!

Leading enterprise technology providers are embracing AI as well. Systems like IBM Watson and Salesforce Einstein are injecting intelligence into core computing platforms, offering dramatically improved accuracy and efficiency across applications. Others like PredicSis and Alchemy API are banking entirely on their intelligence-as-a-service offerings.

Engineers who can understand and apply these technologies to various real-world scenarios are already in high demand, and so are the researchers who constantly push the envelope to address bigger and harder challenges. But while self-driving cars and chatbots are all the rage today, there are applications of AI that we haven’t imagined yet, and jobs that are still evolving or don’t even exist!

That’s where our Artificial Intelligence Nanodegree program comes in.

Foundations & Concentrations

In our program, we teach foundational knowledge first, and then enable students to delve into concentrations. In doing so, we ensure that aspiring professionals build long-term careers that anticipate the innovations of the future, even as they master the practical skills needed to excel in today’s jobs.

AI is an amalgamation of a variety of techniques borrowed from related disciplines, with many skills that are only useful in specialized application domains. To design a lean program that is suitable for all, we looked at certain fundamental concepts that form the basis of most AI algorithms, and ultimately opted for a 2-term structure:

Term 1: Foundations

Covers the core principles that underlie the field as a whole: search, optimization, logic, planning, and probability.

Term 2: Concentrations

Covers topics from computer vision to natural language processing, to prepare students for jobs in fields from virtual personal assistants to computer-assisted medical diagnosis, and much more.

Let’s look closely at the composition of Term 1.

Note: You’re going to start seeing some AI-specific terminology here. If things feel unfamiliar to you, don’t worry! We’ll cover everything in greater depth within the Nanodegree program, and our team is always on hand to direct you to helpful resources!

Search and Optimization

We begin with Module 1, Search and Optimization, in which you learn how common computer science algorithms can be adapted to solve exceptionally hard problems.

Consider playing Tic-Tac-Toe on a regular 3×3 board – you can easily come up with a computer program that can never be defeated, all using traditional tree or graph search methods like Breadth-First Search (BFS) or Depth-First Search (DFS). This basically involves playing out all possible moves in simulation and picking the one that is most likely to result in a win: 9 possible first moves, times 8 next moves, times 7 after that … and so on, for a total of up to 9! = 362,880 potential nodes in the game tree. That’s a lot of possibilities to consider, but still manageable on a modern computer, especially if you avoid duplicate subtrees.
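To make this concrete, here is a minimal sketch (not the Nanodegree’s actual code) of exhaustive game-tree search for 3×3 Tic-Tac-Toe, using plain recursion with memoization to skip duplicate subtrees:

```python
# A minimal sketch of exhaustive game-tree search for 3x3 Tic-Tac-Toe.
# The board is a tuple of 9 cells, each 'X', 'O', or None.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Best achievable outcome for 'X': +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # board full with no winner: a draw
    opponent = 'O' if player == 'X' else 'X'
    scores = [minimax(board[:i] + (player,) + board[i + 1:], opponent)
              for i in moves]
    return max(scores) if player == 'X' else min(scores)

# Perfect play from both sides ends in a draw:
print(minimax((None,) * 9, 'X'))  # 0
```

Because every move is simulated to the end of the game, a program that always picks the highest-scoring move can never be defeated.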

Now consider the same game on a 5×5 board; the number of game nodes to consider suddenly explodes to 25! = 15,511,210,043,330,985,984,000,000 (that’s about 1.55×10²⁵)! Or chess, where the game tree complexity has been estimated to be at least 10¹²⁰. Traditional search methods are no longer viable. One approach is to impose additional constraints that rule out vast portions of the search tree; another is to use heuristics that prioritize the most promising paths to explore first. We essentially inform these algorithms with our own intuitions.
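These growth figures are easy to verify with Python’s exact integer arithmetic (a quick sanity check, not part of the curriculum):

```python
# Verifying the game-tree growth figures above.
import math

print(math.factorial(9))              # 362880 move sequences on a 3x3 board
print(math.factorial(25))             # 15511210043330985984000000 on a 5x5 board
print(f"{math.factorial(25):.2e}")    # 1.55e+25
```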

Another class of problems where AI techniques are popular involves some form of numerical optimization. For example, we might try to fit as many pallets of different sizes in a shipping container as we can, or find an efficient route for a package delivery van. You’ll notice that the emphasis is on finding a good solution, not necessarily the best solution. Optimization algorithms sometimes get stuck in valleys of local minima, which makes it hard to guarantee globally optimal solutions. So it’s also important that, for the problems we apply these techniques to, we can afford to make mistakes or have other ways of mitigating the risks.
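As an illustration (a hypothetical toy problem, not from the program), here is a simple hill-climbing search on a function with two valleys. Started from the wrong side, it settles into the shallower local valley and never finds the better one:

```python
# Greedy hill climbing on a double-well function, illustrating
# how local search can get stuck in a local minimum.
def f(x):
    # Two valleys: a local minimum near x = +1, a deeper one near x = -1.
    return (x * x - 1) ** 2 + 0.3 * x

def hill_climb(x, step=0.01):
    """Move left or right in small steps as long as f improves."""
    while True:
        best = min((x - step, x + step), key=f)
        if f(best) >= f(x):
            return x  # no neighbor improves: a (possibly local) minimum
        x = best

print(round(hill_climb(2.0), 2))   # ~0.96: stuck in the shallow local valley
print(round(hill_climb(-2.0), 2))  # ~-1.04: lands in the deeper valley
```

Practical remedies include restarting from several random points or allowing occasional “uphill” moves, as in simulated annealing.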

So far, I’ve described AI from a problem-solving perspective, as an extension of computer science itself. But there are a few more important aspects of AI to be aware of.

Logic, Reasoning, and Planning

In Module 2, Logic, Reasoning and Planning, we explore a formal method of expressing and reasoning with knowledge. Logical reasoning is useful in domains where information can be represented as definite facts and relationships. For instance, if I tell you that Whammals have five legs, and that Ablodusia is a Whammal, how many legs does Ablodusia have? If you said five, you just did some deductive reasoning! When you connect different pieces of information, you can start discovering novel facts with the help of computers. You can also come up with reliable guarantees for critical systems like electronic circuits using a process known as formal verification.
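The Whammal example can be sketched as a tiny forward-chaining program. The fact triples and the single inference rule below are hypothetical simplifications of what real knowledge-representation systems use:

```python
# A tiny forward-chaining sketch of the Whammal example.
# Facts are (relation, subject, object) triples.
facts = {("is_a", "Ablodusia", "Whammal"),
         ("legs", "Whammal", 5)}

def infer(facts):
    """Apply one rule to a fixed point:
    if X is_a Y and Y has N legs, then X has N legs."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for rel, x, y in list(derived):
            if rel != "is_a":
                continue
            for rel2, subj, n in list(derived):
                new_fact = ("legs", x, n)
                if rel2 == "legs" and subj == y and new_fact not in derived:
                    derived.add(new_fact)
                    changed = True
    return derived

print(("legs", "Ablodusia", 5) in infer(facts))  # True
```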

Logic can also be used to decompose a complex task into individual actions, identify dependencies and figure out a feasible ordering of such actions that will allow us to successfully achieve our desired goal. This is known as planning, and is widely used in assembly, transportation, logistics, project management, and more.
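At its simplest, finding a feasible ordering of dependent actions is a topological sort. The assembly task below is a hypothetical example; `graphlib` is part of the Python standard library (3.9+):

```python
# Ordering actions by their dependencies with a topological sort.
from graphlib import TopologicalSorter

# Each action maps to the set of actions that must happen first.
deps = {
    "build_frame": set(),
    "attach_wheels": {"build_frame"},
    "install_engine": {"build_frame"},
    "paint": {"attach_wheels", "install_engine"},
}

plan = list(TopologicalSorter(deps).static_order())
print(plan)  # e.g. ['build_frame', 'attach_wheels', 'install_engine', 'paint']
```

Real planners go much further, searching over which actions to take at all, not just how to order a given set.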

Probabilistic Models

Not all domains are as definite or observable. Sometimes we need to be able to make smart decisions in the face of uncertainty or incomplete information. This is the subject of Module 3, Probabilistic Models.

Think of your standard issue personal assistant whose mission in life is to get you ready for a productive day. Every morning it queries the weather service which comes back with something characteristically vague, like “partly cloudy sky, with a 50% chance of rain”. Using that, along with your expected agenda for the day, it has to recommend what attire would be suitable, whether you should carry an umbrella or not, etc. In a more critical application, like self-driving cars, the system has to make the most of any available information–it can’t just come to a dead stop in the middle of a highway if the lane markings suddenly disappear for a while.

It is impossible to encode every piece of relevant information as a clear and unambiguous fact, as traditional logic would require. Instead, we need to express what we know as uncertain beliefs about the world. Moreover, it would be nice to quantify this uncertainty in order to compare our beliefs and make decisions based on the most likely scenario. Fortunately, probability theory provides a formal method for doing just that, and much more! For instance, we can denote “50% chance of rain” as P(Rain) = 0.5, and the possibility of having to go outside during the day as, say, P(Outside) = 0.2. Treating these two events as independent, your assistant could conclude that the chance of you needing an umbrella is P(Rain) × P(Outside) = 0.5 × 0.2 = 0.1, which is relatively low (probability values range from 0 to 1, with 1 representing absolute certainty). The same framework, scaled up, is what allows self-driving cars to locate themselves using noisy inputs from whatever sensors are available.
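The umbrella calculation is a one-liner; note that multiplying the two probabilities is only valid under the assumption that Rain and Outside are independent events:

```python
# The umbrella example: combining two independent beliefs.
p_rain = 0.5      # "50% chance of rain"
p_outside = 0.2   # chance of having to go outside today

# P(Rain and Outside) = P(Rain) * P(Outside), assuming independence.
p_umbrella = p_rain * p_outside
print(p_umbrella)  # 0.1
```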

These three perspectives—problem solving, knowledge representation, and handling uncertainty—are the foundations of AI that we have identified to be most important, and thus constitute Term 1 of our Nanodegree program.

Although this course of study is a great way to get started in this field, it is not sufficient for one to begin contributing to the AI community in a meaningful way. You often need to understand the nuances of any given application, called domain knowledge, in order to adapt existing AI techniques to work appropriately, or even come up with highly specialized algorithms and knowledge representations to address specific challenges.

Natural Language Processing, Computer Vision and Speech Recognition

Our ultimate goal is to prepare you for a meaningful career involving artificial intelligence, whether that means applying smart algorithms to solve hard problems, or designing intelligent products that can interact with humans, or even pushing the envelope with further research! Therefore the concentrations that we choose to build out will depend on what skills are most important for the AI workforce today, and in the future. With valuable inputs from our partners and advisors, we have identified 3 concentrations to launch in our first offering of Term 2: Natural Language Processing, Computer Vision, and Speech Recognition. Each of these concentrations is being developed in collaboration with experts who have a strong academic background as well as industry experience designing complex intelligent systems.

In addition to co-developing the content for these concentrations, our partners are helping us define capstone projects that address real-world problems. They will provide students with relevant datasets and access to state-of-the-art tools and platforms that are necessary for developing high-performance, scalable solutions.  These projects will become important parts of your portfolio when you seek employment or admission into other academic programs, and the skills and technologies you will learn to use while working on them will empower you to excel in your career.

Designing Our Artificial Intelligence Curriculum

Designing curriculum is equal parts philosophical, technical, and aspirational. You have an idea about the material you want to teach and why, you have the actual mechanics of how you’re going to teach that material, and you have your goals—what students will be enabled to achieve. For our Artificial Intelligence Nanodegree programs, we have designed a curriculum that will help you establish your fundamentals and pursue your passions. We have established collaborations with industry leaders that will ensure you learn the most critical skills, techniques, and tools in the field. And we have hiring partnerships in place to make certain you get the best start possible in launching your new career in Artificial Intelligence.

Arpan Chakraborty
Arpan likes to find computing solutions to everyday problems. He is interested in human-computer interaction, robotics and cognitive science. He obtained his PhD from North Carolina State University, focusing on biologically-inspired computer vision. At Udacity, he develops content for artificial intelligence and machine learning courses.