Resources to further the understanding of synthetic reasoning



Rama Akkiraju
IBM Fellow & Master Inventor at IBM Watson | co-chair of CompTIA AI Advisory Council
Yoshua Bengio
Full professor at Université de Montréal
Yoshua is considered one of the three "Godfathers of Deep Learning" for his early work in the field. He is a co-author of the Deep Learning textbook.
Greg Brockman
Chairman and CEO at OpenAI
Bryan Catanzaro
VP, Applied Deep Learning Research at NVIDIA
François Chollet
Staff Software Engineer at Google
Author of Deep Learning with Python and co-author of Deep Learning with R. François was a contributor to TensorFlow and the creator of Keras.
David Cox
IBM Director, MIT-IBM Watson AI Lab, IBM Research
Jeff Dean
Senior Fellow & SVP, Google AI (Research and Health)
Chelsea Finn
Assistant Professor at Stanford | Research Scientist at Google AI
Lex Fridman
researcher in machine learning, autonomous vehicles and human-centered AI at MIT | host, Artificial Intelligence podcast
Andrea Frome
independent consultant (Frome Consulting, LLC)
Andrea received her PhD from Berkeley and has done AI engineering work for companies like Twitter and Google.
Ian Goodfellow
Director of Machine Learning, Special Projects Group at Apple
Ian is the inventor of generative adversarial networks (GANs). He was the lead author of the Deep Learning textbook while at Google. During an internship at Google, he helped create a system to extract street addresses from Street View imagery.
Geoffrey Hinton
University of Toronto | Google Brain
Geoffrey was one of the founding developers of backpropagation and deep learning.
Andrej Karpathy
Director of AI at Tesla
Andrej is an expert on deep neural networks for vision. He runs the Tesla Autopilot team, and previously worked as a Research Scientist at OpenAI. He received his PhD from Stanford.
Tessa Lau
founder/CEO at Dusty Robotics
Prior to founding her own companies, Tessa was a member of IBM Research for 11 years.
Fei-Fei Li
Professor, Stanford | Co-Director of Stanford Human-Centered AI Institute
Fei-Fei led the creation of the ImageNet dataset and previously served as Chief Scientist of AI/ML at Google Cloud.
Quoc Le
Principal Scientist, Google Brain Team
Yann LeCun
Professor at NYU | Chief AI Scientist at Facebook
Hanxiao Liu
Research Scientist at Google Brain
Warren McCulloch (1898 - 1969)
neurophysiology and cybernetics at MIT, Yale, Chicago
Besides being one of the originators of the artificial neural network concept, he was also part of the group which discovered, through frog studies, that the eye processes information before sending it to the brain.
Andrew Ng
adjunct professor, Stanford University
Andrew was a co-founder of Google Brain and Coursera. He teaches some of the most popular MOOCs for AI on Coursera.
Devi Parikh
Professor at Georgia Tech | Research Scientist at Facebook AI | co-founder of Caliper
Adam Paszke
author of PyTorch | Research Scientist at Google
Maithra Raghu
Research Scientist at Google Brain
Maithra is applying deep learning to healthcare applications.
Frank Rosenblatt (1928 - 1971)
psychologist and early AI researcher at Cornell
Frank described the perceptron algorithm for early machine learning in 1957. A perceptron is a form of artificial neural network (ANN). In 1962, Rosenblatt developed multi-layer perceptrons, which were able to model more complicated relationships between the inputs and outputs.
Rich Sutton
Distinguished Research Scientist, DeepMind | Professor at University of Alberta | Principal Investigator, RLAI
Sebastian Thrun
CEO of Kitty Hawk | chairman / co-founder of Udacity | Adjunct Professor, Stanford & Georgia Tech
He founded Google X and led the development of Google's self-driving car.
Hanna Wallach
Principal Researcher at Microsoft Research NYC | Adjunct Professor at UMass Amherst
Lilian Weng
Robotics at OpenAI
Wojciech Zaremba
co-founder of OpenAI

Influencers in Applied AI

Hilary Mason
Data Scientist in Residence, Accel Partners | Working on undisclosed new initiative
Hilary founded Fast Forward Labs, which was acquired by Cloudera in 2017.


DeepMind
for-profit deep learning research company
DeepMind produced major milestones in game play (AlphaGo, AlphaZero, AlphaStar). It also created AlphaFold for protein-folding research, and has projects to enhance Google's services internally.
  1. 2010 - established
  2. 2014 - acquired by Google
Facebook AI (FAIR)
division of Facebook focused on AI
Google AI
division of Google focused on AI
Besides contributing to basic research, Google AI also develops TensorFlow and is working on hardware.
  1. 2017 - announced at Google I/O
MIT-IBM Watson AI Lab
collaborative research lab
A collaborative industrial-academic laboratory focused on advancing fundamental AI research.
  1. 2017 - established
OpenAI
private AI research lab
OpenAI is working on a pathway to general intelligence. At the current time this largely involves creating bots that use machine learning to beat humans at digital games. They created the open source Gym toolkit for reinforcement learning. OpenAI was initially formed as a non-profit, and in that spirit it has released some of its code and its GPT-2 model.
  1. 2015 - established
Reinforcement Learning and Artificial Intelligence Lab (RLAI)
research lab at University of Alberta in Edmonton, AB, Canada
The lab, led by its founder Rich Sutton, is advancing the development of reinforcement learning.
Stanford HAI
research lab
The Stanford Institute for Human-Centered Artificial Intelligence studies and develops AI technologies that are "enhancing to human productivity and quality of life."
  1. 2019 - established
Waymo
autonomous cars
A subsidiary of Alphabet Inc., Waymo was spun out of Google's self-driving car project. It currently operates a limited service in Phoenix, AZ.
  1. 2009 - self-driving project started at Google X
  2. 2016 - spun off from Google


PyTorch
machine learning framework
An open source machine learning framework for Python, with a C++ interface as well. It provides strong hardware acceleration capabilities.
  1. 2016 - first release


artificial neural network (ANN)
machine learning model type

Artificial neural networks are graphs that are roughly similar to the network of neuron cells in the human brain. They are called "artificial" to distinguish them from biological neural networks.

The vertices in the neural network graph are called nodes, units, or neurons.

The connections in the neural network graph are called edges or weights.

The essential idea for a neural net was shared by Walter Pitts and Warren McCulloch in 1943. ANNs became much more practical with the development of backpropagation, which was formulated in the 1970s and popularized in the 1980s.
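As a minimal sketch of these pieces, a single neuron multiplies each input by the weight on its incoming edge, sums the results, and passes the total through an activation function. The weights and inputs below are made-up values, not from any trained model:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus a bias,
    squashed to the range (0, 1) by a sigmoid activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Two input values flowing into one neuron over two weighted edges.
output = neuron([0.5, 0.8], [0.4, -0.2], bias=0.1)  # approximately 0.535
```

Stacking layers of such neurons, where the outputs of one layer become the inputs of the next, gives the multi-layer networks used in deep learning.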

backpropagation
machine learning algorithm
conditional random field (CRF)
machine learning model type
generative adversarial network (GAN)
machine learning model type
gradient descent
mathematics algorithm

Gradient descent is a common technique for finding minimal values of a function. If you have a complicated math equation that models average housing price over the days of a year, you can use gradient descent to find the times of the year when prices dip. You begin by choosing a random day of the year and calculating the slope (the gradient) of the function on that day. If the slope is positive, the prices are increasing at that time of year. If the slope is negative, they are decreasing. You take a small step in the downhill direction, either forward or backward, and then repeat the process for the new date. You continue until you get stuck at a point where the slope changes from negative to positive, which can only happen at the bottom of a dip in the curve. At this point, the price is at a minimum value relative to the surrounding dates (a local minimum), but it may not be the absolute lowest price throughout the year. Finding the global minimum (absolute lowest value) of a function is difficult; this is why learning/optimization algorithms often get stuck in local minima.

Taking steps in the direction of the positive gradient (instead of the negative gradient) will lead you uphill to a local maximum of the function. This is called gradient ascent.

In machine learning, gradient descent is used to find a minimum value of the cost function, that is, the point where the error between the predicted values and trained values is minimized.

An excellent overview of different gradient descent techniques for ML is provided by Sebastian Ruder.
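The walkthrough above can be sketched in a few lines of Python. The cost function here is a made-up one-dimensional curve with a single dip at x = 3, not a real price model:

```python
def cost(x):
    # Toy cost curve with a single dip (minimum) at x = 3.
    return (x - 3) ** 2 + 1

def gradient(x):
    # The derivative (slope) of the cost function at x.
    return 2 * (x - 3)

x = 9.0              # start at an arbitrary point
learning_rate = 0.1  # size of each step
for _ in range(100):
    x -= learning_rate * gradient(x)  # step downhill, against the slope
# x has now converged very close to the minimum at 3
```

With a learning rate that is too large the steps overshoot the dip and the process can diverge; too small and convergence is needlessly slow.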

hidden Markov model (HMM)
machine learning model type
recurrent neural network (RNN)
machine learning model type
reinforcement learning
machine learning paradigm
self-supervised learning
machine learning paradigm
supervised learning
machine learning paradigm
support-vector machine (SVM)
machine learning model type
tensor
mathematics object
Whereas a vector is a one-dimensional array of components, a tensor can have multiple dimensions. It is represented by a multidimensional array of numbers.
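A quick illustration using NumPy, where an array's ndim attribute counts its dimensions:

```python
import numpy as np

vector = np.array([1.0, 2.0, 3.0])           # 1 dimension (rank-1 tensor)
matrix = np.array([[1.0, 2.0], [3.0, 4.0]])  # 2 dimensions (rank-2 tensor)
tensor = np.zeros((2, 3, 4))                 # 3 dimensions (rank-3 tensor)

ranks = (vector.ndim, matrix.ndim, tensor.ndim)  # (1, 2, 3)
```

Frameworks like TensorFlow and PyTorch take their names from this object: their core data structure is a multidimensional array flowing through a network.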
unsupervised learning
machine learning paradigm


Deep Learning
By Ian Goodfellow, Yoshua Bengio, and Aaron Courville

Considered one of the best textbooks on deep learning.
Dive into Deep Learning
open source textbook
An interactive deep learning book with code, math, and discussions, based on the NumPy interface.
Reinforcement Learning: An Introduction

By Richard S. Sutton and Andrew G. Barto

An introduction to reinforcement learning directly from the founder of the technique.


Lex Fridman's AI Podcast (YouTube)
YouTube interviews with AI and technology innovators.
Lex, who also provides lecture videos on his YouTube channel, sits down with individuals like Stuart Russell, Josh Tenenbaum, and Andrew Ng.

Current challenges

Causal reasoning
understanding cause and effect
Current machine learning algorithms are not good at learning how objects interact to produce causes and effects. For example, ML systems can't recognize that running a red light might cause an accident. Current approaches involve specialized programming rather than learning. Understanding causality is an important step for machine learning algorithms to be able to reason on their own.
Active researchers