AI Workshop

Start here

The AI hierarchy, in order

Read this as one nested system. Start with AI as the umbrella, narrow into Machine Learning, narrow again into Deep Learning, and end at Foundation Models as the reusable engines behind modern generative products.

Layer 1

AI Overview

AI is the umbrella field. Everything below sits inside it.

From Turing machines to Transformer models.
Layer 2

Machine Learning

Machine Learning is a child of AI. It learns patterns from data instead of relying only on fixed rules.

Computers that learn from data and improve with experience.
Layer 3

Deep Learning

Deep Learning is a child of Machine Learning. It uses many neural-network layers to learn complex features.

Unlocking the power of high-dimensional data.
Layer 4

Foundation Models

Foundation Models are a modern child of Deep Learning. They become reusable engines for many downstream tasks.

One model, infinite applications.

Core mechanism

Neural Networks

Neural Networks are the architecture that makes Deep Learning possible.

Why it matters

Deep Learning describes the method. Neural Networks explain the architecture underneath it: neurons, layers, weights, activation, and backpropagation.

What foundation models can handle

01
Text / LLMs
02
Vision
03
Audio
04
Science & Structured Data

How machines learn

Supervised Learning
Unsupervised Learning
Semi-Supervised
Reinforcement Learning

Chapter 1

The AI stack

Start with the umbrella idea of AI, then narrow into the child layers that make modern systems possible.

By the end of this chapter

You should understand the nested relationship between AI, Machine Learning, Deep Learning, Neural Networks, and Foundation Models.

Chapter 1 / Lesson 01 Foundation Layer 1 Updated November 27, 2025

AI Overview

From Turing machines to Transformer models.

Where it sits in the stack

AI Overview


Key question

What do people actually mean when they say AI?

Why this matters

Most confusion starts when teams use AI to mean the whole field, one technique, and one product at the same time. The guide has to fix that first.

Learning journey

This is the starting point. Treat AI as the umbrella before you zoom into any child layer.

What this unlocks next

Once the umbrella is clear, the next question is how a machine improves from data instead of fixed rules. That is Machine Learning.


Overview

Shared language for 2026

Artificial Intelligence is the broad discipline of creating intelligent machines. It is the umbrella term that encompasses everything from simple rule-based systems to the complex Large Language Models of today.


Concept map

4 theory blocks

  • How is AI structured?
  • What are the types of AI?
  • Why has AI exploded recently?
  • A Brief History of AI
01

Theory

How is AI structured?

To understand AI, visualize concentric circles. The outermost circle is Artificial Intelligence—the grand vision. Inside that is Machine Learning—the technique of learning from data. Inside that is Deep Learning—using neural networks. And at the cutting edge, we find Generative AI—models that create.

02

Theory

What are the types of AI?

We classify AI capability into three stages:

  • ANI (Artificial Narrow Intelligence): AI that excels at one specific task (e.g., playing Chess, recommending movies). This is where we are today.
  • AGI (Artificial General Intelligence): AI that possesses the ability to understand, learn, and apply knowledge across a wide variety of tasks, matching human capability.
  • ASI (Artificial Super Intelligence): An intellect that is much smarter than the best human brains in practically every field.
03

Theory

Why has AI exploded recently?

It is the convergence of three factors:

  1. Big Data: The internet provided the fuel.
  2. Compute Power: GPUs provided the engine.
  3. Better Algorithms: Transformers provided the map.
04

Theory

A Brief History of AI

  • 1950: Alan Turing proposes the Turing Test.
  • 1956: The term "Artificial Intelligence" is coined at Dartmouth.
  • 1997: Deep Blue beats Garry Kasparov at Chess.
  • 2012: AlexNet revolutionizes computer vision (Deep Learning boom).
  • 2017: The "Attention Is All You Need" paper introduces Transformers.
  • 2022: ChatGPT is released, bringing Generative AI to the masses.

What to remember

AI is the broad field. Everything else in this guide is a more specific layer inside it.

Common question

Are we close to AGI?

Estimates vary wildly from a few years to a few decades. The rapid progress of LLMs has accelerated these timelines, but significant hurdles in reasoning and physical world understanding remain.

Chapter 1 / Lesson 02 Core Concept Layer 2 Updated November 27, 2025

Machine Learning

Computers that learn from data and improve with experience.

Where it sits in the stack

AI Overview -> Machine Learning


Key question

How does a machine get better without being reprogrammed for every case?

Why this matters

Machine Learning is where AI stops being only hand-written rules and starts learning patterns from examples.

Learning journey

AI gave you the umbrella term. Machine Learning is the first practical child inside that umbrella.

What this unlocks next

After ML, the next step is to understand the approach that became dominant for complex data like text, images, and audio: Deep Learning.


Overview

Shared language for 2026

Machine Learning is a subset of Artificial Intelligence (AI) where computers learn from data and improve through experience without being explicitly programmed. Algorithms are trained to find patterns and correlations in large datasets to make the best decisions and predictions. With practice and more data, these applications become increasingly accurate.


Concept map

5 theory blocks

  • AI, Machine Learning, and Deep Learning
  • Neural Networks and Deep Learning
  • The 4 Types of Machine Learning
  • Real-World Applications
  • Challenges in Machine Learning
01

Theory

AI, Machine Learning, and Deep Learning

Think of them as concentric circles. Artificial Intelligence is the broad discipline. Machine Learning is a subset within AI that allows machines to learn from data. Inside Machine Learning is Deep Learning, and within that are Artificial Neural Networks. AI processes data to make decisions; ML algorithms allow AI to learn and get smarter without additional programming.

02

Theory

Neural Networks and Deep Learning

Artificial Neural Networks mimic the biological brain, with nodes (neurons) grouped in layers working in parallel. They strengthen connections to improve pattern recognition.

Deep Learning involves many layers of these networks and huge volumes of complex data. It extracts features hierarchically: a system might recognize a plant in the first layer, a flower in the next, and a yellow daisy in the last.

03

Theory

The 4 Types of Machine Learning

  1. Supervised Learning: Learning by example. The machine is given labeled inputs and outputs (e.g., images of daisies labeled "daisy"). It learns to map new inputs to the correct output.
  2. Unsupervised Learning: No answer key. The machine analyzes unlabeled data to find hidden patterns, clusters, or structures, similar to how humans observe and categorize the world.
  3. Semi-Supervised Learning: Uses a small amount of labeled data to guide the analysis of a large amount of unlabeled data. This speeds up learning and improves accuracy.
  4. Reinforcement Learning: Learning by trial and error. The system receives "rewards" for good actions and "penalties" for bad ones, optimizing its strategy over time (e.g., playing chess).
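The first type above, supervised learning, can be sketched in a few lines. This is a minimal 1-nearest-neighbor classifier; the flower measurements and labels are invented for illustration, not drawn from a real dataset:

```python
# Supervised learning in miniature: labeled examples in, predictions out.
# A 1-nearest-neighbor classifier returns the label of the closest example.
from math import dist

# Labeled training data: (petal_length, petal_width) -> species (toy values)
train = [
    ((1.4, 0.2), "daisy"),
    ((1.3, 0.2), "daisy"),
    ((4.7, 1.4), "sunflower"),
    ((4.5, 1.5), "sunflower"),
]

def predict(x):
    """Map a new input to the label of its nearest labeled neighbor."""
    nearest = min(train, key=lambda pair: dist(pair[0], x))
    return nearest[1]

print(predict((1.5, 0.3)))  # -> daisy (closest to the daisy cluster)
print(predict((4.6, 1.4)))  # -> sunflower
```

Adding more labeled examples sharpens the decision boundary, which is the "improve with experience" part of the definition above.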
04

Theory

Real-World Applications

  • Recommendation Engines: Streaming services (Netflix, Spotify) analyzing viewing habits to suggest content.
  • Dynamic Marketing: Analyzing customer data to personalize marketing and engage in real-time.
  • ERP & Automation: Optimizing workflows and automating repetitive tasks using business data.
  • Predictive Maintenance: IoT sensors on machinery predicting failures before they happen, saving costs and preventing downtime.
05

Theory

Challenges in Machine Learning

Bias and Spurious Correlations: Models can learn incorrect associations (e.g., correlating margarine consumption with divorce rates) if the data is flawed or if they find coincidental patterns.

The Black Box Problem: Complex models like deep neural networks can be difficult to interpret. It is often unclear how or why a specific decision was made, which poses risks in critical fields.


What to remember

Machine Learning is the learning engine inside AI, powered by data rather than explicit instructions for every outcome.

Common question

What is the difference between AI and Machine Learning?

AI is the broader concept of machines acting intelligently. Machine Learning is a specific subset of AI where machines learn from data to improve their performance without being explicitly programmed for every task.

Chapter 1 / Lesson 03 Advanced Layer 3 Updated November 27, 2025

Deep Learning

Unlocking the power of high-dimensional data.

Where it sits in the stack

AI Overview -> Machine Learning -> Deep Learning


Key question

Why did modern AI become dramatically more capable?

Why this matters

Deep Learning explains the performance leap that unlocked language, vision, speech, and the current generative wave.

Learning journey

Machine Learning taught the machine to learn from data. Deep Learning shows what happens when that learning is scaled with many layers.

What this unlocks next

To understand why deep systems work, you need to see the architecture underneath them: Neural Networks.


Overview

Shared language for 2026

Deep Learning is a specialized subset of Machine Learning inspired by the structure of the human brain. It uses multi-layered neural networks to learn from vast amounts of unstructured data like images, audio, and text.


Concept map

4 theory blocks

  • Why is it called "Deep" Learning?
  • What is Automatic Feature Extraction?
  • Why is Deep Learning important now?
  • Key Architectures
01

Theory

Why is it called "Deep" Learning?

The "Deep" in Deep Learning refers to the number of layers in the neural network. Traditional neural networks might have 2-3 layers. Deep learning models can have hundreds or thousands. This depth allows the model to learn a hierarchy of features—from simple edges and textures to complex shapes and objects.

02

Theory

What is Automatic Feature Extraction?

In traditional ML, humans had to manually select features (e.g., "does this image have ears?"). In Deep Learning, the network performs automatic feature extraction. It learns what features are important directly from the raw pixels or text.

03

Theory

Why is Deep Learning important now?

Deep Learning is the technology behind self-driving cars, voice assistants, facial recognition, and the recent generative AI boom. It thrives on scale—more data and more compute usually lead to better performance.

04

Theory

Key Architectures

  • CNNs (Convolutional Neural Networks): The kings of computer vision.
  • RNNs (Recurrent Neural Networks): Good for time-series and sequential data.
  • Transformers: The state-of-the-art for natural language processing (NLP).

What to remember

Deep Learning is scaled Machine Learning built on many layers, which makes it strong on high-dimensional and unstructured data.

Common question

Is Deep Learning the same as Neural Networks?

Deep Learning is essentially the use of *deep* neural networks. So all Deep Learning involves neural networks, but not all neural networks are 'deep' (though in modern context, the terms are often used interchangeably).

Chapter 1 / Lesson 04 Technical Core mechanism Updated November 27, 2025

Neural Networks

The mathematical architecture of the mind.

Where it sits in the stack

AI Overview -> Machine Learning -> Deep Learning -> Neural Networks


Key question

What is the actual mechanism doing the learning inside Deep Learning?

Why this matters

Without neural networks, Deep Learning stays abstract. This lesson makes the architecture visible.

Learning journey

Deep Learning described the method. Neural Networks explain the machinery that makes that method work.

What this unlocks next

Once the mechanism is clear, the next step is to see how these architectures became reusable, general-purpose engines: Foundation Models.


Overview

Shared language for 2026

Artificial Neural Networks (ANNs) are computing systems vaguely inspired by the biological neural networks that constitute animal brains. They are the fundamental building blocks of Deep Learning.


Concept map

3 theory blocks

  • How does an Artificial Neuron work?
  • What are the main types of Neural Networks?
  • How do Neural Networks learn?
01

Theory

How does an Artificial Neuron work?

A neuron takes multiple inputs, multiplies them by weights (importance), adds a bias (threshold), and passes the result through an activation function (non-linearity). If the signal is strong enough, the neuron "fires" and passes information to the next layer.
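The same steps fit in a few lines of Python. This is a sketch of a single neuron with a sigmoid activation; the inputs, weights, and bias are illustrative values, not taken from any trained model:

```python
import math

def neuron(inputs, weights, bias):
    # 1. Weighted sum: multiply each input by its weight, then add the bias.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # 2. Activation: sigmoid squashes z into the range 0..1 (the non-linearity).
    return 1 / (1 + math.exp(-z))

# Illustrative values: two inputs, two weights, one bias.
out = neuron(inputs=[0.5, 0.8], weights=[0.9, -0.3], bias=0.1)
print(round(out, 3))  # a value between 0 and 1; closer to 1 means a stronger "fire"
```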

02

Theory

What are the main types of Neural Networks?

  • Feedforward NN: The simplest type. Information moves in one direction.
  • CNN (Convolutional Neural Network): Specialized for processing grid-like data (images). It scans the image with filters to detect patterns.
  • RNN (Recurrent Neural Network): Designed for sequential data (time series, text). It has a "memory" of previous inputs.
  • Transformer: The modern architecture for language. It uses "attention" mechanisms to weigh the importance of different parts of the input data simultaneously.
03

Theory

How do Neural Networks learn?

They learn through a process called Backpropagation. The network makes a guess, compares it to the actual answer to calculate the loss (error), and then works backward to adjust the weights to minimize that error. This is repeated millions of times.
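A minimal sketch of that loop, using a single linear neuron and invented data where the hidden rule is y = 2x. Each pass makes a guess, measures the error, and adjusts the weight against the gradient:

```python
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x with targets y = 2x
w, lr = 0.0, 0.05                            # starting weight and learning rate

for epoch in range(100):
    for x, y in data:
        guess = w * x         # forward pass: the network's prediction
        error = guess - y     # compare the guess to the actual answer
        grad = 2 * error * x  # derivative of the squared error w.r.t. the weight
        w -= lr * grad        # adjust the weight to shrink the error

print(round(w, 3))  # converges toward 2.0, the rule hidden in the data
```

Real networks repeat exactly this pattern, only with millions of weights adjusted at once.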


Example

What is an activation function?

It's a mathematical function (like ReLU or Sigmoid) attached to each neuron that decides whether it should be activated. It introduces non-linearity, allowing the network to learn complex patterns.

What to remember

Neural Networks are the mathematical structure that powers Deep Learning and makes modern model scaling possible.

Common question

What is the 'Black Box' problem?

Neural networks can be so complex that even their creators don't fully understand how they arrive at a specific decision. This lack of interpretability is a major challenge in high-stakes fields like medicine.

Chapter 1 / Lesson 05 Modern AI Layer 4 Updated November 27, 2025

Foundation Models

One model, infinite applications.

Where it sits in the stack

AI Overview -> Machine Learning -> Deep Learning -> Foundation Models


Key question

Why do a few large models now power so many different tasks?

Why this matters

Foundation Models are the bridge between technical architectures and the products people actually use today.

Learning journey

Neural Networks gave you the structure. Foundation Models show what that structure becomes at internet scale.

What this unlocks next

Once you understand foundation models, it becomes much easier to explain Generative AI as an application layer on top of them.


Overview

Shared language for 2026

A Foundation Model is a large-scale AI model trained on a vast amount of data (often at internet scale) that can be adapted to a wide range of downstream tasks. They represent a paradigm shift from task-specific models to general-purpose engines.


Concept map

4 theory blocks

  • What makes a model a "Foundation Model"?
  • What is "Emergence"?
  • How are they built?
  • Leading Foundation Models
01

Theory

What makes a model a "Foundation Model"?

It must be broadly capable. Unlike previous models designed for one task (e.g., sentiment analysis), a foundation model can write poetry, debug code, translate languages, and summarize text, all without specific retraining.

02

Theory

What is "Emergence"?

Foundation models exhibit emergence—capabilities that were not explicitly trained for. For example, a model trained simply to predict the next word in a sentence might emerge with the ability to translate languages, write code, or solve logic puzzles.

03

Theory

How are they built?

The lifecycle involves two stages:

  1. Pre-training: The expensive, compute-intensive phase where the model learns general patterns from massive datasets (e.g., "learning to read and write").
  2. Fine-tuning: The adaptation phase where the model is specialized for a specific task or behavior (e.g., "learning to be a helpful assistant").
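The two stages can be caricatured in code. This toy sketch (invented sentences, a word-bigram "model") pre-trains on generic text and then fine-tunes the same counts on domain text; real foundation models work at vastly larger scale, but the two-stage shape is similar:

```python
from collections import Counter, defaultdict

counts = defaultdict(Counter)  # bigram counts: word -> likely next words

def train(corpus, weight=1):
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += weight

def next_word(word):
    # The model's "prediction": the most frequently observed continuation.
    return counts[word].most_common(1)[0][0]

# Stage 1: pre-training on broad, generic text (toy corpus).
train(["the cat sat on the mat", "the dog sat on the rug"])
print(next_word("the"))  # a generic continuation ("cat")

# Stage 2: fine-tuning on specialized text, weighted to shift behavior.
train(["the model learns patterns", "the model predicts words"], weight=5)
print(next_word("the"))  # now "model": the adapted behavior dominates
```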
04

Theory

Leading Foundation Models

Prominent foundation models include the GPT series (OpenAI), BERT (Google), Claude (Anthropic), and Stable Diffusion (Stability AI).


What to remember

Foundation Models are general-purpose engines trained at massive scale and adapted to many downstream uses.

Common question

Are Foundation Models the same as LLMs?

LLMs (Large Language Models) are a *type* of foundation model focused on text. But foundation models can also be multimodal, handling images, audio, and video.

Chapter 2

From models to generated output

Once the stack is clear, you can understand how modern systems generate content and who is shaping that layer.

By the end of this chapter

You should be able to explain what Generative AI is, why it depends on foundation models, and how major model providers differ.

Chapter 2 / Lesson 06 Creative AI Output layer Updated November 27, 2025

Generative AI

From analysis to creation.

Where it sits in the stack

AI Overview -> Machine Learning -> Deep Learning -> Foundation Models -> Generative AI


Key question

How does modern AI move from analysis into creation?

Why this matters

Generative AI is the part of the market most people encounter first, but it only makes sense once the model stack underneath is clear.

Learning journey

Foundation Models explain the engine. Generative AI explains the visible behavior that engine enables.

What this unlocks next

After understanding generated output, the next practical question is who builds these systems and how their approaches differ.


Overview

Shared language for 2026

Generative AI refers to algorithms that can create new content—including audio, code, images, text, simulations, and videos. Unlike traditional AI which classifies or predicts, Generative AI produces novel outputs.


Concept map

4 theory blocks

  • Discriminative vs. Generative AI
  • How does Generative AI work?
  • The Creative Revolution
  • Use Cases
01

Theory

Discriminative vs. Generative AI

Traditional AI is Discriminative: It draws a line to separate data (e.g., "Is this a cat or a dog?"). Generative AI is Creative: It learns the distribution of the data to create new examples (e.g., "Draw me a cat that never existed").

02

Theory

How does Generative AI work?

Generative models, such as Diffusion Models (for images) or Transformers (for text), learn the underlying structure of the training data. They then use probability to assemble new patterns that follow those structures but are not identical copies.
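To make "learn the structure, then assemble new patterns" concrete, here is a toy character-level generator. It is a simple Markov chain, not a diffusion model or Transformer, and the training names are invented:

```python
import random
from collections import defaultdict

names = ["anna", "anne", "hanna", "hannah", "nadia"]  # toy training data

# Learn the structure: which character tends to follow which.
transitions = defaultdict(list)
for name in names:
    padded = "^" + name + "$"  # start and end markers
    for a, b in zip(padded, padded[1:]):
        transitions[a].append(b)

def sample(rng):
    """Assemble a new name by probabilistic next-character choices."""
    out, ch = "", "^"
    while True:
        ch = rng.choice(transitions[ch])
        if ch == "$" or len(out) > 10:
            return out
        out += ch

rng = random.Random(0)
print([sample(rng) for _ in range(3)])  # new names that follow the learned pattern
```

Samples are drawn from the learned transition structure, so they tend to look like the training names without replaying the list wholesale; generative models apply the same principle to pixels and tokens at far greater scale.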

03

Theory

The Creative Revolution

Generative AI is democratizing creativity. It acts as a co-pilot for writers, artists, coders, and designers, allowing them to iterate faster and explore new ideas. It is shifting the bottleneck from "skill" to "imagination".

04

Theory

Use Cases

  • Marketing: Generating ad copy and visuals.
  • Coding: Writing boilerplate code and documentation.
  • Entertainment: Creating game assets and scripts.
  • Science: Generating novel protein structures for drug discovery.

What to remember

Generative AI is the creation layer. It uses modern models to produce new text, images, code, audio, and more.

Common question

Does Generative AI steal art?

This is a complex legal and ethical issue. Models learn from existing data, but they don't 'copy-paste'. They learn styles and concepts. However, the rights of the original creators of the training data are a subject of active debate and litigation.

Chapter 2 / Lesson 07 Industry Provider layer Updated November 27, 2025

LLM Players

The titans shaping the future of intelligence.

Where it sits in the stack

AI Overview -> Machine Learning -> Deep Learning -> Foundation Models -> LLM Players


Key question

Who is shaping the model layer that everyone builds on?

Why this matters

The ecosystem matters because strategy, safety, privacy, deployment, and cost all depend on who provides the model.

Learning journey

Generative AI showed what the systems can do. This lesson explains who is building the core engines and why the market feels fragmented.

What this unlocks next

Once the provider layer is clear, the next step is to see how those models turn into tools for actual work.


Overview

Shared language for 2026

The landscape of Large Language Models is dominated by a few key technology giants and ambitious startups. Understanding who they are and what they offer is crucial for navigating the AI ecosystem.


Concept map

2 theory blocks

  • Who are the key players?
  • Open Source vs Closed Source
01

Theory

Who are the key players?

  1. OpenAI: The pioneer. GPT-4 and ChatGPT set the standard for modern generative AI. OpenAI focuses on pushing the boundaries of scale and reasoning capability.
  2. Google (DeepMind): The sleeping giant. With Gemini, Google has integrated its vast research capabilities into a multimodal model that integrates deeply with the Google ecosystem.
  3. Anthropic: The safety-first contender. Founded by former OpenAI employees, Anthropic focuses on "Constitutional AI" and safety. Their Claude models are known for their large context windows and nuanced writing.
  4. Meta (Facebook): The open-source champion. Meta's LLaMA series has been pivotal in enabling the open-source community to build and run powerful models on their own hardware.
  5. Mistral: The European challenger. Based in France, Mistral produces highly efficient, open-weight models that rival the giants in performance-per-parameter.
02

Theory

Open Source vs Closed Source

  • Closed Source (Proprietary): Models like GPT-4 and Gemini. You access them via API. They are generally more powerful and easier to use, but you have less control and privacy.
  • Open Source (Open Weights): Models like LLaMA and Mistral. You can download and run them yourself. They offer privacy, control, and customization, but require hardware to run.

What to remember

Model providers are not interchangeable. Each one makes different tradeoffs in openness, performance, integration, and control.

Common question

Which model is the best?

It depends on the use case. GPT-4 is often the benchmark for reasoning. Claude is excellent for long documents and coding. LLaMA is best for local deployment. Gemini integrates best with Google Workspace.

Chapter 3

From models to products and practice

After the model layer, the next question is how people actually use AI in real work and how they build intuition.

By the end of this chapter

You should understand how AI becomes tools, how to compare products, and how hands-on practice reinforces the concepts.

Chapter 3 / Lesson 08 Practical Product layer Updated November 27, 2025

AI Tools

Augmenting human capability.

Where it sits in the stack

AI Overview -> Machine Learning -> Deep Learning -> Foundation Models -> Generative AI -> AI Tools

Link to this section

Key question

How do models become useful products in everyday work?

Why this matters

Most learners do not adopt AI at the model layer. They adopt it through tools that wrap models into jobs to be done.

Learning journey

The provider landscape explains the engines. AI tools show how those engines are packaged into user-facing products.

What this unlocks next

After seeing the product categories, the next step is comparison: which tools belong in which workflow.


Overview

Shared language for 2026

AI tools are applications that leverage artificial intelligence to solve specific problems. They are the practical interface between complex models and end-users.


Concept map

2 theory blocks

  • What are the main categories of AI tools?
  • How to choose the right tool?
01

Theory

What are the main categories of AI tools?

  • Text & Writing: Tools like ChatGPT, Claude, and Jasper assist with drafting, editing, summarizing, and brainstorming text.
  • Image & Design: Tools like Midjourney, DALL-E 3, and Stable Diffusion allow users to generate photorealistic images and art from text descriptions.
  • Coding: Tools like GitHub Copilot, Cursor, and Replit act as pair programmers, suggesting code, debugging, and refactoring.
  • Productivity: Tools like Perplexity (search), Otter.ai (transcription), and Notion AI (workspace) integrate AI into daily workflows to save time.
02

Theory

How to choose the right tool?

  1. Define your goal: Are you writing, coding, or designing?
  2. Check the model: What underlying model does it use? (e.g., GPT-4 vs Claude 3)
  3. Consider privacy: Does the tool train on your data?
  4. Look for integration: Does it fit into your existing workflow?

What to remember

AI tools are the interface layer between advanced models and real human tasks.

Common question

Will these tools replace my job?

AI is more likely to change your job than replace it. The consensus is that 'You won't be replaced by AI, but by a human using AI.' Learning these tools is a career superpower.

Chapter 3 / Lesson 09 Resources Market layer Updated November 27, 2025

AI Tools Directory

A curated collection of the best AI resources.

Where it sits in the stack

AI Overview -> Machine Learning -> Deep Learning -> Foundation Models -> Generative AI -> AI Tools -> AI Tools Directory


Key question

How do you evaluate the crowded tool market without getting lost?

Why this matters

A directory is useful only if it turns abundance into categories and buying criteria.

Learning journey

AI tools introduced the categories. The directory organizes the landscape so the learner can compare options more quickly.

What this unlocks next

After comparison, the best next move is practice. Concepts stick when the learner can interact with them.


Overview

Shared language for 2026

Navigating the explosion of AI tools can be overwhelming. This directory categorizes the most reliable and impactful tools available today.


Concept map

3 theory blocks

  • Chat & Assistants
  • Visual Creation
  • Development
01

Theory

Chat & Assistants

  • ChatGPT (OpenAI): The industry standard for conversational AI.
  • Claude (Anthropic): Known for safety and large context handling.
  • Gemini (Google): Multimodal assistant integrated with Google apps.
  • Perplexity: AI-powered search engine for accurate answers.
02

Theory

Visual Creation

  • Midjourney: Highest quality artistic image generation.
  • Leonardo.ai: Versatile asset generation for games and design.
  • Runway: Leading tool for AI video generation and editing.
03

Theory

Development

  • Cursor: The AI-first code editor.
  • GitHub Copilot: The most widely used code completion tool.
  • V0.dev: Generative UI system by Vercel.

What to remember

The right tool choice depends on workflow, privacy, integration, and the model sitting underneath the interface.

Chapter 3 / Lesson 10 Practice Practice layer Updated November 27, 2025

Interactive Exercises

Learn by doing.

Where it sits in the stack

AI Overview -> Machine Learning -> Interactive Exercises

Link to this section

Key question

How do abstract AI ideas become intuition instead of memorized jargon?

Why this matters

Practice is where the concepts stop sounding impressive and start becoming usable mental models.

Learning journey

The directory helps you compare tools. Interactive exercises help you internalize how the underlying systems behave.

What this unlocks next

After the technical and practical stack, the last move is to reconnect all of this to human judgment.


Overview

Shared language for 2026

Theory is essential, but practice makes perfect. These interactive simulations and games are designed to build your intuition for how AI systems actually work.


Concept map

2 theory blocks

  • Why Interactive Learning?
  • Available Modules
01

Theory

Why Interactive Learning?

AI concepts like "Gradient Descent" or "Backpropagation" can be abstract and mathematical. Interactive visualizations allow you to see the math in action, building a deeper, intuitive understanding.
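The descent can be watched even in plain printed numbers. This minimal sketch assumes a one-parameter loss curve, loss(w) = (w - 3)^2, and steps the weight downhill the way the Gradient Descent Sim animates it:

```python
w, lr = 0.0, 0.1  # starting weight and learning rate (illustrative values)

for step in range(25):
    grad = 2 * (w - 3)  # slope of the loss at the current weight
    w -= lr * grad      # move against the slope, i.e. downhill
    if step % 8 == 0:
        print(f"step {step:2d}  w = {w:.3f}  loss = {(w - 3) ** 2:.4f}")
```

Watching w creep toward 3 while the loss shrinks is the same process a loss-landscape visualization shows as a ball rolling into a valley.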

02

Theory

Available Modules

  • Neuron Visualizer: See how inputs, weights, and biases combine to fire a neuron.
  • Network Playground: Build and train simple neural networks in your browser.
  • Gradient Descent Sim: Visualize how models minimize error by descending a loss landscape.
  • Hyperparameter Sandbox: Experiment with learning rates and batch sizes to see their effect on training.

What to remember

Interactive learning is the shortest path from theory to intuition.

Chapter 4

Human judgment and shared language

Finish by reconnecting AI to people: human intelligence, augmented decision-making, and the vocabulary needed for clear conversations.

By the end of this chapter

You should be able to discuss AI with more precision, better judgment, and a shared vocabulary across teams.

Chapter 4 / Lesson 11 Fundamentals Human context Updated November 27, 2025

Intelligence

IQ, EQ, and AI shape our decision-making.

Where it sits in the stack

AI Overview -> Intelligence


Key question

What still belongs to humans in an AI-shaped world?

Why this matters

A good guide should not end with tools. It should end with judgment, augmentation, and the boundary between machine capability and human responsibility.

Learning journey

You have now seen the stack, the model layer, and the product layer. This lesson brings the conversation back to people.

What this unlocks next

Once the human role is clear, the final step is to lock in a shared vocabulary so teams can speak precisely.


Overview


Understanding how human intelligence, artificial intelligence, and augmented intelligence complement each other is key to navigating the future. Intelligence is not just about processing power; it's about the synergy between biological and synthetic cognition.


Concept map

3 theory blocks

  • Three Types of Intelligence
  • Artificial Intelligence (AI)
  • Emotional Intelligence (EQ)
01

Theory

Three Types of Intelligence

We can categorize intelligence into three distinct forms that interact in the modern world:

  • Data handling: humans understand and generalize concepts; machines process and analyze large volumes of data; augmented intelligence combines human context with data-driven insights.
  • Repetition: humans are prone to fatigue; machines perform repetitive tasks with high accuracy; augmented intelligence automates tasks while preserving human oversight.
  • Creativity: humans bring flexible problem-solving; machines have limited creative capacity; augmented intelligence enhances human creativity with smart tools.
  • Emotional insight: humans offer empathy and customer care; machines have no emotional understanding; augmented intelligence keeps empathy human-led, supported by smart assistance.
02

Theory

Artificial Intelligence (AI)

Definition: The ability of machines to perform tasks that normally require human thinking, with the goal of replicating aspects of human intelligence.

Capabilities:

  • Reasoning: Logical thinking and inference.
  • Natural Communication: Understanding and generating human language.
  • Problem-solving: Finding solutions to complex challenges.

Characteristics:

  • Replaces Human Effort: Automates tasks traditionally done by humans.
  • Performs Tasks Independently: Operates without constant human intervention.
03

Theory

Emotional Intelligence (EQ)

"Why aren't we more compassionate?" — Daniel Goleman

Goleman associates Emotional Intelligence with the basal ganglia, which he calls the wisdom center of the brain. It guides decisions based on emotional valence (what felt good or bad in the past). Unlike the neocortex, it does not speak in words; it communicates through emotions and gut feelings.

Key Insight: Combine EQ + IQ for better decisions.


Example

What is Augmented Intelligence?

Augmented Intelligence is a design pattern for a human-centered partnership model of people and AI working together to enhance cognitive performance, including learning, decision making, and new experiences.

What to remember

The strongest future is not human or machine alone. It is augmented intelligence with clear human oversight.


Chapter 4 / Lesson 12 Dictionary Language layer Updated November 27, 2025

AI Concepts & Terminology

Speak the language of the future.

Where it sits in the stack

AI Overview -> AI Concepts & Terminology


Key question

Which terms should everyone use consistently after reading this guide?

Why this matters

Shared language prevents shallow conversations, poor purchasing decisions, and avoidable confusion across teams.

Learning journey

Human judgment sets the frame. The glossary gives the team a common language to carry that frame into daily work.

What this unlocks next

This is the reference layer at the end of the journey. Use it to keep future AI conversations precise.


Overview


The field of AI is filled with jargon. This dictionary provides clear, concise definitions for the most important terms you need to know.


Concept map

3 theory blocks

  • A-E
  • F-L
  • M-Z
01

Theory

A-E

Algorithm: A set of step-by-step rules or instructions a computer follows to solve a problem or to learn from data.

Alignment: The problem of ensuring AI systems have goals that match human values.

Bias: Errors in AI output resulting from prejudices in the training data.

02

Theory

F-L

Fine-tuning: The process of training a pre-trained model on a smaller, specific dataset to specialize it.

Hallucination: When an AI generates false or nonsensical information confidently.

LLM (Large Language Model): A deep learning algorithm that can recognize, summarize, translate, predict, and generate text.

03

Theory

M-Z

Multimodal: AI that can understand and generate multiple types of media (text, images, audio).

Parameters: The internal variables (weights) that the model adjusts during training. Frontier models are reported to have hundreds of billions to trillions of them.

Token: The basic unit of text for an LLM (roughly 0.75 words).
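The 0.75-words-per-token figure supports a quick back-of-the-envelope estimate. The helper below is a rough heuristic for English prose, not a real tokenizer; actual token counts depend on the model's tokenizer and vary with punctuation, code, and language.

```python
def estimate_tokens(text):
    """Rough token estimate: ~0.75 words per token, i.e. word count / 0.75."""
    word_count = len(text.split())
    return round(word_count / 0.75)

print(estimate_tokens("Tokens are the basic units of text for a large language model."))  # → 16
```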


What to remember

A strong AI guide should end with alignment: people leaving with the same words for the same concepts.

Extended guide

Modern AI, in plain language

These are the extra concepts from the workshop guide that help people connect the model stack to what they actually see in products today.

LLM in plain language

The language-specialized child of the model stack.

Definition

What it is

A Large Language Model is a very large model specialized in understanding and generating language. In practice, it acts like a text engine that can summarize, draft, explain, translate, and reason over documents.

Analogy

Simple image

Think of someone who has read a vast library and can help you write, explain, or compare ideas on demand.

Examples

Tools and providers

  • OpenAI and ChatGPT
  • Anthropic and Claude
  • Google DeepMind and Gemini
  • Meta and Llama
  • Mistral, Cohere, xAI, Aleph Alpha, and DeepSeek

Multimodal in plain language

Models that do more than just text.

Definition

What it is

A multimodal system can understand more than one type of input, such as text, images, audio, or video, and combine them in one answer.

Example

Simple example

You upload a photo of a chart and ask a question in text. The model uses both the image and the language prompt to answer.

AI agents in plain language

When a model starts planning and acting across steps.

Definition

What it is

An AI agent is a system that can plan steps, use tools, execute tasks, and sometimes coordinate with other agents to reach a goal.

Example

Simple workflow

  • Research a topic
  • Summarize the useful points
  • Organize a presentation outline
  • Draft a first version
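The four-step workflow above can be sketched as a minimal agent loop. This is a toy illustration of the plan-then-execute pattern: the "tools" are hypothetical stand-in functions, not a real agent framework, and a real agent would also decide the plan dynamically and call external services.

```python
# Toy agent loop: each planned step is a stand-in "tool" function,
# and each tool's output becomes the next tool's input.
def research(topic):
    return f"notes on {topic}"

def summarize(notes):
    return f"summary of {notes}"

def outline(summary):
    return f"outline based on {summary}"

def draft(outline_text):
    return f"first draft from {outline_text}"

PLAN = [research, summarize, outline, draft]

def run_agent(goal):
    result = goal
    for step in PLAN:          # execute each planned step in order
        result = step(result)
    return result

print(run_agent("AI adoption"))
```

The loop makes the key idea visible: an agent is not one model call but a sequence of tool uses chained toward a goal, which is exactly why the review reflexes listed below matter.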
Attention

What to watch

  • Verify sources for important topics
  • Define a clear objective and instructions
  • Review outputs before sharing them

Tools and protocols

How modern AI systems connect to real work.

Tools

Useful tools to know

  • Teachable Machine
  • n8n
  • AutoGen Studio
  • Hugging Face
  • LM Studio
  • Cursor
Protocols

Two standards worth remembering

  • MCP (Model Context Protocol) connects a model to tools and data sources.
  • A2A (Agent2Agent) lets multiple AI agents discover each other and collaborate.
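To make the tool-connection idea concrete: in an MCP-style protocol, the model asks its host to run a named tool with structured arguments and gets structured data back. The message below is an illustrative shape only, not the actual MCP schema, and `search_documents` is a hypothetical tool name.

```python
import json

# Illustrative tool-call message: the model names a tool and passes
# structured arguments; the host executes it and returns structured data.
request = {
    "type": "tool_call",
    "tool": "search_documents",  # hypothetical tool exposed by a server
    "arguments": {"query": "Q3 sales report", "limit": 5},
}

print(json.dumps(request, indent=2))
```

The point to remember is that the protocol standardizes this envelope, so any compliant model can talk to any compliant tool server.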

Practical guardrails

Safety, common sense, and shared language

Simple reflexes

  • Do not paste passwords, banking details, or sensitive documents into consumer tools.
  • Verify important outputs in health, legal, finance, and compliance contexts.
  • Remember that AI can confidently invent details.
  • Use AI as an assistant, not as absolute truth.

Myth vs reality

  • Myth: if it sounds polished, it must be true. Reality: well-written output can still be wrong.
  • Myth: AI always replaces the human. Reality: it works best when the human sets the frame and reviews the result.

30-second version

  • AI means machines imitating some human capabilities.
  • Machine Learning means they improve from examples.
  • Neural Networks mean they learn patterns through layers.
  • Deep Learning means very large, multi-layer neural systems.
  • LLMs are large language-focused models.
  • Generative AI creates new content.
  • Agents plan and execute multi-step tasks.

5-minute challenge

  • Ask an LLM to explain the difference between AI, ML, and LLM for a beginner.
  • Then ask for one example from your own work or daily life.
  • Finally, verify one important claim with a reliable source.

Visual glossary

Shared language people can actually remember

AI

Artificial Intelligence

A machine doing tasks that normally require human intelligence.

Use: Used to understand, decide, recommend, and create.

Example: Spam filters, translation, recommendations.

ML

Machine Learning

The part of AI that learns from examples instead of following only fixed rules.

Use: Used to predict, classify, and segment.

Example: Spam or not spam, price estimation.

LLM

Large Language Model

A large model specialized in understanding and generating text.

Use: Used to summarize, draft, explain, and translate.

Example: ChatGPT, Claude, Gemini.

Agent

Plan + Tools + Execution

A system that plans steps, uses tools, and executes a task.

Use: Used to automate multi-step workflows.

Example: Research + summary + outline + first draft.

MCP

Tool Protocol

A standard for connecting models to tools and external data sources.

Use: Lets AI fetch context or act through tools.

Example: modelcontextprotocol.io

A2A

Agent Protocol

A standard for helping multiple AI agents discover and collaborate with each other.

Use: Coordinates different AI roles on one mission.

Example: a2aprotocol.ai

Keep these five lines in your head

  • AI is the big umbrella.
  • Machine Learning means learning from examples.
  • LLM means language-focused model.
  • Agent means planning plus tools plus execution.
  • MCP and A2A are connection and collaboration standards.