AI & Marketing

Day 1 - Introduction

Simon Blanchard

Let’s Start Here

Before we go through the material — I want to hear from you.

Think of one example of AI being used in marketing — something you have seen, read about, or experienced as a customer.

We will build three lists on the board together:

🎯 Problem: What marketing problem does it solve?
🔁 Replaces / Augments: What human task does it take over or assist?
💰 Does the math work? Is it cheaper, faster, or more accurate at scale?

We will come back to these lists throughout the lecture.

No wrong answers. If you are not sure, say what you think and we will figure it out together.

This Lecture

  • Part 1: Why an AI & Marketing Course?
  • Part 2: AI & Marketing: This Course
  • Part 3: About Your Instructor
  • Part 4: What Is AI?
  • Part 5: How Do Machines Learn?
    • Part 5a: Rule‑Based vs Learning
    • Part 5b: ML Fundamentals
    • Part 5c: Types of ML
  • Part 6: The Course Project

Marketers Doubt AI’s Impact

Many marketers and salespeople doubt AI’s ability to boost company revenues or customer satisfaction. Some even believe it adds to their workload, signaling a disconnect between AI adoption and employee confidence.

  • Only 39% of marketers and sales professionals in the US and UK are confident that their departments’ use of AI drives revenues
  • Nearly half (46%) believe AI only somewhat improves the customer experience or does not improve it at all

Sources

Your bosses think AI will solve everything

And maybe you do too.

A Growing Gap Between Leadership and Employees

Broad doubts extend beyond organizational performance. Marketers question AI’s impact on their own work. This suggests a gap between executive enthusiasm and employee experience, where leadership may oversell potential while underestimating operational challenges.

  • 15% say AI has had little or no impact on day-to-day work
  • 18% say it has created more work and pulled them away from strategic priorities
  • 21% say their companies have had to hire more people or have become less productive since adopting AI
  • 24% of Gen Z employees work against their company’s AI strategy (e.g., don’t disclose, use their own tools) because it adds to workload

Sources

Leads to Confusion About What AI Is

And, unfortunately, lots of exaggeration.

The Training Gap

One possible reason for the underwhelming impact is lack of proper training. Many marketers are handed powerful tools with little instruction on how to use them. Without guidance, employees may use AI inefficiently or avoid it due to uncertainty about which tools fit their tasks.

Only 17% report receiving comprehensive, role-specific training that prepared them to use AI effectively.

Sources

Untrained employees be like →

And Yet — It Is Working

The skepticism is real. But so are the results — in specific applications where AI is well-matched to the task.

The pattern: measurable gains show up where AI handles high-volume, well-defined, data-rich tasks. Gains are harder to find where the task is ambiguous or the data is thin.

  • Programmatic advertising — largely AI-driven — grew 18% in 2024, reaching $134.8B

  • Marketing and sales is the function where companies most commonly report revenue increases from AI use

  • Among AI high performers, revenue lifts of 3–15% and sales ROI improvements of 10–20% are reported

  • Content cost reductions of 5–20% and time-to-market cuts of up to two weeks documented in pharma and retail deployments

Sources

AI Is Restructuring Marketing Jobs

Generative AI is not just changing how marketing work is done — it is changing which jobs exist.

Using 40 million U.S. job postings before and after the launch of ChatGPT, Luo, Miao, and Sudhir (2025) document a systematic production-to-orchestration shift:

  • Work that draws on bounded, retrievable knowledge (writing copy, handling inquiries, running reports) is being automated.
  • Work that requires judgment, coordination, and relationships (strategy, sales, account management) is expanding.

The bottleneck is moving — from producing artifacts to deciding what to do with them.

What happened to marketing employment after ChatGPT

Function | Change
Content & Communications | −17%
Channels & Support | −9%
Research & Analytics | −4%
Sales & CRM | +13%
Strategy & Management | +14%

And within the remaining jobs:

  • Entry-level roles: −14%
  • Executive roles: +9%
  • Average wages: +3.7% (despite fewer jobs)

Fewer jobs. Different jobs. Higher-paid jobs.

Source: Luo, Miao & Sudhir (2025), How Generative AI Reorganizes Knowledge Work: Evidence from 40 Million Jobs

So Which Is It?

Both things are true at the same time.

AI produces measurable gains in narrow, well-specified tasks. But organizations consistently struggle to identify those tasks, deploy correctly, and build the internal skills to evaluate results.

This is the gap this course is designed to close.

AI works well when… | AI disappoints when…
Task is high-volume and repetitive | Task requires judgment or context
Data is abundant and labeled | Data is scarce or messy
Success metric is clear | Goal is ambiguous or long-term
Feedback loop is fast | Feedback is delayed or noisy

This Lecture

  • Part 1: Why an AI & Marketing Course?
  • Part 2: AI & Marketing: This Course
  • Part 3: About Your Instructor
  • Part 4: What Is AI?
  • Part 5: How Do Machines Learn?
    • Part 5a: Rule‑Based vs Learning
    • Part 5b: ML Fundamentals
    • Part 5c: Types of ML
  • Part 6: The Course Project

Course Learning Objectives

By the end of this course, you will be able to:

  • Explain how AI changes marketing relative to traditional approaches
  • Distinguish major AI applications across the customer lifecycle
  • Evaluate strengths and weaknesses of AI-driven decisions and metrics
  • Apply an experimental mindset to assess incremental return
  • Assess privacy, brand, and regulatory risks of AI deployment

What This Course Is

  • Case-driven and discussion-based
  • Applied and managerial in orientation
  • Focused on strategic use of AI tools
  • Grounded in real dashboards, simulations, and chatbot testing

Students will experiment with AI without needing deep coding expertise.

What This Course Is Not

  • Not a coding or machine learning engineering course
  • Not a purely technical data science class
  • Not uncritical advocacy of AI adoption

We will question performance claims, incentives, and long-term consequences.

Cases

Three HBS cases anchor the course.

Case 1 · Week 2 HubSpot & Motion AI: Chatbot-Enabled CRM (HBS #518-067; B case #524-088)

To prepare:

  • Read the case
  • ✏️ In-class case prep quiz (paper & pencil, closed notes)

Case 2 · Week 3 Artea: Designing Targeting Strategies (HBS #521-021)

To prepare:

  • Read the case
  • Work through the Option 3 Dashboard
  • ✏️ In-class case prep quiz
  • 📊 Group Assignment 1 due before class

Case 3 · Week 5 PittaRosso: AI-Driven Pricing & Promotion (HBS #522-046)

To prepare:

  • Read the case
  • Work through the spreadsheet supplement
  • ✏️ In-class case prep quiz
  • 📊 Group Assignment 2 due before class

Course Schedule

Week | Part 1 | Part 2
1 · Mar 18 | How AI (doesn’t) change marketing · Types of AI | Introduction to chatbot prototyping project
2 · Mar 25 | Case: HubSpot & Motion AI — Chatbot-Enabled CRM | Lecture: AI & Advertising
3 · Apr 1 | Case + Dashboard: Artea — Designing Targeting Strategies | Speaker: Director of Organic Search (SEMrush)
4 · Apr 8 | Lecture: AI Pricing & Promotions | SEMrush AI Visibility Certification Exam (in-class)
5 · Apr 15 | Case + Spreadsheet: PittaRosso — AI-Driven Pricing & Promotion | Lecture: Privacy & Ethics
6 · Apr 22 | Group Project Presentations | Peer evaluations

Participation, Preparation, and Contribution (20%)

Case prep quizzes

  • One quiz per case — three total
  • Paper and pencil, in-class, closed notes
  • No makeups given
  • Lowest grade across all participation activities dropped

Peer surveys

Completion of peer assessment surveys counts toward participation.

Class contribution rubric

Score | Description
95–100% | Well-prepared in almost all sessions, always relevant
90–94% | Contributes in majority of sessions, mostly relevant
80–89% | Occasional contributions, relevant comments
70–79% | Occasional contributions only
50–69% | Attends but almost never contributes
40–49% | Attends but never contributes

Negative behaviors (lateness, distraction, disruption) reduce your score.

Group Case Assignments (20%)

Two short group deliverables — one each for Cases 2 and 3.

  • Assignment 1 (due before Week 3): Artea — targeting strategy analysis using the Option 3 Dashboard
  • Assignment 2 (due before Week 5): PittaRosso — pricing and promotion analysis using the spreadsheet supplement

Each deliverable emphasizes analytical rigor and strategic framing. Apply quantitative tools and connect findings to broader marketing strategy. Submissions should be short and professional.

SEMrush AI Visibility Certification (15%)

What it is

The AI Visibility Essentials certification from SEMrush covers how AI is changing search and digital visibility — a core skill for modern marketers.

A SEMrush Director of Organic Search visits in Week 3 to introduce the material.

How to earn full credit

  1. Complete the training materials using your Georgetown.edu account
  2. Take the exam in-class on April 8 — paper and pencil, no outside assistance, no AI tools
  3. Highest score is kept

This is individual work.

AI Visibility: What It Looks Like

Final Exam (25%)

  • In-person, paper and pencil, closed book
  • Short answers, multiple choice, and calculations
  • Covers all lectures, cases, and exercises
  • Scheduled during the allocated final exam period

Assesses conceptual understanding and applied reasoning across the full course.

Group Project: Chatbot Study (20%)

Design and run a real experiment on a customer-facing AI chatbot. Teams of 4 or 5. Graded by instructor and peer assessment.

What you will design

  1. Research question — how does chatbot design choice X affect customer outcome Y?
  2. Scenario — a realistic customer situation
  3. Manipulation — two versions of the chatbot, one behavioral difference

The post-interaction survey is standardized across all teams. You do not design it.

Same measures. Different manipulations. Comparable results.

Timeline

  • Week 3 — Proposal: research question, scenario, manipulation
  • Weeks 4–5 — Deploy via Qualtrics + LUCID. Classmates interact with your chatbot and complete the survey.
  • Week 6 — Present findings. Recommend viability and next steps.

Deliverables: survey instrument, prompts, slide deck, cost and risk assessment.

This Lecture

  • Part 1: Why an AI & Marketing Course?
  • Part 2: AI & Marketing: This Course
  • Part 3: About Your Instructor
  • Part 4: What Is AI?
  • Part 5: How Do Machines Learn?
    • Part 5a: Rule‑Based vs Learning
    • Part 5b: ML Fundamentals
    • Part 5c: Types of ML
  • Part 6: The Course Project


Tentative Definition

Artificial intelligence is machines that think.

Tentative Definition

Artificial intelligence is machines that think.

Artificial intelligence is machines that think, perceive, and act.

We add:

  • Perception: acquiring information from the environment.
  • Action: doing something that changes the environment.

Two Running Examples

These two systems will illustrate every concept in this lecture.

A smart cat feeder

Recognizes cats. Dispenses food.

A Home Depot return chatbot

Receives a return request. Verifies the order. Checks the product. Issues a return label.

Tentative Definition

Artificial intelligence is machines that think, perceive, and act.

Artificial intelligence is models of thinking, perception, and action.

We add:

  • Model: A simplified, explicit structure that supports explanation, prediction, or control.

Cat feeder

Face detected
     ↓
Is this a cat?
  ↓        ↓
 Yes        No
  ↓         ↓
Dispense  Stay
 food     locked

Home Depot chatbot

Message received
      ↓
 start_return?
      ↓ yes
 Order found?
  ↓       ↓
 No      Yes
  ↓       ↓
Error  Photo?
        ↓
     Match?
      ↓   ↓
     No  Yes
      ↓   ↓
   Error Label ✓

Tentative Definition

Artificial intelligence is models of thinking, perception, and action.

Artificial intelligence is representations that support models of thinking, perception, and action.

We add:

  • Representation: The form in which information is encoded so that computation is possible.

Cat feeder

📷 [face at the bowl — raw pixels]

cat:      yes
confidence: 0.94

The image has been transformed into something the system can act on.

Home Depot chatbot

“I want to return a damaged DeWalt DCD791D2 drill. My order number is 4821.”

Intent:    start_return
Product:   DeWalt DCD791D2
Condition: damaged
Order:     4821

Now the system can act: look up the order.
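The transformation above can be sketched in a few lines. This is a toy keyword-and-regex extractor, not how a production chatbot parses messages; the function name `extract_representation` and every pattern in it are illustrative assumptions — the point is only that free text becomes structured fields a system can act on.

```python
import re

def extract_representation(message: str) -> dict:
    """Toy extractor: map a free-text return request to structured fields.
    Field names (intent, product, condition, order) follow the slide's
    example; the patterns are illustrative, not production-grade."""
    rep = {"intent": None, "product": None, "condition": None, "order": None}
    if "return" in message.lower():
        rep["intent"] = "start_return"
    m = re.search(r"order number is (\d+)", message, re.IGNORECASE)
    if m:
        rep["order"] = m.group(1)
    m = re.search(r"(damaged|defective|broken)", message, re.IGNORECASE)
    if m:
        rep["condition"] = m.group(1).lower()
    m = re.search(r"(DeWalt \w+)", message)
    if m:
        rep["product"] = m.group(1)
    return rep

msg = "I want to return a damaged DeWalt DCD791D2 drill. My order number is 4821."
print(extract_representation(msg))
```

A real system would use a learned model rather than hand-written patterns — which is exactly the limitation Part 5 returns to.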


Tentative Definition

Artificial intelligence is models of thinking, perception, and action.

Artificial intelligence is representations that support models of thinking, perception, and action.

We add:

  • Representation: The form in which information is encoded so that computation is possible.
  • Having good representations that fit your modeling goals is very important.

The goal changes — so must the representation

The first representation answered: is there a cat?

But the product goal is: is this my cat?

Before

cat:        yes
confidence: 0.94

↓ Not enough. Any cat gets food.

After

cat_id:     Maurice
confidence: 0.91

↓ Only Maurice gets food.

The goal defines the representation.

Tentative Definition

Artificial intelligence is representations that support models of thinking, perception, and action.

Artificial intelligence is constraints exposed by representations that support models of thinking, perception, and action.

We add:

  • Constraints: rules that limit which states, transitions, or solutions are allowable.
  • Once constraints are visible, search replaces guesswork.
  • Constraints do not limit intelligence. They make intelligent behavior computable.

Cat feeder

Identity is not enough either.

Maurice is recognized — but should he be fed right now?

The representation must expand:

cat_id:     Maurice
confidence: 0.91
last_fed:   1h ago  ← new field
eligible:   no      ← constraint applied

The constraint (no overfeeding) can only be enforced if the representation carries feeding history.

Constraints drive representation design.
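The eligibility check can be sketched directly from the expanded representation. The 4-hour minimum gap and the 0.85 confidence threshold are assumed values (the slide leaves them unspecified); everything else mirrors the fields above.

```python
from datetime import datetime, timedelta

MIN_GAP = timedelta(hours=4)  # assumed no-overfeeding policy

def eligible(cat_id: str, confidence: float, last_fed: datetime,
             now: datetime, threshold: float = 0.85) -> bool:
    """Apply constraints on top of recognition. cat_id is kept to mirror
    the slide's representation; the two checks are the constraints."""
    if confidence < threshold:          # identity constraint
        return False
    return now - last_fed >= MIN_GAP    # feeding-history constraint

now = datetime(2025, 3, 18, 8, 0)
print(eligible("Maurice", 0.91, now - timedelta(hours=1), now))  # fed 1h ago
print(eligible("Maurice", 0.91, now - timedelta(hours=5), now))  # fed 5h ago
```

Note that the second check is only possible because the representation carries `last_fed` — the constraint dictated the representation.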

Home Depot chatbot

Not just: is this a return request?

But: is this item in this customer’s order?

  • Return window expired 45 days ago (time constraint)
  • Order #4821 not found (identity constraint)
  • Valid: “Your DeWalt drill is eligible — generating label.”

The constraint is policy, not intent.

Tentative Definition

Artificial intelligence is constraints exposed by representations that support models of thinking, perception, and action.

Artificial intelligence is algorithms enabled by constraints exposed by representations that model thinking, perception, and action.

We add:

  • Algorithm: A procedure that operates over a representation while respecting constraints.

Cat feeder

Input: face embedding vector

Steps:

  1. Compare embedding to registered cats
  2. Find nearest match
  3. Check confidence threshold
  4. Check feeding schedule

Constraint: only registered cats; not overfed

Output: feed (0.91) / no feed (0.09)
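Steps 1–3 of the feeder's procedure can be sketched as nearest-match search over embeddings. The 3-dimensional vectors and the 0.85 threshold are toy assumptions (real face embeddings have hundreds of dimensions); the structure — compare, take the nearest, apply the constraint — is the algorithm.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical registered-cat embeddings.
registered = {"Maurice": [0.9, 0.1, 0.3], "Garfield": [0.1, 0.8, 0.5]}

def identify(embedding, threshold=0.85):
    """Steps 1-3 from the slide: compare to registered cats,
    find the nearest match, check the confidence threshold."""
    best_cat = max(registered, key=lambda c: cosine(embedding, registered[c]))
    score = cosine(embedding, registered[best_cat])
    return (best_cat, score) if score >= threshold else (None, score)

cat, score = identify([0.88, 0.12, 0.31])  # a vector near Maurice's
print(cat)
```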

Home Depot chatbot

Input: customer message vector

Steps:

  1. Multiply by learned weight matrix
  2. Apply nonlinear transformation
  3. Repeat across layers
  4. Output probability per intent

Constraint: only defined intents; confidence threshold

Output: start_return (89%)
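The chatbot's four steps can be sketched with a single linear layer plus softmax (real intent classifiers stack many layers, and these weights are hand-picked toy numbers, not learned ones):

```python
import math

def softmax(xs):
    """Turn raw scores into probabilities that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

INTENTS = ["start_return", "check_status", "escalate"]

def classify(vec, weights, biases):
    # Steps 1-2: multiply by the weight matrix, then a softmax
    # (standing in for the nonlinearity + output layer).
    scores = [sum(w * x for w, x in zip(row, vec)) + b
              for row, b in zip(weights, biases)]
    probs = softmax(scores)
    best = max(range(len(INTENTS)), key=lambda i: probs[i])
    return INTENTS[best], probs[best]

W = [[2.0, 0.1], [0.2, 1.5], [0.3, 0.3]]  # toy learned weights
b = [0.0, 0.0, -1.0]
intent, p = classify([1.4, 0.2], W, b)    # toy 2-d message vector
print(intent, round(p, 2))
```

The constraint from the slide shows up structurally: the output can only ever be one of the defined intents.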

Close to a full definition

Artificial intelligence is algorithms enabled by constraints exposed by representations that model thinking, perception, and action.

Artificial intelligence is behavior emerging from algorithms, constraints, representations, and models of thinking, perception, and action.

We add:

  • Behavior: What the system does over time, including feedback loops between perception, thinking, and action.

Cat feeder

Maurice → food dispensed ✓

Garfield → locked ✗

Home Depot chatbot

Order verified. Product matches. Within return window.

“Your return label has been emailed. Drop off at any UPS location within 14 days.”

Label issued ✓

This Lecture

  • Part 1: Why an AI & Marketing Course?
  • Part 2: AI & Marketing: This Course
  • Part 3: About Your Instructor
  • Part 4: What Is AI?
  • Part 5: How Do Machines Learn?
    • Part 5a: Rule‑Based vs Learning
    • Part 5b: ML Fundamentals
    • Part 5c: Types of ML
  • Part 6: The Course Project

Rule‑Based vs Learning‑Based Systems

Two fundamentally different answers to the same question:

How does a machine produce intelligent behavior?

  • Rule-based systems: A human expert writes down the rules. The machine follows them.
  • Learning-based systems: The machine induces rules from data. No human specifies them explicitly.

Rule-Based AI

Human knowledge encoded as explicit logical rules.

How it works

  • A domain expert encodes knowledge as IF-THEN rules
  • The system applies rules mechanically
  • New knowledge requires human intervention

Cat feeder — pressure plate + timer

IF (time = 7:00am OR time = 6:00pm)
AND plate_weight > threshold
THEN release food
ELSE stay locked

No camera. No model. No inference.

A cat steps on the plate at feeding time → the spring releases → food drops.

The rules are encoded in the physics and the clock.

Any cat, at the right time, gets fed.
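The same rule, written as code rather than physics. The feeding times come from the slide; the weight threshold value is an assumption, since the slide leaves it unspecified.

```python
def rule_based_feeder(hour: int, minute: int, plate_weight_g: float,
                      threshold_g: float = 2000.0) -> str:
    """The slide's IF-THEN rule: feed at 7:00am or 6:00pm when
    something heavy enough is on the plate. threshold_g is assumed."""
    feeding_time = (hour, minute) in [(7, 0), (18, 0)]
    if feeding_time and plate_weight_g > threshold_g:
        return "release food"
    return "stay locked"

print(rule_based_feeder(7, 0, 4200.0))   # any heavy-enough cat is fed
print(rule_based_feeder(3, 0, 4200.0))   # right cat, wrong time
```

Every behavior is visible in the code; nothing is learned, and nothing adapts.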

Customer service — phone tree

Press 1: Start a return
Press 2: Check order status
Press 3: Speak to an agent

Enter your order number.
Press 1 to confirm.

For damaged items, press 1.
For wrong items, press 2.
For change of mind, press 3.

The rule is encoded in the menu script.

If your problem isn’t on the menu, you’re stuck.


The Limits of Rules

Rule-based systems fail when:

  • The domain is too large to enumerate
  • The world changes faster than rules can be updated
  • Variation is continuous, not categorical
  • The knowledge cannot be made explicit

Learning-Based AI

The machine learns statistical patterns from data.

How it works

  • Feed the system many examples
  • The system detects patterns across examples
  • Patterns generalize to new inputs
  • Performance improves with more data

Cat feeder

Input: hundreds of photos
       labeled by the owner
       (Maurice, not Maurice)

Output: a model that maps
        new face images
        to the most likely identity

The pressure plate fed every cat.

The learned model feeds only Maurice.

Home Depot chatbot

Input: thousands of customer messages
       labeled with correct intents
       (start_return, check_status,
        damaged_item, wrong_item,
        missing_receipt, escalate)

Output: a model that maps 
        new messages 
        to the most likely intent

The phone tree broke on anything not in the menu.

The learned model handles variation.


This Lecture

  • Part 1: Why an AI & Marketing Course?
  • Part 2: AI & Marketing: This Course
  • Part 3: About Your Instructor
  • Part 4: What Is AI?
  • Part 5: How Do Machines Learn?
    • Part 5a: Rule‑Based vs Learning
    • Part 5b: ML Fundamentals
    • Part 5c: Types of ML
  • Part 6: The Course Project

How Machine Learning Works

A machine learning system learns a function from data.

\[f: \text{input} \rightarrow \text{output}\]

  • We do not write \(f\) by hand
  • We show the system many (input, output) pairs
  • The system estimates \(f\) from those pairs
  • We then use the estimated \(f\) on new inputs
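The two phases can be shown with a deliberately tiny learner. This is a sketch, not a real ML algorithm: it "estimates \(f\)" by splitting one input dimension at the midpoint between the two class means. The example numbers are invented.

```python
def fit_threshold(pairs):
    """Estimate f from (input, label) pairs: a 1-d classifier that
    puts the decision boundary halfway between the class means."""
    pos = [x for x, y in pairs if y == 1]
    neg = [x for x, y in pairs if y == 0]
    cut = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
    return lambda x: 1 if x > cut else 0

# (input, output) pairs -- e.g. hours on site vs. purchased (1) or not (0).
train = [(0.5, 0), (1.0, 0), (1.5, 0), (4.0, 1), (5.0, 1), (6.0, 1)]
f_hat = fit_threshold(train)     # training: estimate f from the pairs
print(f_hat(0.8), f_hat(5.5))    # prediction: apply f_hat to new inputs
```

Nothing about `f_hat` was written by hand — only the *procedure* for estimating it was, which is the defining move of machine learning.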

A Simple Example: Is This Maurice?

The problem

The owner wants the feeder to recognize Maurice — and only Maurice.

We want a system that looks at a photo and decides: is this Maurice?

The function we want

\[f(\text{image}) \rightarrow \{\text{Maurice}, \text{not Maurice}\}\]

We cannot write this function by hand.

We have no rule that reliably identifies Maurice from a photo.

So we learn it from data.

What the training data looks like

Maurice ✓

Maurice ✓ (different angle)

Not Maurice ✗

Each photo gets a label. The algorithm learns what separates them.

Where do these labels come from?

A Simple Example: Is This the Right Product?

The problem

A customer uploads a photo of the item they want to return.

We want a system that looks at the photo and decides: is this the right product?

The function we want

\[f(\text{image}) \rightarrow \{\text{match}, \text{no match}\}\]

We cannot write this function by hand.

We have no rule that reliably identifies a product from a photo.

So we learn it from data.

What the training data looks like

Reference product ✓

No Match ✗ (wrong brand)

Each photo gets a label. The algorithm learns what separates them.

Where do these labels come from?

Who Labels These Images?

Not a machine. A human being — looking at each photo, one at a time.

The annotation task

An annotator is shown a photo and a reference product (e.g., DeWalt 20V MAX Drill, SKU #100672834).

They answer one question:

Does this photo show the same product?

They click Match or No Match.

Then the next photo appears.

For a catalog of 10,000 SKUs with 50 photos each — that is 500,000 individual judgments.

What the annotator sees

Reference product

Submitted: wrong brand → No Match ✗

Reference product

Submitted: clearly wrong → No Match ✗ (easy)

Training and Prediction

Machine learning operates in two distinct phases.

Training phase

  • Collect labeled examples (images + labels)
  • Algorithm finds patterns that separate classes
  • Result: a trained model \(\hat{f}\)

Product match example

The model sees thousands of labeled product photos.

It learns which pixel patterns predict “match”.

Prediction phase

  • A new image arrives — no label
  • Feed it to the trained model \(\hat{f}\)
  • Model outputs a predicted label
  • No further learning (in most systems)

Product match example

A customer uploads a photo of their item.

The model outputs: match (confidence: 91%) → return is auto-approved

What Makes a Good Training Dataset?

Garbage in, garbage out. But what does good data look like?

Four properties

  • Representative: covers the range of inputs the model will see
  • Labeled accurately: outputs must be correct
  • Large enough: more examples generally means better generalization
  • Recent enough: old data may not reflect current patterns

Home Depot example

  • Not representative: trained only on clean product photos → fails when customers upload dark, blurry phone snapshots at odd angles

  • Mislabeled: a damaged item like this is still a Match ✓ — but a rushed annotator might label it No Match

    If that happens, the model learns that damage = wrong product.

  • Too small: 200 photos per product category is not enough to generalize across lighting, backgrounds, and damage types

  • Too old: product lines change; a model trained on last year’s drill SKUs may not recognize new models

Why Machine Learning Became Dominant

Three conditions converged in the 2000s and 2010s:

Condition | What changed
Data | Digital behavior generated training examples at scale
Compute | GPUs made training large models economically viable
Algorithms | Gradient descent and backpropagation scaled well

The algorithms were not new. The infrastructure was.

The ideas behind neural networks existed in the 1980s. What changed was the ability to feed them data and compute at scale.

Publicly available annotated datasets accelerated everything

  • ImageNet (2009) — 14 million labeled images across 20,000 categories. Made image classification a solvable benchmark. The 2012 AlexNet result on ImageNet launched the deep learning era.

  • Netflix Prize (2006–2009) — $1M competition to improve Netflix’s recommender system by 10%. Opened collaborative filtering to researchers worldwide. Kaggle was born from the same idea.

  • Kaggle (2010–present) — platform hosting hundreds of labeled datasets and competitions. Lowered the barrier to entry for ML practitioners globally.

  • Common Crawl, Wikipedia, BooksCorpus — the unlabeled text corpora that pre-trained the language models behind ChatGPT.

The companies that accumulated behavioral data earliest — Google, Amazon, Meta — built an infrastructure moat that still exists today.

Mapping to Our Framework

Machine learning instantiates the framework from Part 1.

Concept from Part 1 | Machine learning equivalent
Perception | Input data (pixels, audio, text)
Representation | Features or embeddings
Model | Learned function \(\hat{f}\)
Constraints | Output space, business rules, filters
Behavior | Prediction or decision

This Lecture

  • Part 1: Why an AI & Marketing Course?
  • Part 2: AI & Marketing: This Course
  • Part 3: About Your Instructor
  • Part 4: What Is AI?
  • Part 5: How Do Machines Learn?
    • Part 5a: Rule‑Based vs Learning
    • Part 5b: ML Fundamentals
    • Part 5c: Types of ML
  • Part 6: The Course Project

Types of Machine Learning

Machine learning methods differ in the type of feedback available during training.

  • Supervised learning — learn from labeled examples
  • Unsupervised learning — discover structure without labels
  • Reinforcement learning — learn from rewards and penalties

Supervised Learning

Cat feeder version

The problem

You want the feeder to recognize Maurice — and only Maurice.

You cannot write rules for this. Faces vary by angle, lighting, weight gain, time of day.

So you learn it from examples.

The function

\[f(\text{face image}) \rightarrow \{\text{Maurice}, \text{not Maurice}\}\]

What the training data looks like

Photo | Label
📷 Maurice, straight on | Maurice ✓
📷 Maurice, side angle | Maurice ✓
📷 Garfield at the bowl | Not Maurice ✗
📷 Neighbor’s tabby | Not Maurice ✗
📷 Maurice, slightly blurry | Maurice ✓

Someone — you — sat down and labeled these.

The algorithm learned what separates them.

Supervised Learning — Prediction

The goal is never to memorize the training data.

The goal is to predict correctly on data the model has never seen.

The model was trained on photos of Maurice, Garfield, and a neighbor’s tabby.

It has never seen this:

What the model sees

A face embedding — a vector of numbers.

Not a cat. Not a mask. Not a costume.

A point in a high-dimensional space.

What happens

cat_id:     Garfield
confidence: 0.97
eligible:   no

The mask changes the pixels.

It does not change the embedding enough to fool the model.

The model generalizes. That is the point of supervised learning.

Supervised Learning

The model learns to map inputs to labeled outputs.

Structure

  • Training data: (input, label) pairs
  • Goal: learn \(f\) such that \(f(\text{input}) \approx \text{label}\)
  • Prediction: apply \(f\) to new inputs

Two main tasks

  • Classification: output is a category
  • Regression: output is a number

Example 1 — Intent classification

  • Input: customer message (text)
  • Label: intent category (start_return, check_status, damaged_item…)
  • Model learns: which text patterns predict each intent

This is classification.

Example 2 — Product image match

  • Input: customer photo of item to return
  • Label: match / no match (does it correspond to the order?)
  • Model learns: which pixel patterns predict a product match

Also classification — but a different input type entirely.

Same framework. Different representation.

Unsupervised Learning

Cat feeder version

The setup

The feeder has been running for six months.

Maurice is recognized correctly. Garfield is blocked.

But the access logs show something odd: 47 bowl-approach events that don’t match any registered cat.

You didn’t go looking for this. The data revealed it.

What the logs show

2:14am  Unknown face — blocked
2:31am  Unknown face — blocked
2:14am  Unknown face — blocked
3:02am  Unknown face — blocked
...

A clustering algorithm groups the unknown events by face similarity.

Three clusters emerge:

  • Cluster A: Small face, pointy ears, 1–3am → raccoon
  • Cluster B: Tabby face, similar to Garfield, daytime → neighbor’s cat
  • Cluster C: Looks like Maurice but slightly different → Maurice’s sibling?

You didn’t define these categories. The algorithm found them.
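A minimal k-means sketch shows how such clusters emerge. The events below are invented (face similarity to Maurice, hour of approach), the starting centroids are hand-seeded to keep the run deterministic, and real systems cluster high-dimensional face embeddings rather than two numbers.

```python
def kmeans(points, centroids, iters=10):
    """Minimal k-means: alternate assignment and centroid update.
    Deterministic here because starting centroids are passed in."""
    for _ in range(iters):
        groups = [[] for _ in centroids]
        for p in points:
            i = min(range(len(centroids)),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[j])))
            groups[i].append(p)
        centroids = [
            [sum(col) / len(g) for col in zip(*g)] if g else c
            for g, c in zip(groups, centroids)
        ]
    return centroids, groups

# Hypothetical (face_similarity_to_Maurice, hour_of_approach) events.
events = [(0.1, 2), (0.15, 3), (0.12, 2),      # low similarity, nocturnal
          (0.5, 13), (0.55, 14), (0.48, 12),   # tabby-like, daytime
          (0.9, 8), (0.92, 9), (0.88, 7)]      # Maurice-like
cents, groups = kmeans(events, [(0.0, 0), (0.5, 12), (1.0, 8)])
print([len(g) for g in groups])
```

The algorithm only groups; a human still has to look at each cluster and say "raccoon," "neighbor's cat," "sibling?"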

Unsupervised Learning

The model discovers structure without labeled outputs.

The setup

You built a supervised intent classifier for the return chatbot.

You defined six intents:

  • start_return
  • check_status
  • damaged_item
  • wrong_item
  • missing_receipt
  • escalate

You labeled thousands of messages. You trained the model. You deployed it.

Now you have 20,000 new conversations.

And some messages don’t fit.

Messages that don’t fit any label

“I bought this drill for a job that got cancelled. Never even opened it. Can I still return it?”

“This was a gift. I don’t have the receipt or the order number — is there anything you can do?”

“This is the third DeWalt tool that’s failed on me. Can I get store credit instead of a replacement?”

“I already started a return but I need to change the pickup address.”

“I ordered two of these. I want to return one but keep the other.”

Your classifier forces each of these into one of six boxes.

It is wrong every time.

Unsupervised Learning: What the Algorithm Found

The process

Feed all 20,000 conversations — unlabeled — into a clustering algorithm.

The algorithm groups messages by linguistic similarity.

It does not know what the groups mean.

A human inspects each cluster and names it.

Some clusters were expected:

  • damaged_item
  • wrong_item
  • missing_receipt

Some were not:

  • Contractor bulk returns — job site purchases, multiple items, different policy expectations
  • Gift returns — no order number, no account linkage
  • Return modifications — changing address or pickup after a return was started
  • Partial returns — keep one, return another
  • Compensation requests — store credit, repeated product failures

What this means for the system

The unsupervised analysis revealed five intent categories that were never in the original label set.

Each one represents a real pattern in customer behavior that the supervised classifier was silently mishandling.

The labels came from the data — not from the design team.

Common tasks

  • Clustering: group similar examples ← this is what we just did
  • Dimensionality reduction: compress representations
  • Anomaly detection: find unusual examples

The broader principle

Unsupervised learning is not a fallback when you lack labels.

It is the right tool when you don’t yet know what the labels should be.

Reinforcement Learning

Cat feeder version

The setup

The feeder recognizes Maurice. But when should it dispense food?

You could set a fixed schedule: 7am and 6pm. But Maurice sometimes skips a meal. Sometimes he’s hungry at 3am. Sometimes he’s been fed by a neighbor.

A fixed rule wastes food. A learning system does better.

The agent’s decision at each bowl approach:

Dispense food now — or wait?

Structure

  • Agent: the feeder
  • Environment: Maurice and his eating patterns
  • Action: dispense food, or stay locked
  • Reward: Maurice finishes the bowl = positive; food sits uneaten = waste = negative; Maurice goes hungry = negative
  • Learning: over time, the feeder learns which conditions predict a finished bowl

The key tradeoff

Exploit: dispense at times that have worked before

Explore: occasionally dispense at a new time to see if Maurice is hungry then too

The policy learns Maurice’s rhythm — without anyone programming it.
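The exploit/explore tradeoff above is exactly what an epsilon-greedy bandit implements. Everything numeric here is an assumption: the candidate hours, the per-hour finish probabilities, epsilon, and the seed are all invented for illustration.

```python
import random

def epsilon_greedy_feeder(hours, finish_prob, epsilon=0.1,
                          steps=2000, seed=7):
    """Epsilon-greedy policy: usually exploit the best-known feeding
    hour, occasionally explore another. finish_prob[h] is a
    hypothetical chance Maurice finishes the bowl at hour h."""
    rng = random.Random(seed)
    counts = {h: 0 for h in hours}
    values = {h: 0.0 for h in hours}  # running mean reward per hour
    for _ in range(steps):
        if rng.random() < epsilon:
            h = rng.choice(hours)                      # explore
        else:
            h = max(hours, key=lambda x: values[x])    # exploit
        reward = 1.0 if rng.random() < finish_prob[h] else 0.0
        counts[h] += 1
        values[h] += (reward - values[h]) / counts[h]  # update estimate
    return max(hours, key=lambda x: values[x])

best = epsilon_greedy_feeder([7, 12, 18, 3],
                             {7: 0.8, 12: 0.3, 18: 0.7, 3: 0.1})
print(best)
```

With these assumed probabilities the policy converges toward the morning and evening hours; lower epsilon means fewer wasted meals but slower discovery if Maurice's rhythm changes.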

Reinforcement Learning: Marketing Application

Structure — escalation policy

  • Agent: the chatbot
  • Environment: the customer and their conversation
  • Actions: continue automated conversation, or escalate to a live agent
  • Reward: issue resolved without escalation = positive; customer abandons chat = negative; unnecessary escalation = wasted agent time = negative
  • Learning: over time, the chatbot learns when escalation leads to better outcomes

The key tradeoff

Exploit: keep handling automatically in situations where that has worked before

Explore: escalate earlier in ambiguous situations to learn whether a human would have resolved it faster

Too much exploitation → frustrated customers who can’t get help

Too much exploration → overloads live agents unnecessarily

The policy must balance these continuously.
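The structure above can be sketched the same way, once the decision is conditioned on a rough conversation state. Everything in this toy (the two states, the success rates, the reward of 1 for a well-resolved issue) is hypothetical.

```python
# Sketch: epsilon-greedy escalation policy, one estimate per (state, action).
# States, actions, and success probabilities are invented for illustration.
import random

random.seed(1)
states = ["simple_return", "ambiguous_issue"]
actions = ["continue", "escalate"]
true_p = {  # hypothetical chance the issue is resolved well
    ("simple_return", "continue"): 0.9,
    ("simple_return", "escalate"): 0.6,   # resolves, but wastes agent time
    ("ambiguous_issue", "continue"): 0.3,
    ("ambiguous_issue", "escalate"): 0.8,
}
q = {k: 0.0 for k in true_p}   # estimated value of each (state, action)
n = {k: 0 for k in true_p}

for _ in range(4000):
    s = random.choice(states)
    if random.random() < 0.1:
        a = random.choice(actions)                      # explore
    else:
        a = max(actions, key=lambda act: q[(s, act)])   # exploit
    r = 1 if random.random() < true_p[(s, a)] else 0
    n[(s, a)] += 1
    q[(s, a)] += (r - q[(s, a)]) / n[(s, a)]

policy = {s: max(actions, key=lambda act: q[(s, act)]) for s in states}
print(policy)
```

The learned policy keeps handling the easy state automatically and escalates the ambiguous one, which is exactly the balance the slide describes.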

Where Does This Leave Us?

The three-category taxonomy is useful — but incomplete.

Type | Feedback | Chatbot example | Marketing use
Supervised | Labeled examples | Intent classification; image match | Churn, click prediction, scoring
Unsupervised | None | Discover return topics from messages | Segmentation, anomaly detection
Reinforcement | Environmental rewards | Escalation policy | Ad serving, recommendations

The taxonomy was developed before one major class of systems existed.

Where Do Large Language Models Fit?

LLMs are trained using a combination of all three learning paradigms.

Pre-training

Self-supervised: predict the next token from billions of text examples. This has the flavor of supervised learning, but the labels are generated from the data itself rather than provided by humans.
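Where those self-generated labels come from can be shown in a couple of lines. This toy uses word-level tokens for readability; real LLMs operate on subword tokens and billions of documents.

```python
# Toy sketch: next-token training pairs generated from the text itself.
# Word-level tokens for readability; real models use subword tokens.
text = "the cat sat on the mat"
tokens = text.split()

# Each pair is (context so far, next token) -- no human labeling involved.
pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]
for context, target in pairs:
    print(context, "->", target)
```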

Fine-tuning

Supervised: train on human-curated examples of good responses.

Alignment

Reinforcement learning from human feedback (RLHF): human raters score responses; the model is updated to produce higher-rated outputs.

A better distinction

Rather than asking how the model is trained, ask what it produces:

  • Discriminative models: output a label or score
    • Is this a cat? Is this customer likely to churn?
  • Generative models: output new data
    • Generate a photo of a cat. Write a subject line for this campaign.

LLMs are generative. Most traditional marketing ML is discriminative.

Both matter. They are used differently.
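The output-type distinction can be made concrete with two toy stand-ins. Neither is a real model: the keyword rule and the string template are placeholders for learned behavior.

```python
# Toy stand-ins for the two output types (not real models).

def discriminative_model(message: str) -> float:
    """Outputs a score: e.g., the probability this customer churns."""
    return 0.82 if "cancel" in message else 0.12  # placeholder keyword rule

def generative_model(brief: str) -> str:
    """Outputs new data: e.g., a subject line for a campaign."""
    return f"Don't miss it: the {brief} ends Sunday"  # placeholder template

print(discriminative_model("I want to cancel my plan"))  # a score
print(generative_model("spring tool sale"))              # new text
```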

What Did We Learn Today?

AI is not a buzzword. It has a structure.

The framework

A system is doing AI when it:

  1. Perceives the world — raw inputs from the environment
  2. Represents what it perceives — in a form computation can act on
  3. Models patterns in that representation — under constraints
  4. Acts on the world — and observes the consequences

Representations, models, constraints, behavior. That is the architecture of every AI system we will study.
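As a structural sketch, the four steps map onto four functions. The cat-feeder stubs below are toys; in particular, a hand-written rule stands in for what would actually be a learned model.

```python
# Toy sketch of the perceive -> represent -> model -> act loop.
import random

random.seed(0)

def perceive():
    """1. Raw input from the environment (simulated sensor readings)."""
    return {"hour": random.randint(0, 23), "bowl_weight_g": random.choice([0, 40])}

def represent(raw):
    """2. A form computation can act on: (hour, is_bowl_empty)."""
    return (raw["hour"], raw["bowl_weight_g"] == 0)

def model(state):
    """3. Patterns under constraints; a fixed rule stands in for learning."""
    hour, bowl_empty = state
    return bowl_empty and 6 <= hour <= 20

def act(dispense):
    """4. Act on the world (here, just report the action taken)."""
    return "dispensed" if dispense else "waited"

for _ in range(3):
    print(act(model(represent(perceive()))))
```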

What this means for labeling things “AI”

Running a regression on a spreadsheet is not AI.

Clustering customers into segments is not AI — it is a model, but there is no perception loop, no environment, no action.

These are useful tools. They are not AI. Precision matters.

Two Questions to Take With You

1. Where around you would an AI system actually help?

Not: “where could I add a model?”

But: what perceives, what represents, what acts, and on what?

Think about your workplace, your industry, your daily life. Where is there a perception-action loop waiting to be closed?

2. What would the representation need to look like?

The Maurice detector needed photos of Maurice — not cats in general.

The return chatbot needed labeled intents — not raw message logs.

The representation defines the data you need. The data defines what is actually buildable.

Good AI design starts with representation, not algorithms.

This Lecture

  • Part 1: Why an AI & Marketing Course?
  • Part 2: AI & Marketing: This Course
  • Part 3: About Your Instructor
  • Part 4: What Is AI?
  • Part 5: How Do Machines Learn?
    • Part 5a: Rule‑Based vs Learning
    • Part 5b: ML Fundamentals
    • Part 5c: Types of ML
  • Part 6: The Course Project

The Course Project

You will design and run a real experiment on a customer-facing AI chatbot.

What you will do

Design a scenario in which a customer interacts with a Home Depot chatbot.

Change one thing about how the chatbot behaves.

Measure how that change affects customer perceptions.

Teams of 4 or 5.

Teams will be formed before next class (after add/drop).

If you have preferences, indicate them on the Google Sheet on Canvas before next class.

The structure of every project

  1. Research question — how does chatbot design choice X affect customer outcome Y?
  2. Scenario — a realistic customer situation
  3. Manipulation — two versions of the chatbot, one behavioral difference

The post-interaction survey is standardized across all teams. You do not design it.

Same measures. Different manipulations. Comparable results.

Example 1 — Tone: Formal vs Casual

Research question

How does the chatbot’s communication tone affect customer confidence when selecting a lightbulb?

Scenario (pre-purchase)

You are shopping on the Home Depot website and need to replace several bulbs in your living room. You are unsure which type to buy — LED, smart bulb, or standard — and what wattage is appropriate.

You open the shopping assistant chatbot to help you choose.

Your goal: find the right bulb for your space and budget.

Manipulation — one difference

Condition A — Formal

Prompt: “Respond in a professional and concise manner. Focus on delivering clear factual information.”

Sample response: “The Feit Electric 60W LED equivalent offers 800 lumens and a 10-year lifespan at $0.98 per bulb.”

Condition B — Casual

Prompt: “Respond in a friendly conversational tone similar to a helpful store associate.”

Sample response: “Oh, great choice going LED! For a living room you’ll want something warm — around 2700K. This Feit pack is super popular and really affordable.”

Example 2 — Name Personalization

Research question

How does using a customer’s name affect satisfaction during a return interaction?

Scenario (post-purchase)

You recently purchased a DeWalt DCD791D2 cordless drill from Home Depot. After a few uses, the chuck has become loose and the drill no longer holds bits securely.

You open the Home Depot customer support chatbot to resolve the issue.

Your goal: determine whether you can return or exchange the drill.

Manipulation — one difference

Condition A — No name

Prompt: “Respond to the customer without using personal references.”

Sample response: “I can help you with your return. Can you provide your order number so I can look up the purchase?”

Condition B — Name

Prompt: “Ask the customer for their name and occasionally refer to them by name during the conversation.”

Sample response: “Hi! I’m happy to help. May I ask your name first?

Thanks, Alex. Let’s get that sorted out for you. Can you share your order number?”

Why the Project Is Built This Way

The assignment maps directly onto the framework from this lecture.

Project element | AI framework concept
Scenario | Environment — the world the system perceives
Customer message | Perception — raw input to the system
Chatbot prompt | Constraints and instructions — what shapes behavior
Chatbot response | Behavior — what the system does in the world
Survey outcomes | Feedback — how the environment responds

By changing the instructions, you change the behavior.

By measuring outcomes, you observe the consequences.

This is the perception–representation–constraint–behavior loop — applied to a real system, in a real marketing context.

Before Next Class

Form your team.

Teams of 4 or 5.

Indicate preferences on the Google Sheet on Canvas before next class. Preferences are not guaranteed but will be considered.

Add/drop closes before next class — teams are finalized after that.

Read the HubSpot case.

The case is in your course pack.

Read it. Prepare it. Come ready to discuss.

There will be a short case prep quiz at the start of class.

⚠️ Do not be late.