Level 0 - Mindset & Vocabulary
Goal: Understand how we think about data before touching any tools.
No tools yet. Just mental models.
Overview
Before you touch any analytics tool, you need to understand the mental frameworks that make you effective.
This level teaches you:
- How to think about monitoring and data
- The core investigation flow (aggregate → individual → pattern)
- Why traffic quality often matters more than page design
These are mindsets, not technical skills.
Modules
Module 0.1 – How We Think About Data:
- Lesson 0.1.1: Self-Sufficient Monitoring
- Lesson 0.1.2: Numbers Are People
- Lesson 0.1.3: Traffic Quality vs Page Design
Module 0.2 – Core Thinking Flow:
- Lesson 0.2.1: Aggregate → Individual → Pattern
- Lesson 0.2.2: The 10-Person Test
Module 0.1 – How We Think About Data
Based on: Encyclopedia - Core Principles (Monitoring & Observability, Data Analysis, Learning & Exploration sections)
Lesson 0.1.1: Self-Sufficient Monitoring
Goal: Understand why you should never wait for others to tell you what to look at.
Key Facts:
- You should never ask “what landing pages should I track?”—the system shows ALL landing pages automatically
- If something changes in user behavior or new pages get traffic, you discover it yourself through observation
- Being self-sufficient in data analysis is critical—don’t depend on others
- Good tools let you see all events by exploring user activity
Narrative:
Most people wait for someone else to tell them what happened. “What changed?” “Why did conversions drop?” “Which campaigns are working?”
That’s broken. You’re reactive, not proactive. You’re dependent.
Self-sufficient monitoring means: You observe the data yourself. You notice when something changes. You investigate before anyone asks.
Example: Marketing launches a new landing page. In a bad system, you have to manually add tracking. In a good system, it just appears in your data automatically. You notice it, investigate it, understand it—before your boss asks.
This is the foundation: Own your monitoring. Don’t wait.
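To make this concrete, here is a minimal sketch of a daily scan for pages you have never seen before. It assumes your tool can export raw pageview events; the field names, function name, and example paths are illustrative, not any specific tool's API.

```python
# Minimal sketch: discover new landing pages yourself instead of waiting
# to be told. Assumes raw pageview events exported as dicts with a
# "landing_page" field (an illustrative schema, not a real tool's API).

def find_new_landing_pages(todays_events, known_pages):
    """Return landing pages seen in today's traffic but never before."""
    seen_today = {event["landing_page"] for event in todays_events}
    return seen_today - known_pages

# Example: marketing quietly launched /spring-promo. Nobody told you,
# but it shows up in the data, so your morning scan catches it.
known = {"/home", "/pricing"}
events = [{"landing_page": "/home"},
          {"landing_page": "/spring-promo"}]
print(find_new_landing_pages(events, known))  # {'/spring-promo'}
```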
Exercise (Mental, not tool-based):
Write down answers to these questions:
- In your current role, do you wait for others to tell you “something is wrong” or do you proactively look at data daily?
- If marketing launched a new campaign tomorrow, how would you know? Would someone tell you, or would you see it in the data yourself?
- What would “self-sufficient monitoring” look like for you? (Example: Check attribution dashboard every morning, scan for new campaigns/pages)
Deliverable: A paragraph describing how you’ll implement self-sufficient monitoring in your daily workflow.
Deep dive: Encyclopedia - Self-Sufficient Monitoring
Lesson 0.1.2: Numbers Are People
Goal: Internalize that every aggregate metric represents specific individuals you can investigate.
Key Facts:
- 55,775 pageviews = 55,775 specific people, not an abstract number
- Every person in that number has: session recordings, UTM parameters, campaign IDs, full property data
- Aggregate metrics are starting points, not conclusions
- Don’t accept “we have a bounce problem”—understand which campaigns, sources, segments are bouncing
Narrative:
When you see “90% drop-off” or “55,000 landing page views,” your brain treats it like a statistic. Abstract. A fact to memorize.
Wrong.
Those are 55,000 SPECIFIC PEOPLE. Each one made decisions. Each one came from somewhere. Each one has a story.
You can click into them. Watch their session recording. See their UTM parameters. Understand their journey.
The magic isn’t in the aggregate. The magic is in drilling down to the individuals and finding patterns.
This is not optional. This is how you actually understand what’s happening.
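A minimal sketch of that shift, assuming each visitor is a record with properties and a session-recording link; the field names and URLs are illustrative stand-ins.

```python
# Minimal sketch: an aggregate metric is just len() over individual
# records, and every record is a person you can open and investigate.
# Field names and recording URLs are illustrative stand-ins.

visitors = [
    {"id": "u1", "utm_campaign": "brand", "device": "desktop",
     "converted": True,  "recording": "https://example.com/rec/u1"},
    {"id": "u2", "utm_campaign": "promo", "device": "mobile",
     "converted": False, "recording": "https://example.com/rec/u2"},
    {"id": "u3", "utm_campaign": "promo", "device": "mobile",
     "converted": False, "recording": "https://example.com/rec/u3"},
]

# The aggregate: "3 landing page views, 1 conversion."
print(len(visitors), "views,", sum(v["converted"] for v in visitors), "conversion")

# The drill-down: each drop-off is a specific person with a story.
for v in visitors:
    if not v["converted"]:
        print(v["id"], v["utm_campaign"], v["device"], v["recording"])
```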
Exercise (Mental, conceptual):
Imagine these scenarios and answer the questions:
Scenario 1: “We have 10,000 landing page views but only 1,000 leads (10% conversion).”
- How many specific people landed? (Answer: 10,000)
- How many specific people provided email? (Answer: 1,000)
- How many people dropped off? (Answer: 9,000)
- Can you investigate those 9,000 people? (Answer: Yes, in a good system)
- What might you look for? (Answer: Do they come from specific campaigns? Specific devices? Specific landing pages?)
Scenario 2: “Campaign A has 5% conversion, Campaign B has 45% conversion.”
- This isn’t just about percentages. What does this tell you about the PEOPLE in each campaign?
- (Answer: Campaign B is bringing much higher quality traffic—the people are more qualified/interested)
Deliverable: Write one paragraph explaining why “numbers are people” matters to your work.
Deep dive: Encyclopedia - Never Accept Numbers as Abstract
Lesson 0.1.3: Traffic Quality vs Page Design
Goal: Understand that most conversion problems are about WHO you’re targeting, not WHAT you built.
Key Facts:
- Good traffic converts well at every step. Bad traffic struggles everywhere.
- When different campaigns show different conversion rates on the SAME page, it’s an audience quality issue, not a page issue
- This is critical: You could waste months optimizing pages when the real problem is bad targeting
Narrative:
You see low conversion. First instinct: “The landing page is bad. Let’s redesign it.”
Hold on.
What if different campaigns show wildly different conversion rates on the exact same page?
- Campaign A on Page X: 5% conversion
- Campaign B on Page X: 44% conversion
Same page. Different traffic. Nearly a 9x difference in conversion.
This tells you: The page isn’t the problem. The traffic is the problem. Campaign A is targeting the wrong people. Campaign B is targeting the right people.
Good traffic + mediocre page > Bad traffic + perfect page.
Fix your targeting first. Then optimize your pages.
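Here is a minimal sketch of the check that catches this: break conversion down per campaign for visits to the same page. The data is synthetic, built to reproduce the lesson's 5% vs 44% numbers.

```python
# Minimal sketch: conversion rate per campaign for the SAME landing page.
# Synthetic data reproducing the lesson's numbers (5% vs 44%).
from collections import Counter

# (campaign, converted) pairs for visits to one identical page.
visits = ([("A", False)] * 950 + [("A", True)] * 50 +
          [("B", False)] * 560 + [("B", True)] * 440)

totals, conversions = Counter(), Counter()
for campaign, converted in visits:
    totals[campaign] += 1
    conversions[campaign] += converted  # True counts as 1

for campaign in sorted(totals):
    print(f"Campaign {campaign}: {conversions[campaign] / totals[campaign]:.0%}")
# Campaign A: 5%
# Campaign B: 44%  -> the traffic differs, not the page
```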
Exercise (Conceptual):
Imagine you’re running two campaigns to the same landing page:
- Campaign A: 1,000 visitors, 50 conversions (5%)
- Campaign B: 1,000 visitors, 440 conversions (44%)
Questions:
- What does this tell you about the campaigns? (Answer: B is bringing much better traffic)
- Should you redesign the landing page? (Answer: No, not yet—fix Campaign A’s targeting first)
- What would you investigate next? (Answer: What’s different about Campaign B’s targeting/creative/audience?)
Deliverable: Write one paragraph about a time you (or your team) blamed a page when it might have been a traffic quality issue.
Deep dive: Encyclopedia - Traffic Quality Persists
Module 0.2 – Core Thinking Flow
Based on: Encyclopedia - Thinking Flows (Core Flow: Aggregate → Individual → Pattern)
Lesson 0.2.1: Aggregate → Individual → Pattern
Goal: Learn the universal investigation methodology.
Key Facts:
- Step 1: See the aggregate number (100 users, 90% drop-off, etc.)
- Step 2: Understand the number is made of individual units (100 users = 100 specific people)
- Step 3: Drill down to the ones (click into people/events/sessions)
- Step 4: Random sampling (pick 10 people, investigate both success and failure)
- Step 5: Segment to find patterns (by campaign, source, device, etc.)
- Step 6: Form hypotheses and validate
Key Mental Shift: NEVER treat numbers as abstract or take them for granted. Numbers are ALWAYS collections of individual cases.
Narrative:
This is THE universal investigation flow. Use it for everything.
You see an aggregate number → You drill into the individuals → You find patterns.
Example:
- Aggregate: “90% drop-off between landing and next step”
- Individual: Click through, see the 50,000 specific people who dropped off
- Pattern: “All users from Campaign X bounce immediately”
Without this flow, you’re stuck with “we have a 90% drop-off problem” (not actionable).
With this flow, you get “Campaign X drives bad traffic, pause it” (actionable).
This is how you go from confusion to clarity.
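A minimal sketch of steps 3 through 5 on synthetic data; in practice the user records would come from your analytics export. Campaign X is planted as the bad-traffic source so the segmentation step has a pattern to surface.

```python
# Minimal sketch of drill-down, sampling, and segmentation (steps 3-5).
# Synthetic data: Campaign X is planted as the bad-traffic source.
from collections import Counter
import random

random.seed(7)
users = [{"id": i,
          "campaign": "X" if i % 10 else "Y",
          "device": random.choice(["mobile", "desktop"]),
          "continued": i % 10 == 0}          # only Campaign Y continues
         for i in range(1000)]

# Steps 1-2: the aggregate is a collection of individual cases.
dropped = [u for u in users if not u["continued"]]
print(f"{len(dropped)}/{len(users)} dropped off")   # 900/1000

# Step 4: randomly sample a few drop-offs and eyeball them.
for user in random.sample(dropped, 5):
    print(user)

# Step 5: segment the drop-offs to find the pattern.
print(Counter(u["campaign"] for u in dropped))      # Counter({'X': 900})
```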
Exercise (Mental rehearsal):
Imagine these scenarios and mentally walk through the flow:
Scenario 1: “We have a 90% drop-off from landing page to next step.”
Walk through the flow:
- See aggregate: 55,775 landed, 5,509 continued = 90.12% drop-off (checked in the sketch below)
- Understand individuals: 50,266 specific people dropped off
- Drill down: Click to see list of those 50,266 people
- Random sample: Pick 10 people who dropped off, 10 who continued
- Segment: Look for patterns (campaigns? devices? landing pages?)
- Hypothesis: “Campaign X drives bounces” or “Mobile users drop off more”
- Validate: Check more users in that segment, confirm pattern
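A quick sanity check of the aggregate arithmetic in that first step:

```python
# Check the scenario's numbers: 55,775 landed, 5,509 continued.
landed, continued = 55_775, 5_509
dropped = landed - continued               # 50,266 specific people
print(f"{dropped:,} dropped off ({dropped / landed:.2%})")
# 50,266 dropped off (90.12%)
```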
Deliverable: Write the 6-step flow in your own words. Make it a cheat sheet you can reference later.
Deep dive: Encyclopedia - Core Thinking Flow
Lesson 0.2.2: The 10-Person Test
Goal: Understand how random sampling reveals patterns.
Key Facts:
- If you have 90% drop-off, randomly pick 10 people → you’ll see ~9 who dropped, ~1 who passed
- Investigate BOTH groups: Why did the 9 drop off? Why did the 1 succeed?
- Compare and contrast to form hypotheses
- This is faster and more insightful than looking at aggregates alone
Narrative:
When you have a big number (50,000 drop-offs), it’s overwhelming. Where do you start?
The 10-Person Test: Pick 10 random people from the drop-off group.
Watch their session recordings. Check their properties. See what they have in common.
- Do they all come from the same campaign?
- Are they all on mobile?
- Did they all land on a specific page variant?
Then pick 10 people who DIDN’T drop off (the successes). What’s different about them?
Patterns emerge quickly. You don’t need to analyze all 50,000. Just 10 from each group tells you 80% of what you need to know.
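A minimal sketch of the test itself, assuming user records carry campaign and device properties. The groups are synthetic stand-ins built around the campaign IDs used in the exercise below.

```python
# Minimal sketch of the 10-Person Test: sample ten from each group and
# compare their most common properties. Records are synthetic stand-ins.
import random
from collections import Counter

def summarize_sample(group, label):
    """Sample up to 10 people and report their dominant properties."""
    sample = random.sample(group, min(10, len(group)))
    campaigns = Counter(u["campaign"] for u in sample)
    devices = Counter(u["device"] for u in sample)
    print(label, campaigns.most_common(1), devices.most_common(1))

dropped = [{"campaign": "120233281060520682", "device": "mobile"}] * 900
succeeded = [{"campaign": "120236311971890504", "device": "desktop"}] * 100

summarize_sample(dropped, "drop-offs:")
summarize_sample(succeeded, "successes:")
# Two samples of ten are enough to show which campaign owns each group.
```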
Exercise (Conceptual):
Imagine you pick 10 random people from a drop-off group. You discover:
- 9 out of 10 came from “Campaign 120233281060520682”
- All 9 were on mobile devices
- All 9 bounced within 3 seconds
Now pick 10 from the success group:
- 8 out of 10 came from “Campaign 120236311971890504”
- 7 out of 10 were on desktop
- They spent 30+ seconds on the page
Questions:
- What pattern do you see? (Answer: Campaign ending in …060520682 drives bad traffic; …971890504 drives good traffic)
- What’s your hypothesis? (Answer: First campaign has wrong targeting or bad creative)
- What would you do next? (Answer: Check more users from that campaign to confirm, then pause it or adjust targeting)
Deliverable: Write one paragraph about how you’ll use the 10-Person Test in your next investigation.
Deep dive: Encyclopedia - The 10-Person Test
Level 0 Complete
You’ve completed Level 0. You now understand:
- Self-sufficient monitoring (don’t wait for others)
- Numbers are people (drill down to individuals)
- Traffic quality matters (fix targeting before pages)
- Aggregate → Individual → Pattern (universal investigation flow)
- The 10-Person Test (random sampling reveals patterns)
These are the mental models that make everything else work.
Next Steps
👉 Continue to Level 1 - Basic “Tool-On” Skills
Now you’ll actually start using your analytics tool.