
Thinking Flows & Investigation Methodologies


Note: This is different from Principles. Principles are fundamental truths/rules. Thinking Flows are step-by-step mental processes for investigation and analysis.


Core Thinking Flow: Aggregate → Individual → Pattern


The Universal Investigation Methodology

Step 1: See the Aggregate Number

  • Any number: 100 users, 55,775 landing page views, 90% drop-off, etc.
  • This is your starting point, not your answer

Step 2: Understand the Number is Made of Individual Units

  • 100 users = 100 individual people
  • 55,775 landing page views = 55,775 specific sessions with specific people
  • 90% drop-off = 50,266 individual people who each made a choice to drop off

Key Mental Shift: NEVER treat numbers as abstract or take them for granted. Numbers are ALWAYS collections of individual cases.

Step 3: Drill Down to Individuals

When you don’t understand an aggregate number, go to the individual level:

Option A: Self-Service (Preferred)

  • Click on the chart/number in your analytics tool
  • Tool shows you the individual people/events/sessions
  • Scroll through them, click into profiles, watch recordings
  • This is what PostHog enables: Click funnel drop-off → See list of people

Option B: Ask for Help (Fallback)

  • If the tool doesn’t support drill-down, ask developers or data analysts
  • They should be able to provide individual-level data
  • But this should be quick/nimble, not a multi-day data request

Step 4: Random Sampling

The 10-Person Test:

  • If you have 90% drop-off, randomly pick 10 people
  • You’ll likely see: 9 who dropped off, 1 who passed
  • Investigate both groups:
    • Why did the 9 drop off? What do they have in common?
    • Why did the 1 pass? What’s different about them?
  • Compare and contrast to form hypotheses
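A minimal sketch of the 10-person test in pandas, assuming the individual sessions have been exported to a DataFrame with one row per person and a boolean converted flag (all column names here are illustrative, not PostHog’s schema):

```python
import pandas as pd

# Hypothetical export: one row per person who hit the landing page.
sessions = pd.read_csv("landing_page_sessions.csv")

drop_offs = sessions[~sessions["converted"]]
successes = sessions[sessions["converted"]]

# The 10-person test: randomly pick 10 from each group.
sample_dropped = drop_offs.sample(n=10, random_state=42)
sample_passed = successes.sample(n=10, random_state=42)

# Compare properties side by side to form hypotheses.
cols = ["person_id", "utm_source", "campaign_id", "device_type"]
print(sample_dropped[cols])
print(sample_passed[cols])
```

From here, open each sampled person’s profile and recordings in the analytics tool; the code only picks who to look at.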

Step 5: Systematic Segmentation

After seeing individuals, segment by dimensions:

  • Campaign IDs
  • Traffic channels (utm_source, utm_medium)
  • Landing page variants
  • Ad IDs or ad sets
  • Geographic location
  • Device type
  • Time of day
  • UTM terms/content
  • Any property you can think of

Goal: Find patterns like:

  • “All 9 drop-offs came from campaign X”
  • “The 1 who passed had utm_term=Y”
  • “Drop-offs viewed on mobile, success on desktop”
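The same pass over every dimension, sketched in pandas under the same assumptions (illustrative column names):

```python
import pandas as pd

sessions = pd.read_csv("landing_page_sessions.csv")  # same hypothetical export

# Conversion and volume per segment, one dimension at a time.
dimensions = ["campaign_id", "utm_source", "utm_medium",
              "landing_page", "device_type", "country"]

for dim in dimensions:
    rates = (sessions.groupby(dim)["converted"]
             .agg(views="size", conversion="mean"))
    rates["drop_off"] = 1 - rates["conversion"]
    # Worst segments first; ignore tiny segments when reading the output.
    print(f"\n--- {dim} ---")
    print(rates.sort_values("conversion").head(10))
```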

Step 5b: Multi-Dimensional Segmentation (Advanced)


Don’t stop at single-dimension segmentation. Layer filters:

  • First layer: Segment by landing page → Find discount-01 at 5.80%
  • Second layer: Add affid filter → Narrow to affid=1000
  • Third layer: Add campaign filter → Find campaign 1202363… at 44.21%

The combination reveals winners:

  • Landing page alone: mediocre performance
  • Landing page + traffic source + campaign: 4x better performance

This shows success comes from the full stack, not one element.
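The layering, sketched in code (affid, campaign_id, and landing_page are assumed column names):

```python
import pandas as pd

sessions = pd.read_csv("landing_page_sessions.csv")  # same hypothetical export

# Layer 1: landing page alone.
by_page = sessions.groupby("landing_page")["converted"].mean()

# Layer 2: landing page + affiliate.
by_page_aff = sessions.groupby(["landing_page", "affid"])["converted"].mean()

# Layer 3: the full stack: page + affiliate + campaign.
by_stack = (sessions
            .groupby(["landing_page", "affid", "campaign_id"])["converted"]
            .agg(views="size", conversion="mean"))

# Winners: combinations with real volume, best conversion first.
winners = (by_stack[by_stack["views"] >= 100]
           .sort_values("conversion", ascending=False))
print(winners.head(10))
```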

Step 6: Hypothesis Formation

Based on patterns, form hypotheses:

  • “Campaign X drives poor-quality traffic”
  • “Landing page is broken on mobile”
  • “Ad creative for utm_term=Y better matches user intent”

Step 7: Validation

Then validate by:

  • Looking at more users in that segment
  • Watching session recordings of that segment
  • Comparing metrics across segments
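To check that a segment’s difference isn’t just random variation, a two-proportion z-test is one lightweight option; a minimal sketch in plain Python (the counts below are illustrative, not from the real funnel):

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-score for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Illustrative: a 1,000-view segment converting at 44%
# vs. the remaining traffic converting at roughly 9%.
z = two_proportion_z(440, 1_000, 5_069, 54_775)
print(f"z = {z:.1f}")  # |z| > 1.96 roughly means significant at the 5% level
```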

Example Application: 90% Landing Page Drop-off


Step 1 - See Aggregate:

  • 55,775 landing page views → 5,509 achieve screen loads = 90.12% drop-off

Step 2 - Understand the Units:

  • This is 50,266 SPECIFIC people who dropped off
  • And 5,509 SPECIFIC people who continued
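The arithmetic behind those units, spelled out once:

```python
views = 55_775          # landing page views
continued = 5_509       # reached the achieve screen
dropped = views - continued
print(dropped)                   # 50266 specific people
print(f"{dropped / views:.2%}")  # 90.12% drop-off
```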

Step 3 - Drill Down:

  • Click on the drop-off segment in PostHog
  • See list of the 50,266 people
  • Click into individual profiles

Step 4 - Random Sample:

  • Pick 10 random drop-offs:
    • Watch their session recordings
    • Check their properties
    • See if they bounced immediately or engaged first
  • Pick 10 random successes (who reached achieve screen):
    • Watch their recordings
    • Compare their properties vs. drop-offs

Step 5 - Segment Analysis:

Segment drop-offs by:

  • Campaign: Do certain campaigns have higher drop-off?
  • Landing page variant: Is one variant broken?
  • Traffic source: Is organic vs. paid different?
  • Device: Mobile vs. desktop drop-off rates?
  • Geography: Do certain regions drop off more?

Look for patterns:

  • “All users from campaign ID 120233281060520682 bounce”
  • “Mobile users (viewport width < 500) have 95% drop-off vs. 80% on desktop”
  • “Landing page /medication-tirzepatide-discount-01 has 92% drop-off vs. 70% on /medication-tirzepatide”

Step 6 - Hypothesis Formation:

Based on patterns discovered:

  • Hypothesis 1: “Campaign 1202332810… is targeting wrong audience”
  • Hypothesis 2: “Landing page is not mobile-optimized”
  • Hypothesis 3: “Discount landing pages attract non-serious visitors”

Step 7 - Validation:

  • Filter funnel by each segment to confirm pattern
  • Watch more session recordings from problematic segments
  • Compare campaign targeting settings
  • Test fixes (improve mobile UX, adjust campaign targeting)

Key Takeaways

  1. Never accept aggregate numbers at face value
  2. Always remember aggregates are made of individuals
  3. Drill down to see the actual cases
  4. Use random sampling to understand patterns
  5. Segment systematically to find root causes
  6. Form hypotheses from patterns, not guesses
  7. Validate hypotheses with more data

Use this thinking flow whenever:

  • You see an unexpected number
  • Conversion rates don’t make sense
  • Drop-offs seem too high (or too low)
  • You need to understand “why” behind a metric
  • You’re investigating a problem
  • You want to find optimization opportunities
  • Someone asks “why is X happening?”

Do NOT use this for:

  • Obvious, expected patterns (high conversion on optimized flows)
  • Small sample sizes where random variation is expected
  • Metrics you’re just monitoring, not investigating

Advanced Thinking Flow: Isolating Landing Page vs. Traffic Quality


When you see poor conversion, is it the landing page or the traffic?

You have a 90% drop-off from the landing page to the next step. Two possible causes:

  1. Landing page problem: Page is broken, poorly designed, doesn’t load, bad UX
  2. Traffic quality problem: Ads are targeting wrong people, bad creative/audience match

Answer: Usually both. But you need to isolate which is the bigger issue.

Step 1: Hold Landing Page Constant, Vary Traffic

Take ONE landing page (e.g., /medication-tirzepatide-discount-01) and compare different campaigns/traffic sources on it:

Traffic Source   Landing Page   Conversion
Campaign A       discount-01    5%
Campaign B       discount-01    44%
Campaign C       discount-01    3%

Finding: Same page, wildly different conversion rates.
Conclusion: Traffic quality is a major factor.

Step 2: Hold Traffic Constant, Vary Landing Page

Take ONE campaign and compare how it performs on different landing pages:

Traffic Source   Landing Page   Conversion
Campaign B       discount-01    44%
Campaign B       pre-quiz-tt    50%
Campaign B       root (/)       40%

Finding: Same traffic, different conversion rates.
Conclusion: Landing page quality also matters.

Step 3: Compare Patterns

  • Bad traffic: Performs poorly on ALL landing pages (5%, 3%, 2%)
  • Good traffic: Performs well across ALL landing pages (40%, 44%, 50%)
  • Page multiplier: Good pages make good traffic even better; bad pages hurt even good traffic
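Both isolation steps can be read off a single conversion matrix; a sketch using the same hypothetical columns:

```python
import pandas as pd

sessions = pd.read_csv("landing_page_sessions.csv")  # same hypothetical export

# Conversion by campaign x landing page.
matrix = sessions.pivot_table(index="campaign_id",
                              columns="landing_page",
                              values="converted",
                              aggfunc="mean")
print(matrix.round(3))

# Reading the matrix:
#   a row that is low everywhere  -> bad traffic (Step 1's finding)
#   a column that lags its peers  -> weak landing page (Step 2's finding)
```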

The Sophisticated Insight: Multi-Step Validation


Don’t just look at the first step; look at subsequent steps too.

If it’s truly a landing page problem, you’d see:

  • Step 1 (landing → screen 2): Low conversion
  • Step 2 (screen 2 → screen 3): Normal/good conversion (because everyone past step 1 is qualified)

If it’s a traffic quality problem, you’d see:

  • Step 1 (landing → screen 2): Low conversion
  • Step 2 (screen 2 → screen 3): ALSO low conversion (bad traffic struggles everywhere)

Key Test: Look at screen 2 → screen 3 conversion

  • Everyone at screen 2 saw the SAME screen (not dependent on landing page)
  • If conversion rates differ based on original traffic source, it’s 100% an audience quality issue
  • Good traffic continues converting well; bad traffic continues struggling
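A sketch of the key test, assuming a hypothetical events export with one row per person per funnel step reached (person_id, campaign_id, step are assumed column names):

```python
import pandas as pd

events = pd.read_csv("funnel_events.csv")  # hypothetical export

# Unique people per campaign at each step (1 = landing, 2/3 = screens).
reached = (events.groupby(["campaign_id", "step"])["person_id"]
           .nunique()
           .unstack("step"))

report = pd.DataFrame({
    "step1_2": reached[2] / reached[1],   # landing -> screen 2
    "step2_3": reached[3] / reached[2],   # screen 2 -> screen 3
})
print(report)

# Page problem:    step1_2 varies by campaign, step2_3 is roughly flat.
# Traffic problem: the same campaigns win or lose at BOTH steps.
```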

Baseline Funnel:

  • Step 1→2: ~10% conversion (90% drop-off) ← Could be page or traffic
  • Step 2→3: Need to check this

Filtered for Good Traffic (campaign 1202363… + affid 1000):

  • Step 1→2: 44% conversion ← Much better
  • Step 2→3: Check if this is also higher

Analysis:

  • If Step 2→3 is ALSO higher for good traffic: It’s an audience quality issue
  • If Step 2→3 is same for all traffic: Step 1 is a page/UX issue

Real Data from Screenshot:

  • achieve_screen_loaded → achieve_screen_viewed: 63.6% conversion overall
  • This is the “loaded vs viewed” bounce effect
  • If good campaigns show higher % here too → Confirms traffic quality issue
  • Different traffic sources perform differently on SAME screen → Definitely audience issue

When you see:

  • Good traffic converting at 40-44% while bad traffic converts at 5%
  • Different conversion rates on the SAME page based on traffic source
  • Good traffic outperforming at EVERY funnel step, not just the first

Diagnosis: Primary issue is audience quality/targeting, not landing page design

Action Priority:

  1. Scale good campaigns
  2. Pause/optimize bad campaigns
  3. Investigate why good campaigns work (targeting, creative, audience)
  4. Fix landing pages as secondary optimization

It’s usually both, but traffic quality is often the bigger lever.

A mediocre landing page with perfect traffic will outperform a perfect landing page with terrible traffic. Fix your targeting first, then optimize your pages.