app-growth-investigator

Investigate growth, activation, retention, conversion, funnel bottlenecks, and product-value realization for apps and websites using analytics, database or warehouse, billing, release, and support data. Use when asked why users do not activate, return, convert, collaborate, or retain; to measure build impact; to explain churn-like behavior; or to find concrete growth levers from product data.

v1.0.0
Security Vetted

Reviewed by AI agents and approved by humans.

Skill Instructions

# App Growth Investigator

Use this skill to investigate an app or site like a sharp product operator, not a dashboard tourist. Find the behavior that explains why users get value, fail to get value, come back, convert, or disappear, then turn that into specific product questions and experiments.

## Core Mindset

- Think in funnels. Find the constrained step before debating everything else.
- Treat timing as signal. Ask when a behavior happens, not just how often.
- Segment aggressively. Aggregates hide the actual mechanism.
- Look for value failure, not just "traffic" or "churn."
- Care about product-business-model fit. Sometimes the product is working but the packaging is wrong.
- Turn every finding into a lever, not just an observation.

## Workflow

1. Frame the business question.
   - Examples:
     - "Why are new signups failing to activate?"
     - "Where is the biggest conversion leak?"
     - "Did the March 1 onboarding change improve first-week retention?"
     - "Are users getting one-off value without becoming retained users?"

2. Pick source authority before drawing conclusions.
   - Use the rules below.
   - Read `references/source-patterns.md` for common stack combinations.

3. Get release context before forming hypotheses.
   - Check recent pushes, experiments, pricing changes, copy changes, and untouched surfaces.
   - Prefer pre/post analysis by exact rollout or change date when the question is about impact.
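A pre/post cut can be as simple as splitting one metric on the exact rollout date. A minimal sketch in plain Python; the rollout date, signup records, and activation flag are all illustrative:

```python
from datetime import date

# Hypothetical rollout date of an onboarding change.
ROLLOUT = date(2024, 3, 1)

# Illustrative per-signup records: (signup_date, activated_within_7d).
signups = [
    (date(2024, 2, 20), False), (date(2024, 2, 25), True),
    (date(2024, 2, 27), False), (date(2024, 3, 2), True),
    (date(2024, 3, 5), True),   (date(2024, 3, 8), False),
]

def activation_rate(rows):
    """Count activations while keeping the denominator visible."""
    activated = sum(1 for _, ok in rows if ok)
    return activated, len(rows)

pre = activation_rate([r for r in signups if r[0] < ROLLOUT])
post = activation_rate([r for r in signups if r[0] >= ROLLOUT])
print(f"pre: {pre[0]}/{pre[1]}, post: {post[0]}/{post[1]}")
```

Reporting the raw numerator and denominator, not just the rate, follows the denominator rule in Source Rules below.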

4. Choose one funnel family for the job-to-be-done.
   - Read `references/funnel-patterns.md`.
   - Define one funnel for the specific user job, not one giant master funnel.

5. Identify the bottleneck.
   - Measure conversion rate, absolute user loss, and delay at each step.
   - Spend most of the analysis on the step with the strongest combination of volume loss, delay, and strategic importance to value realization.
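The three bottleneck signals in step 5 can be sketched together; the funnel step names, user counts, and median delays here are hypothetical:

```python
# Hypothetical funnel: (step_name, users_reaching_step, median_hours_to_reach_step).
funnel = [
    ("signup",       1000, 0.0),
    ("created_item",  620, 1.5),
    ("invited_user",  180, 30.0),
    ("converted",      95, 72.0),
]

steps = []
for (name, users, _), (next_name, next_users, delay) in zip(funnel, funnel[1:]):
    steps.append({
        "step": f"{name} -> {next_name}",
        "conversion": next_users / users,      # step conversion rate
        "absolute_loss": users - next_users,   # users lost at this step
        "median_delay_h": delay,               # time to reach the next step
    })

# One candidate ranking: largest absolute user loss. Strategic importance
# still has to be judged, not computed.
bottleneck = max(steps, key=lambda s: s["absolute_loss"])
print(bottleneck["step"], bottleneck["absolute_loss"])
```

A high-percentage drop late in the funnel can still lose fewer users than a modest drop early on, which is why absolute loss is tracked alongside the rate.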

6. Segment before concluding anything.
   - At minimum consider real vs test vs uncertain, new vs returning, signed-in vs anonymous when relevant, build or rollout cohorts, and acquisition or entry path when available.
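A minimal segmentation sketch, with invented user records; note that test and uncertain traffic stay in their own buckets rather than being silently dropped:

```python
from collections import Counter

# Illustrative records: (traffic_type, rollout_cohort, activated).
users = [
    ("real", "pre",  True),  ("real", "pre",  False),
    ("real", "post", True),  ("real", "post", True),
    ("test", "post", True),  ("uncertain", "post", False),
]

activated = Counter()
totals = Counter()
for traffic_type, cohort, ok in users:
    key = (traffic_type, cohort)
    totals[key] += 1
    if ok:
        activated[key] += 1

# Show every segment with its own denominator; never report only the aggregate.
for key in sorted(totals):
    print(key, f"{activated[key]}/{totals[key]}")
```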

7. Look for timing cliffs and "done in one sitting" behavior.
   - Ask where drop-off is concentrated: same session, same hour, day 0, day 1, day 7, or first billing cycle.
   - Check whether users appear to complete the job once and have no reason to return.
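Bucketing time-to-last-event makes the cliff visible. A sketch with invented hours and coarse cutoffs (the bucket boundaries are one reasonable choice, not a standard):

```python
# Hypothetical hours from signup to last event, for users who never returned.
hours_to_last_event = [0.2, 0.5, 0.9, 1.1, 20, 26, 150, 170, 400]

# Coarse buckets: same session (<1h), rest of day 0, day 1, rest of week 1, later.
buckets = {"same_session": 0, "day_0": 0, "day_1": 0, "week_1": 0, "later": 0}
for h in hours_to_last_event:
    if h < 1:
        buckets["same_session"] += 1
    elif h < 24:
        buckets["day_0"] += 1
    elif h < 48:
        buckets["day_1"] += 1
    elif h < 168:
        buckets["week_1"] += 1
    else:
        buckets["later"] += 1

# The largest bucket is where drop-off concentrates.
cliff = max(buckets, key=buckets.get)
print(cliff, buckets)
```

A heavy same-session bucket is the classic signature of "done in one sitting" behavior: the job was completed once and nothing pulls the user back.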

8. End with levers.
   - Every finding should suggest a messaging, onboarding, pricing, packaging, adoption, instrumentation, or release follow-up.

## Source Rules

- Assign authority by claim:
  - product database or warehouse for entity truth
  - analytics events for interaction paths and timing
  - billing system for monetization truth
  - release or experiment history for rollout context
  - support feedback or surveys for qualitative evidence
  - logs when instrumentation is missing
- When sources disagree, do not average them together. Quantify the gap and explain what each source can and cannot prove.
- Exclude internal, test, bot, and automation traffic by default when possible.
- Keep an `uncertain` bucket when attribution is incomplete instead of hiding ambiguity.
- Always show denominators, time windows, and exclusion rules.
- If a metric depends on a proxy, say so plainly.
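When two sources disagree, set arithmetic makes the gap explicit instead of averaging it away. A sketch with invented user IDs for "active users last week" from two systems:

```python
# Hypothetical active-user sets from two systems of record.
warehouse_users = {"u1", "u2", "u3", "u4", "u5"}   # database/warehouse: entity truth
analytics_users = {"u3", "u4", "u5", "u6"}          # analytics events: interaction truth

agree = warehouse_users & analytics_users
warehouse_only = warehouse_users - analytics_users  # candidates: missing instrumentation
analytics_only = analytics_users - warehouse_users  # candidates: anonymous/unattributed

print(f"agree: {len(agree)}, warehouse-only: {len(warehouse_only)}, "
      f"analytics-only: {len(analytics_only)}")
```

The two "only" sets are the uncertain bucket: each needs its own explanation (instrumentation debt, anonymous traffic, test accounts) before either source's count is treated as the answer.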

## Output Shape

Return a short report with:

1. Release context
   - What changed recently
   - What important surfaces have not changed
2. Funnel and bottleneck
   - Funnel definition
   - Biggest drop-off or delay
   - Magnitude of the loss
3. Key findings
   - 3-7 concrete insights with exact percentages, counts, and time windows
4. Interpretation
   - What the findings likely mean
   - Whether they point to friction, weak value, wrong packaging, traffic mix, or instrumentation debt
5. Recommended next checks
   - The next cuts or queries that would confirm or falsify the hypothesis
6. Product levers
   - Specific messaging, onboarding, pricing, packaging, feature, or instrumentation changes worth testing

## Reference Guide

- Read `references/source-patterns.md` when choosing authority rules or reconciling multiple data systems.
- Read `references/funnel-patterns.md` when selecting a funnel family or defining step-level metrics.
- Read `references/app-shapes.md` when the product shape affects the interpretation of activation, retention, conversion, or repeat usage.
