Self-Service Analytics Platform
Built an internal analytics tool that reduced data team requests by 60%
Impact
- 60% reduction in data team requests
- 200+ weekly active users
- 50+ custom dashboards
- <2s average query time
Context & Problem
Our data team was drowning in ad-hoc requests. Every product manager, marketer, and executive needed custom reports, but the queue was 3-4 weeks long. This bottleneck was slowing down decision-making across the company.
The existing solution—giving everyone direct database access—had failed spectacularly. Non-technical stakeholders couldn’t write SQL. Those who could wrote inefficient queries that brought down production databases twice. The data team spent more time firefighting than building.
My Approach
I saw an opportunity to build an internal tool that would democratize data access while maintaining guardrails. This was a 0-to-1 product where the users were my colleagues—high stakes for getting it right.
Discovery:
- Surveyed 40 internal stakeholders about their data needs
- Shadowed the data team for 2 weeks to understand request patterns
- Analyzed 200+ past data requests to find common queries
- Benchmarked competitor tools (Looker, Metabase, Mode)
Key insights:
- 80% of requests followed 5 patterns (user acquisition, retention, feature usage, funnel analysis, cohort comparison)
- Most stakeholders wanted to see trends over time, not raw numbers
- Data team spent 40% of time on formatting/presentation, not analysis
- Real need wasn’t “see all data” but “answer specific business questions”
Solution Design
Rather than build a generic query builder, I designed around the 5 common question types:
- Metric Explorer: Track any metric over time
- Funnel Builder: Visualize conversion funnels with customizable steps
- Cohort Analyzer: Compare user groups by signup date, plan, etc.
- Feature Adoption: See who’s using which features
- Dashboard Composer: Combine multiple views into sharable dashboards
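Constraining the tool to these views means every request the backend sees is one of a few parameterized shapes rather than free-form SQL. A minimal sketch of that idea in TypeScript — all type and field names here are hypothetical illustrations, not the tool's actual API, and only three of the five views are shown:

```typescript
// Hypothetical request shapes for the curated views.
// Field names are illustrative; the real tool's API may differ.
type Granularity = "day" | "week" | "month";

interface DateRange {
  from: string; // ISO date, e.g. "2024-01-01"
  to: string;
}

// Metric Explorer: one metric, bucketed over time.
interface MetricQuery {
  kind: "metric";
  metric: string; // e.g. "signups"
  granularity: Granularity;
  range: DateRange;
}

// Funnel Builder: an ordered list of steps to convert through.
interface FunnelQuery {
  kind: "funnel";
  steps: string[]; // ordered event names
  range: DateRange;
}

// Cohort Analyzer: compare a metric across user groups.
interface CohortQuery {
  kind: "cohort";
  groupBy: "signup_date" | "plan";
  metric: string;
  range: DateRange;
}

type AnalyticsQuery = MetricQuery | FunnelQuery | CohortQuery;

// Dashboard Composer: a dashboard is just a named set of saved queries.
interface Dashboard {
  title: string;
  tiles: AnalyticsQuery[];
}
```

Because every query is a closed, typed shape, the backend can validate, cache, and rate-limit requests uniformly — something a raw SQL endpoint can't offer.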
Technical architecture decisions:
I worked with the engineering lead to balance flexibility against performance:
- Pre-aggregated data in Redshift (refresh every hour)
- GraphQL API for flexible querying without database access
- React + Recharts for interactive visualizations
- Permission system tied to Okta (restrict sensitive data)
- Query caching layer to prevent duplicate work
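The caching layer works because the curated views produce normalized, repeatable queries: two users asking the same question generate the same key. A minimal sketch of that mechanism in TypeScript — names are hypothetical, and the real layer would also tie TTLs to the hourly Redshift refresh:

```typescript
// Sketch of a query cache keyed on normalized parameters.
// Hypothetical names; production would add TTLs matched to the
// hourly refresh and deduplication of in-flight requests.
type QueryParams = Record<string, string | number | string[]>;

function cacheKey(params: QueryParams): string {
  // Sort keys so logically identical queries map to the same entry,
  // regardless of the order the UI serialized them in.
  return Object.keys(params)
    .sort()
    .map((k) => `${k}=${JSON.stringify(params[k])}`)
    .join("&");
}

class QueryCache<T> {
  private store = new Map<string, { value: T; expires: number }>();

  constructor(private ttlMs: number) {}

  // Return a cached result if fresh; otherwise run the query and cache it.
  async get(params: QueryParams, run: () => Promise<T>): Promise<T> {
    const key = cacheKey(params);
    const hit = this.store.get(key);
    if (hit && hit.expires > Date.now()) return hit.value;
    const value = await run();
    this.store.set(key, { value, expires: Date.now() + this.ttlMs });
    return value;
  }
}
```

With pre-aggregated tables refreshing hourly, a TTL at or below that interval means cached results are never staler than the underlying data — which is why the cache can be aggressive without users noticing.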
Controversial decision: I said no to custom SQL.
The data team initially pushed back hard—they wanted a “power user” mode with raw SQL access. I held firm because:
- Would recreate the same performance problems
- Most users wouldn’t benefit from it
- Could always add it later if the 5 patterns proved insufficient
Outcome & Metrics
Adoption:
- 200+ weekly active users within first month
- 50+ custom dashboards created
- Became the default tool for weekly exec reviews
Efficiency:
- Data team requests dropped 60%
- Average query response time <2 seconds
- Zero database performance incidents since launch
- Data team now focuses on complex analyses and ML projects
Business impact:
- Product decisions happening 2-3x faster (no waiting for reports)
- Marketing team can test campaigns without data support
- Executives get real-time dashboards instead of weekly email updates
What I Learned
Saying no was the right call: The custom SQL request came up 3 more times in the first 2 months. Each time, we found that the real need could be met by extending one of the 5 patterns. If I’d built it on day 1, we’d have performance problems and no one would use the curated views.
Internal products need just as much UX care: Early versions had data-heavy interfaces that intimidated non-technical users. After we added progressive disclosure and better defaults, adoption doubled.
Start with constraints: Rather than trying to give everyone everything, constraining the tool to specific use cases made it more useful. “Do everything” often means “do nothing well.”