Building an Internal Analytics Dashboard: Why We Needed It and How It Came Together


At Factors.ai, we manage more than 4,000 customer projects. Each project generates a wealth of data — events, user sessions, integrations, CRM syncs, background jobs, and more. While our platform provides customers with great insights, I realized we lacked a unified internal view to monitor the health of all these projects at scale.

Our Customer Success and Engineering teams needed answers to questions like:

  • Which projects are experiencing data collection failures?
  • How many accounts are being identified daily across our customer base?
  • Are CRM integrations syncing properly?
  • Which background jobs are failing and why?

Previously, this meant jumping between multiple tools, manually querying databases, and piecing together information. I decided to build a single source of truth — and that’s how the Factors.ai Project Health Dashboard was born.


The Solution: A Modern Analytics Dashboard

I built a comprehensive internal dashboard that consolidates project health metrics, user activity, system status, and integration health into one place.

The Architecture

The system is composed of three main components:

1. React Frontend (TypeScript)
A modern, responsive web app built with React 19, TypeScript, and styled-components. It features:

  • Google Authentication restricted to @factors.ai emails
  • Interactive charts powered by Chart.js
  • Real-time KPI cards with trend indicators
  • Dark-themed UI for comfortable all-day use
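
The trend indicators on those KPI cards boil down to a simple period-over-period comparison. A minimal sketch of how such a helper might look (the `computeTrend` name, the 0.5% "flat" threshold, and the return shape are illustrative assumptions, not the actual dashboard code):

```typescript
// Hypothetical helper: compute the trend arrow shown on a KPI card by
// comparing the current period's value against the previous period's.
type Trend = { direction: "up" | "down" | "flat"; changePct: number };

function computeTrend(current: number, previous: number): Trend {
  // Guard against division by zero when the previous period had no data.
  if (previous === 0) {
    return { direction: current > 0 ? "up" : "flat", changePct: 0 };
  }
  const changePct = ((current - previous) / previous) * 100;
  // Treat sub-0.5% movement as noise rather than a real trend (assumed cutoff).
  if (Math.abs(changePct) < 0.5) return { direction: "flat", changePct };
  return { direction: changePct > 0 ? "up" : "down", changePct };
}
```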

2. Node.js/Express Backend (API Server)
A backend API deployed on Google Cloud Run that:

  • Aggregates data from BigQuery and the live Factors.ai Admin API
  • Handles authentication and session management
  • Provides RESTful endpoints with Swagger documentation
  • Implements proper rate limiting, logging, and error handling
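
The rate limiting mentioned above can be thought of as a token bucket: each client gets a burst allowance that refills at a steady rate. A self-contained sketch of the idea (the class name and refill numbers are illustrative; the real backend may well use an off-the-shelf Express middleware instead):

```typescript
// Minimal token-bucket rate limiter. Requests spend one token; tokens
// refill continuously up to a fixed burst capacity.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,     // burst size
    private refillPerSec: number, // sustained requests per second
    now: number = Date.now(),
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  // Returns true if the request is allowed, false if it should be
  // rejected (e.g. with HTTP 429). `now` is injectable for testability.
  tryRemove(now: number = Date.now()): boolean {
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSec * this.refillPerSec,
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```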

3. Python Analytics Collector (Cloud Run Job)
An automated data pipeline that:

  • Runs on a schedule via Cloud Scheduler
  • Fetches analytics data from the Factors.ai API for all tracked projects
  • Stores processed data in BigQuery for historical analysis
  • Handles rate limiting, retries, and error recovery gracefully

┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│ Cloud Run Job   │    │   Factors.ai     │    │    BigQuery     │
│   (Scheduler)   │───▶│      API         │───▶│   (Analytics)   │
└─────────────────┘    └──────────────────┘    └─────────────────┘
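
The retry logic in the collector follows a standard exponential-backoff pattern around each API call. A generic sketch of that pattern (function name, attempt count, and delays are illustrative assumptions):

```typescript
// Retry a flaky async operation with exponential backoff between attempts.
// Backoff doubles each attempt: baseDelayMs, 2x, 4x, ...
async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 4,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  // All attempts exhausted; surface the last failure to the caller.
  throw lastError;
}
```

Wrapping each per-project API fetch in a helper like this is what lets a 4,000-project run survive transient timeouts without aborting the whole job.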

Key Technical Decisions

Live Data + Historical Data

I adopted a hybrid approach: the dashboard prioritizes live data from the Factors.ai Admin API for current metrics while using BigQuery for historical trends. This ensures users always see up-to-date information while still being able to analyze 30-day trends.
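
In practice, the hybrid read path amounts to overlaying the live value onto the batch-collected series so the most recent point is never a day stale. A sketch under assumed types and field names (the real schema is not shown here):

```typescript
// A daily metric point; `date` is an ISO date string like "2024-01-02".
type DailyPoint = { date: string; value: number };

// Overlay the live Admin API value onto the BigQuery history: the live
// point replaces any batch-collected point for the same day.
function mergeLiveIntoHistory(
  history: DailyPoint[], // batch-collected rows from BigQuery
  live: DailyPoint,      // fresh value from the Admin API
): DailyPoint[] {
  const merged = history.filter((p) => p.date !== live.date);
  merged.push(live);
  merged.sort((a, b) => a.date.localeCompare(b.date));
  return merged;
}
```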

BigQuery for Analytics Storage

BigQuery was the natural choice for storing analytics data — it scales effortlessly with 4,000+ projects and allows complex aggregations on millions of rows without infrastructure headaches.
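
To give a feel for the kind of aggregation this enables, here is the sort of 30-day rollup BigQuery handles cheaply at this scale. The dataset, table, and column names below are hypothetical, not the dashboard's actual schema:

```typescript
// Illustrative BigQuery Standard SQL: events per project per day over the
// trailing 30 days. Dataset/table/column names are assumptions.
const thirtyDayTrendQuery = `
  SELECT
    project_id,
    DATE(event_timestamp) AS day,
    COUNT(*) AS events
  FROM \`internal_analytics.project_events\`
  WHERE event_timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
  GROUP BY project_id, day
  ORDER BY project_id, day
`;
```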

Cloud Run for Everything

Both the frontend and backend are containerized and deployed on Cloud Run. This gave me:

  • Zero infrastructure management
  • Automatic scaling based on traffic
  • Pay-per-use pricing
  • Easy CI/CD via GitHub Actions

Domain-Restricted Authentication

Security was a priority. I integrated Google OAuth but restricted it to @factors.ai email addresses only. New team members are automatically provisioned when they first sign in.
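
After Google verifies the ID token, the domain restriction itself is a small check on the token's claims. A sketch of that check (the payload shape mirrors standard Google ID-token claims; the auto-provisioning step is elided):

```typescript
// Relevant Google ID-token claims for the domain check.
interface IdTokenPayload {
  email?: string;
  email_verified?: boolean;
  hd?: string; // Google Workspace "hosted domain" claim
}

const ALLOWED_DOMAIN = "factors.ai";

// Accept only verified @factors.ai accounts. Prefer the `hd` claim when
// present; otherwise fall back to the email suffix.
function isAllowedUser(payload: IdTokenPayload): boolean {
  if (!payload.email || payload.email_verified === false) return false;
  if (payload.hd) return payload.hd === ALLOWED_DOMAIN;
  return payload.email.endsWith(`@${ALLOWED_DOMAIN}`);
}
```

Checking `hd` rather than just the email suffix guards against lookalike addresses from consumer accounts, since Google only sets `hd` for genuine Workspace domains.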


Why This Matters

This dashboard has transformed how our teams operate:

  1. Faster Incident Response — When a project has data collection issues, the team knows immediately rather than waiting for a customer complaint.
  2. Proactive Customer Success — CS teams can spot projects with declining engagement before it becomes churn.
  3. Engineering Visibility — Engineers can identify systemic issues (like a failing integration type) across all projects at once.
  4. Data-Driven Decisions — With 30-day trend data, the team can track the impact of platform changes on customer projects.

Lessons Learned

  • Start with the questions, not the data. I mapped out exactly what questions each team needed answered before designing schemas or UIs.
  • Hybrid data sourcing works. Mixing live API data with batch-collected BigQuery data gave me the best of both worlds.
  • Invest in error handling. When you’re processing 4,000+ projects, graceful failure handling and retry logic aren’t optional — they’re essential.
  • Internal tools deserve good UX too. A well-designed interface means higher adoption across teams.

What’s Next

I continue to iterate on the dashboard based on team feedback. Recent additions include job duration tracking, CRM sync backlog visualization, and enhanced failure alerting. The modular architecture makes it easy to add new data sources and visualizations as needs evolve.

Building internal tools might not be as glamorous as building customer-facing features, but the productivity gains they unlock are invaluable. This dashboard is now an essential part of daily operations at Factors.ai.


Built with React, Node.js, Python, BigQuery, and Google Cloud Run.
