Published October 27, 2025 | Amy Bray | Financial Operations & Automation
💡
Why I Built This (The Real Version)
I love structure, but my Tuesdays did not. The work was important—people's paychecks depended on it—but the way we did the work felt brittle. Fifteen report formats. A dozen small judgment calls. Late-day math when your brain wants tea and a walk.
The Breaking Point: The night I found a 27% YoY trend after three hours of Excel pivots, I felt equal parts proud and annoyed. Proud because: cool insight. Annoyed because: why was it so hard to see what was already there? That was the moment I decided I didn't want to be the hero with a spreadsheet cape anymore. I wanted the system to be the hero.
After managing $80M+ in payroll operations, I'd spent enough evenings lost in Excel archaeology. The data was there. The insights were there. But getting to them felt like solving a puzzle every single time.
"What's our month-over-month growth by department?"
"Give me 30 minutes and a pivot table..."
That 30-second question shouldn't take 30 minutes to answer. So I built the tools I'd been wishing for.
🚀
What I Built: Two Tools, One Goal
$80M+: Real ops experience
Hours → Min: Time transformation
🔄 Data Pipeline Pro
Problem: 15 vendor formats → 1 billing format. Manual transformation = hours wasted, errors guaranteed.
Solution: Ingests Excel/CSV, validates everything, transforms intelligently, exports clean data. Upload → Process → Export. Done.
📊 GSV Tracker Pro
Problem: "How's growth looking?" shouldn't require 3 hours of custom analysis.
Solution: Instant MoM/QoQ/YoY metrics, department performance at a glance, trend detection, and interactive charts ready for exec decks.
The Goal: Tiny, opinionated tools tuned for ops reality. Not a "universal data platform" that needs weeks of configuration. Just tools that work.
🔧
Data Pipeline Pro: The Details
What It Does
- Ingests anything: Excel, CSV, multiple sheets—handles them all
- Validates automatically: Catches duplicates, missing fields, totals drift
- Transforms intelligently: Column mapping, derived fields, standard types
- Exports clean data: Billing format, payment format, executive rollups
Real-time, not batch. Upload → process → see results. No job queues, no waiting.
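As a sketch, the transform step boils down to renaming one vendor's columns into the standard schema and computing derived fields. The column names, mapping, and `amount` field below are hypothetical illustrations, not the tool's actual schema:

```python
import pandas as pd

# Hypothetical mapping from one vendor's headers to a standard billing schema
COLUMN_MAP = {"Emp #": "employee_id", "Bill Rate": "rate", "Hrs": "hours"}

def to_billing_format(raw: pd.DataFrame) -> pd.DataFrame:
    """Rename vendor columns, coerce types, and add derived fields."""
    df = raw.rename(columns=COLUMN_MAP)
    df["rate"] = pd.to_numeric(df["rate"], errors="coerce")
    df["hours"] = pd.to_numeric(df["hours"], errors="coerce")
    df["amount"] = df["rate"] * df["hours"]  # derived field
    return df[["employee_id", "rate", "hours", "amount"]]
```

Each vendor format gets its own mapping; the rest of the pipeline only ever sees the standard columns.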
Why Validation Matters
Bad data in = bad insights out. The pipeline automatically checks for:
- Null values and their impact
- Duplicate entries that would cause billing errors
- Missing required columns
- Rate/hour outliers that need review
That little "Errors (3)" badge next to the upload? Instant trust. You know exactly what's wrong before it becomes a problem downstream.
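The checks above can be sketched in a few lines of pandas. Everything here is illustrative: the required column names and the 3-sigma outlier threshold are assumptions, not the pipeline's actual rules.

```python
import pandas as pd

REQUIRED = ["employee_id", "rate", "hours"]  # hypothetical required columns

def validate(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable validation errors (empty = clean)."""
    errors = []
    missing = [c for c in REQUIRED if c not in df.columns]
    if missing:
        errors.append(f"Missing required columns: {missing}")
        return errors  # row-level checks need those columns
    nulls = df[REQUIRED].isna().sum()
    for col, n in nulls[nulls > 0].items():
        errors.append(f"{n} null value(s) in '{col}'")
    dupes = df.duplicated(subset=["employee_id"]).sum()
    if dupes:
        errors.append(f"{dupes} duplicate employee row(s)")
    # Flag rates more than 3 standard deviations from the mean for review
    rate = df["rate"]
    outliers = df[(rate - rate.mean()).abs() > 3 * rate.std()]
    if len(outliers):
        errors.append(f"{len(outliers)} rate outlier(s) need review")
    return errors
```

The length of that list is exactly what an "Errors (3)" badge would display.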
📈
GSV Tracker Pro: The Details
What It Does
- MoM/QoQ/YoY in one view: No more rebuilding pivot tables
- Department performance: Which teams are driving growth? See it instantly
- Trend detection: Spot spikes and dips automatically
- Interactive charts: Plotly visualizations that drop straight into decks
The analysis that took 3 hours? Now takes 30 seconds. And you can do it at 9:37 PM in dark mode because you're human.
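A minimal sketch of how those period-over-period metrics fall out of pandas, assuming one total per month (here QoQ compares against three months back and YoY against twelve):

```python
import pandas as pd

def growth_metrics(monthly: pd.Series) -> pd.DataFrame:
    """Monthly totals -> MoM/QoQ/YoY growth percentages."""
    return pd.DataFrame({
        "mom_pct": monthly.pct_change(1) * 100,   # vs. last month
        "qoq_pct": monthly.pct_change(3) * 100,   # vs. three months ago
        "yoy_pct": monthly.pct_change(12) * 100,  # vs. twelve months ago
    }).round(1)
```

One `groupby("department")` on top of this gives the per-team view.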
⚙️
The Tech Stack (and Why)
Python 3.11
Streamlit
Pandas
NumPy
Plotly
OpenPyXL
Why Python?
Fast to prototype, powerful for data transformation. Python + Pandas is the gold standard for operations data.
Why Streamlit?
I wanted something usable, not just a Jupyter notebook. Streamlit takes you from prototype to live demo in hours, not weeks. Plus file uploads, charts, and dark mode out of the box.
Why Plotly?
Because Excel charts in 2025 are sad. Plotly gives interactive, presentation-ready visualizations that actually help people understand data.
🎯
Key Design Decisions
1. Real-time, not batch
Every upload triggers immediate processing. You see results as soon as data loads. This mirrors how I actually wanted to work—upload, verify, export. Done.
2. Data validation first
Checks for nulls, duplicates, missing columns, and rate/hour outliers. Catch issues BEFORE they become problems downstream.
3. Dark mode is a requirement
Finance folks (hi, it's me) work late. A persisted toggle keeps eyes happy. Not optional.
4. Export everything
If Finance can't download it, it doesn't exist. Every view can export. Because sometimes they just want the spreadsheet.
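In the spirit of "every view can export," the export side can be as small as serializing each view to bytes; the view name below is made up. In the app, those bytes would feed something like Streamlit's `st.download_button`:

```python
import pandas as pd

def export_views(views: dict[str, pd.DataFrame]) -> dict[str, bytes]:
    """Render every view as CSV bytes, ready to hand to a download button."""
    return {
        f"{name}.csv": df.to_csv(index=False).encode("utf-8")
        for name, df in views.items()
    }
```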
🎓
What I Learned (The Messy Middle)
My first parser was too clever. It handled 12 edge cases and broke on the 13th. Simpler rules with clear errors won.
I underestimated dates. Week ending vs. pay period vs. service dates—humans improvise; code shouldn't. Added strict validation + friendly messages.
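The fix looked roughly like this: parse strictly, coerce failures to NaT, and report the offending rows in plain language. The YYYY-MM-DD format is an assumption for illustration:

```python
import pandas as pd

def parse_service_dates(raw: pd.Series) -> pd.Series:
    """Parse dates strictly; raise a friendly error listing bad values."""
    parsed = pd.to_datetime(raw, format="%Y-%m-%d", errors="coerce")
    bad = raw[parsed.isna()]
    if not bad.empty:
        raise ValueError(
            f"{len(bad)} unparseable date(s), e.g. {bad.iloc[0]!r}. "
            "Expected YYYY-MM-DD."
        )
    return parsed
```

Strictness is the point: a silently misread date is worse than a loud rejection.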
I over-designed the charts. Managers wanted three numbers and a line. I kept the pretty stuff, but defaulted to fast and obvious.
Key Lessons
- Build for the pain, not the theory. Two specific tools for two specific pain points I lived with weekly. That specificity is why they work.
- Fast iteration beats perfect planning. Streamlit let me prototype in a weekend. I got feedback from colleagues and iterated; a few weeks later, the tools were battle-tested.
- Real data is messier than you think. Test data is clean. Real vendor reports have merged cells, inconsistent dates, random blank rows. Building for the mess made it robust.
- Visual feedback is mandatory. Users trust systems that SHOW them what's happening. Progress bars, validation summaries, preview tables—essential, not nice-to-have.
📊
The Impact
2-3 hrs → 10-15 min: Routine analysis
Real-time: Leadership metrics
It's not just about saving time. It's about enabling better decisions by making data accessible when you need it. Validation catches 4:30 PM Friday mistakes. Real-time metrics mean leadership doesn't wait until Tuesday for answers. And you surface hidden growth patterns you never would have spotted manually.
🔮
What's Next
Short Term
- Custom metric builder (let users define their own KPIs)
- Scheduled reports (daily/weekly/monthly summaries)
- Multi-currency support for global operations
Long Term
- SQL backend for history + bigger files
- Team workspaces with annotations
- Predictive analytics (forecast growth based on trends)
"The best tools are built by people who've felt the pain firsthand."