Built from 7+ years managing $500M+ CapEx portfolios, this repository takes a command-center approach to de-risking tool installations across NPI programs. It translates fragmented operational data into executive decision-making tools at the intersection of execution discipline, financial governance, and cross-functional coordination.
All data is synthetic/anonymized.
| Category | Why It Matters | Link |
|---|---|---|
| Live Dashboard | See how I visualize complex program data for leadership decision-making | Streamlit App |
| CI/CD Pipeline | Evidence of production-grade automation mindset | GitHub Actions |
| Evidence Pack | Sample executive-ready outputs I generate for leadership reviews | docs/evidence/ |
| Program Artifacts | RAID logs, decision logs, exec updates — showing operational rigor | docs/templates/ |
(High-res backup: docs/images/dashboard.pdf)
| Business Challenge | How I Solved It | Result |
|---|---|---|
| CapEx variance blind spots | Automated variance tracking by program/category/month with root-cause tagging | +$7.5M variance surfaced early across $561.8M plan |
| Readiness status ambiguity | RAG-scored readiness gates with dependency-aware critical path | 57.5% → 87.0% readiness clarity across 50 tools |
| Expedite cost leakage | Vendor-level burn analysis with driver categorization | $7.6M expedite tracked across 1,434 lines |
| Leadership reporting overhead | CI-generated evidence packs on every commit | Zero-touch exec-ready outputs |
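The variance-tracking pattern in the first row can be sketched in a few lines of pandas. This is illustrative only: the column names (`program`, `category`, `month`, `amount_usd`) and the function name are assumptions, not the repo's actual schema or API.

```python
import pandas as pd

def variance_by_program(plan: pd.DataFrame, actuals: pd.DataFrame) -> pd.DataFrame:
    """Join plan vs. actuals and surface variance by program/category/month.

    Assumes columns: program, category, month, amount_usd (hypothetical schema).
    """
    merged = plan.merge(
        actuals,
        on=["program", "category", "month"],
        suffixes=("_plan", "_actual"),
        how="outer",
    ).fillna(0.0)
    merged["variance_usd"] = merged["amount_usd_actual"] - merged["amount_usd_plan"]
    # Tag the direction so root-cause review can filter overruns first
    merged["flag"] = merged["variance_usd"].apply(
        lambda v: "overrun" if v > 0 else ("underrun" if v < 0 else "on-plan")
    )
    return merged.sort_values("variance_usd", ascending=False)
```

The outer join matters: it also surfaces spend with no plan line (and plan lines with no spend), which is where variance blind spots usually hide.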
Dataset scale: 5 programs, 50 tools, 6 categories, 6 vendors, 24 months. All synthetic CSVs live in `data/raw/`.
┌─────────────────────────────────────────────────────────────┐
│ Leadership Layer (GitHub Pages / Markdown Evidence Packs) │
├─────────────────────────────────────────────────────────────┤
│ Analytics Engine (Pandas + Plotly + Custom Logic) │
│ ├── Readiness scoring with dependency-aware critical path │
│ ├── CapEx variance analysis with forecast drift detection │
│ └── Expedite burn-down by vendor & root cause │
├─────────────────────────────────────────────────────────────┤
│ Data Layer (Synthetic CSVs → Extensible to ERP/PLM APIs) │
└─────────────────────────────────────────────────────────────┘
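The "dependency-aware critical path" in the analytics layer reduces to a longest-path computation over a task DAG. A minimal sketch, assuming tasks with durations in days and prerequisite lists (task names here are hypothetical, not the repo's actual data model):

```python
from functools import lru_cache

def critical_path(durations: dict[str, float],
                  deps: dict[str, list[str]]) -> tuple[float, list[str]]:
    """Longest-duration path through a task DAG.

    durations: task -> days; deps: task -> prerequisite tasks.
    """
    @lru_cache(maxsize=None)
    def finish(task: str) -> float:
        # A task finishes after its longest prerequisite chain plus its own duration
        prereqs = deps.get(task, [])
        return durations[task] + max((finish(p) for p in prereqs), default=0.0)

    # The critical path ends at the task with the latest finish time
    end = max(durations, key=finish)
    path, cur = [end], end
    while deps.get(cur):
        cur = max(deps[cur], key=finish)  # walk back along the slowest prerequisite
        path.append(cur)
    return finish(end), list(reversed(path))
```

Memoizing `finish` keeps this linear in the number of edges, so it scales well past 50 tools.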
Key Design Choices:
- Modular analytics (`readiness.py`, `critical_path.py`, `expedite.py`) reusable across programs

| Competency | Evidence in This Repo |
|---|---|
| Cross-functional orchestration | Integration of facilities, supply chain, and finance data models |
| Executive communication | Automated evidence packs + RAID/decision log templates |
| Financial acumen | CapEx variance analysis, forecast drift, expedite ROI tracking |
| Risk management | Critical path analysis, gate slip risk scoring, RAG statusing |
| Process automation | CI/CD pipeline for zero-touch reporting |
| Data-driven decision making | Plotly dashboards with drill-down capability |
| NPI/Operational excellence | Tool readiness gating, install → power-on → SAT tracking |
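Gate-based RAG statusing can be as simple as a weighted pass/fail rollup. A minimal sketch, assuming boolean gates with weights; the thresholds and gate names are illustrative, not the repo's actual scoring logic:

```python
def readiness_score(gates: dict[str, bool], weights: dict[str, float]) -> tuple[float, str]:
    """Weighted readiness score (0-100) with RAG banding (thresholds illustrative)."""
    total = sum(weights.values())
    score = sum(weights[g] for g, passed in gates.items() if passed) / total * 100
    if score >= 85:
        rag = "GREEN"
    elif score >= 60:
        rag = "AMBER"
    else:
        rag = "RED"
    return round(score, 1), rag
```

Weighting gates (e.g., SAT heavier than facilities sign-off) keeps the RAG color aligned with what actually blocks power-on.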
Key files:

- `app.py`
- `data/raw/` (synthetic CSVs)
- `src/analytics/readiness.py`: readiness rollups + RAG
- `src/analytics/critical_path.py`: dependency-aware critical path per tool/program
- `src/analytics/expedite.py`: vendor burn summaries

Generated by: `python -m src.tooling.generate_evidence`
Outputs to docs/evidence/:
- `readiness_score_output.md`
- `critical_path_output.md`
- `expedite_summary_output.md`
- `capex_variance_snapshot.md`
- `gate_slip_risk_output.md`

Prerequisites: Python 3.11+
```bash
# Setup
python -m venv .venv
source .venv/bin/activate   # Windows: .\.venv\Scripts\activate
pip install -r requirements.txt

# Run dashboard
streamlit run app.py

# Generate evidence pack
python -m src.tooling.generate_evidence
```
Workflow: `.github/workflows/capex_readiness_ci.yml`
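An illustrative shape for such a workflow; the actual file lives at the path above, and the step names and action versions here are assumptions:

```yaml
name: capex-readiness-ci
on: [push]
jobs:
  evidence:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - run: python -m src.tooling.generate_evidence
      - uses: actions/upload-artifact@v4
        with:
          name: evidence-pack
          path: docs/evidence/
```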
On every push, the workflow runs `python -m src.tooling.generate_evidence` and uploads `docs/evidence/**` as a CI artifact.

This repository uses synthetic/anonymized data only. In production environments, I apply strict data-governance controls.
Never commit proprietary data. This portfolio demonstrates the logic — the data layer is swappable.
| Priority | Enhancement | Business Value |
|---|---|---|
| P0 | Scenario planning module (Forecast/Commit/Stretch) | Enable “what-if” analysis for CapEx reallocation |
| P1 | Automated gate go/no-go criteria | Reduce program review prep from days to hours |
| P2 | KPI suite (OTD, lead time P95, expedite rate) | Standardize vendor performance scorecards |
| P3 | Schema validation + data quality checks | Prevent garbage-in-garbage-out in automated pipelines |
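The P3 schema-validation item could start small: check required columns, dtypes, and nulls before anything enters the pipeline. A sketch under assumed column names (`tool_id`, `program`, `amount_usd` are hypothetical):

```python
import pandas as pd

REQUIRED = {"tool_id": "object", "program": "object", "amount_usd": "float64"}

def validate_schema(df: pd.DataFrame, required: dict[str, str] = REQUIRED) -> list[str]:
    """Return human-readable schema problems; an empty list means the frame passes."""
    problems = []
    for col, dtype in required.items():
        if col not in df.columns:
            problems.append(f"missing column: {col}")
        elif str(df[col].dtype) != dtype:
            problems.append(f"{col}: expected {dtype}, got {df[col].dtype}")
    # Null checks guard against silent garbage-in-garbage-out
    for col in required:
        if col in df.columns and df[col].isna().any():
            problems.append(f"{col}: contains nulls")
    return problems
```

Failing the CI run on a non-empty problem list is the simplest way to keep bad data out of the evidence packs.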
Data & Analytics: Python · Pandas · NumPy · Plotly
App & Visualization: Streamlit · HTML/CSS
Automation & DevOps: GitHub Actions · Bash
Data Engineering: SQL (PostgreSQL-compatible) · Docker-ready
Templates and samples:

- `docs/templates/DECISION_LOG_TEMPLATE.md`
- `docs/templates/RAID_LOG_TEMPLATE.md`
- `docs/templates/WEEKLY_EXEC_UPDATE_TEMPLATE.md`
- `docs/samples/DECISION_LOG_SAMPLE.md`
- `docs/samples/RAID_LOG_SAMPLE.md`
- `docs/samples/WEEKLY_EXEC_UPDATE_2026-01-02.md`
- `docs/diagrams/system_view.md`

Repository layout:

```text
data/
  raw/              # synthetic/anonymized source data
  processed/        # rollups used by charts
docs/
  data_dictionary/  # column-level documentation
  diagrams/         # system views
  evidence/         # auto-generated outputs
  images/           # screenshots / preview PDF
  samples/          # program artifacts
  templates/        # program templates
src/
  analytics/        # readiness, critical path, expedite logic
  tooling/          # evidence generation scripts
  utils/            # IO helpers
app.py              # Streamlit dashboard
.github/            # CI workflow
```
This is a demonstration project for portfolio purposes. To extend it, swap the synthetic CSVs in `data/raw/` for your own data sources; the analytics layer stays unchanged.
| Sourabh Tarodekar | CapEx Program Management · NPI Operations · Portfolio Analytics |
LinkedIn · Email · Full Portfolio
MIT License. See the LICENSE file for details.