[Your Name] · [Email] · [Phone] · [City, ST]
April 21, 2026
Dear Hiring Manager,
I'm writing to apply for the Senior Data Scientist role on your Growth team. Your recent post about moving from last-touch attribution to a multi-touch Markov model resonated — I built a similar system at Instacart last year, and the part that nobody talks about (reconciling the model's output with finance's definition of revenue) is exactly what I'd want to work on next.
At Instacart I owned a customer lifetime value model that informed quarterly paid-acquisition budgeting. The first version was a gradient-boosted LTV model (R² = 0.84 on a 90-day holdout) trained on 8M users and 140+ behavioral and transactional features in BigQuery. The harder work was making it trustworthy to non-technical stakeholders: I built a SHAP-based explanation layer so the marketing team could see, per segment, which features were driving predictions. That one change is what got the model adopted — it let us reallocate $1.2M in quarterly ad spend toward the top LTV decile and lifted ROAS by 38% on a flat budget.
Before Instacart I spent three years at a Series B fintech startup (Mosaic) as their second data scientist. I built the first real-time fraud detection system — a PyTorch-based LSTM serving 500K daily transactions at 97.2% precision and 120ms p99 latency — and, more importantly, set up the MLflow and feature-store stack that the next four data scientists built on top of. I mention this because I think the biggest leverage in a data science team isn't the next model; it's the infrastructure that lets the team ship models in days instead of months. That's what I'd want to work on in my first six months at your company.
I'd welcome the chance to walk through the Instacart LTV model architecture — including the parts that didn't work — and to hear where your team currently spends its time on experimentation versus production. I'm happy to share a write-up of the SHAP explanation layer as a first discussion point.
Sincerely,
[Your Name]