Senior Data Platform Engineer
Postgres, Iceberg, dbt, metric layers. You think reverse-ETL is underrated and feature stores are oversold. You want to build data platforms that engineers, not just analysts, depend on.
About the role
You'll be the second senior data platform engineer on a four-person practice. Your work splits across two anchor clients (one fintech analytics platform, one regulated lakehouse) and our internal data infrastructure (which we publish patterns from). Roughly 60% client work, 30% pattern and tooling work, 10% writing.
Senior data work here means owning the system end to end: ingestion, modeling, the metric layer, the dashboards, and the freshness SLOs. We do not split data engineering and analytics engineering into separate roles.
What you'll do
- Design and operate lakehouse architectures: CDC out of Postgres, Iceberg on S3, partitioning that survives growth.
- Build dbt models that engineers will read and trust. Tests, exposures, freshness, and a metric layer the BI team can't bypass.
- Stand up data observability. Freshness alerts, lineage, schema-drift detection wired into the same alert pipeline the platform team uses.
- Drive reverse-ETL where it earns its keep: write back to operational systems with the same care as forward pipelines.
- Contribute to our public data patterns. Real teams reuse them.
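The schema-drift detection mentioned above can be sketched as a minimal check: compare a table's live columns against a recorded contract and flag additions, removals, and type changes. This is an illustrative sketch, not our actual tooling; all table, column, and function names are hypothetical.

```python
# Minimal schema-drift check (illustrative). Compares a recorded schema
# contract against the columns actually observed in the warehouse.

def diff_schema(expected: dict[str, str], actual: dict[str, str]) -> dict[str, list[str]]:
    """Return added, removed, and retyped columns between two schemas.

    Each schema maps column name -> type name.
    """
    added = sorted(set(actual) - set(expected))
    removed = sorted(set(expected) - set(actual))
    retyped = sorted(c for c in set(expected) & set(actual) if expected[c] != actual[c])
    return {"added": added, "removed": removed, "retyped": retyped}

# Hypothetical example: an upstream team renamed a column and narrowed a type.
expected = {"id": "bigint", "email": "text", "created_at": "timestamptz"}
actual = {"id": "bigint", "email_address": "text", "created_at": "timestamp"}

drift = diff_schema(expected, actual)
if any(drift.values()):
    # In production this result would feed the shared alert pipeline.
    print(f"schema drift detected: {drift}")
```

In practice a check like this runs per table on a schedule, with the "expected" side sourced from the data contract and the "actual" side from the warehouse's information schema.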
Who you are
- 5+ years owning production data platforms (warehouse or lakehouse).
- Strong SQL fundamentals. You've debugged a query plan you didn't expect.
- Hands-on with dbt or an equivalent transformation framework, plus at least one orchestrator (Airflow, Dagster, Prefect).
- Comfortable with Python or Go for ingestion and tooling work.
- US-based, eligible to work without sponsorship.
Bonus, not required
- Iceberg, Delta, or Hudi production experience.
- Experience with reverse-ETL or operational analytics platforms (Hightouch, Census, or homegrown).
- You've designed a data contract between teams that survived for more than 18 months.
- Public writing or talks on data engineering tradeoffs.
Interview process
- Application: resume + GitHub + a short paragraph. ~10 minutes for you, 30 for us.
- Engineering chat, 60 min, paired on a real query plan or pipeline trace.
- Take-home, paid, ~6 hours, building a small dbt model on a public dataset.
- Team day, 4 hours: design review, schema-modeling exercise, peer Q&A.
- Offer, within 48 hours of team day.
We pay for the take-home at $150/hr. If you turn down the offer, you keep the work and the payment.
Compensation & benefits
Salary band $180,000 to $225,000, plus 0.05 to 0.15% equity. We share comp ranges in the job ad because making you guess is an asshole move.
- Platinum medical, dental, vision, 100% premium covered for you
- 5 weeks PTO, 13 federal holidays, end-of-year shutdown
- $2,500 home-office sign-on, $750/yr maintenance
- $3,000/yr learning budget
- 10% open-source time