Who We Are
Perchwell is the premier workflow software and data platform for real estate professionals and consumers. Built on the industry’s foundational data, Perchwell’s modern software suite empowers real estate professionals to do their best work, provide differentiated service to their clients, and grow their businesses. Backed by Founders Fund, Lux Capital, and some of the country’s leading Multiple Listing Services (MLSs), Perchwell builds next-generation workflow software and data products for the multi-trillion-dollar residential real estate industry.
Perchwell is the first new entrant to come to market in decades and is currently scaling its best-in-class platform.
Position Overview:
Perchwell’s mission is to become the fastest-growing MLS workflow and data platform in the country. Data is core to that mission, and we are looking for a Senior Data Engineer to lead our data engineering initiatives, from building a data lake and warehouse solution to scaling our existing data infrastructure as we onboard several new MLSs in the coming months. As a Senior Data Engineer, you’ll collaborate with cross-functional teams, including Data Insights, Product, Design, and other engineering teams, to build robust data solutions that help Perchwell become the best-in-class MLS workflow and data platform.
We’re a small but growing team, and as a foundational member you’ll have the opportunity to shape the standards, culture, and values of the data engineering team.
What You’ll Do:
Build out ETL tooling and data pipelines, ingesting data into our system from third-party data sources and APIs
Design and implement automated data governance measures to improve data quality and observability
Build out team processes and culture around ownership and accountability
Partner with the Data Analyst team on analyses, dashboards, and data quality assessments
What You’ll Need:
5+ years of experience in data engineering, including experience with Python, SQL, or Kotlin
Experience building scalable, fault-tolerant data pipelines with data originating from third-party APIs for batch and real-time use cases
Expertise with an ETL scheduler such as Airflow (preferred), Dagster, Prefect, or a similar framework
Experience with cloud architecture (preferably AWS) and technologies including S3, SQS, RDS, EMR, Glue, Athena, and Lambda
Experience working with data warehouses: Snowflake (preferred), Redshift, or Google BigQuery
Experience building CI/CD pipelines using GitLab, GitHub Actions, Terraform, or Jenkins
Familiarity with microservices architecture and cloud data lake implementations
Excellent written and oral communication skills, with a demonstrated ability to collaborate effectively with cross-functional teams
In this role, you’ll work out of our New York City office in SoHo, Manhattan, at least 3 days/week.
Bonus points for the following:
Certifications in AWS, Snowflake, or Elasticsearch
Ruby on Rails experience
Compensation:
To provide greater transparency to candidates, we share base salary ranges for all US-based job postings regardless of state. Our ranges are based on function and level, benchmarked against similar-stage growth companies. Final offer amounts are determined by multiple factors, including skills, job-related knowledge, and depth of work experience. The compensation for this position is $160K - $190K base salary + equity + benefits.
Note:
At this time, we are only considering candidates who are authorized to work in the U.S.