At Kalibri Labs, we are helping to redefine and rebuild the way performance metrics are viewed in the hotel industry. We are looking for passionate, energetic, and hardworking people with an entrepreneurial spirit who dream big and challenge the status quo. We are working on cutting-edge solutions for the industry as it navigates the recovery process, using our big data coupled with machine learning and AI to help highlight the path forward. Kalibri Labs is growing, so if you’re ready to make a difference and apply your talents across a groundbreaking organization, please keep reading!
We are looking for a Senior Data Engineer who is passionate about building highly scalable data processing pipelines. The engineer will work on a data team to add new and varied data sources to our warehouse, continuously improve our pipeline to ensure high availability of data to our customers, and ensure the data we deliver is of the utmost integrity.
Responsibilities
- Advance Kalibri’s mission through the design, development, and deployment of scalable, high-quality data pipelines that improve efficiency and deliver value to the organization
- Mentor data engineers across the data lifecycle, including modeling, transformation, storage, development, testing, and deployment
- Continuously monitor and optimize the data processing pipeline
- Define data SLAs and work with a cross-functional team to execute against them across the full data lifecycle
- Build automated quality tests and monitors that ensure availability, consistency, and accuracy
- Lead technical communication on a cross-functional data team
- Participate in code reviews and design sessions within an Agile process paradigm
- Maintain and operate continuous build, test, and integration pipelines
- Contribute to a high velocity team culture
- Work closely with the data architect to design and deliver scalable data solutions
Skills & Requirements
- 8+ years of experience on a development team contributing to all parts of a modern data processing pipeline
- Demonstrated ability to deliver major data pipelines on a project plan and schedule
- Strong background in modern data warehouse technologies such as Snowflake, Databricks, or BigQuery
- Expert SQL and Python programming in a production context
- Experience with query performance tuning, monitoring, backup, disaster recovery, automated testing, automated schema migration, and/or continuous deployment
- Production experience with SQL-based transformation frameworks such as dbt
- Experience building and operating modern data orchestration and automation systems
- Knowledge of AWS serverless technologies, the AWS security model, and/or data analytics services preferred (e.g., Lambda, Fargate, Redshift)
- Proven ability to independently solve ambiguous problems and demonstrate ownership of a production domain
- Production experience with Spark (Scala) preferred
- Bachelor’s degree in Computer Science or related discipline or equivalent experience
Applicants from California, Alaska, and Hawaii are not eligible for hire.
Salary Range: $130,000 - $150,000 based on experience, background, and location.
*Eligible to participate in the company bonus program
Company Benefits
- All positions are fully remote.
- We cover 75% of the cost of medical insurance premiums for employees & their dependents, with three medical plans through Blue Cross Blue Shield.
- We cover 75% of the cost of dental and vision insurance premiums for employees & their dependents.
- We match 50% of the first 6% of the employee's 401(k) contribution (for a maximum match of 3%).
- We also provide $50,000 of life insurance, long-term disability coverage, and an Employee Assistance Program.
- All new hires receive a $250 allowance to help them set up their home office.