Full time, London or UK-based.
About us
BeZero Carbon is a global ratings agency for the Voluntary Carbon Market. We distribute our ratings via our SaaS product, BeZero Carbon Markets, informing all market participants on how to price and manage risk. Our ratings and research tools support buyers, intermediaries, investors, and carbon project developers.
Founded in April 2020, our 170+ strong team combines climate and earth sciences, sell-side financial research, earth observation, machine learning, data and technology, engineering, and public policy expertise.
We work from four continents. Having raised a significant Series B funding round in late 2022, we are rapidly growing as a company, accelerating the Net-Zero transition through ratings.
Job Description
BeZero is looking for a senior data engineer to join our data products and tooling team, which sits within the broader data organisation. The team is focused on developing carbon offset-related data products for our clients, as well as building internal data tools that increase the efficiency of our Ratings teams.
You'll be responsible for building data products and tools that directly shape the way our ratings teams analyse carbon offset projects. This is a cross-functional role: you will work with colleagues from our product, ratings, and software engineering teams every day.
To give you a flavour of the kind of work this team does, here are some projects members of our team have been working on recently:
- Developing robust back-end API services that power our in-house central data portal, enabling ratings analysts to access prepared and curated data essential for evaluating carbon offset projects.
- Introducing an in-house knowledge management tool with generative AI capabilities, helping ratings analysts navigate the large amounts of unstructured document data that exist in the carbon market.
- Designing cross-team data flows and service architectures to deliver data consistently to our client-facing platform.
- Deploying a system of web crawlers to aggregate carbon project related data into our data warehouse, alongside developing a standardised data model so the data can be used internally and displayed on our client-facing platform.
If you’re excited by problems like these and by making impactful contributions to data in the climate space, then we’re looking for you.
Tech stack
As a data team, we have a bias towards shipping products, staying close to our internal and external customers, and end-to-end ownership of our infrastructure and deployments. This is a team that follows software engineering best practices closely. Our data stack includes the following technologies:
- AWS serves as our cloud infrastructure provider.
- Snowflake acts as our central data warehouse for tabular data. AWS S3 stores our geospatial raster data, and we use PostGIS for storing and querying geospatial vector data.
- We use dbt for building SQL-style data models and Python jobs for non-SQL data transformations.
- Our computational jobs are executed in Docker containers on AWS ECS, and we use Prefect as our workflow orchestration engine.
- GitHub Actions powers our CI/CD.
- Metabase serves as a dashboarding solution for end-users.
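To give a concrete flavour of the ingest-transform-load pattern these pipelines follow, here is a minimal sketch in plain Python. All names and data are hypothetical for illustration; in production this logic would live in dbt models and Prefect-orchestrated jobs writing to Snowflake, not in a single script.

```python
import json


def extract(raw: str) -> list[dict]:
    # Hypothetical extract step: parse raw project records.
    # In production this would pull from an external API or S3.
    return json.loads(raw)


def transform(records: list[dict]) -> list[dict]:
    # Standardise fields into a common data model before loading,
    # e.g. renaming keys and normalising the registry name.
    return [
        {"project_id": r["id"], "registry": r["registry"].lower()}
        for r in records
    ]


def load(records: list[dict]) -> int:
    # In production: write the rows to the data warehouse.
    # Here we simply report how many rows would be loaded.
    return len(records)


raw = '[{"id": "VCS-123", "registry": "Verra"}]'
print(load(transform(extract(raw))))  # prints 1
```

The three-step split mirrors how orchestrated pipelines are usually structured: each stage is an independently testable unit that the workflow engine can retry or schedule on its own.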
We are a remote-friendly company and many of our colleagues work fully remotely; however, for this position, we will only consider applications from candidates based in the UK. If you live in or near London, you are welcome (but not required!) to work from our London office.
Responsibilities:
- You will be an individual contributor in our data engineering team, focused on designing and building robust data pipelines for the ingestion and processing of carbon offset-related data.
- You will contribute to and maintain our analytical data models in our data warehouse.
- You will work with our product engineering teams to architect robust data flows, systems, and APIs to deliver data to our internal and external customers.
- You will work with other teams in the business to enable them to be more efficient, by building data tools and automations.
You’ll be our ideal candidate if:
- You care deeply about the climate and carbon markets and are excited by solutions for decarbonising our economy.
- You are a highly collaborative individual who wants to solve problems that drive business value.
- You have at least 5 years of experience building ELT/ETL pipelines in production for data engineering use cases, using Python and SQL.
- You have hands-on experience with workflow orchestration tools (e.g., Airflow, Prefect, Dagster), containerisation using Docker, and a cloud platform like AWS.
- You can write clean, maintainable, scalable, and robust code in Python and SQL, and are familiar with collaborative coding best practices and continuous integration tooling.
- You are well-versed in code version control and have experience working in team setups on production code repositories.
- You’ve designed back-end services and deployed APIs yourself, ideally using a framework like FastAPI.
- You have experience deploying and maintaining cloud resources in production using tools such as AWS CloudFormation, Terraform, or others.
- You have ambitions to grow into a technical leadership role, and are willing to take on line management responsibilities for 1-2 engineers.
Our interview process:
- Initial screening interview with recruiter (15 mins)
- Introduction call with senior team lead (30 mins)
- Two technical interviews with members of the data engineering team (60-90 mins each)
We value diversity at BeZero Carbon. We need a team that brings different perspectives and backgrounds together to build the tools needed to make the voluntary carbon market transparent. We are therefore committed to not discriminating based on race, religion, colour, national origin, sex, sexual orientation, gender identity, marital status, veteran status, age, or disability.