At H1, we believe access to the best healthcare information is a basic human right. Our mission is to provide a platform that can optimally inform every doctor interaction globally. This promotes health equity and builds needed trust in healthcare systems. To accomplish this, our teams harness the power of data and AI technology to unlock groundbreaking medical insights and convert those insights into actions that result in optimal patient outcomes and accelerate an equitable and inclusive drug development lifecycle. Visit h1.co to learn more about us.

Product Engineering plays a pivotal role in developing and delivering our consumer-facing applications. Our team processes data from upstream sources, constructs robust data pipelines to ensure information remains current, and transforms this data for downstream use, providing our users with actionable, exportable insights. Our applications serve as gateways to our extensive knowledge base, which includes millions of healthcare professional profiles and their affiliations.
The Product Engineering team is responsible for building these applications that enable our customers to search for and visualize information, facilitating quick, business-critical decisions. As we continue to expand into new markets and deepen the breadth of data we collect for our customers, it is essential that our team grows and scales accordingly to meet these increasing demands.
WHAT YOU'LL DO AT H1
As a product engineer on the Datafeeds and Pipeline team, you will be integral to designing, building, and maintaining efficient data pipelines that manage large volumes of data from various sources, ensuring high data quality and seamless flow within an AWS environment. You'll be responsible for writing production-grade pipelines using big data technologies to transform large datasets into file formats ready for clients to ingest. You'll manage projects across all stages, including application deployment, to deliver the most scalable, stable, and highest-quality healthcare data application on the market.

You will:
- Be responsible for product features related to data transformations, enrichment, and analytics
- Work closely with internal stakeholders to gather requirements and deliver solutions, while effectively communicating progress and tracking tasks to meet project timelines
- Design, build, and maintain efficient and reliable data pipelines to ingest, process, and distribute large volumes of data. This involves working with various data formats and sources, ensuring data quality, and managing data flow seamlessly between systems
- Implement and manage APIs and other integration methods to connect disparate systems and enable smooth data exchange. This might include integrating third-party data sources into the existing ecosystem
- Apply business logic to raw data to transform it into a usable format. This includes cleaning, aggregating, and manipulating data to meet specific business needs and to make it suitable for analysis and reporting
- Continuously monitor and optimize the performance of data pipelines and systems to handle increasing volumes of data efficiently. This may involve refining data storage practices, optimizing queries, and leveraging data caching strategies
- Provide technical support for data-related issues, including troubleshooting data pipeline failures and performance bottlenecks, as well as implementing fixes and enhancements to prevent future problems
- Implement security measures to protect data integrity and privacy, and ensure compliance with data governance and legal regulations pertaining to data handling and processing
ABOUT YOU
As a self-starter, you excel in managing projects across all stages, from requirement gathering and design to coding, testing, implementation, and ongoing support. You possess robust hands-on technical expertise encompassing both conventional and non-conventional ETL methodologies, alongside proficiency in T-SQL, Spark-SQL, and Spark-Scala. Your proactive approach and diverse skill set make you an invaluable asset in driving innovation and delivering impactful solutions within our dynamic Product Engineering team.

- You possess a deep understanding of various distributed file formats, such as Apache Avro and Apache Parquet, and of common methods in data transformation
- You have experience performing root cause analysis on internal and external data and processes to answer specific business questions and find opportunities for improvement
- You have a proven ability to write clean code that is easy to maintain
REQUIREMENTS
- 3+ years of experience working with big data technologies and deploying products on AWS EMR
- Strong coding skills in Scala (preferred) or Python to support large-scale data processing
- Familiarity with data engineering pipeline tools such as Apache Airflow
- Experience with Docker, Kubernetes, or Terraform
- Experience with databases such as PostgreSQL
- Experience with software management tools such as Git, JIRA, and CircleCI
- Familiarity with Elasticsearch, ETL API design, or customer-facing data delivery a plus
COMPENSATION
This role pays $120,000 to $135,000 per year, based on experience, in addition to stock options.

Anticipated role close date: 10/05/2024
H1 OFFERS
- Full suite of health insurance options, in addition to generous paid time off
- Pre-planned company-wide wellness holidays
- Retirement options
- Health & charitable donation stipends
- Impactful Business Resource Groups
- Flexible work hours & the opportunity to work from anywhere
- The opportunity to work with leading biotech and life sciences companies in an innovative industry with a mission to improve healthcare around the globe

H1 is proud to be an equal opportunity employer that celebrates diversity and is committed to creating an inclusive workplace with equal opportunity for all applicants and teammates.
Our goal is to recruit the most talented people from a diverse candidate pool regardless of race, color, ancestry, national origin, religion, disability, sex (including pregnancy), age, gender, gender identity, sexual orientation, marital status, veteran status, or any other characteristic protected by law.

H1 is committed to working with and providing access and reasonable accommodation to applicants with mental and/or physical disabilities. If you require an accommodation, please reach out to your recruiter once you've begun the interview process.
All requests for accommodations are treated discreetly and confidentially, as practical and permitted by law.