Company Description
Yieldmo is an advertising technology company that operates a smart exchange that differentiates and enhances the value of ad inventory for buyers and sellers. As a leader in contextual analytics, real-time technology, and digital formats, we create, measure, model, and optimize campaigns for unmatched scale and performance. By understanding how each unique impression behaves and looking for patterns and performance in real time, we can drive real performance gains without relying on audience data. Yieldmo is a fully distributed, global company that gives employees the opportunity to activate their entrepreneurial side.
With about 150 employees, we are well-positioned for success in the new phase of adtech innovation. We firmly believe that each person we bring onto our team can make an impact.
What You Can Expect In This Role
As a member of the Yieldmo Data Team, you will build innovative data pipelines that support Extract, Transform, Load (ETL) operations and analyze our large user datasets (250+ billion events per month) to report unique insights. You are expected to demonstrate leadership early on in this role, as you will be challenged to develop solutions with few instructions or limited documentation. You will be expected to become proficient in developing complex business rules in SQL and in building data pipelines on AWS cloud infrastructure, coding in Python and using services such as ECS, EventBridge, Airflow, and EC2.
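To give a flavor of this kind of work, here is a minimal, hypothetical sketch of an Airflow DAG orchestrating a Python ETL job; the DAG, task, and function names are illustrative placeholders and do not describe Yieldmo's actual pipelines.

```python
# Illustrative sketch only: a hypothetical Airflow DAG that extracts hourly
# event data, transforms it in Python, and loads it into a warehouse.
# All names (dag_id, tasks, functions) are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_events(**context):
    # Placeholder: pull raw event files (e.g. from S3) for the run's hour.
    ...


def transform_events(**context):
    # Placeholder: apply business rules and aggregations in Python.
    ...


def load_to_warehouse(**context):
    # Placeholder: load transformed data into a warehouse such as Snowflake.
    ...


with DAG(
    dag_id="hourly_event_etl",          # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@hourly",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_events)
    transform = PythonOperator(task_id="transform", python_callable=transform_events)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)

    # Simple linear ETL dependency chain.
    extract >> transform >> load
```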
Requirements
- BS or higher degree in computer science, engineering or other related field
- 5+ years of SQL coding experience
- MUST have experience with SQL-based data analysis on multidimensional, large-volume Big Data systems such as Snowflake
- 3+ years of experience developing in Python to transform large datasets on distributed and clustered infrastructure
- 5+ years of experience engineering ETL data pipelines for cloud-based Big Data systems
- MUST have strong proficiency in SQL, including experience writing complex SQL queries that perform data transformations
- Comfortable building analyses of multidimensional datasets and sharing business insights
- Logical and efficient approach to troubleshooting when investigating data quality issues
- Prior experience designing and building ELT pipelines on cloud infrastructure involving streaming systems such as Spark and AWS services such as ECS, EKS, EventBridge, EMR, AWS Glue, and Airflow
- Comfortable juggling multiple technologies and high-priority tasks
Hiring Process
Select candidates will be invited to schedule a 30-minute screening call with a member of our Talent Acquisition team, during which we will discuss the hiring process in detail. The hiring process typically includes, but is not limited to:
- Two 60-minute code-pairing rounds in which the candidate writes solutions in code, in both Python and SQL
- One 60-minute design session in which the candidate presents data systems design options to solve a proposed big data problem
- A 60-minute interview with the Hiring Manager
- All of the above rounds are conducted via video interview
Perks
- Fully remote workplace
- Generous employer contribution to Health Benefit premiums
- Work/life balance: flexible PTO, competitive compensation packages, Summer Fridays & much more
- 1 Mental Escape (ME) day each quarter to fully unplug and recharge
- Dedicated staff committed to diversity and inclusion
- An allowance to help you upgrade your home office