We are seeking a highly skilled GCP Python Developer with 4+ years of experience in Python programming and data engineering. The ideal candidate has a deep understanding of production-level coding practices, including testing, object-oriented programming (OOP), and code optimization. The role requires strong expertise in big data technologies such as Spark, PySpark, Hadoop, Hive, BigQuery, and Pub/Sub to build scalable ETL pipelines. This is an exciting opportunity for an individual with a passion for cloud-native solutions and data visualization, and a drive to solve complex technical challenges.
Responsibilities:
- Utilize advanced data engineering skills, including expert-level SQL, data modeling, and query optimization, to build efficient ETL pipelines.
- Take a hands-on role in building cloud-native solutions, preferably in Google Cloud Platform (GCP) or Azure environments.
- Leverage data visualization and dashboarding techniques to effectively communicate complex data insights to stakeholders.
- Debug, troubleshoot, and implement solutions for complex technical problems, ensuring high performance and scalability.
- Continuously learn new technologies, prototype solutions, and propose innovative approaches to optimize data engineering processes.
- Collaborate with cross-functional teams to integrate data solutions across platforms and services.
Requirements:
- Proficiency in Python programming, with a strong emphasis on data engineering.
- Extensive experience with big data technologies: Spark, PySpark, Hadoop, Hive, BigQuery, and Pub/Sub.
- Expertise in SQL, data modeling, and query optimization for large-scale data processing.
- Experience with data visualization and dashboarding tools.
- Strong debugging and problem-solving skills to resolve complex technical issues.
- Ability to work independently, learn new technologies, and prototype innovative solutions.