
Data Engineer

Strattmont · Philippines · Remote

About Us:
We are an ecommerce-focused, end-to-end data analytics firm that helps enterprises and brands make data-driven decisions to maximize business value. Our work spans the extraction, transformation, visualization, and analysis of data, delivered through industry-leading products, solutions, and services. Our flagship product is Daton, an ETL tool. We have since ventured into building easy-to-use data visualization solutions on top of Daton. Finally, we have a world-class data team that understands the story the numbers are telling and articulates it to CXOs, thereby creating value.

Where we are Today


We are a bootstrapped, profitable, and fast-growing (2x year-over-year) startup with old-school value systems. We play in a very exciting space: the intersection of data analytics and ecommerce, both of which are game changers. Today, the global economy faces headwinds forcing companies to downsize, outsource, and offshore, creating strong tailwinds for us. We are an employee-first company that values and encourages talent, and we live by those values at every stage of our work without compromising on the value we create for our customers.

We strive to make the company a career, not just a job, for the talented folks who have chosen to work with us.

The Role


We are seeking a highly skilled and motivated Data Engineer with expertise in Python or PySpark (experience with DBT, the Data Build Tool, is a plus) to join our team. As a Data Engineer, you will play a crucial role in designing, building, and maintaining our data infrastructure in the cloud. You will collaborate with cross-functional teams to ensure data is collected, processed, and made available for analysis and reporting. If you are passionate about data engineering and cloud technologies and have strong programming skills, we invite you to apply for this exciting position.

Responsibilities:


1. Data Pipeline Development: Design, develop, and maintain scalable data pipelines that collect, process, and transform data from various sources into usable formats for analysis and reporting.
2. Cloud Integration: Leverage cloud platforms such as AWS, Azure, or Google Cloud to build and optimize data solutions, ensuring efficient data storage, access, and security.
3. Python/PySpark Expertise: Utilize Python and/or PySpark for data transformation, manipulation, and ETL processes. Write clean, efficient, and maintainable code.
4. Data Modeling: Create and maintain data models that align with business requirements, ensuring data accuracy, consistency, and reliability.
5. Data Quality: Implement data quality checks and validation processes to ensure the integrity of the data, troubleshooting and resolving issues as they arise.
6. Performance Optimization: Identify and implement performance optimizations in data pipelines and queries to ensure fast and efficient data processing.
7. Collaboration: Collaborate with data scientists, analysts, and other stakeholders to understand their data requirements and provide them with reliable data sets.
8. Documentation: Maintain thorough documentation of data pipelines, workflows, and processes to ensure knowledge sharing and team efficiency.
9. Security and Compliance: Implement security best practices and ensure data compliance with relevant regulations and company policies.

Good to Have (Preferred Skills)


• DBT (Data Build Tool): Experience with DBT for managing and orchestrating data transformations.
• Containerization and Orchestration: Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).
• Data Streaming: Knowledge of data streaming technologies (e.g., Kafka, Apache Spark Streaming).
• Workflow Management: Familiarity with data orchestration and workflow management tools (e.g., Apache Airflow).
• Cloud Certification: Certification in cloud services (e.g., AWS Certified Data Analytics, Azure Data Engineer).
• Data Governance: Understanding of data governance and data cataloging.

Qualifications


• Bachelor's degree in Computer Science, Information Technology, or a related field (master's degree preferred).
• Proven experience as a Data Engineer, with a focus on cloud-based solutions.
• Strong proficiency in Python and/or PySpark for data processing and ETL tasks.
• Experience with cloud platforms such as AWS, Azure, or Google Cloud.
• Knowledge of data warehousing concepts and technologies (e.g., Redshift, BigQuery).
• Familiarity with data modeling and database design principles.
• Solid understanding of data integration and ETL best practices.
• Excellent problem-solving skills and attention to detail.
• Strong communication and collaboration skills to work effectively in cross-functional teams.

Join our team and be part of a dynamic environment where you can contribute to the development of innovative data solutions using the latest cloud technologies, programming languages, and DBT for efficient data transformations.

If you are a passionate data engineer with these skills, we want to hear from you!

Life at Strattmont

Thrive Here & What We Value

• Committed to excellence, innovation, and client satisfaction
• Collaborative teamwork
• Equal opportunity employer that celebrates diversity
• Fast-paced and high-energy environment
• Support for team members on specialized projects
• Competitive salary, flexible work arrangements, and professional development opportunities
• Passionate about technology and delivering quality services
• Customer-centric approach with excellent service

ThatStartupJob
Copyright © 2024