About SmartAsset:
SmartAsset is an online destination for consumer-focused financial information and advice that powers the SmartAsset Advisor Marketing Platform (AMP), a national marketplace connecting consumers to financial advisors. Reaching an estimated 69 million people each month (as of January 2024) through its educational content and personalized calculators and tools, SmartAsset works to help people make smart financial decisions. In 2021, SmartAsset ranked #561 on the Inc. 5000 and #160 on the Deloitte Technology Fast 500™ lists of fastest-growing companies, while additionally closing a $110 million Series D funding round, valuing the company at over $1 billion.
Currently, SmartAsset ranks on Y Combinator's list of Top 100 Companies of all time.
About the team:
At SmartAsset, you will join a team of data scientists, analysts, and ML engineers who are at the forefront of AI/ML applications for the company. You will have the opportunity to work cross-functionally with leaders from different teams (Marketing, Product, Sales, and Technology), and with our growing Data team.
About the Job:
Responsibilities: Multiple positions available.
- Lead data engineering efforts by collaborating with cross-functional teams to harness the power of data and turn it into a strategic asset that drives the business forward.
- Architect and oversee the development of scalable, dependable, and high-performance integrated data platforms.
- Create and maintain advanced ETL pipelines using Python.
- Implement real-time streaming data pipelines for application integration.
- Leverage advanced SQL expertise to optimize queries and data retrieval.
- Harness AWS services for efficient data storage and processing.
- Manage cloud storage systems to optimize data asset storage efficiency.
- Implement Docker containerization and orchestration to support scalable ETL pipelines.
- Apply advanced statistical methods for intricate data analysis and modeling.
- Utilize the Snowflake data warehouse for loading data and optimizing query performance.
- Use strong communication skills (written and verbal) to facilitate data exchange with external vendors and organizations, including integration using APIs.
- Ensure all data deliverables adhere to regulatory and security requirements as defined by stakeholders.
- Establish CI/CD pipelines for automated deployment and rigorous testing.
- Leverage Spark for distributed data processing tasks.
- Utilize data modeling techniques to guarantee data accuracy and integrity.
- Enhance data pipeline quality, security, efficiency, and scalability in alignment with best practices.
- Take ownership of and drive technical projects in a dynamic environment.
- Collaborate with Data Analysts, Data Scientists, Database Architects, cross-functional teams, and business partners to deliver data supporting analytics and machine learning initiatives.
- Mentor fellow data engineers and advocate for best practices in data system development.
This position allows telecommuting from anywhere in the U.S.
Skills / Experience You Have:
MINIMUM REQUIREMENTS: Bachelor’s Degree or U.S. equivalent in Computer Science, Computer Engineering, Information Technology, Telecommunications Engineering, Business Analytics, or related field, plus 5 years of professional experience as a Data Engineer, Information Architect, or any occupation/position/job title involving data engineering, including constructing ETL pipelines in a production environment. Must also have experience in the following:
- 2 years of professional experience processing data with massively parallel technologies (including Snowflake, Redshift, and Spark- or Hadoop-based big data solutions);
- 2 years of professional experience prototyping Python-based ETL solutions and translating complex requirements into actionable tools;
- 2 years of professional experience utilizing SQL including advanced query optimization;
- 2 years of professional experience with data modeling and relational database technologies utilizing data warehousing systems including Snowflake and Redshift;
- 2 years of professional experience utilizing cloud platforms including AWS or GCP;
- and 2 years of professional experience utilizing Spark for distributed data processing.
Skills / Experience:
In lieu of a Bachelor's degree plus 5 years of experience, the employer will accept a Master's degree or U.S. equivalent in Computer Science, Computer Engineering, Information Technology, Telecommunications Engineering, Business Analytics, or related field, plus 3 years of professional experience as a Data Engineer, Information Architect, or any occupation/position/job title involving data engineering including constructing ETL pipelines in a production environment.
Available Benefits and Perks:
- All roles at SmartAsset are currently and will remain remote, with the flexibility to work from anywhere in the US.
- Medical, Dental, Vision - multiple packages available based on your individualized needs
- Life/AD&D Insurance - basic coverage 100% company paid; additional supplemental coverage available
- Supplemental Short-term and Long-term Disability
- FSA: Medical and Dependent Care
- 401(k)
- Equity packages for each role
- Time Off: Vacation, Sick and Parental Leave
- EAP (Employee Assistance Program)
- Financial Literacy Mentoring Program
- Pet Insurance
- Home Office Stipend
SmartAsset is an equal opportunity employer committed to fostering an inclusive, innovative environment with the best employees. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. If you have a disability or special need that requires accommodation, please contact us at Recruiting@smartasset.com.
California, Colorado, Connecticut, Maryland, Nevada, Rhode Island, Washington, and New York City residents*
Salary range: $176,000-$230,000 per year + RSUs + benefits.
Salary at SmartAsset is determined based on permissible, non-discriminatory factors such as skills, experience, and geographic location within the contiguous United States.