April 30
🏡 Remote – Anywhere in Canada
• Coordinate with data scientists, data analysts, software engineers, and product managers to design and evolve our existing architecture, improving the reliability and performance of our data infrastructure
• Build, maintain, and scale our ETL pipelines to power data science and analytical needs
• Design, build, and maintain data infrastructure systems (including, but not limited to, AWS DMS, HBase, EMR, Redshift, S3, Kafka, and Airflow) while ensuring scalability, reliability, and security
• Write new tools, services, and platforms in Python
• Research new technologies to solve tomorrow's data infrastructure needs
• Advocate for the adoption of tools within the data engineering team
• You have 5+ years of data engineering experience
• You write high-quality code in Python and PySpark
• You have a strong understanding of database fundamentals
• You have a deep understanding of two or more of these technologies: Spark, Airflow, AWS EMR, AWS Glue, SageMaker, and Redshift
• You have experience with batch processing and streaming systems
• You have exposure to, or a strong interest in, Kafka or other queuing technologies
• You are a critical thinker with creative problem-solving skills
• You have quantitative skills, the ability to think analytically, and strong attention to detail
• You have a desire to contribute to the broader development community
• Career development: we believe in mentorship and investing in your learning, supporting you to achieve your goals
• Health benefits, including vision and dental!
• RRSP contributions (Canada), 401(k) contributions (USA)
• Generous vacation and parental leave top-up
• Corporate discount on gym memberships for you and your family
• Winter break shutdown and a whole lot more!
Apply Now