May 30
🏢 In-office - Toronto
• Conceptualize and own the data architecture for multiple large-scale projects
• Advocate for data quality and excellence
• Create and contribute to frameworks that improve the efficacy of logging data
• Gather requirements, understand the big picture, and create detailed proposals in technical specification documents
• Productize data ingestion from various sources and data delivery to various destinations, and create well-orchestrated data pipelines
• Optimize pipelines, dashboards, frameworks, and systems
• Conduct SQL data investigations, data quality analysis, and optimizations
• Contribute to peer code reviews and help the team produce high-quality code
• Mentor team members by giving and receiving actionable feedback
• Bachelor’s degree
• 5+ years of experience writing and debugging data pipelines
• Great data modeling skills
• Strong SQL proficiency
• Strong coding skills in Scala or Java
• Strong understanding of and practical experience with systems such as Hadoop, Spark, Presto, Iceberg, and Airflow
• Well-versed in software production engineering practices
• Excellent communication skills
• Experience with AWS cloud is preferred
• 11 paid holidays
• Generous accrued time off, increasing with years of service
• Generous paid sick time
• Annual day of service