Job Description
Data Science Developer – Senior
Hybrid, Toronto, Ontario, Canada
Contract
Responsibilities
– Participate in product teams to analyze system requirements and to architect, design, code, and implement cloud-based data and analytics products that conform to standards
– Design, create, and maintain cloud-based data lake and Lakehouse structures, automated data pipelines, and analytics models
– Liaise with cluster IT colleagues to implement products, conduct reviews, resolve operational problems, and support business partners in the effective use of cloud-based data and analytics products
– Analyze complex technical issues, identify alternatives, and recommend solutions
– Support the migration of legacy data pipelines from Azure Synapse Analytics and Azure Data Factory (including stored procedures, views used by BI teams, and Parquet files in Azure Data Lake Storage (ADLS)) to modernized Databricks-based solutions leveraging Delta Lake and native orchestration capabilities
General Skills
– Experience with multiple cloud-based data and analytics platforms and with programming and scripting tools to create, maintain, support, and operate cloud-based data and analytics products, with a preference for Microsoft Azure
– Experience designing, creating, and maintaining cloud-based data lake and Lakehouse structures, automated data pipelines, and analytics models in real-world implementations
– Strong background in building and orchestrating data pipelines using services like Azure Data Factory and Databricks
– Demonstrated ability to organize and manage data in a Lakehouse following the medallion architecture
– Experience with Databricks Unity Catalog for data governance is a plus
– Proficient in Python and SQL for data engineering and analytics development
– Familiar with CI/CD practices and tools for automating deployment of data solutions and managing code lifecycle
– Comfortable conducting and participating in peer code reviews in GitHub to ensure quality, consistency, and best practices
– Experience in assessing client information technology needs and objectives
– Experience in problem-solving to resolve complex, multi-component failures
– Experience in preparing knowledge transfer documentation and conducting knowledge transfer
– Experience working on an Agile team
Technology Stack
– Azure Storage, Azure Data Lake, Azure Databricks (Lakehouse), Azure Synapse
– Python, SQL
– Power BI
– GitHub
Please send your resume to jim.nickolson@noramtec.com.
Only candidates selected to move forward in the recruitment process will be contacted by a member of the recruitment team to schedule a conversation.