Job Description
🌟 Job Opportunity: Data Engineer 🌟
📅 Resume Due Date: Friday, February 20th, 2026 (5:00PM EST)
🆔 Job ID: 25-199
🔢 Number of Vacancies: 4
📊 Level: MP4
⏳ Contract Duration: 11 Months
🕰️ Hours of Work: 35 hours per week
💵 Hourly Rate: $92
📍 Location: CHQ 1908 Colonel Sam Drive, Oshawa, ON
🏠 Work Mode: Hybrid – 3 days remote
Job Overview
– As an Azure and Databricks Data Engineer, you will design, build, and support data‑driven applications that enable innovative, customer‑centric digital experiences.
– Work as part of a cross‑discipline agile team, collaborating to solve problems across business areas.
– Build reliable, supportable, and performant data lake and data warehouse products to support reporting, analytics, applications, and innovation.
– Apply best practices in development, security, accessibility, and design to deliver high‑quality services.
– Develop modular and scalable ELT/ETL pipelines and data infrastructure leveraging diverse enterprise data sources.
– Create curated common data models in collaboration with Data Modelers and Data Architects to support business intelligence, reporting, and downstream systems.
– Partner with infrastructure, cyber teams, and Senior Data Developers to ensure secure data handling in transit and at rest.
– Clean, prepare, and optimize datasets with strong lineage and quality controls throughout the integration cycle.
– Support BI Analysts with dimensional modeling and aggregation optimization for visualization and reporting.
– Collaborate with Business Analysts, Data Scientists, Senior Data Engineers, Data Analysts, Solution Architects, and Data Modelers.
– Work with Microsoft Stack tools including Azure Data Factory, ADLS, Azure SQL, Synapse, Databricks, Purview, and Power BI.
– Operate within an agile SCRUM framework, contributing to backlog development and using Kanban/SCRUM toolsets.
– Develop performant pipelines and models using Python, Spark, and SQL across XML, CSV, JSON, REST APIs, and other formats.
– Create tooling to reduce operational toil and support CI/CD and DevOps practices for automated delivery and release management.
– Monitor in‑production solutions, troubleshoot issues, and provide Tier 2 dataset support.
– Implement role‑based access control and perform automated unit, regression, UAT, and integration testing.
Qualifications
– Completion of a four‑year university program in computer science, engineering, or related data disciplines.
– Experience designing and building data pipelines, with strong Python, PySpark, SparkSQL, and SQL skills.
– Experience with Azure Data Factory, ADLS, Synapse, and Databricks, and building pipelines for Data Lakehouses and Warehouses.
– Strong understanding of data structures, governance, and data quality principles, with effective communication skills for technical and non‑technical audiences.
To apply, please send your resume to careers@cpus.ca or apply through the following link: https://lnkd.in/ebhYfRyj
