Azure & Databricks Data Engineers (MP4)

February 13, 2026

Job Description

  • Contractor
  • Anywhere

We’re Hiring: Azure & Databricks Data Engineers (MP4)
📍 Hybrid (3 days remote) – Oshawa | $90–$95/hr | 11‑month contract
📅 Apply by February 20, 2026 (5:00 PM EST)

Are you a Data Engineer who loves solving complex problems, building scalable data solutions, and working with the latest Microsoft tech? If so — we want to meet you! 🙌
Our team is growing, and we’re looking for 4 talented Data Engineers to help shape modern, data‑driven digital experiences across the organization.

🌟 Job Opportunity: Data Engineer 🌟
📅 Resume Due Date: Friday, February 20th, 2026 (5:00 PM EST)
🆔 Job ID: 25-199
🔢 Number of Vacancies: 4
📊 Level: MP4
⏳ Duration: 11 Months
🕰️ Hours of Work: 35 hours per week
💵 Hourly Rate: $90–$95
📍 Location: CHQ 1908 Colonel Sam Drive, Oshawa
🏠 Work Mode: Hybrid – 3 days remote

Job Overview

As an Azure and Databricks Data Engineer, you will design, build, and support data‑driven applications that enable innovative, customer‑centric digital experiences.
  • Work as part of a cross‑discipline agile team, collaborating to solve problems across business areas.
  • Build reliable, supportable, and performant data lake and data warehouse products to support reporting, analytics, applications, and innovation.
  • Apply best practices in development, security, accessibility, and design to deliver high‑quality services.
  • Develop modular and scalable ELT/ETL pipelines and data infrastructure leveraging diverse enterprise data sources.
  • Create curated common data models in collaboration with Data Modelers and Data Architects to support business intelligence, reporting, and downstream systems.
  • Clean, prepare, and optimize datasets with strong lineage and quality controls throughout the integration cycle.
  • Support BI Analysts with dimensional modeling and aggregation optimization for visualization and reporting.
  • Collaborate with Business Analysts, Data Scientists, Senior Data Engineers, Data Analysts, Solution Architects, and Data Modelers.
  • Work with Microsoft stack tools including Azure Data Factory, ADLS, Azure SQL, Synapse, Databricks, Purview, and Power BI.
  • Operate within an agile Scrum framework, contributing to backlog development and using Kanban/Scrum toolsets.
  • Develop performant pipelines and models using Python, Spark, and SQL across XML, CSV, JSON, REST APIs, and other formats (a short illustrative sketch follows this list).
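For candidates unfamiliar with the stack, the sketch below gives a flavour of the kind of ELT work described above: a minimal PySpark job that ingests a raw CSV drop from a data lake, applies basic cleansing and lineage columns, and lands a curated Delta table. It is illustrative only; the paths, table names, and columns are hypothetical and are not part of this posting.

```python
# Illustrative sketch only: all paths, table names, and columns below are
# hypothetical examples, not details of this role or its systems.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sample-elt").getOrCreate()

# Ingest a raw CSV drop from the data lake (hypothetical ADLS path).
raw = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("abfss://raw@examplelake.dfs.core.windows.net/sales/*.csv")
)

# Basic cleansing plus lineage-friendly metadata columns.
curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("ingested_at", F.current_timestamp())
       .withColumn("source_file", F.input_file_name())
)

# Land the curated dataset as a Delta table for downstream reporting.
(
    curated.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("curated.sales_orders")
)
```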

Qualifications

  • Completion of a four‑year university program in computer science, engineering, or related data disciplines.
  • Experience designing and building data pipelines, with strong Python, PySpark, SparkSQL, and SQL skills.
  • Experience with Azure Data Factory, ADLS, Synapse, and Databricks, and building pipelines for Data Lakehouses and Warehouses.
  • Strong understanding of data structures, governance, and data quality principles, with effective communication skills for technical and non‑technical audiences.

To apply, please send your resume to careers@cpus.ca or apply through the following link: https://lnkd.in/ebhYfRyj