Job Description
7 days ago
Key Responsibilities
• Assist in building and maintaining data pipelines using Azure Databricks (batch and/or streaming).
• Develop and update notebooks and jobs using PySpark/Python and/or SQL.
• Support ingestion from common Azure sources (e.g., ADLS Gen2, Azure SQL, APIs/files).
• Help implement data quality checks, basic transformations, and documentation.
• Monitor jobs, troubleshoot failures, and support incident/problem resolution.
• Work with CI/CD practices for notebooks/jobs (e.g., Azure DevOps/Git) and follow engineering standards.
• Collaborate with data engineers/analysts to understand requirements and deliver clean datasets for reporting and analytics.
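As an illustration of the data quality checks mentioned above, the sketch below validates incoming rows against an assumed schema. It uses plain Python and the standard-library csv module so it is self-contained; in a Databricks notebook the same rules would typically be expressed with PySpark DataFrame filters. The column names and rules are hypothetical examples, not part of this role's actual pipeline.

```python
import csv
import io

# Assumed (hypothetical) schema for the incoming file.
EXPECTED_COLUMNS = {"id", "name", "amount"}

def _is_float(s):
    try:
        float(s)
        return True
    except ValueError:
        return False

def validate_rows(csv_text):
    """Split CSV rows into (valid, invalid) using simple quality rules."""
    reader = csv.DictReader(io.StringIO(csv_text))
    # Schema check: fail fast if the header does not match expectations.
    if set(reader.fieldnames or []) != EXPECTED_COLUMNS:
        raise ValueError(f"unexpected schema: {reader.fieldnames}")
    valid, invalid = [], []
    for row in reader:
        # Rule 1: no empty required fields; Rule 2: amount must parse as a number.
        if all(row.values()) and _is_float(row["amount"]):
            valid.append(row)
        else:
            invalid.append(row)
    return valid, invalid

# Tiny sample: one clean row, one missing name, one non-numeric amount.
sample = "id,name,amount\n1,alice,10.5\n2,,3.0\n3,carol,oops\n"
good, bad = validate_rows(sample)
```

In a real pipeline, the invalid rows would usually be written to a quarantine location for review rather than dropped silently.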
Required Skills & Experience
• 2+ years of experience in data engineering, analytics engineering, or a similar role (internships welcome).
• Hands-on exposure to Databricks (Azure Databricks preferred).
• Working knowledge of Azure fundamentals (resource concepts, storage, security basics).
• Programming experience with Python and/or SQL (PySpark a plus).
• Understanding of data concepts: schemas, joins, partitions, file formats (CSV/JSON/Parquet).
• Familiarity with Git and basic DevOps practices (branching, PRs, deployments).
• Strong problem-solving skills and a willingness to learn in a fast-paced environment.