Job Description
Posted 7 days ago
Our client is a global leader in digital transformation and data-driven innovation, currently expanding their global data team to support a wide range of digital initiatives.
They are now seeking a Data Engineer to join their Data Service team and play a key role in building scalable data pipelines and analytics solutions.
Key Responsibilities:
• Design, build, and optimize scalable end-to-end data pipelines for structured and unstructured data from diverse sources (APIs, databases, files).
• Develop and manage robust ETL/ELT workflows for both batch and real-time data processing, ensuring high data quality and reliability.
• Monitor and optimize pipeline performance using autoscaling, caching, and cost-effective strategies.
• Leverage Databricks and Apache Spark (PySpark, Spark SQL) for distributed data processing and transformation.
• Implement data storage solutions using Medallion Architecture and Delta Lake, ensuring schema evolution and ACID compliance.
• Collaborate with data scientists, analysts, engineers, and business stakeholders to align data solutions with business goals.
• Stay current with emerging technologies in data engineering, especially within the Azure and Databricks ecosystem.
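The Medallion Architecture mentioned above promotes raw "bronze" data into validated "silver" tables before analytics-ready "gold" layers. As a minimal sketch of what a bronze-to-silver step does (written in stdlib Python for self-containment; in this role it would be PySpark transformations writing Delta Lake tables, and the field names here are hypothetical):

```python
# Sketch of a Medallion-style bronze -> silver step: raw records are
# validated, typed, and deduplicated into a clean set. Field names
# ("id", "event_time", "amount") are illustrative only.
from datetime import datetime

def to_silver(bronze_records):
    """Clean raw records: drop malformed rows, parse types, dedupe by id."""
    silver = {}
    for rec in bronze_records:
        # Validation: skip rows missing required fields.
        if not rec.get("id") or not rec.get("event_time"):
            continue
        try:
            parsed = {
                "id": int(rec["id"]),
                "event_time": datetime.fromisoformat(rec["event_time"]),
                "amount": float(rec.get("amount", 0.0)),
            }
        except (ValueError, TypeError):
            continue  # a real pipeline would quarantine these rows
        # Deduplication: keep only the latest record per id.
        prev = silver.get(parsed["id"])
        if prev is None or parsed["event_time"] > prev["event_time"]:
            silver[parsed["id"]] = parsed
    return list(silver.values())
```

In Delta Lake the same step would gain schema enforcement/evolution and ACID guarantees from the table format itself, rather than hand-rolled checks.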
Key Requirements:
• Bachelor's degree in Computer Science, Information Systems, or a related field.
• 5-10 years of hands-on experience in data engineering, with a strong track record in building and optimizing data pipelines.
• Proven expertise in Databricks, Apache Spark (PySpark, Spark SQL), Python, and SQL.
• Experience integrating Databricks with Azure services (Data Lake Storage, Data Factory, SQL Database, Event Hubs).
• Familiarity with both batch and real-time data processing tools such as Azure Event Hubs and Kafka.
• Proficient in Git and Azure DevOps for CI/CD and source control, including Databricks Asset Bundles.
• Strong understanding of data quality, monitoring, and performance optimization.
• Excellent documentation, collaboration, and communication skills.
• Mature, independent, and customer-oriented with strong time management and problem-solving abilities.
Our client offers competitive compensation and benefits, as well as a dynamic and fast-paced work environment.
Interested parties, please send your CV to Ron Lee or contact him for more details. For other available opportunities, please visit the ConnectedGroup website.