Job Description
7 days ago
We need a Data Engineer in Singapore with Scala, Spark, and Python skills. Candidates of any nationality are welcome.
Candidates with Singapore PR will be a plus.
SKILL :
Data Engineer with expertise in Hadoop, Spark, OpenShift Container Platform (OCP), and DevOps practices. As a Data Engineer, you will be responsible for designing, developing, and maintaining efficient data pipelines and processing large-scale datasets. Your expertise in Hadoop, Spark, OCP, and DevOps will be crucial in ensuring the availability, scalability, and reliability of our ML solutions.
Requirements:
• At least 6 years of experience as a Data Engineer, working with Hadoop, Spark, and data processing technologies in large-scale environments
• Strong expertise in designing and developing data infrastructure using Hadoop, Spark, and related tools (HDFS, Hive, Pig, etc.)
• Experience with containerization platforms such as OpenShift Container Platform (OCP) and container orchestration using Kubernetes
• Proficiency in programming languages commonly used in data engineering, such as Spark, Python, Scala, or Java
• Knowledge of DevOps practices, CI/CD pipelines, and infrastructure automation tools (e.g., Docker, Jenkins, Ansible, Bitbucket)
• Experience with job schedulers such as Control-M
• Experience with Grafana, Prometheus, or Splunk will be an added benefit
• Strong problem-solving and troubleshooting skills with a proactive approach to resolving technical challenges
• Excellent collaboration and communication skills to work effectively with cross-functional teams
• Ability to manage multiple priorities, meet deadlines, and deliver high-quality results in a fast-paced environment
• Experience with cloud platforms (e.g., AWS, Azure, GCP) and their data services is a plus
Key Domain
• Scala
• Python
• Spark
• Java
• SQL
• Shell Script
• Hadoop / Cloudera
• Elasticsearch
• Red Hat OCP
Experience : 5 to 12 years
Location : SINGAPORE
CODE ' SCALA PYTHON SPARK SINGAPORE 22 JAN 26 '
Thanks and regards,
BALAJI B
VP HR
BR RAYSOFT GLOBAL P LTD
CELL NO 9381035905