Job Title: Big Data Engineer (Java, Spark, Hadoop)
Location: Singapore
Experience: 7–12 years
Employment Type: Full-Time
Open to Singapore Citizens and Permanent Residents (SPR) only | No visa sponsorship available
Job Summary
We are looking for a Senior Big Data Engineer with 7–12 years of experience to join our growing data engineering team.
The ideal candidate will bring deep expertise in Java, Apache Spark, and the Hadoop ecosystem, and have a strong track record of designing and building scalable, high-performance big data solutions.
This role is critical to ensuring robust data processing and delivering clean, actionable data for business insights and advanced analytics.
Key Responsibilities
- Design, build, and optimize large-scale, distributed data processing systems using Apache Spark, Hadoop, and Java (see the illustrative sketch after this list).
- Lead the development and deployment of data ingestion, ETL/ELT pipelines, and data transformation frameworks.
- Work with cross-functional teams to gather data requirements and translate them into scalable data solutions.
- Ensure high performance and reliability of big data systems through performance tuning and best practices.
- Manage and monitor batch and real-time data pipelines from diverse sources, including APIs, databases, and streaming platforms like Kafka.
- Apply deep knowledge of Java to build efficient, modular, and reusable codebases.
- Mentor junior engineers, participate in code reviews, and enforce engineering best practices.
- Collaborate with DevOps teams to build CI/CD pipelines and automate deployment processes.
- Ensure data governance, security, and compliance standards are maintained.
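For illustration, a minimal sketch of the kind of Spark batch ETL job in Java this role involves. The paths, column names, and class name below are hypothetical, not part of our stack.

```java
// Illustrative only: a minimal Spark batch ETL job in Java.
// All paths, columns, and names are hypothetical placeholders.
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.to_date;

public class OrdersEtlJob {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("orders-etl")
                .getOrCreate();

        // Ingest: read raw CSV records from HDFS (hypothetical path).
        Dataset<Row> raw = spark.read()
                .option("header", "true")
                .csv("hdfs:///data/raw/orders/");

        // Transform: drop malformed rows and derive a partition column.
        Dataset<Row> cleaned = raw
                .filter(col("order_id").isNotNull())
                .withColumn("order_date", to_date(col("order_ts")));

        // Load: write partitioned Parquet for downstream analytics.
        cleaned.write()
                .mode("overwrite")
                .partitionBy("order_date")
                .parquet("hdfs:///data/curated/orders/");

        spark.stop();
    }
}
```

Day to day, the role extends jobs like this with schema validation, partition and shuffle tuning, and automated deployment through CI/CD.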
Required Qualifications
- 7–12 years of experience in big data engineering or backend data systems.
- Strong hands-on programming skills in Java; exposure to Scala or Python is a plus.
- Proven experience with Apache Spark, Hadoop (HDFS, YARN, MapReduce), and related tools.
- Solid understanding of distributed computing, data partitioning, and optimization techniques.
- Experience with data access and storage layers like Hive, HBase, or Impala.
- Familiarity with data ingestion tools like Apache Kafka, NiFi, Flume, or Sqoop.
- Comfortable working with SQL for querying large datasets.
- Good understanding of data architecture, data modeling, and data lifecycle management.
- Experience with cloud platforms like AWS, Azure, or Google Cloud Platform.
- Strong problem-solving, analytical, and communication skills.
Preferred Qualifications
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
- Experience with streaming data frameworks such as Spark Streaming, Kafka Streams, or Flink (see the streaming sketch after this list).
- Knowledge of DevOps practices, CI/CD pipelines, and infrastructure as code (e.g., Terraform).
- Exposure to containerization (Docker) and orchestration (Kubernetes).
- Certifications in Big Data technologies or Cloud platforms are a plus.
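Again for illustration only, a minimal Spark Structured Streaming sketch in Java that subscribes to a Kafka topic. The broker address and topic name are hypothetical, and the job assumes the spark-sql-kafka connector is on the classpath.

```java
// Illustrative only: reading a Kafka topic with Spark Structured Streaming.
// Broker address and topic name are hypothetical placeholders.
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class KafkaStreamJob {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("kafka-stream-demo")
                .getOrCreate();

        // Subscribe to a Kafka topic; each record arrives as key/value bytes.
        Dataset<Row> events = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "broker:9092")
                .option("subscribe", "events")
                .load();

        // Decode the payload and stream it to the console for inspection.
        StreamingQuery query = events
                .selectExpr("CAST(value AS STRING) AS payload")
                .writeStream()
                .format("console")
                .start();

        query.awaitTermination();
    }
}
```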
We are an equal opportunities employer.