Key Responsibilities
Design, develop, and maintain scalable and reliable data pipelines for batch and real-time data processing.
Build and optimize data warehousing solutions using modern technologies (e.g., Snowflake, BigQuery, Redshift).
Implement data ingestion processes from various sources (APIs, databases, logs, third-party platforms).
Collaborate with data analysts, scientists, and business teams to understand data requirements and deliver actionable insights.
Develop and enforce data governance, quality, and security standards.
Monitor, troubleshoot, and improve existing data systems for performance and reliability.
Automate data workflows and contribute to the development of a self-service data platform.
Qualifications
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
3+ years of hands-on experience in data engineering, with a strong background in building data platforms.
Proficiency in SQL and experience with relational and NoSQL databases.
Strong programming skills in Python, Java, or Scala.
Experience with big data technologies such as Spark, Kafka, Hadoop, or Flink.
Hands-on experience with cloud platforms (AWS, GCP, or Azure) and their data services (e.g., S3, BigQuery, Glue, Dataflow).
Familiarity with data orchestration tools like Airflow, Prefect, or Dagster.
Knowledge of data modeling, dimensional modeling, and data architecture best practices.
Experience with data visualization tools (e.g., Tableau, Metabase, Looker) is a plus.
Excellent communication skills and ability to work in a collaborative environment.
Data provided is for recruitment purposes only.
EA Registration No.: R
Business Registration Number: W