Objectives
The objective of the position is to manage the extract/transform/load (ETL) processes and ensure data availability.
Responsibilities
The holder of the position is mainly responsible for the following areas, in coordination with their superior:
- Design, create, and modify ETL pipelines in Azure Data Factory, ensuring efficient data flow from source to destination.
- Ensure data accuracy and integrity throughout the ETL processes via data validation, cleansing, deduplication, and error handling, so that ingested data is reliable and usable.
- Monitor the ETL processes and optimize ETL pipelines for speed and efficiency, addressing bottlenecks and ensuring the ETL system can handle the volume, velocity, and variety of data.
- Participate in data modeling, designing the data structures and schema in the data warehouse to optimize query performance and align with business needs.
- Work closely with different departments and IT teams to understand data requirements and deliver the data infrastructure that supports business goals.
- Provide technical support for ETL systems, troubleshooting issues and ensuring the continuous availability and reliability of data flows.
- Ensure proper documentation of data sources, ETL processes and data architecture.
Requirements
- 3 to 5 years of data engineering experience with Snowflake.
- 3 to 5 years of experience in the upstream/downstream Retail industry and/or the Supply Chain / Manufacturing domain.
- Sound understanding of data quality principles and data governance best practices.
- Proficiency in data analytics languages such as Python, Java, or Scala.
- Knowledge of big data technologies such as Hadoop and Spark, and of distributed computing frameworks for managing large-scale data processing.
- Proficient in using version control systems like Git for managing code and configurations.
- SnowPro Core and SnowPro Advanced certifications are an advantage.