Mandatory Skills
- Degree in Computer Science, Information Technology, or a related field.
- At least 3 years of experience in a role focused on developing and supporting data ingestion pipelines.
- Experience building on data platforms (e.g., Snowflake).
- Proficient in SQL and Python.
- Experience with cloud environments (e.g., AWS).
- Experience with continuous integration and continuous deployment (CI/CD) using GitHub.
- Experience with Software Development Life Cycle (SDLC) methodology.
- Experience with data warehousing concepts.
- Strong problem-solving and troubleshooting skills.
- Strong communication and collaboration skills.
- Able to design and implement solutions and perform code reviews independently.
- Able to provide production support independently.
- Agile, a fast learner, and able to adapt to change.
Brief Job Description
Responsibilities:
- Work closely with data stewards, data analysts, and business end-users to implement and support data solutions.
- Design and build robust, scalable data ingestion and data management solutions in Python for batch loading and streaming from multiple data sources, using mechanisms such as APIs, file transfer, and direct interfaces with Oracle and MSSQL databases.
- Follow the SDLC process: requirements gathering, design and development, system integration testing (SIT), UAT support, and CI/CD deployment via GitHub for enhancements and new ingestion pipelines.
- Ensure compliance with IT security standards, policies, and procedures.
- Provide BAU (business-as-usual) support, including production job monitoring, issue resolution, and bug fixes.
- Enable ingestion checks and data quality checks for all datasets in the data platform, and ensure data issues are actively detected, tracked, and fixed without breaching SLAs.