Role Overview
We are seeking a skilled Data Engineer with strong expertise in dbt (Data Build Tool), Snowflake, and PL/SQL.
The selected candidate will design, develop, and maintain data transformation pipelines supporting business intelligence, analytics, and data-science initiatives across the enterprise.
Key Responsibilities
- Design and implement scalable data models and transformation pipelines using dbt on Snowflake (a model sketch follows this list).
- Write efficient PL/SQL code for complex data processing and transformation tasks (a procedure sketch follows this list).
- Collaborate with data analysts, data scientists, and business stakeholders to translate requirements into robust data solutions.
- Optimize Snowflake performance through query tuning, clustering, and resource management (a tuning sketch follows this list).
- Ensure data quality, integrity, and governance through testing, documentation, and monitoring (a test sketch follows this list).
- Participate in code reviews and architecture discussions to improve data engineering best practices.
- Maintain and enhance CI/CD pipelines for dbt projects.
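To make the dbt-on-Snowflake responsibility concrete, here is a minimal sketch of the kind of model this role would build and maintain; the `stg_orders` source model, its columns, and the incremental materialization are illustrative assumptions, not part of this posting:

```sql
-- models/marts/fct_daily_orders.sql  (hypothetical file path)
-- Minimal dbt model: aggregates a staging model into a daily fact table.
-- Materialized incrementally so only new dates are reprocessed on each run.
{{
    config(
        materialized='incremental',
        unique_key='order_date'
    )
}}

select
    order_date,
    count(*)    as order_count,
    sum(amount) as total_amount
from {{ ref('stg_orders') }}
{% if is_incremental() %}
  -- On incremental runs, process only dates newer than what is already loaded.
  where order_date > (select max(order_date) from {{ this }})
{% endif %}
group by order_date
```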
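For the PL/SQL responsibility, a minimal sketch of a typical upsert procedure; the `dim_customer` and `staging_customers` tables and the merge logic are hypothetical:

```sql
-- Hypothetical PL/SQL procedure: upserts customer rows from a staging table
-- into a dimension table. All table and column names are illustrative.
CREATE OR REPLACE PROCEDURE merge_customers AS
BEGIN
    MERGE INTO dim_customer tgt
    USING staging_customers src
        ON (tgt.customer_id = src.customer_id)
    WHEN MATCHED THEN
        UPDATE SET tgt.customer_name = src.customer_name,
                   tgt.updated_at    = SYSDATE
    WHEN NOT MATCHED THEN
        INSERT (customer_id, customer_name, updated_at)
        VALUES (src.customer_id, src.customer_name, SYSDATE);
    COMMIT;
END merge_customers;
/
```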
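Snowflake performance work of the kind listed above often involves statements like the following; the table name and clustering key are hypothetical:

```sql
-- Hypothetical Snowflake tuning steps for a large fact table.
-- Define a clustering key so micro-partition pruning works for date filters.
ALTER TABLE fct_daily_orders CLUSTER BY (order_date);

-- Check how well the table is clustered on that key.
SELECT SYSTEM$CLUSTERING_INFORMATION('fct_daily_orders', '(order_date)');

-- Inspect a query plan before reaching for a larger warehouse.
EXPLAIN
SELECT order_date, SUM(total_amount)
FROM fct_daily_orders
WHERE order_date >= DATEADD(month, -3, CURRENT_DATE)
GROUP BY order_date;
```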
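Data-quality work in dbt is commonly expressed as tests; a singular test is simply a SELECT that should return zero rows. The negative-amount check below is a hypothetical example:

```sql
-- tests/assert_no_negative_amounts.sql  (hypothetical file path)
-- dbt singular test: the test passes when this query returns zero rows.
select
    order_id,
    amount
from {{ ref('stg_orders') }}
where amount < 0
```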
Required Skills & Experience
- Minimum of 3 years’ experience in data engineering or a related field.
- Hands-on expertise with dbt (modular SQL development, testing, documentation).
- Proficiency in Snowflake (data warehousing, performance tuning, security).
- Strong knowledge of PL/SQL, including stored procedures and functions.
- Solid understanding of data modeling (star/snowflake schemas, normalization); a schema sketch follows this list.
- Experience with version control (Git) and CI/CD practices.
- Familiarity with Airflow, dbt Cloud, or Prefect is an advantage.
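To illustrate the data-modeling requirement, a minimal star-schema sketch in Snowflake SQL; all table and column names are hypothetical:

```sql
-- Hypothetical star schema: one fact table keyed to two dimension tables.
-- Note: Snowflake records PRIMARY KEY / FOREIGN KEY constraints but does not
-- enforce them; they still document intent for modeling and BI tools.
CREATE TABLE dim_customer (
    customer_key  INTEGER PRIMARY KEY,
    customer_name VARCHAR
);

CREATE TABLE dim_date (
    date_key  INTEGER PRIMARY KEY,
    full_date DATE
);

CREATE TABLE fct_orders (
    order_id     INTEGER,
    customer_key INTEGER REFERENCES dim_customer (customer_key),
    date_key     INTEGER REFERENCES dim_date (date_key),
    amount       NUMBER(12, 2)
);
```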