We are looking for a proactive and detail-oriented Data Engineer to join our client's team.
In this role, you will be responsible for building, testing, and optimizing robust data pipelines that ensure high data quality and reliability across the client's systems.
You will collaborate closely with data scientists, engineers, and key stakeholders to support data-driven decision-making and innovation.
Key Responsibilities
- Design, test, and validate data pipelines for new releases and deployments.
- Monitor data integrity and completeness to ensure reliability across systems.
- Debug and troubleshoot issues, implement fixes, and document root causes.
- Optimize pipeline performance for scalability and efficiency.
- Write clean, maintainable, and well-tested Python code.
- Collaborate with cross-functional teams to deliver data solutions.
- Manage and support AWS-based pipeline infrastructure (e.g., S3, Lambda, EC2, CloudWatch).
Requirements
- Strong proficiency in Python, with a focus on clean coding and unit testing.
- Hands-on experience in data pipeline development and troubleshooting.
- Familiarity with AWS services (S3, Lambda, EC2, CloudWatch).
- Excellent debugging and problem-solving skills.
- Strong attention to detail with a focus on data quality.
Nice to Have
- Basic knowledge of data science workflows.
- Experience with CI/CD practices and version control tools (e.g., Git).
Ryan Leo (Octomate Staffing)
Registration No.: R
EA License No.: 23C1980