Key Responsibilities:
Manage the AI platform by setting up and maintaining development and production infrastructure for AI/ML workloads, ensuring reliability, scalability, and performance.
Build robust data ingestion and transformation pipelines to support AI/ML model training and inference at scale.
Collaborate with data scientists to convert AI/ML models into deployable, production-ready services following MLOps best practices.
Integrate AI solutions with existing platforms and business systems through APIs, microservices, or other integration methods.
Develop APIs or lightweight UIs to make AI capabilities accessible to business users and partners.
Work closely with cross-functional teams to implement models and monitor outcomes.
Key Requirements:
Proficiency in Python programming and Linux systems.
Strong background in software engineering, data engineering, or AI infrastructure.
Hands-on experience with cloud platforms (e.g., Azure) and containerization tools (Docker, Kubernetes).
Familiarity with building RESTful APIs and/or lightweight front-end tools for AI interfaces.
Solid understanding of MLOps practices and tools (e.g., MLflow, FastAPI).
Excellent communication skills to collaborate across technical and business teams.
Growth mindset with the drive to learn and master new technologies.
Nice to Have:
Exposure to big data processing and streaming tools (e.g., Kafka, Spark).
Knowledge of AI/ML frameworks such as TensorFlow or PyTorch.
Experience with CI/CD pipelines and infrastructure-as-code tools (e.g., Terraform, Jenkins).
Familiarity with security, monitoring, and performance tuning of AI services.