Why this role exists
We're looking for a Cloud Native Engineer to build and optimize the microservices architecture powering our AI cloud platform. You'll design resilient, scalable systems using cutting-edge cloud native technologies, ensuring our platform can handle massive AI workloads with reliability and performance.
As part of our cloud native engineering team, you'll work on Kubernetes-based infrastructure, Go microservices, and container orchestration systems that serve as the backbone of our AI computing platform. You'll architect the systems that make AI compute seamless for thousands of developers and enterprises.
This is a unique opportunity for someone who's excited to work with the latest cloud native technologies, solve complex distributed systems challenges, and build infrastructure at scale in the rapidly growing AI space.
What you'll do
Design and develop microservices using Go that power our AI cloud platform's core functionality
Build and maintain Kubernetes-based infrastructure for container orchestration and workload management
Implement and optimize cloud native solutions for scalability, reliability, and performance
Contribute to code reviews, technical documentation, and knowledge sharing within the engineering team
Explore and integrate emerging cloud native technologies such as Volcano, Prometheus, and service mesh solutions
Design distributed systems architecture for high-availability AI workload processing
Collaborate with DevOps and SRE teams to ensure production reliability and monitoring
Leverage AI-assisted coding tools (GitHub Copilot, ChatGPT, Cursor IDE, etc.) to boost productivity and code quality
You'll thrive here if you have
7+ years of software engineering experience with a focus on distributed systems and cloud native technologies
Strong proficiency in Go, with a deep understanding of concurrency patterns and the standard library
Familiarity with Python or other backend languages for polyglot development environments
Hands-on Kubernetes experience, including development, deployment, and maintenance of production clusters
A solid understanding of microservices architecture design patterns and implementation best practices
Experience with containerization technologies (Docker, containerd) and container runtime optimization
A problem-solving mindset, with the ability to independently design and implement complex system components
Strong collaboration and communication skills for working in cross-functional teams
Experience using AI-assisted coding tools and a willingness to integrate them into your development workflow
Bonus qualifications
Familiarity with cloud native ecosystem tools such as Prometheus, Grafana, Volcano, or service mesh technologies
Open source contributions to cloud native projects (Kubernetes, CNCF ecosystem)
Experience with large-scale Kubernetes cluster operations and troubleshooting in production environments
Knowledge of microservices architecture patterns, including circuit breakers, service discovery, and distributed tracing
Compensation
Competitive salary: commensurate with your experience and aligned with industry standards
Meaningful equity: be part of the upside as we build a category-defining company. Your grant will align with your role and the experience you bring.