As Lead MLOps Engineer at knecon, I built and scaled machine learning infrastructure while leading cross-functional teams on complex AI projects.
What I Did
I architected a comprehensive continuous training pipeline on Azure, integrating GitLab CI for automation and Terraform for infrastructure-as-code. This pipeline handled the full ML lifecycle from data ingestion to model deployment, with particular attention to GDPR compliance requirements for European clients.
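The control flow of such a continuous training pipeline can be sketched as ordered stages ending in a deployment gate. This is a minimal, illustrative sketch: the stage functions, the toy dataset, the accuracy metric, and the 0.9 promotion threshold are all assumptions, not the actual knecon pipeline.

```python
# Minimal sketch of a continuous-training pipeline's control flow.
# Stage names, the toy data, and the 0.9 promotion threshold are
# illustrative assumptions, not the production configuration.

def ingest_data():
    # In production this stage would pull from governed, GDPR-compliant storage.
    return [(0.2, 0), (0.8, 1), (0.9, 1), (0.1, 0)]

def train_model(data):
    # Stand-in "model": classify by comparing against the mean feature value.
    threshold = sum(x for x, _ in data) / len(data)
    return lambda x: 1 if x >= threshold else 0

def evaluate(model, data):
    correct = sum(1 for x, y in data if model(x) == y)
    return correct / len(data)

def run_pipeline(promotion_threshold=0.9):
    data = ingest_data()
    model = train_model(data)
    accuracy = evaluate(model, data)
    # Deployment gate: only promote models that clear the quality bar.
    return {"accuracy": accuracy, "promoted": accuracy >= promotion_threshold}
```

In the actual setup, each stage would map to a GitLab CI job, with Terraform provisioning the underlying Azure resources the jobs run against.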
I led the migration of our model serving infrastructure to MLflow + KServe, replacing a fragmented deployment process with a standardized, scalable solution. This involved defining deployment standards, creating CI/CD pipelines, and training the team on new workflows.
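Standardizing deployments on KServe typically comes down to templating its `InferenceService` resource so every model ships the same way. The sketch below renders such a manifest from model metadata; the model name and the MLflow-style storage URI are hypothetical examples, not knecon's actual deployment standard.

```python
# Sketch of deployment standardization: render a KServe InferenceService
# manifest from model metadata. The model name and storage URI below are
# hypothetical; real standards and URI schemes may differ.

def inference_service_manifest(name: str, storage_uri: str,
                               namespace: str = "models") -> dict:
    return {
        "apiVersion": "serving.kserve.io/v1beta1",
        "kind": "InferenceService",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "predictor": {
                "model": {
                    # Declares the artifact format so KServe picks a runtime.
                    "modelFormat": {"name": "mlflow"},
                    "storageUri": storage_uri,
                }
            }
        },
    }

manifest = inference_service_manifest("demo-classifier", "models:/demo-classifier/1")
```

A CI job would render this manifest for each registered model version and apply it to the cluster, replacing per-team ad hoc deployment scripts with one shared path.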
One of my key projects was building an LLM + RAG (Retrieval-Augmented Generation) system for a virtual assistant that could work with sensitive company data. This required careful attention to data security, prompt engineering, and retrieval optimization.
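The retrieval step at the heart of a RAG system can be sketched with simple term-overlap scoring: rank documents against the query, then assemble a grounded prompt for the LLM. The corpus and prompt template below are illustrative assumptions; a production system would use embedding-based retrieval and enforce access controls on the document store.

```python
# Minimal sketch of RAG retrieval: score documents by term overlap with the
# query, keep the top-k, and build a context-grounded prompt for the LLM.
# Corpus and template are illustrative, not the actual assistant's data.

def tokenize(text: str) -> set:
    return set(text.lower().split())

def retrieve(query: str, corpus: list, k: int = 2) -> list:
    q = tokenize(query)
    # Rank by how many query terms each document shares.
    scored = sorted(corpus, key=lambda doc: len(q & tokenize(doc)), reverse=True)
    return scored[:k]

def build_prompt(query: str, corpus: list) -> str:
    context = "\n".join(retrieve(query, corpus))
    # Instructing the model to answer only from context limits leakage
    # and hallucination when working with sensitive data.
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Vacation requests are approved by the team lead.",
    "The cafeteria opens at eight.",
    "Vacation carries over until March.",
]
prompt = build_prompt("How are vacation requests approved?", docs)
```

Swapping the overlap score for embedding similarity changes only `retrieve`; the prompt-assembly and grounding logic stay the same, which is what makes retrieval optimization an isolated concern.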
I led cross-functional teams using OKR methodology, coordinating between data scientists, backend engineers, and stakeholders to deliver projects on time.
Key Achievements
- Architected GDPR-compliant continuous training pipeline, reducing time-to-market by 50–60%
- Migrated to MLflow + KServe, achieving 5× faster deployments through CI standardization
- Built production LLM + RAG system for secure virtual assistant
- Led cross-functional teams using OKRs for project management
- Established MLOps best practices and documentation across the organization