Forward Deployed Engineer Generative AI / LLM Deployment


GenAI | LangChain | RAG | AWS | Kubernetes | Customer-Facing

We are hiring a Forward Deployed Engineer to lead the deployment of production-grade Generative AI and LLM solutions into enterprise environments. This is a hands-on, customer-facing role focused on building and deploying RAG pipelines, multi-agent systems and fine-tuned LLM applications using tools such as LangChain and LangGraph. You will embed with client teams, own technical delivery from prototype to production, and demonstrate measurable KPI impact.

Key Responsibilities
- Design and deploy GenAI / LLM applications (RAG, agentic AI, fine-tuning)
- Lead production deployment on AWS, Azure or GCP
- Build scalable backend systems in Python (JavaScript a plus)
- Manage infrastructure using Kubernetes, Docker and Terraform
- Own end-to-end technical delivery across multiple client deployments
- Conduct debugging, optimisation and root cause analysis
- Communicate complex AI concepts to technical and non-technical stakeholders

Requirements
- Strong experience building and deploying LLM / Generative AI solutions
- Hands-on experience with LangChain, RAG and multi-agent systems
- Cloud ML deployment experience (AWS, Azure or GCP)
- DevOps tooling: Kubernetes, CI/CD, GitHub, GitOps
- Customer-facing technical delivery experience
- Background in Machine Learning, Data Science or AI Engineering

This role suits a Senior Machine Learning Engineer, Applied AI Engineer, AI Solutions Engineer or LLM Engineer who enjoys client impact, rapid prototyping and owning delivery in fast-moving environments. Apply now to work at the forefront of enterprise AI deployment and agentic systems.
Location:
United Kingdom
Job Type:
Full-Time
Category:
IT