Forward Deployed Engineer Generative AI / LLM Deployment


GenAI | LangChain | RAG | AWS | Kubernetes | Customer-Facing

We are hiring a Forward Deployed Engineer to lead the deployment of production-grade Generative AI and LLM solutions into enterprise environments.

This is a hands-on, customer-facing role focused on building and deploying RAG pipelines, multi-agent systems and fine-tuned LLM applications using tools such as LangChain and LangGraph. You will embed with client teams, own technical delivery from prototype to production, and demonstrate measurable KPI impact.

Key Responsibilities
- Design and deploy GenAI / LLM applications (RAG, agentic AI, fine-tuning)
- Lead production deployments on AWS, Azure or GCP
- Build scalable backend systems in Python (JavaScript a plus)
- Manage infrastructure using Kubernetes, Docker and Terraform
- Own end-to-end technical delivery across multiple client deployments
- Conduct debugging, optimisation and root cause analysis
- Communicate complex AI concepts to technical and non-technical stakeholders

Requirements
- Strong experience building and deploying LLM / Generative AI solutions
- Hands-on with LangChain, RAG and multi-agent systems
- Cloud ML deployment experience (AWS, Azure or GCP)
- DevOps tooling: Kubernetes, CI/CD, GitHub, GitOps
- Customer-facing technical delivery experience
- Background in Machine Learning, Data Science or AI Engineering

This role suits a Senior Machine Learning Engineer, Applied AI Engineer, AI Solutions Engineer or LLM Engineer who enjoys client impact, rapid prototyping and owning delivery in fast-moving environments.

Apply now to work at the forefront of enterprise AI deployment and agentic systems.
Location:
West London
Job Type:
Full-Time
