ZenML

LLMOps Tag: open_source

353 case studies with this tag

Accelerating Drug Development with AI-Powered Clinical Trial Transformation

Novartis

Novartis partnered with AWS Professional Services and Accenture to modernize their drug development infrastructure and integrate AI across clinical trials with the ambitious goal of reducing trial development cycles by at least six months. The initiative involved building a next-generation GxP-compliant data platform on AWS that consolidates fragmented data from multiple domains, implements data mesh architecture with self-service capabilities, and enables AI use cases including protocol generation and an intelligent decision system (digital twin). Early results from the patient safety domain showed 72% query speed improvements, 60% storage cost reduction, and 160+ hours of manual work eliminated. The protocol generation use case achieved 83-87% acceleration in producing compliant protocols, demonstrating significant progress toward their goal of bringing life-saving medicines to patients faster.

Accelerating Game Asset Creation with Fine-Tuned Diffusion Models

Rovio

Rovio, the Finnish gaming company behind Angry Birds, faced challenges in meeting the high demand for game art assets across multiple games and seasonal events, with artists spending significant time on repetitive tasks. The company developed "Beacon Picasso," a suite of generative AI tools powered by fine-tuned diffusion models running on AWS infrastructure (SageMaker, Bedrock, EC2 with GPUs). By training custom models on proprietary Angry Birds art data and building multiple user interfaces tailored to different user needs—from a simple Slackbot to advanced cloud-based workflows—Rovio achieved an 80% reduction in production time for specific use cases like season pass backgrounds, while maintaining brand quality standards and keeping artists in creative control. The solution enabled artists to focus on high-value creative work while AI handled repetitive variations, ultimately doubling content production capacity.

Advanced Context-Aware Code Generation with Custom Infrastructure and Parallel LLM Processing

Codeium

Codeium addressed the limitations of traditional embedding-based retrieval in code generation by developing a novel approach called M-query, which leverages vertical integration and custom infrastructure to run thousands of parallel LLM calls for context analysis. Instead of relying solely on vector embeddings, they implemented a system that can process entire codebases efficiently, resulting in more accurate and contextually aware code generation. Their approach has led to improved user satisfaction and code generation acceptance rates while maintaining rapid response times.
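
The core idea lends itself to a short sketch. Below is a minimal, hypothetical illustration of embedding-free retrieval via massively parallel LLM relevance scoring; the model, prompt, and the score_chunk/rank_context helpers are assumptions made for illustration, not Codeium's proprietary M-query implementation.

```python
# A minimal sketch of embedding-free retrieval via parallel LLM relevance
# scoring, loosely in the spirit of M-query. Model, prompt, and helper
# names are assumptions for illustration, not Codeium's implementation.
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()  # assumes OPENAI_API_KEY is set in the environment

async def score_chunk(query: str, chunk: str) -> float:
    # One cheap model call rates how relevant a single code chunk is.
    resp = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": (
            f"Rate from 0 to 10 how relevant this code is to the task.\n"
            f"Task: {query}\nCode:\n{chunk}\nAnswer with a number only."
        )}],
    )
    try:
        return float(resp.choices[0].message.content.strip())
    except (TypeError, ValueError):
        return 0.0

async def rank_context(query: str, chunks: list[str], top_k: int = 5) -> list[str]:
    # Fan out one call per chunk; with enough serving capacity, thousands
    # of these can run concurrently instead of a single vector lookup.
    scores = await asyncio.gather(*(score_chunk(query, c) for c in chunks))
    ranked = sorted(zip(chunks, scores), key=lambda p: p[1], reverse=True)
    return [chunk for chunk, _ in ranked[:top_k]]
```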

Advanced Fine-Tuning Techniques for Multi-Agent Orchestration at Scale

Amazon

Amazon teams faced challenges in deploying high-stakes LLM applications across healthcare, engineering, and e-commerce domains where basic prompt engineering and RAG approaches proved insufficient. Through systematic application of advanced fine-tuning techniques including Supervised Fine-Tuning (SFT), Proximal Policy Optimization (PPO), Direct Preference Optimization (DPO), and cutting-edge reasoning optimizations like Group Relative Policy Optimization (GRPO) and Direct Advantage Policy Optimization (DAPO), three Amazon business units achieved production-grade results: Amazon Pharmacy reduced dangerous medication errors by 33%, Amazon Global Engineering Services achieved 80% human effort reduction in inspection reviews, and Amazon A+ Content improved quality assessment accuracy from 77% to 96%. These outcomes suggest that roughly one in four high-stakes enterprise applications requires advanced fine-tuning beyond standard techniques to reach the necessary performance levels in production.

Agent-Based Workflow Automation in Spreadsheets for Non-Technical Users

Otto

Otto, founded by Suli Omar, addresses the challenge of making AI agents accessible to non-technical users by embedding agent workflows directly into spreadsheet interfaces. The company transforms unstructured data processing tasks into spreadsheet-based workflows where each cell acts as an autonomous agent capable of executing tasks, waiting for dependencies, and outputting structured results. By leveraging the familiar spreadsheet UX instead of traditional chatbot interfaces, Otto enables finance teams, accountants, and other business users to harness agent capabilities without requiring technical expertise. The solution involves sophisticated model selection across three tiers (workhorse, middle-tier, and heavy reasoning models) to optimize cost and performance, continuous evaluation through customer usage patterns, and iterative model testing to maintain service quality as new LLM capabilities emerge.

Agent-First AI Development Platform with Multi-Surface Orchestration

Google DeepMind

Google DeepMind launched Antigravity, an agent-first AI development platform powered by Gemini 3 Pro and designed to handle increasingly complex, long-running software development tasks. The platform addresses the challenge of managing AI agents operating across multiple surfaces (editor, browser, and agent manager) by introducing "artifacts": dynamic representations that help organize agent outputs and enable asynchronous feedback. The solution emerged from close collaboration between product and research teams at DeepMind, creating a feedback loop where internal dogfooding identified model gaps and drove improvements. The initial launch experienced capacity constraints due to high demand, but users who accessed the product reported significant workflow improvements from the multi-surface agent orchestration approach.

Agentic AI Architecture for Meeting Intelligence and Productivity Automation

Zoom

Zoom developed AI Companion 3.0, an agentic AI system that transforms meeting conversations into actionable outcomes through automated planning, reasoning, and execution. The system addresses the challenge of turning hours of meeting content across distributed teams into coordinated action by implementing a federated AI approach combining small language models (SLMs) with large language models (LLMs), deployed on AWS infrastructure including Bedrock and OpenSearch. The solution enables users to automatically generate meeting summaries, perform cross-meeting analysis, schedule meetings with intelligent calendar management, and prepare meeting agendas—reducing what typically takes days of administrative work to minutes while maintaining low latency and cost-effectiveness at scale.

Agentic AI Copilot for Insurance Underwriting with Multi-Tool Integration

Snorkel

Snorkel developed a specialized benchmark dataset for evaluating AI agents in insurance underwriting, leveraging their expert network of Chartered Property and Casualty Underwriters (CPCUs). The benchmark simulates an AI copilot that assists junior underwriters by reasoning over proprietary knowledge, using multiple tools including databases and underwriting guidelines, and engaging in multi-turn conversations. The evaluation revealed significant performance variations across frontier models (single digits to ~80% accuracy), with notable error modes including tool use failures (36% of conversations) and hallucinations from pretrained domain knowledge, particularly from OpenAI models which hallucinated non-existent insurance products 15-45% of the time.

Agentic AI for Legal Research: Building Deep Research in Westlaw and CoCounsel

Thomson Reuters

Thomson Reuters Labs developed Deep Research, an agentic AI system integrated into Westlaw Advantage and CoCounsel that conducts legal research with the sophistication of a practicing attorney. The system addresses the limitation of traditional RAG-based tools by autonomously planning multi-step research strategies, executing searches in parallel, selecting appropriate tools, adapting based on findings, and applying stopping criteria. Deep Research leverages specialized document-type agents, maintains memory across sessions, integrates Westlaw features as modular building blocks, and employs rigorous evaluation frameworks. The system reportedly takes about 10 minutes for comprehensive analyses and includes verification tools with inline citations, KeyCite flags, and highlighted excerpts to enable lawyers to quickly validate AI-generated insights.

Agentic AI Framework for Mainframe Modernization at Scale

Western Union / Unum

Western Union and Unum partnered with AWS and Accenture/Pega to modernize their mainframe-based legacy systems using AWS Transform, an agentic AI service designed for large-scale migration and modernization. Western Union aimed to modernize its 35-year-old money order platform to support growth targets and improve back-office operations, while Unum sought to streamline Colonial Life claims processing. The solution leveraged composable agentic AI frameworks where multiple specialized agents (AWS Transform agents, Accenture industry knowledge agents, and Pega Blueprint agents) worked together through orchestration layers. Results included converting 2.5 million lines of COBOL code in approximately 1.5 hours, reducing project timelines from 3+ months to 6 weeks for Western Union, and achieving a complete COBOL-to-cloud migration with testable applications in 3 months for Unum (compared to previous 7-year, $25 million estimates), while eliminating 7,000 annual manual hours in claims management.

Agentic AI Platform for Clinical Development and Commercial Operations in Pharmaceutical Drug Development

AstraZeneca

AstraZeneca partnered with AWS to deploy agentic AI systems across their clinical development and commercial operations to accelerate their goal of delivering 20 new medicines by 2030. The company built two major production systems: a Development Assistant serving over 1,000 users across 21 countries that integrates 16 data products with 9 agents to enable natural language queries across clinical trials, regulatory submissions, patient safety, and quality domains; and an AZ Brain commercial platform that uses 500+ AI models and agents to provide precision insights for patient identification, HCP engagement, and content generation. The implementation reduced time-to-market for various workflows from months to weeks, with field teams using the commercial assistant generating 2x more prescriptions, and reimbursement dossier authoring timelines dramatically shortened through automated agent workflows.

Agentic AI System for Construction Industry Tender Management and Quote Generation

Tendos AI

Tendos AI built an agentic AI platform to automate the tendering and quoting process for manufacturers in the construction industry. The system addresses the massive inefficiency in back-office workflows where manufacturers receive customer requests via email with attachments, manually extract information, match products, and generate quotes. Their multi-agent LLM system automatically categorizes incoming requests, extracts entities from documents up to thousands of pages, matches products from complex catalogs using semantic understanding, and generates detailed quotes for human review. Starting with a narrow focus on radiators with a single design partner, they iteratively expanded to support full workflows across multiple product categories, employing sophisticated agentic architectures with planning patterns, review agents, and extensive evaluation frameworks at each pipeline step.

Agentic Workflow Automation for Financial Operations

Ramp

Ramp, a finance automation platform serving over 50,000 customers, built a comprehensive suite of AI agents to automate manual financial workflows including expense policy enforcement, accounting classification, and invoice processing. The company evolved from building hundreds of isolated agents to consolidating around a single agent framework with thousands of skills, unified through a conversational interface called Omnichat. Their Policy Agent product, which uses LLMs to interpret and enforce expense policies written in natural language, demonstrates significant production deployment challenges and solutions including iterative development starting with simple use cases, extensive evaluation frameworks, human-in-the-loop labeling sessions, and careful context engineering. Additionally, Ramp built an internal coding agent called Ramp Inspect that now accounts for over 50% of production PRs merged weekly, illustrating how AI infrastructure investments enable broader organizational productivity gains.
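
As a rough illustration of the natural-language policy enforcement pattern described here (not Ramp's actual implementation), a single structured LLM call can turn a written policy plus an expense record into a machine-readable verdict; the JSON schema, prompt, and model choice below are all assumptions.

```python
# Hedged sketch of a natural-language policy check: policy text plus an
# expense record in, a structured verdict out. The schema, prompt, and
# model are illustrative assumptions, not Ramp's Policy Agent internals.
import json
from openai import OpenAI

client = OpenAI()

def check_expense(policy_text: str, expense: dict) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},  # force parseable JSON output
        messages=[
            {"role": "system", "content": (
                "You enforce the following expense policy. Respond as JSON "
                "with keys 'compliant' (bool) and 'reason' (string).\n"
                + policy_text)},
            {"role": "user", "content": json.dumps(expense)},
        ],
    )
    return json.loads(resp.choices[0].message.content)

# Example call (hypothetical policy and expense):
# check_expense("Meals are capped at $75 per attendee.",
#               {"merchant": "Steakhouse", "amount_usd": 210, "attendees": 2})
```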

AI Agent Evaluation Framework for Travel and Accommodation Platform

Booking.com

Booking.com developed a comprehensive evaluation framework for LLM-based agents that power their AI Trip Planner and other customer-facing features. The framework addresses the unique complexity of evaluating autonomous agents that can use external tools, reason through multi-step problems, and engage in multi-turn conversations. Their solution combines black box evaluation (focusing on task completion using judge LLMs) with glass box evaluation (examining internal decision-making, tool usage, and reasoning trajectories). The framework enables data-driven decisions about deploying agents versus simpler baselines by measuring performance gains against cost and latency tradeoffs, while also incorporating advanced metrics for consistency, reasoning quality, memory effectiveness, and trajectory optimality.
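
A toy sketch of the black-box/glass-box split might look like the following, where the trace fields, the optimality metric, and the judge wrapper are invented for illustration: the judge LLM scores task completion from the outside, while trajectory metrics come straight from logged tool calls.

```python
# Toy illustration of the two evaluation lenses: a judge LLM scores task
# completion from the outside (black box), while trajectory metrics are
# computed from logged tool calls (glass box). All fields are invented.
from dataclasses import dataclass, field

@dataclass
class AgentTrace:
    task: str
    final_answer: str
    tool_calls: list[str] = field(default_factory=list)

def trajectory_optimality(trace: AgentTrace, minimal_steps: int) -> float:
    # Glass box: 1.0 means the agent took the shortest known path.
    return minimal_steps / max(len(trace.tool_calls), 1)

def judged_complete(trace: AgentTrace, judge) -> bool:
    # Black box: `judge` is any callable wrapping a judge-LLM call.
    verdict = judge(
        f"Task: {trace.task}\nAnswer: {trace.final_answer}\n"
        "Did the answer complete the task? Reply YES or NO.")
    return verdict.strip().upper().startswith("YES")
```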

AI Agent for Automated Feature Flag Removal

Duolingo

Duolingo developed an AI agent to automate the removal of feature flags from their codebase, addressing the common engineering problem of technical debt accumulation from abandoned flags. The solution leverages OpenAI's Codex CLI running on Temporal workflow orchestration, allowing engineers to initiate automated code cleanup through an internal self-service UI. The agent clones repositories, uses AI to identify and remove obsolete feature flags across Python and Kotlin codebases, and automatically creates pull requests assigned to the requesting engineer. The tool was developed rapidly, moving from prototype to production in approximately one week, and serves as a foundation pattern for future autonomous coding agents at Duolingo.
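
The durable-workflow pattern can be sketched with Temporal's Python SDK. Duolingo's implementation details are not public, so every activity name and body below is a hypothetical stand-in for the clone, cleanup, and PR steps the summary describes.

```python
# Hedged sketch of the orchestration pattern: a Temporal workflow chains
# clone -> agent cleanup -> PR creation as retryable activities. Every
# name and body below is a hypothetical stand-in, not Duolingo's code.
from datetime import timedelta
from temporalio import activity, workflow

@activity.defn
async def clone_repo(repo: str) -> str:
    """Clone the repository and return a working-directory path."""
    ...

@activity.defn
async def run_cleanup_agent(workdir: str, flag: str) -> str:
    """Invoke the coding-agent CLI to remove the flag; return a branch."""
    ...

@activity.defn
async def open_pull_request(repo: str, branch: str, assignee: str) -> str:
    """Open a PR on the branch and assign the requesting engineer."""
    ...

@workflow.defn
class FeatureFlagCleanup:
    @workflow.run
    async def run(self, repo: str, flag: str, engineer: str) -> str:
        timeout = timedelta(minutes=15)
        workdir = await workflow.execute_activity(
            clone_repo, repo, start_to_close_timeout=timeout)
        branch = await workflow.execute_activity(
            run_cleanup_agent, args=[workdir, flag],
            start_to_close_timeout=timeout)
        return await workflow.execute_activity(
            open_pull_request, args=[repo, branch, engineer],
            start_to_close_timeout=timeout)
```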

AI Agent for Automated Root Cause Analysis in Production Systems

Cleric

Cleric developed an AI agent system to automatically diagnose and root cause production alerts by analyzing observability data, logs, and system metrics. The agent operates asynchronously, investigating alerts when they fire in systems like PagerDuty or Slack, planning and executing diagnostic tasks through API calls, and reasoning about findings to distill information into actionable root causes. The system faces significant challenges around ground truth validation, user feedback loops, and the need to minimize human intervention while maintaining high accuracy across diverse infrastructure environments.

AI Agent Solutions for Data Warehouse Access and Security

Meta

Meta developed a multi-agent system to address the growing complexity of data warehouse access management at scale. The solution employs specialized AI agents that assist data users in obtaining access to warehouse data while helping data owners manage security and access requests. The system includes data-user agents with three sub-agents for suggesting alternatives, facilitating low-risk exploration, and crafting permission requests, alongside data-owner agents that handle security operations and access management. Key innovations include partial data preview capabilities with context-aware access control, query-level granular permissions, data-access budgeting, and rule-based risk management, all supported by comprehensive evaluation frameworks and feedback loops.

AI Agent System for Automated B2B Research and Sales Pipeline Generation

Unify

Unify built an AI agent system that automates B2B research and sales pipeline generation by deploying research agents at scale to answer customer-defined questions about companies and prospects. The system evolved from initial ReAct-style agents using GPT-4 and o1 models to a more sophisticated architecture incorporating browser automation, enhanced internet search capabilities, and cost-optimized model selection, ultimately processing 36+ billion tokens monthly while reducing per-query costs from 35 cents to 10 cents through strategic model swapping and architectural improvements.

AI Agents for Automated Product Quality Testing and Bug Detection

Coinbase

Coinbase developed an AI-powered QA agent (qa-ai-agent) to dramatically scale their product testing efforts and improve quality assurance. The system addresses the challenge of maintaining high product quality standards while reducing manual testing overhead and costs. The AI agent processes natural language testing requests, uses visual and textual data to execute tests, and leverages LLM reasoning to identify issues. Results showed the agent detected 300% more bugs than human testers in the same timeframe, achieved 75% accuracy (compared to 80% for human testers), enabled new test creation in 15 minutes versus hours, and reduced costs by 86% compared to traditional manual testing, with the goal of replacing 75% of manual testing with AI-driven automation.

AI Agents for Interpretability Research: Experimenter Agents in Production

Goodfire

Goodfire, an AI interpretability research company, deployed AI agents extensively for conducting experiments in their research workflow over several months. They distinguish between "developer agents" (for software development) and "experimenter agents" (for research and discovery), identifying key architectural differences needed for the latter. Their solution, code-named Scribe, leverages Jupyter notebooks with interactive, stateful access via MCP (Model Context Protocol), enabling agents to iteratively run experiments across domains like genomics, vision transformers, and diffusion models. Results showed agents successfully discovering features in genomics models, performing circuit analysis, and executing complex interpretability experiments, though validation, context engineering, and preventing reward hacking remain significant challenges that require human oversight and critic systems.

AI Strategy and LLM Application Development in Swedish Public Sector

Swedish Tax Authority

The Swedish Tax Authority (Skatteverket) has been on a multi-decade digitalization journey, progressively incorporating AI and large language models into production systems to automate and enhance tax services. The organization has developed various NLP applications including text categorization, transcription, OCR pipelines, and question-answering systems using RAG architectures. They have tested both open-source models (Llama 3.1, Mixtral 8x7B, Cohere) and commercial solutions (GPT-3.5), finding that open-source models perform comparably for simpler queries while commercial models excel at complex questions. The Authority operates within a regulated environment requiring on-premise deployment for sensitive data, adopting Agile/SAFe methodologies and building reusable AI infrastructure components that can serve multiple business domains across different public sector silos.

AI-Augmented Cybersecurity Triage Using Graph RAG for Cloud Security Operations

Deloitte

Deloitte developed a Cybersecurity Intelligence Center to help SecOps engineers manage the overwhelming volume of security alerts generated by cloud security platforms like Wiz and CrowdStrike. Using AWS's open-source Graph RAG Toolkit, Deloitte built "AI for Triage," a human-in-the-loop system that combines long-term organizational memory (stored in hierarchical lexical graphs) with short-term operational data (document graphs) to generate AI-assisted triage records. The solution reduced 50,000 security issues across 7 AWS domains to approximately 1,300 actionable items, converting them into over 6,500 nodes and 19,000 relationships for contextual analysis. This approach enables SecOps teams to make informed remediation decisions based on organizational policies, historical experiences, and production system context, while maintaining human accountability and creating automation recipes rather than brittle code-based solutions.

AI-Driven Collateral Allocation Optimization in Fintech

Mercado Libre

Mercado Pago, the fintech arm of Mercado Libre, faced the challenge of optimizing collateral allocation across billions of dollars in credit lines secured from major banks, requiring daily selection from millions of loans with complex contractual constraints. The company developed Enigma, a solution leveraging linear programming via Google OR-Tools combined with a custom grouping heuristic to handle scalability challenges. While the article primarily focuses on traditional optimization techniques rather than LLMs, it hints at future AI agent exploration for enhanced analytics, strategic constraint proposals, and automated translation of contractual conditions into mathematical constraints, representing a potential future evolution toward LLM integration in financial operations.

AI-Driven Documentation Generation for dbt Data Models

Loblaw Digital

Loblaw Digital addressed the challenge of maintaining comprehensive documentation for over 3,000 dbt data models across their analytics engineering infrastructure. Manual documentation proved labor-intensive and often led to incomplete or outdated documentation that confused business users. The team implemented an LLM-based solution using the open-source dbt-documentor tool integrated with Google Cloud's Vertex AI platform, which automatically generates descriptions for models and their columns by ingesting dbt's manifest.json files without accessing actual data. This automation significantly improved documentation coverage and productivity while maintaining data security, enabling analysts to better understand model purposes and dependencies through the dbt documentation website.

AI-Driven Incident Response and Automated Remediation for Digital Media Platform

iHeart

iHeartMedia, serving 250 million monthly users across broadcast radio, digital streaming, and podcasting platforms, faced significant operational challenges with incident response requiring engineers to navigate multiple monitoring systems, VPNs, and dashboards during critical 3 AM outages. The company implemented a multi-agent AI system using Amazon Bedrock AgentCore and the Strands Agents framework to automate incident triage, root cause analysis, and remediation. The solution reduced triage response time dramatically (from minutes of manual investigation to 30-60 seconds), improved operational efficiency by eliminating repetitive manual tasks, and enabled knowledge preservation across incidents while maintaining 24/7 uptime requirements for their infrastructure handling 5-7 billion requests per month.

AI-Driven Media Analysis and Content Assembly Platform for Large-Scale Video Archives

Bloomberg Media

Bloomberg Media, facing challenges in analyzing and leveraging 13 petabytes of video content growing at 3,000 hours per day, developed a comprehensive AI-driven platform to analyze, search, and automatically create content from their massive media archive. The solution combines multiple analysis approaches including task-specific models, vision language models (VLMs), and multimodal embeddings, unified through a federated search architecture and knowledge graphs. The platform enables automated content assembly using AI agents to create platform-specific cuts from long-form interviews and documentaries, dramatically reducing time to market while maintaining editorial trust and accuracy. This "disposable AI strategy" emphasizes modularity, versioning, and the ability to swap models and embeddings without re-engineering entire workflows, allowing Bloomberg to adapt quickly to evolving AI capabilities while expanding reach across multiple distribution platforms.

AI-Powered Automated GraphQL Schema Cleanup

Whatnot

Whatnot, a livestream shopping platform, faced significant technical debt in their GraphQL schema with over 2,600 unused fields accumulated from deprecated features and old endpoints. Manual cleanup was time-consuming and risky, requiring 1-2 hours per field and deep domain knowledge. The engineering team built an AI subagent integrated into a GitHub Action that automatically identifies unused fields through traffic analysis and generates pull requests to safely remove them. The agent follows the same process an engineer would—removing schema fields, resolvers, dead code, and updating tests—but operates autonomously in the background. Running daily at $1-3 per execution, the system has successfully removed 24 of approximately 200 unused root fields with minimal human intervention, requiring edits to only three PRs, transforming schema maintenance from a neglected one-time project into an ongoing automated process.

AI-Powered Automated Issue Resolution Achieving State-of-the-Art Performance on SWE-bench

Trae

Trae developed an AI engineering system that achieved 70.6% accuracy on the SWE-bench Verified benchmark, setting a new state-of-the-art record for automated software issue resolution. The solution combines multiple large language models (Claude 3.7 Sonnet, Gemini 2.5 Pro, and OpenAI o4-mini) in a sophisticated multi-stage pipeline featuring generation, filtering, and voting mechanisms. The system uses specialized agents including a Coder agent for patch generation, a Tester agent for regression testing, and a Selector agent that employs both syntax-based voting and multi-selection voting to identify the best solution from multiple candidate patches.
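
Syntax-based voting can be illustrated at a toy level: normalize each candidate patch so that cosmetically different but logically identical patches collapse into one bucket, then pick the largest bucket. This is a simplified sketch of the general idea, not Trae's actual selector.

```python
# Toy version of syntax-based voting over candidate patches: candidates
# that normalize to the same form pool their votes, and the biggest
# cluster wins. A simplified sketch of the idea, not Trae's pipeline.
import re
from collections import Counter

def normalize(patch: str) -> str:
    # Strip comments and collapse whitespace so cosmetically different
    # patches with identical logic land in the same bucket. (Assumes
    # '#' comments; a real system would normalize per language.)
    return re.sub(r"\s+", " ", re.sub(r"#.*", "", patch)).strip()

def select_patch(candidates: list[str]) -> str:
    tally = Counter(normalize(p) for p in candidates)
    winner, _ = tally.most_common(1)[0]
    # Return an original (un-normalized) patch from the winning cluster.
    return next(p for p in candidates if normalize(p) == winner)
```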

AI-Powered Autonomous Infrastructure Monitoring and Self-Healing System

Railway

This case study presents a proof-of-concept system for autonomous infrastructure monitoring and self-healing using AI coding agents. The presenter demonstrates a workflow that automatically detects issues in deployed services on Railway (memory leaks, slow database queries, high error rates), analyzes metrics and logs using LLMs to generate diagnostic plans, and then deploys OpenCode—an open-source AI coding agent—to automatically create pull requests with fixes. The system leverages durable workflows via Inngest for reliability, combines multiple data sources (CPU/memory metrics, HTTP metrics, logs), and uses LLMs to analyze infrastructure health and generate remediation plans. While presented as a demo/concept, the approach showcases how LLMs can move from alerting engineers to autonomously proposing code-level fixes for production issues.

AI-Powered Business Assistant for Solopreneurs

Jimdo

Jimdo, a European website builder serving over 35 million solopreneurs across 190 countries, needed to help their customers—who often lack expertise in marketing, sales, and business strategy—drive more traffic and conversions to their websites. The company built Jimdo Companion, an AI-powered business advisor using LangChain.js and LangGraph.js for orchestration and LangSmith for observability. The system features two main components: Companion Dashboard (an agentic business advisor that queries 10+ data sources to deliver personalized insights) and Companion Assistant (a ChatGPT-like interface that adapts to each business's tone of voice). The solution resulted in 50% more first customer contacts within 30 days and 40% more overall customer activity for users with access to Companion.

AI-Powered Clinical Documentation with Multi-Region Healthcare Compliance

Heidi Health

Heidi Health developed an ambient AI scribe to reduce the administrative burden on healthcare clinicians by automatically generating clinical notes from patient consultations. The company faced significant LLMOps challenges including building confidence in non-deterministic AI outputs through "clinicians in the loop" evaluation processes, scaling clinical validation beyond small teams using synthetic data generation and LLM-as-judge approaches, and managing global expansion across regions with different data sovereignty requirements, model availability constraints, and regulatory compliance needs. Their solution involved standardizing infrastructure-as-code deployments across AWS regions, using a hybrid approach of Amazon Bedrock for immediate availability and EKS for self-hosted model control, and integrating clinical ambassadors in each region to validate medical accuracy and local practice patterns. The platform now serves over 370,000 clinicians processing 10 million consultations per month globally.

AI-Powered Code Editor with Multi-Model Integration and Agentic Workflows

Cursor

Cursor, an AI-powered code editor, has scaled to over $300 million in revenue by integrating multiple language models including Claude 3.5 Sonnet for advanced coding tasks. The platform evolved from basic tab completion to sophisticated multi-file editing capabilities, background agents, and agentic workflows. By combining intelligent retrieval systems with large language models, Cursor enables developers to work across complex codebases, automate repetitive tasks, and accelerate software development through features like real-time code completion, multi-file editing, and background task execution in isolated environments.

AI-Powered Code Review Assistant at Enterprise Scale

Microsoft

Microsoft developed an AI-powered code review assistant to address friction in their pull request (PR) workflow, where reviewers spent time on low-value feedback while meaningful concerns were overlooked, and PRs often waited days for review. The solution integrated an AI assistant into the existing PR workflow that automatically reviews code, flags issues, suggests improvements, generates PR summaries, and answers questions interactively. This system now supports over 90% of PRs across Microsoft, impacting more than 600,000 pull requests monthly, and has resulted in 10-20% median PR completion time improvements for early adopter repositories, improved code quality through early bug detection, and accelerated developer learning, particularly for new hires.

AI-Powered Code Review Platform Using Abstract Syntax Trees and LLM Context

Baz

Baz is building an AI code review agent that addresses the challenge of understanding complex codebases at scale. The platform combines Abstract Syntax Trees (AST) with LLM semantic understanding to provide automated code reviews that go beyond traditional static analysis. By integrating context from multiple sources including code structure, Jira/Linear tickets, CI logs, and deployment patterns, Baz aims to replicate the knowledge of a staff engineer who understands not just the code but the entire business context. The solution has evolved from basic reviews to catching performance issues and schema changes, with customers using it to review code generated by AI coding assistants like Cursor and Codex.

AI-Powered Contact Center Copilot: From Research to Enterprise-Scale Production

Cresta / OpenAI

Cresta, founded in 2017 by Stanford PhD students with OpenAI research experience, developed an AI copilot system for contact center agents that provides real-time suggestions during customer conversations. The company tackled the challenge of transforming academic NLP and reinforcement learning research into production-grade enterprise software by building domain-specific models fine-tuned on customer conversation data. Starting with Intuit as their first customer through an unconventional internship arrangement, they demonstrated measurable ROI through A/B testing, showing improved conversion rates and agent productivity. The solution evolved from custom LSTM and transformer models to leveraging pre-trained foundation models like GPT-3/4 with fine-tuning, ultimately serving Fortune 500 customers across telecommunications, airlines, and banking with demonstrated value including a pilot generating $100 million in incremental revenue.

AI-Powered Conversational Assistant for Streamlined Home Buying Experience

Rocket

Rocket Companies, a Detroit-based FinTech company, developed Rocket AI Agent to address the overwhelming complexity of the home buying process by providing 24/7 personalized guidance and support. Built on Amazon Bedrock Agents, the AI assistant combines domain knowledge, personalized guidance, and actionable capabilities to transform client engagement across Rocket's digital properties. The implementation resulted in a threefold increase in conversion rates from web traffic to closed loans, 85% reduction in transfers to customer care, and 68% customer satisfaction scores, while enabling seamless transitions between AI assistance and human support when needed.

AI-Powered CRM Insights with RAG and Text-to-SQL

TP ICAP

TP ICAP faced the challenge of extracting actionable insights from tens of thousands of vendor meeting notes stored in their Salesforce CRM system, where business users spent hours manually searching through records. Using Amazon Bedrock, their Innovation Lab built ClientIQ, a production-ready solution that combines Retrieval Augmented Generation (RAG) and text-to-SQL approaches to transform hours of manual analysis into seconds. The solution uses Amazon Bedrock Knowledge Bases for unstructured data queries, automated evaluations for quality assurance, and maintains enterprise-grade security through permission-based access controls. Since launch with 20 initial users, ClientIQ has driven a 75% reduction in time spent on research tasks and improved insight quality with more comprehensive and contextual information being surfaced.

AI-Powered Customer Feedback Analysis at Scale

GitHub

GitHub faced the challenge of manually processing vast amounts of customer feedback from support tickets, with data scientists spending approximately 80% of their time on data collection and organization tasks. To address this, GitHub's Customer Success Engineering team developed an internal AI analytics tool that combines open-source machine learning models (BERTopic with BERT embeddings and HDBSCAN clustering) to identify patterns in feedback, and GPT-4 to generate human-readable summaries of customer pain points. This system transformed their feedback analysis from manual classification to automated trend identification, enabling faster identification of common issues, improved feature prioritization, data-driven decision making, and discovery of self-service opportunities for customers.
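
Because the components named here are open source, the pipeline is easy to sketch. The following uses BERTopic's public API (its defaults are sentence-transformer embeddings plus HDBSCAN clustering) followed by a summarization pass; the prompt, model, and sampling cap are assumptions rather than GitHub's internal tool.

```python
# Minimal sketch of the clustering + summarization pipeline using the
# public BERTopic API and an LLM call per topic. Prompt, model, and the
# 20-example cap are illustrative assumptions.
from bertopic import BERTopic
from openai import OpenAI

def summarize_feedback(tickets: list[str]) -> dict[int, str]:
    topics, _ = BERTopic().fit_transform(tickets)
    client = OpenAI()
    summaries = {}
    for topic_id in set(topics):
        if topic_id == -1:   # -1 is BERTopic's outlier bucket
            continue
        examples = [t for t, tid in zip(tickets, topics) if tid == topic_id][:20]
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": (
                "Summarize the common customer pain point in these "
                "support tickets:\n" + "\n---\n".join(examples))}],
        )
        summaries[topic_id] = resp.choices[0].message.content
    return summaries
```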

AI-Powered Customer Interest Generation for Personalized E-commerce Recommendations

Wayfair

Wayfair developed a GenAI-powered system to generate nuanced, free-form customer interests that go beyond traditional behavioral models and fixed taxonomies. Using Google's Gemini LLM, the system processes customer search queries, product views, cart additions, and purchase history to infer deep insights about preferences, functional needs, and lifestyle values. These LLM-generated interests power personalized product carousels on the homepage and product detail pages, driving measurable engagement and revenue gains while enabling more transparent and adaptable personalization at scale across millions of customers.

AI-Powered Data Copilot for Autonomous Analysis in IDEs

BlaBlaCar

BlaBlaCar developed an AI-powered Data Copilot to address the inefficient workflow between Software Engineers and Data Analysts, where engineers lacked data warehouse access and analysts were overwhelmed with repetitive queries. The solution embeds an LLM-powered assistant directly in VS Code that connects to BigQuery, provides contextual business logic from curated queries, generates SQL and Python code with unit tests, and enables engineers to perform their own analyses with data health checks as guardrails. The tool leverages a "zero-infrastructure" RAG approach using VS Code's native capabilities and GitHub Copilot, treating analyses as code artifacts in pull requests that analysts review, resulting in faster question resolution (from weeks to minutes) and freeing analysts to focus on high-value modeling work.

AI-Powered Developer Productivity and Product Discovery at Wholesale Marketplace

Faire

Faire, a wholesale marketplace connecting brands and retailers, implemented multiple AI initiatives across their engineering organization to enhance both internal developer productivity and external customer-facing features. The company deployed agentic development workflows using GitHub Copilot and custom orchestration systems to automate repetitive coding tasks, introduced natural-language and image-based search capabilities for retailers seeking products, and built a hybrid Python-Kotlin architecture to support multi-step AI agents that compose purchasing recommendations. These efforts aimed to reduce manual workflows, accelerate product discovery, and deliver more personalized experiences for their wholesale marketplace customers.

AI-Powered Developer Productivity Platform with MCP Servers and Agent-Based Automation

Bloomberg

Bloomberg's Technology Infrastructure team, led by Lei, implemented an enterprise-wide AI coding platform to enhance developer productivity across 9,000+ engineers working with one of the world's largest JavaScript codebases. Starting approximately two years before this presentation, the team moved beyond initial experimentation with various AI coding tools to focus on strategic use cases: automated code uplift agents for patching and refactoring, and incident response agents for troubleshooting. To avoid organizational chaos, they built a platform-as-a-service (PaaS) approach featuring a unified AI gateway for model selection, an MCP (Model Context Protocol) directory/hub for tool discovery, and standardized tool creation/deployment infrastructure. The solution was supported by integration into onboarding training programs and cross-organizational communities. Results included improved adoption, reduced duplication of efforts, faster proof-of-concepts, and notably, a fundamental shift in the cost function of software engineering that enabled teams to reconsider trade-offs in their development practices.

AI-Powered Developer Tools for Code Quality and Test Generation

Uber

Uber's developer platform team built AI-powered developer tools using LangGraph to improve code quality and automate test generation for their 5,000 engineers. Their approach focuses on three pillars: targeted product development for developer workflows, cross-cutting AI primitives, and intentional technology transfer. The team developed Validator, an IDE-integrated tool that flags best practices violations and security issues with automatic fixes, and AutoCover, which generates comprehensive test suites with coverage validation. These tools demonstrate the successful deployment of multi-agent systems in production, achieving measurable improvements including thousands of daily fix interactions, 10% increase in developer platform coverage, and 21,000 developer hours saved through automated test generation.

AI-Powered Digital Co-Workers for Customer Support and Business Process Automation

Neople

Neople, a European startup founded almost three years ago, has developed AI-powered "digital co-workers" (called Neeles) primarily targeting customer success and service teams in e-commerce companies across Europe. The problem they address is the repetitive, high-volume work that customer service agents face, which reduces job satisfaction and efficiency. Their solution evolved from providing AI-generated response suggestions to human agents, to fully automated ticket responses, to executing actions across multiple systems, and finally to enabling non-technical users to build custom workflows conversationally. The system now serves approximately 200 customers, with AI agents handling repetitive tasks autonomously while human agents focus on complex cases. Results include dramatic improvements in first response rates (from 10% to 70% in some cases), reduced resolution times, and expanded use cases beyond customer service into finance, operations, and marketing departments.

AI-Powered Engineering Team Management and Code Review Platform

Entelligence

Entelligence addresses the challenges of managing large engineering teams by providing AI agents that handle code reviews, documentation maintenance, and team performance analytics. The platform combines LLM-based code analysis with learning from team feedback to provide contextually appropriate reviews, while maintaining up-to-date documentation and offering insights into engineering productivity beyond traditional metrics like lines of code.

AI-Powered Escrow Agent for Programmable Money Settlement

Circle

Circle developed an experimental AI-powered escrow agent system that combines OpenAI's multimodal models with their USDC stablecoin and smart contract infrastructure to automate agreement verification and payment settlement. The system uses AI to parse PDF contracts, extract key terms and payment amounts, deploy smart contracts programmatically, and verify work completion through image analysis, enabling near-instant settlement of escrow transactions while maintaining human oversight for final approval.

AI-Powered Fan Engagement and Content Personalization for Global Football Audiences

DFL / Bundesliga

DFL / Bundesliga, the organization behind Germany's premier football league, partnered with AWS to enhance fan engagement for their 1 billion global fans through AI and generative AI solutions. The primary challenges included personalizing content at scale across diverse geographies and languages, automating manual content creation processes, and making decades of archival footage searchable and accessible. The solutions implemented included an AI-powered live ticker providing real-time commentary in multiple languages and styles within 7 seconds of events, an intelligent metadata generation (IGM) system to analyze 9+ petabytes of historical footage using multimodal AI, automated content localization for speech-to-speech and speech-to-text translation, AI-generated "Stories" format content from existing articles, and personalized app experiences. Results demonstrated significant impact: 20% increase in overall app usage, 67% increase in articles read through personalization, 75% reduction in processing time for localized content with 5x content output, 2x increase in app dwell time from AI-generated stories, and 67% story retention rate indicating strong user engagement.

AI-Powered Fax Processing Automation for Healthcare Referrals

Providence

Providence Health System automated the processing of over 40 million annual faxes using GenAI and MLflow on Databricks to transform manual referral workflows into real-time automated triage. The system combines OCR with GPT-4o models to extract referral data from diverse document formats and integrates seamlessly with Epic EHR systems, eliminating months-long backlogs and freeing clinical staff to focus on patient care across 1,000+ clinics.

AI-Powered Food Image Generation System at Scale

Delivery Hero

Delivery Hero built a comprehensive AI-powered image generation system to address the problem that 86% of food products lacked images, which significantly impacted conversion rates. The solution involved implementing both text-to-image generation and image inpainting workflows using Stable Diffusion models, with extensive optimization for cost efficiency and quality assurance. The system successfully generated over 100,000 production images, achieved 6-8% conversion rate improvements, and reduced costs to under $0.003 per image through infrastructure optimization and model fine-tuning.
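
The inpainting half of the workflow maps onto the open-source diffusers API roughly as below; the checkpoint, prompt template, and helper function are illustrative assumptions, not Delivery Hero's fine-tuned production setup.

```python
# Illustrative inpainting pass with the open-source diffusers API, the
# technique family described above; the checkpoint, prompt template, and
# helper are assumptions, not Delivery Hero's fine-tuned setup.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

def fill_dish_photo(base: Image.Image, mask: Image.Image, dish: str) -> Image.Image:
    # The mask is white where the missing food item should be generated.
    return pipe(
        prompt=f"professional food photography of {dish}, top-down, appetizing",
        image=base,
        mask_image=mask,
        num_inference_steps=30,
    ).images[0]
```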

AI-Powered Government Service Assistant with Advanced RAG and Multi-Agent Architecture

City of Buenos Aires

The Government of the City of Buenos Aires partnered with AWS to enhance their existing WhatsApp-based AI assistant "Boti" with advanced generative AI capabilities to help citizens navigate over 1,300 government procedures. The solution implemented an agentic AI system using LangGraph and Amazon Bedrock, featuring custom input guardrails and a novel reasoning retrieval system that achieved 98.9% top-1 retrieval accuracy—a 12.5-17.5% improvement over standard RAG methods. The system successfully handles 3 million conversations monthly while maintaining safety through content filtering and delivering responses in culturally appropriate Rioplatense Spanish dialect.
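
A schematic of the guardrail-then-retrieve wiring in LangGraph appears below; the state shape, checks, and retrieval stub are placeholders invented for illustration, not the production Boti system.

```python
# Schematic LangGraph wiring for the guardrail-then-retrieve pattern:
# an input guardrail node either ends the conversation or hands off to
# a retrieval/answer node. State shape and node logic are placeholders.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class BotState(TypedDict):
    question: str
    safe: bool
    answer: str

def guardrail(state: BotState) -> BotState:
    # Stub: production systems run content-filter models here.
    state["safe"] = "password" not in state["question"].lower()
    return state

def answer(state: BotState) -> BotState:
    # Stub for reasoning retrieval over the procedures corpus + LLM reply.
    state["answer"] = f"(retrieved procedure for: {state['question']})"
    return state

builder = StateGraph(BotState)
builder.add_node("guardrail", guardrail)
builder.add_node("answer", answer)
builder.add_edge(START, "guardrail")
builder.add_conditional_edges(
    "guardrail", lambda s: "answer" if s["safe"] else END)
builder.add_edge("answer", END)
boti = builder.compile()
# boti.invoke({"question": "How do I renew a licence?", "safe": False, "answer": ""})
```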

AI-Powered Healthcare: Building Reliable Care Agents in Production

Sword Health

Sword Health, a digital health company specializing in remote physical therapy, developed Phoenix, an AI care agent that provides personalized support to patients during and after rehabilitation sessions while acting as a co-pilot for physical therapists. The company faced challenges deploying LLMs in a highly regulated healthcare environment, requiring robust guardrails, evaluation frameworks, and human oversight. Through iterative development focusing on prompt engineering, RAG for domain knowledge, comprehensive evaluation systems combining human and LLM-based ratings, and continuous data monitoring, Sword Health successfully shipped AI-powered features that improve care accessibility and efficiency while maintaining clinical safety through human-in-the-loop validation for all clinical decisions.

AI-Powered Hybrid Approach for Large-Scale Test Migration from Enzyme to React Testing Library

Slack

Slack faced the challenge of migrating 15,500 Enzyme test cases to React Testing Library to enable upgrading to React 18, an effort estimated at over 10,000 engineering hours across 150+ developers. The team developed an innovative hybrid approach combining Abstract Syntax Tree (AST) transformations with Large Language Models (LLMs), specifically Claude 2.1, to automate the conversion process. The solution involved a sophisticated pipeline that collected context including DOM trees, performed partial AST conversions with annotations, and leveraged LLMs to handle complex cases that traditional codemods couldn't address. This hybrid approach achieved an 80% success rate for automated conversions and saved developers 22% of their migration time, ultimately enabling the complete migration by May 2024.
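
The control flow of the hybrid pipeline can be sketched as follows; run_codemod stands in for the AST pass and is stubbed here, and the prompt and model are illustrative (Slack's pipeline called Claude 2.1), so treat this as the shape of the approach rather than their code.

```python
# Shape of the hybrid Enzyme-to-RTL conversion: deterministic AST codemod
# first, then an LLM pass over whatever the codemod annotated as
# unconvertible. All names and prompts here are illustrative stand-ins.
from openai import OpenAI

client = OpenAI()

def run_codemod(test_source: str) -> tuple[str, list[str]]:
    # Placeholder for the jscodeshift-style AST pass, which would rewrite
    # mechanical patterns and emit annotations for spans it cannot handle.
    return test_source, ["/* unconverted: enzyme wrapper.find(...) */"]

def convert_test(test_source: str, rendered_dom: str) -> str:
    partial, annotations = run_codemod(test_source)
    resp = client.chat.completions.create(
        model="gpt-4o",  # stand-in; Slack's pipeline used Claude 2.1
        messages=[{"role": "user", "content": (
            "Convert the remaining Enzyme calls in this test to React "
            "Testing Library, keeping behavior identical.\n"
            f"Rendered DOM for context:\n{rendered_dom}\n"
            f"Annotated unconverted spans: {annotations}\n"
            f"Test source:\n{partial}"
        )}],
    )
    return resp.choices[0].message.content
```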

AI-Powered Hyper-Personalized Email Campaign Automation

PromptLayer

PromptLayer built an automated AI sales system that creates hyper-personalized email campaigns by using three specialized AI agents to research leads, score their fit, generate subject lines, and draft tailored email sequences. The system integrates with existing sales tools like Apollo, HubSpot, and Make.com, achieving 50-60% open rates and ~7% positive reply rates while enabling non-technical sales teams to manage prompts and content directly through PromptLayer's platform without requiring engineering support.

AI-Powered Legal Document Analysis and Hearing Transcription for Social Security Disability Law

LexMed

LexMed developed an AI-native suite of tools leveraging large language models to streamline pain points for social security disability attorneys who advocate for claimants applying for disability benefits. The solution addresses the challenge of analyzing thousands of pages of medical records to find evidence that maps to complex regulatory requirements, as well as transcribing and auditing administrative hearings for procedural errors. By using LLMs with RAG architecture and custom logic, the platform automates the previously manual process of finding "needles in haystacks" within medical documentation and identifying regulatory compliance issues, enabling attorneys to provide more effective advocacy for all clients regardless of case complexity.

AI-Powered Marketing Content Generation and Compliance Platform at Scale

Volkswagen

Volkswagen Group Services partnered with AWS to build a production-scale generative AI platform for automotive marketing content generation and compliance evaluation. The problem was a slow, manual content supply chain that took weeks to months, created confidentiality risks with pre-production vehicles, and faced massive compliance bottlenecks across 10 brands and 200+ countries. The solution involved fine-tuning diffusion models on proprietary vehicle imagery (including digital twins from CAD), automated prompt enhancement using LLMs, and multi-stage image evaluation using vision-language models for both component-level accuracy and brand guideline compliance. Results included massive time savings (weeks to minutes), automated compliance checks across legal and brand requirements, and a reusable shared platform supporting multiple use cases across the organization.

AI-Powered Menu Description Generation for Restaurant Platforms

DoorDash

DoorDash developed a production-grade AI system to automatically generate menu item descriptions for restaurants on their platform, addressing the challenge that many small restaurant owners face in creating compelling descriptions for every menu item. The solution combines three interconnected systems: a multimodal retrieval system that gathers relevant data even when information is sparse, a learning and generation system that adapts to each restaurant's unique voice and style, and an evaluation system that incorporates both automated and human feedback loops to ensure quality and continuous improvement.

AI-Powered Network Operations Assistant with Multi-Agent RAG Architecture

Swisscom

Swisscom, Switzerland's leading telecommunications provider, developed a Network Assistant using Amazon Bedrock to address the challenge of network engineers spending over 10% of their time manually gathering and analyzing data from multiple sources. The solution implements a multi-agent RAG architecture with specialized agents for documentation management and calculations, combined with an ETL pipeline using AWS services. The system is projected to reduce routine data retrieval and analysis time by 10%, saving approximately 200 hours per engineer annually while maintaining strict data security and sovereignty requirements for the telecommunications sector.

AI-Powered Performance Optimization System for Go Code

Uber

Uber developed PerfInsights, a production system that combines runtime profiling data with generative AI to automatically detect performance antipatterns in Go services and recommend optimizations. The system addresses the challenge of expensive manual performance tuning by using LLMs to analyze the most CPU-intensive functions identified through profiling, applying sophisticated prompt engineering and validation techniques including LLM juries and rule-based checkers to reduce false positives from over 80% to the low teens. This has resulted in hundreds of merged optimization diffs, significant engineering time savings (93% reduction from 14.5 hours to 1 hour per issue), and measurable compute cost reductions across Uber's Go services.
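
The LLM-jury validation step cuts false positives by requiring agreement across independent model calls; here is a toy version under assumed model names and a simple majority threshold, not Uber's actual checker.

```python
# Toy "LLM jury": several independent model calls vote on whether a
# flagged antipattern is real, and only majority-approved findings
# survive. Panel composition and threshold are assumptions.
from openai import OpenAI

client = OpenAI()
JURY_MODELS = ["gpt-4o", "gpt-4o-mini", "gpt-4.1"]  # hypothetical panel

def jury_validates(function_src: str, finding: str) -> bool:
    votes = 0
    for model in JURY_MODELS:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": (
                "Is this a genuine performance antipattern in the Go code "
                f"below?\nFinding: {finding}\nCode:\n{function_src}\n"
                "Answer YES or NO.")}],
        )
        votes += resp.choices[0].message.content.strip().upper().startswith("YES")
    return votes >= 2  # simple majority of three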

AI-Powered Self-Remediation Loop for Large-Scale Kubernetes Operations

Salesforce

Salesforce's Hyperforce Kubernetes platform team manages over 1,400 clusters scaling millions of pods, facing significant operational challenges with engineers spending over 1,000 hours monthly on support tasks. They developed a multi-agent AI-powered self-remediation loop built on AWS Bedrock's multi-agent collaboration framework, integrating with their existing monitoring and automation tools (Prometheus, K8sGPT, Argo CD, and custom tools like Sloop and Periscope). The solution features a manager AI agent that orchestrates multiple specialized worker agents to retrieve telemetry data, perform root cause analysis using RAG-augmented runbooks, and execute safe remediation actions with human-in-the-loop approval via Slack. The implementation achieved a 30% improvement in troubleshooting time and saved approximately 150 hours per month in operational toil, with plans to expand capabilities using knowledge graphs and advanced anomaly detection.

AI-Powered Similar Issues Detection for Project Management

Linear

Linear developed a Similar Issues matching feature to address the persistent challenge of duplicate issues and backlog management in large team workflows. The solution uses large language models to generate vector embeddings that capture the semantic meaning of issue descriptions, enabling accurate detection of related or duplicate issues across their project management platform. The feature integrates at multiple touchpoints—during issue creation, in the Triage inbox, and within support integrations like Intercom—allowing teams to identify duplicates before they enter the system. The implementation uses PostgreSQL with pgvector on Google Cloud Platform for vector storage and search, with partitioning strategies to handle tens of millions of issues at scale.
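
A hypothetical minimal version of the lookup with PostgreSQL and pgvector follows; the schema, embedding model, and index choice are assumptions, since Linear's production code is not public.

```python
# Hypothetical sketch of embedding-based duplicate detection with
# PostgreSQL + pgvector; schema and model choice are illustrative.
import psycopg
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> list[float]:
    return client.embeddings.create(
        model="text-embedding-3-small", input=text
    ).data[0].embedding

def similar_issues(conn: psycopg.Connection, title: str, body: str, k: int = 5):
    vec = embed(f"{title}\n{body}")
    # "<=>" is pgvector's cosine-distance operator; assumes a table
    # issues(id, title, embedding vector(1536)) with an HNSW index.
    return conn.execute(
        "SELECT id, title, embedding <=> %s::vector AS distance "
        "FROM issues ORDER BY distance LIMIT %s",
        (str(vec), k),
    ).fetchall()
```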

AI-Powered SNAP Benefits Notice Interpretation System

Propel

Propel developed an AI system to help SNAP (food stamp) recipients better understand official notices they receive. The system uses LLMs to analyze notice content and provide clear explanations of importance and required actions. The prototype successfully interprets complex government communications and provides simplified, actionable guidance while maintaining high safety standards for this sensitive use case.

AI-Powered Supply Chain Visibility and ETA Prediction System

Toyota / IBM

Toyota partnered with IBM and AWS to develop an AI-powered supply chain visibility platform that addresses the automotive industry's challenges with delivery prediction accuracy and customer transparency. The system uses machine learning models (XGBoost, AdaBoost, random forest) for time series forecasting and regression to predict estimated time of arrival (ETA) for vehicles throughout their journey from manufacturing to dealer delivery. The solution integrates real-time event streaming, feature engineering with Amazon SageMaker, and batch inference every four hours to provide near real-time predictions. Additionally, the team implemented an agentic AI chatbot using AWS Bedrock to enable natural language queries about vehicle status. The platform provides customers and dealers with visibility into vehicle journeys through a "pizza tracker" style interface, improving customer satisfaction and enabling proactive delay management.

AI-Powered Travel Assistant for Rail and Coach Platform

Trainline

Trainline, the world's leading rail and coach ticketing platform serving 27 million customers across 40 countries, developed an AI-powered travel assistant to address underserved customer needs during the travel experience. The company identified that while they excelled at selling tickets, customers lacked support during their journeys when disruptions occurred or they had questions about their travel. They built an agentic AI system using LLMs that could answer diverse customer questions ranging from refund requests to real-time train information to unusual queries like bringing pets or motorbikes on trains. The solution went from concept to production in five months, launching in February 2025, and now handles over 300,000 conversations monthly. The system uses a central orchestrator with multiple tools including RAG with 700,000 pages of curated content, real-time train data APIs, terms and conditions lookups, and automated refund capabilities, all protected by multiple layers of guardrails to ensure safety and factual accuracy.

Architecture and Production Patterns of Autonomous Coding Agents

Anthropic

This talk explores the architecture and production implementation patterns behind modern autonomous coding agents like Claude Code, Cursor, and others, presented by Jared from PromptLayer. The speaker examines why coding agents have recently become effective, arguing that the key innovation is a simple while-loop architecture with tool calling, combined with improved models, rather than complex DAGs or RAG systems. The presentation covers implementation details including tool design (particularly bash as the universal adapter), context management strategies, sandboxing approaches, and evaluation methodologies. The speaker's company, PromptLayer, has reorganized their engineering practices around Claude Code, establishing a rule that any task completable in under an hour using the agent should be done immediately, demonstrating practical production adoption and measurable productivity gains.
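
The talk's thesis compresses nicely into code: a coding agent is essentially a while loop around an LLM with a bash tool. The skeleton below follows the Anthropic Messages API's documented tool-use flow; the model id, turn cap, and output truncation are arbitrary choices, it should only run inside a sandbox, and it is a schematic of the pattern, not Claude Code's source.

```python
# Schematic coding-agent loop: call the model, execute any requested
# bash commands, feed results back, repeat until the model stops asking
# for tools. Run only in a sandboxed environment.
import subprocess
from anthropic import Anthropic

client = Anthropic()
TOOLS = [{
    "name": "bash",
    "description": "Run a shell command and return its output",
    "input_schema": {"type": "object",
                     "properties": {"command": {"type": "string"}},
                     "required": ["command"]},
}]

def run_agent(task: str, max_turns: int = 20):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        resp = client.messages.create(
            model="claude-sonnet-4-20250514", max_tokens=4096,
            tools=TOOLS, messages=messages)
        messages.append({"role": "assistant", "content": resp.content})
        if resp.stop_reason != "tool_use":
            return resp  # the model decided it is done
        results = []
        for block in resp.content:
            if block.type == "tool_use":
                out = subprocess.run(block.input["command"], shell=True,
                                     capture_output=True, text=True, timeout=120)
                results.append({"type": "tool_result", "tool_use_id": block.id,
                                "content": (out.stdout + out.stderr)[-4000:]})
        messages.append({"role": "user", "content": results})
```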

Architecture Patterns for Production AI Systems: Lessons from Building and Failing with Generative AI Products

Outropy

Phil Calçado shares a post-mortem analysis of Outropy, a failed AI productivity startup that served thousands of users, revealing why most AI products struggle in production. Despite having superior technology compared to competitors like Salesforce's Slack AI, Outropy failed commercially but provided valuable insights into building production AI systems. Calçado argues that successful AI products require treating agents as objects and workflows as data pipelines, applying traditional software engineering principles rather than falling into "Twitter-driven development" or purely data science approaches.

Automated Agent Improvement Through Production Telemetry and Reinforcement Learning

Quotient AI

Quotient AI addresses the challenge of manually improving AI agents in production by building an infrastructure platform that automatically transforms real-world telemetry data into reinforcement learning signals. The platform ingests agent traces with minimal code integration, analyzes production behavior using specialized models, and generates custom fine-tuned models that perform better at specific tasks than the original base models. The solution reduces the improvement cycle from weeks or months to approximately one hour (with plans to optimize to 20 minutes), enabling developers to deploy continuously improving agents without the manual testing and analysis overhead typically required in traditional LLMOps workflows.

Automated Code Reviews with LLMs

Faire

Faire, an e-commerce marketplace connecting retailers with brands, implemented an LLM-powered automated code review pipeline to enhance developer productivity by handling generic code review tasks. The solution leverages OpenAI's Assistants API through an internal orchestrator service called Fairey, which uses RAG (Retrieval Augmented Generation) to fetch context-specific information about pull requests including diffs, test coverage reports, and build logs. The system performs various automated reviews such as enforcing style guides, assessing PR descriptions, diagnosing build failures with auto-fix suggestions, recommending test coverage improvements, and detecting backward-incompatible changes. Early results demonstrated success with positive user satisfaction and high accuracy, freeing up engineering talent to focus on more complex review aspects like architecture decisions and long-term maintainability.

Automated Inventory Counting with Multimodal LLMs in Grocery Fulfillment

Picnic

Picnic, an online grocery delivery company, implemented a multimodal LLM-based computer vision system to automate inventory counting in their automated warehouse. The manual stock counting process was time-consuming at scale, and traditional approaches like weighing scales proved unreliable due to measurement variance. The solution involved deploying camera setups to capture high-quality images of grocery totes, using Google Gemini's multimodal models with carefully crafted prompts and supply chain reference images to count products. Through fine-tuning, they achieved performance comparable to expensive pro-tier models using cost-effective flash models, deployed via a FastAPI service with LiteLLM as a proxy layer for model interchangeability, and implemented continuous validation through selective manual checks.
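
The model-interchangeability point is the interesting LLMOps detail: routing calls through LiteLLM means the flash and pro tiers share one call site. A hedged sketch, with an illustrative model id and prompt rather than Picnic's actual setup:

```python
# Count products in a tote image via LiteLLM; swapping the `model` string is
# all it takes to trade off cost vs. quality. Prompt and model id are assumed.
import base64
from litellm import completion

def count_items(image_path: str, product_name: str) -> str:
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = completion(
        model="gemini/gemini-1.5-flash",  # interchangeable with a pro-tier model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Count the units of '{product_name}' in this tote. "
                         "Answer with a single integer."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content
```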

Automated Knowledge Base Enhancement Using LLMs and Clustering for Customer Support

DoorDash

DoorDash developed an automated system to enhance their support chatbot's knowledge base by identifying content gaps through clustering analysis of escalated customer conversations and using LLMs to generate draft articles from user-generated content. The system uses semantic clustering to identify high-impact knowledge gaps, classifies issues as actionable problems or informational queries, and automatically generates polished knowledge base articles that are then reviewed by human specialists before deployment through a RAG-based retrieval system. The implementation resulted in significant improvements, with escalation rates dropping from 78% to 43% for high-traffic clusters, while maintaining human oversight for quality control and edge case handling.
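
The gap-identification step generalizes well: embed escalated conversations, cluster them, and treat the largest clusters as the highest-impact gaps. A minimal sketch under assumed models and cluster counts (the loader is hypothetical):

```python
# Embed escalated support conversations and cluster them; the biggest
# clusters are candidate knowledge-base gaps to draft articles for.
from collections import Counter
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

conversations = load_escalated_conversations()  # hypothetical loader
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode([c.text for c in conversations])

labels = KMeans(n_clusters=50, random_state=0).fit_predict(embeddings)
for cluster_id, size in Counter(labels).most_common(10):
    print(f"cluster {cluster_id}: {size} escalations")  # triage by volume
```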

Automated LLM Pipeline Optimization with DSPy for Multi-Stage Agent Development

JetBlue

JetBlue faced challenges in manually tuning prompts across complex, multi-stage LLM pipelines for applications like customer feedback classification and RAG-powered predictive maintenance chatbots. The airline adopted DSPy, a framework for building self-optimizing LLM pipelines, integrated with Databricks infrastructure including Model Serving and Vector Search. By leveraging DSPy's automatic optimization capabilities and modular architecture, JetBlue achieved 2x faster RAG chatbot deployment compared to their previous Langchain implementation, eliminated manual prompt engineering, and enabled automatic optimization of pipeline quality metrics using LLM-as-a-judge evaluations, resulting in more reliable and efficient LLM applications at scale.
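
In DSPy the manual prompt tuning disappears because modules are compiled against labeled examples and a metric. A small sketch of what one classification stage might look like; the signature, metric, model id, and examples are assumptions, not JetBlue's code:

```python
import dspy
from dspy.teleprompt import BootstrapFewShot

dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # any served model works

class ClassifyFeedback(dspy.Signature):
    """Classify airline customer feedback into one category."""
    feedback: str = dspy.InputField()
    category: str = dspy.OutputField(desc="one of: baggage, delay, crew, app, other")

classifier = dspy.ChainOfThought(ClassifyFeedback)

# Compile against labeled examples instead of hand-editing prompts.
trainset = [dspy.Example(feedback="Lost my bag at JFK", category="baggage")
            .with_inputs("feedback")]          # in practice: many examples
metric = lambda ex, pred, trace=None: ex.category == pred.category
compiled = BootstrapFewShot(metric=metric).compile(classifier, trainset=trainset)
```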

Automated News Analysis and Bias Detection Platform

AskNews

AskNews developed a news analysis platform that processes 500,000 articles daily across multiple languages, using LLMs to extract facts, analyze bias, and identify contradictions between sources. The system employs edge computing with open-source models like Llama for cost-effective processing, builds knowledge graphs for complex querying, and provides programmatic APIs for automated news analysis. The platform helps users understand global perspectives on news topics while maintaining journalistic standards and transparency.

Automated Performance Optimization with GenAI-Powered Code Analysis

Uber

Uber developed PerfInsights to address unsustainable compute costs from inefficient Go services, where traditionally manual performance optimization required deep expertise and days or weeks of effort. The system combines runtime CPU/memory profiling with GenAI-powered static analysis to automatically detect performance antipatterns in Go code, using LLM juries and rule-based validation (LLMCheck) to reduce hallucinations and false positives from over 80% to the low teens. Since deployment, PerfInsights has generated hundreds of merged optimization diffs, reduced antipattern detection time by 93% (from 14.5 hours to under 1 hour per issue), eliminated approximately 3,800 hours of manual engineering effort annually, and achieved a 33.5% reduction in codebase antipatterns over four months while delivering measurable compute cost savings.
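
The hallucination-reduction recipe here is worth spelling out: multiple models vote on each candidate finding, and only majority-confirmed findings that also pass a deterministic rule check survive. A hedged sketch (model ids, prompt, and the rule interface are illustrative):

```python
# LLM jury + rule-based validation for antipattern findings.
from collections import Counter

JURY_MODELS = ["model-a", "model-b", "model-c"]  # hypothetical model ids

def jury_detect(ask_llm, snippet: str, antipattern: str) -> bool:
    votes = [ask_llm(m, f"Does this Go code contain the antipattern "
                        f"'{antipattern}'? Answer YES or NO.\n\n{snippet}")
             for m in JURY_MODELS]
    tally = Counter(v.strip().upper().startswith("YES") for v in votes)
    return tally[True] >= 2                   # majority vote cuts false positives

def validate(finding, rules) -> bool:
    # The rule-based stage: a finding survives only if a deterministic
    # pattern rule also matches the flagged code.
    return any(rule.matches(finding) for rule in rules)
```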

Automated Software Development Insights and Communication Platform

Blueprint AI

Blueprint AI addresses the challenge of communication and understanding between business and technical teams in software development by leveraging LLMs. The platform automatically analyzes data from various sources like GitHub and Jira, creating intelligent reports that surface relevant insights, track progress, and identify potential blockers. The system provides 24/7 monitoring and context-aware updates, helping teams stay informed about development progress without manual reporting overhead.

Automated Synopsis Generation Pipeline with Human-in-the-Loop Quality Control

Netflix

Netflix developed an automated pipeline for generating show and movie synopses using LLMs, replacing a highly manual context-gathering process. The system uses Metaflow to orchestrate LLM-based content summarization and synopsis generation, with multiple human feedback loops and automated quality control checks. While maintaining human writers and editors in the process, the system has significantly improved efficiency and enabled the creation of more synopses per title while maintaining quality standards.

Automating Code Sample Updates with LLMs for Technical Documentation

Wix

When Wix needed to update over 2,000 code samples in their API reference documentation due to a syntax change, they implemented an LLM-based automation solution instead of manual updates. The team used GPT-4 for code classification and GPT-3.5 Turbo for code conversion, combined with TypeScript compilation for validation. This automated approach reduced what would have been weeks of manual work to a single morning of team involvement, while maintaining high accuracy in the code transformations.
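
The pipeline shape is reusable: a stronger model classifies which samples need work, a cheaper model does the bulk rewriting, and the compiler acts as ground-truth validation. A sketch under stated assumptions (the prompts, the nature of the syntax change, and the `tsc` invocation are invented):

```python
import subprocess, tempfile
from openai import OpenAI

client = OpenAI()

def needs_update(sample: str) -> bool:
    out = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content":
                   "Does this sample use the old syntax? Answer YES or NO.\n\n"
                   + sample}])
    return out.choices[0].message.content.strip().upper().startswith("YES")

def convert(sample: str) -> str:
    out = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content":
                   "Rewrite this sample to the new syntax. Return only code.\n\n"
                   + sample}])
    return out.choices[0].message.content

def compiles(ts_code: str) -> bool:
    # Validation step: only conversions that type-check are accepted.
    with tempfile.NamedTemporaryFile(suffix=".ts", mode="w", delete=False) as f:
        f.write(ts_code)
    return subprocess.run(["tsc", "--noEmit", f.name]).returncode == 0
```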

Automating Healthcare Documentation and Rule Management with GenAI

Orizon

Orizon, a healthcare tech company, faced challenges with manual code documentation and rule interpretation for their medical billing fraud detection system. They implemented a GenAI solution using Databricks' platform to automate code documentation and rule interpretation, resulting in 63% of tasks being automated and reducing documentation time to under 5 minutes. The solution included fine-tuned Llama2-code and DBRX models deployed through Mosaic AI Model Serving, with strict governance and security measures for protecting sensitive healthcare data.

Automating Healthcare Procedure Code Selection Through Domain-Specific LLM Platform

Hasura / PromptQL

A large public healthcare company specializing in radiology software deployed an AI-powered automation solution to streamline the complex process of procedure code selection during patient appointment scheduling. The traditional manual process took 12-15 minutes per call, requiring operators to navigate complex UIs and select from hundreds of procedure codes that varied by clinic, regulations, and patient circumstances. Using PromptQL's domain-specific LLM platform, non-technical healthcare administrators can now write automation logic in natural language that gets converted into executable code, reducing call times and potentially delivering $50-100 million in business impact through increased efficiency and reduced training costs.

Automating Supplier Ticket Management with LLM Agents

Wayfair

Wayfair developed Wilma, an LLM-based ticket automation system, to automate the manual triage of supplier support tickets in their SupportHub JIRA-based system. The solution uses LangGraph to orchestrate LLM calls and tool interactions for intent classification, language detection, and supplier ID lookup through a ReAct agent with BigQuery access. The system achieved better-than-human performance with 93% accuracy on question type identification (vs. 75% human accuracy), 98% on language detection, and 88% on supplier ID identification, while reducing processing time and allowing associates to focus on higher-value work.
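
A minimal LangGraph version of the supplier-lookup piece might look like the following; the table, SQL, and model id are illustrative assumptions rather than Wayfair's implementation:

```python
from google.cloud import bigquery
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent

bq = bigquery.Client()

@tool
def lookup_supplier_id(supplier_name: str) -> str:
    """Return the supplier ID for a supplier name."""
    rows = bq.query(
        "SELECT supplier_id FROM `support.suppliers` "
        "WHERE LOWER(name) = LOWER(@n)",
        job_config=bigquery.QueryJobConfig(query_parameters=[
            bigquery.ScalarQueryParameter("n", "STRING", supplier_name)]),
    ).result()
    row = next(iter(rows), None)
    return row[0] if row else "unknown"

agent = create_react_agent("openai:gpt-4o", tools=[lookup_supplier_id])
result = agent.invoke({"messages": [("user", "Ticket text: ... Which supplier filed this?")]})
```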

Autonomous Codebase Migration at Scale Using LLM-Powered Agents

Spotify

Spotify faced the challenge of maintaining a massive, diverse codebase across thousands of repositories, with developers spending less than one hour per day actually writing code and the rest on maintenance tasks. While they had pre-existing automation through their "fleet management" system that could handle simple migrations like dependency bumps, this approach struggled with the complex "long tail" of edge cases affecting 30% of their codebase. The solution involved building an agentic LLM system that replaces deterministic scripts with AI-powered code generation combined with automated verification loops, enabling unsupervised migrations from prompt to pull request. In the first three months, the system generated over 1,000 merged production PRs, enabling previously impossible large-scale refactors and allowing non-experts to perform complex migrations through natural language prompts rather than writing complicated transformation scripts.

Autonomous Coding Agent Evolution: From Short-Burst to Extended Runtime Operations

Replit

Replit evolved their AI coding agent from V1 (running autonomously for only a couple of minutes) to V2 (running for 10-15 minutes of productive work) through significant rearchitecting and leveraging new frontier models. The company focuses on enabling non-technical users to build complete applications without writing code, emphasizing performance and cost optimization over latency while maintaining comprehensive observability through tools like Langsmith to manage the complexity of production AI agents at scale.

Autonomous Network Operations Using Agentic AI

British Telecom

British Telecom (BT) partnered with AWS to deploy agentic AI systems for autonomous network operations across their 5G standalone mobile network infrastructure serving 30 million subscribers. The initiative addresses major operational challenges including high manual operations costs (up to 20% of revenue), complex failure diagnosis in containerized networks with 20,000 macro sites generating petabytes of data, and difficulties in change impact analysis with 11,000 weekly network changes. The solution leverages Amazon Bedrock AgentCore, Amazon SageMaker for multivariate anomaly detection, Amazon Neptune for network topology graphs, and domain-specific community agents for root cause analysis and service impact assessment. Early results focus on cost reduction through automation, improved service level agreements, faster customer impact identification, and enhanced change efficiency, with plans to expand coverage optimization, dynamic network slicing, and further closed-loop automation across all network domains.

Autonomous Observability with AI Agents and Model Context Protocol

Pinterest

Pinterest's observability team faced a fragmented infrastructure challenge where logs, metrics, traces, and change events existed in disconnected silos, predating modern standards like OpenTelemetry. Engineers had to navigate multiple interfaces during incident resolution, increasing mean time to resolution (MTTR) and creating steep learning curves. To address this without a complete infrastructure overhaul, Pinterest developed an MCP (Model Context Protocol) server that acts as a unified interface for AI agents to access all observability data pillars. The centerpiece is "Tricorder Agent," which autonomously gathers relevant information from alerts, generates filtered dashboard links, queries dependencies, and provides root cause hypotheses. Early results show the agent successfully navigating dependency graphs and correlating data across previously disconnected systems, streamlining incident response and reducing the time engineers spend context-switching between tools.

Autonomous SRE Agent for Cloud Infrastructure Monitoring Using FastMCP

FuzzyLabs

FuzzyLabs developed an autonomous Site Reliability Engineering (SRE) agent using Anthropic's Model Context Protocol (MCP) with FastMCP to automate the diagnosis of production incidents in cloud-native applications. The agent integrates with Kubernetes, GitHub, and Slack to automatically detect issues, analyze logs, identify root causes in source code, and post diagnostic summaries to development teams. While the proof-of-concept successfully demonstrated end-to-end incident response automation using a custom MCP client with optimizations like tool caching and filtering, the project raises important questions about effectiveness measurement, security boundaries, and cost optimization that require further research.

Benchmarking AI Agents for Software Bug Detection and Maintenance Tasks

Bismuth

Bismuth, a startup focused on software agents, developed SM-100, a comprehensive benchmark to evaluate AI agents' capabilities in software maintenance tasks, particularly bug detection and fixing. The benchmark revealed significant limitations in existing popular agents, with most achieving only 7% accuracy in finding complex bugs and exhibiting high false positive rates (90%+). While agents perform well on feature development benchmarks like SWE-bench, they struggle with real-world maintenance tasks that require deep system understanding, cross-file reasoning, and holistic code evaluation. Bismuth's own agent achieved better performance (10 out of 100 bugs found vs. 7 for the next best), demonstrating that targeted improvements in model architecture, prompting strategies, and navigation techniques can enhance bug detection capabilities in production software maintenance scenarios.

Best Practices for Building Production-Grade MCP Servers for AI Agents

Prefect

This case study presents best practices for designing and implementing Model Context Protocol (MCP) servers for AI agents in production environments, addressing the widespread problem of poorly designed MCP servers that fail to account for agent-specific constraints. The speaker, founder and CEO of Prefect Technologies and creator of fastmcp (a widely-adopted framework downloaded 1.5 million times daily), identifies key design principles including outcome-oriented tool design, flattened arguments, comprehensive documentation, token budget management, and ruthless curation. The solution involves treating MCP servers as agent-optimized user interfaces rather than simple REST API wrappers, acknowledging fundamental differences between human and agent capabilities in discovery, iteration, and context management. Results include actionable guidelines that have shaped the MCP ecosystem, with the fastmcp framework becoming the de facto standard for building MCP servers and influencing the official Anthropic SDK design.
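
The "outcome-oriented tool" guideline is concrete enough to sketch with fastmcp itself; the domain, backend call, and wording below are invented for illustration:

```python
from fastmcp import FastMCP

mcp = FastMCP("orders")

@mcp.tool()
def get_order_status(order_id: str) -> str:
    """Return a one-line, human-readable status for an order.

    Outcome-oriented by design: the agent gets the answer it needs in one
    call with flat arguments, instead of chaining list/get/detail endpoints
    and paying the token cost of raw API payloads.
    """
    order = fetch_order(order_id)  # hypothetical backend call
    return f"Order {order_id}: {order.status}, ETA {order.eta:%Y-%m-%d}"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```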

Building a Collaborative Multi-Agent AI Ecosystem for Enterprise Knowledge Access

DoorDash

DoorDash developed an internal agentic AI platform to address the challenge of fragmented knowledge spread across experimentation platforms, metrics hubs, dashboards, wikis, and team communications. The solution evolved from deterministic workflows through single agents to hierarchical deep agents and exploratory agent swarms, built on foundational capabilities including hybrid vector search with RRF-based re-ranking, schema-aware SQL generation with pre-cached examples, multi-stage zero-data query validation, and LLM-as-judge evaluation frameworks. The platform integrates with Slack and Cursor to meet users in their existing workflows, enabling business teams and developers to access complex data and insights without context-switching, democratizing data access across the organization while maintaining rigorous guardrails and provenance tracking.

Building a Context-Aware AI Assistant with RAG for Developer Support

Vectorize

Vectorize, a platform for building RAG pipelines, faced a challenge where users frequently asked questions already answered in their documentation but were reluctant to leave the UI to search for answers. To address this, they built an AI assistant integrated directly into their product interface using RAG technology. The solution leverages their own platform to ingest documentation from multiple sources (docs site, Discord, Intercom), implements context-sensitive retrieval using page topics, employs reranking models to filter irrelevant results, and uses anti-hallucination prompting with Llama 3.1 70B on Groq. The resulting assistant provides users with immediate, contextually relevant answers without requiring them to leave their workflow, while the system continuously improves as new support content and documentation are added.
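
The anti-hallucination prompting amounts to constraining the model to the reranked context and giving it an explicit out. A sketch using the Groq SDK; the system prompt and model id are assumptions, not Vectorize's exact configuration:

```python
from groq import Groq

client = Groq()

def answer(question: str, reranked_chunks: list[str]) -> str:
    context = "\n\n".join(reranked_chunks)  # only chunks that survived reranking
    resp = client.chat.completions.create(
        model="llama-3.1-70b-versatile",
        messages=[
            {"role": "system", "content":
             "Answer ONLY from the provided context. If the context does not "
             "contain the answer, say you don't know. Do not invent steps."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```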

Building a Custom Vision LLM for Document Processing at Scale

Grab

Grab developed a custom lightweight vision LLM to address the challenges of extracting information from diverse user-submitted documents like ID cards and driver's licenses across Southeast Asia. Traditional OCR systems struggled with the variety of document templates and languages, while proprietary LLMs had high latency and poor SEA language support. The team fine-tuned and ultimately built a custom ~1B parameter vision LLM from scratch, achieving performance comparable to larger 2B models while significantly reducing latency. The solution involved a four-stage training process using synthetic OCR datasets, an auto-labeling framework called Documint, and full-parameter fine-tuning, resulting in dramatic accuracy improvements (+70pp for Thai, +40pp for Vietnamese) and establishing a unified model to replace traditional OCR pipelines.

Building a Delicate Text Detection System for Content Safety

Grammarly

Grammarly developed a novel approach to detect delicate text content that goes beyond traditional toxicity detection, addressing a gap in content safety. They created DeTexD, a benchmark dataset of 40,000 training samples and 1,023 test paragraphs, and developed a RoBERTa-based classification model that achieved 79.3% F1 score, significantly outperforming existing toxic text detection methods for identifying potentially triggering or emotionally charged content.

Building a Global Product Catalogue with Multimodal LLMs at Scale

Shopify

Shopify addressed the challenge of fragmented product data across millions of merchants by building a Global Catalogue using multimodal LLMs to standardize and enrich billions of product listings. The system processes over 10 million product updates daily through a four-layer architecture involving product data foundation, understanding, matching, and reconciliation. By fine-tuning open-source vision language models and implementing selective field extraction, they achieve 40 million LLM inferences daily with 500ms median latency while reducing GPU usage by 40%. The solution enables improved search, recommendations, and conversational commerce experiences across Shopify's ecosystem.

Building a Healthcare Copilot for Biology and Life Science Research

Owkin

Owkin, a company focused on drug discovery and AI for healthcare, developed a copilot system in four months to help biology and life science researchers navigate complex healthcare data and answer scientific questions. The system addresses challenges unique to healthcare including strict regulations, semantic complexity, and data sensitivity by implementing two main tools: a text-to-SQL system that queries structured biological databases (translating natural language questions into SQL executed with Polars), and a RAG-based literature search tool that retrieves relevant information from PubMed's 26 million abstracts. The copilot was deployed for academic researchers with monitoring via Langfuse and OpenTelemetry, though the team faced challenges with evaluation in a domain where questions rarely have binary answers, and noted that frameworks and models change rapidly in the LLM space.

Building a Horizontal Enterprise Agent Platform with Infrastructure-First Approach

Dust.tt

Dust.tt evolved from a developer framework competitor to LangChain into a horizontal enterprise platform for deploying AI agents, achieving remarkable 88% daily active user rates in some deployments. The company focuses on building robust infrastructure for agent deployment, maintaining its own integrations with enterprise systems like Notion and Slack, while making agent creation accessible to non-technical users through careful UX design and abstraction of technical complexities.

Building a Knowledge as a Service Platform with LLMs and Developer Community Data

Stack Overflow

Stack Overflow addresses the challenges of LLM brain drain, answer quality, and trust by transforming their extensive developer Q&A platform into a Knowledge as a Service offering. They've developed API partnerships with major AI companies like Google, OpenAI, and GitHub, integrating their 40 billion tokens of curated technical content to improve LLM accuracy by up to 20%. Their approach combines AI capabilities with human expertise while maintaining social responsibility and proper attribution.

Building a Multi-Agent Research System for Complex Information Tasks

Anthropic

Anthropic developed a production multi-agent system for their Claude Research feature that uses multiple specialized AI agents working in parallel to conduct complex research tasks across web and enterprise sources. The system employs an orchestrator-worker architecture where a lead agent coordinates and delegates to specialized subagents that operate simultaneously, achieving 90.2% performance improvement over single-agent systems on internal evaluations. The implementation required sophisticated prompt engineering, robust evaluation frameworks, and careful production engineering to handle the stateful, non-deterministic nature of multi-agent interactions at scale.

Building a Multi-Model AI Platform and Agent Marketplace

Quora

Quora built Poe as a unified platform providing consumer access to multiple large language models and AI agents through a single interface and subscription. Starting with experiments using GPT-3 for answer generation on Quora, the company recognized the paradigm shift toward chat-based AI interactions and developed Poe to serve as a "web browser for AI" - enabling users to access diverse models, create custom agents through prompting or server integrations, and monetize AI applications. The platform has achieved significant scale with creators earning millions annually while supporting various modalities including text, image, and voice models.

Building a Multi-Model LLM API Marketplace and Infrastructure Platform

OpenRouter

OpenRouter was founded in early 2023 to address the fragmented landscape of large language models by creating a unified API marketplace that aggregates over 400 models from 60+ providers. The company identified that the LLM inference market would not be winner-take-all, and built infrastructure to normalize different model APIs, provide intelligent routing, caching, and uptime guarantees. Their platform enables developers to switch between models with near-zero switching costs while providing better prices, uptime, and choice compared to using individual model providers directly.

Building a Natural Language Business Intelligence Interface with MCP

Ramp

Ramp built an MCP (Model Context Protocol) server to enable natural language querying of business spend data through their developer API. The initial prototype allowed Claude to generate visualizations and run analyses, but struggled with scale due to context window limitations and high token usage. By pivoting to a SQL-based approach using an in-memory SQLite database with a lightweight ETL pipeline, they enabled Claude to query tens of thousands of transactions efficiently. The solution includes load tools for API data extraction, data transformation capabilities, and query execution tools, allowing users to gain insights into business spend patterns through conversational queries while addressing security concerns through audit logging and OAuth scopes.
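
The pivot is easy to miss but important: instead of paging raw JSON through the context window, the model loads data once and then asks small SQL questions. A hedged sketch using fastmcp as a stand-in (Ramp's actual server internals, table schema, and API client are not shown in the summary):

```python
import sqlite3
from fastmcp import FastMCP

mcp = FastMCP("spend")
db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE IF NOT EXISTS txns "
           "(id TEXT, merchant TEXT, amount REAL, date TEXT)")

@mcp.tool()
def load_transactions(start_date: str, end_date: str) -> str:
    """ETL step: pull transactions from the API into in-memory SQLite."""
    rows = fetch_from_api(start_date, end_date)  # hypothetical API client
    db.executemany("INSERT INTO txns VALUES (?, ?, ?, ?)", rows)
    return f"loaded {len(rows)} rows"

@mcp.tool()
def run_query(sql: str) -> list:
    """Run a read-only SQL query over the loaded transactions."""
    if not sql.lstrip().upper().startswith("SELECT"):
        raise ValueError("read-only: SELECT queries only")
    return db.execute(sql).fetchall()
```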

Building a Next-Generation AI-Powered Code Editor

Cursor

Cursor, founded by MIT graduates, developed an AI-powered code editor that goes beyond simple code completion to reimagine how developers interact with AI while coding. By focusing on innovative features like instructed edits and codebase indexing, along with developing custom models for specific tasks, they achieved rapid growth to $100M in revenue. Their success demonstrates how combining frontier LLMs with custom-trained models and careful UX design can transform developer productivity.

Building a Production Fantasy Football AI Assistant in 8 Weeks

NFL

The NFL, in collaboration with AWS Generative AI Innovation Center, developed a fantasy football AI assistant for NFL Plus users that went from concept to production in just 8 weeks. Fantasy football managers face overwhelming amounts of data and conflicting expert advice, making roster decisions stressful and time-consuming. The team built an agentic AI system using Amazon Bedrock, Strands Agent framework, and Model Context Protocol (MCP) to provide analyst-grade fantasy advice in under 5 seconds, achieving 90% analyst approval ratings. The system handles complex multi-step reasoning, accesses NFL NextGen Stats data through semantic data layers, and successfully manages peak Sunday traffic loads with zero reported incidents in the first month of 10,000+ questions.

Building a Production MCP Server for AI Assistant Integration

Hugging Face

Hugging Face developed an official Model Context Protocol (MCP) server to enable AI assistants to access their AI model hub and thousands of AI applications through a simple URL. The team faced complex architectural decisions around transport protocols, choosing Streamable HTTP over deprecated SSE transport, and implementing a stateless, direct response configuration for production deployment. The server provides customizable tools for different user types and integrates seamlessly with existing Hugging Face infrastructure including authentication and resource quotas.

Building a Rust-Based AI Agentic Framework for Multimodal Data Quality Monitoring

Zectonal

Zectonal, a data quality monitoring company, developed a custom AI agentic framework in Rust to scale their multimodal data inspection capabilities beyond traditional rules-based approaches. The framework enables specialized AI agents to autonomously call diagnostic function tools for detecting defects, errors, and anomalous conditions in large datasets, while providing full audit trails through "Agent Provenance" tracking. The system supports multiple LLM providers (OpenAI, Anthropic, Ollama) and can operate both online and on-premise, packaged as a single binary executable that the company refers to as their "genie-in-a-binary."

Building a Search Engine for AI Agents: Infrastructure, Product Development, and Production Deployment

Exa.ai

Exa.ai has built the first search engine specifically designed for AI agents rather than human users, addressing the fundamental problem that existing search engines like Google are optimized for consumer clicks and keyword-based queries rather than semantic understanding and agent workflows. The company trained its own models, built its own index, and invested heavily in compute infrastructure (including purchasing their own GPU cluster) to enable meaning-based search that returns raw, primary data sources rather than listicles or summaries. Their solution includes both an API for developers building AI applications and an agentic search tool called Websets that can find and enrich results for complex, multi-criteria queries. The results include serving hundreds of millions of queries across use cases like sales intelligence, recruiting, market research, and research paper discovery, with 95% inbound growth and expanding from 7 to 28+ employees within a year.

Building a Secure and Scalable LLM Gateway for Enterprise GenAI Adoption

Wealthsimple

Wealthsimple developed a comprehensive LLM platform to enable secure and productive use of generative AI across their organization. They started with a basic gateway for audit trails, evolved to include PII redaction, self-hosted models, and RAG capabilities, while focusing on user adoption and security. The platform now serves over half the company with 2,200+ daily messages, demonstrating successful enterprise-wide GenAI adoption while maintaining data security.

Building a Secure Enterprise AI Assistant with Amazon Bedrock for Financial Services

PayU

PayU, a Central Bank-regulated financial services company in India, faced the challenge of employees using unsecured public generative AI tools that posed data security and regulatory compliance risks. The company implemented a comprehensive enterprise AI solution using Amazon Bedrock, Open WebUI, and AWS PrivateLink to create a secure, role-based AI assistant that enables employees to perform tasks like technical troubleshooting, email drafting, and business data querying while maintaining strict data residency requirements and regulatory compliance. The solution achieved a reported 30% improvement in business analyst team productivity while ensuring sensitive data never leaves the company's VPC.

Building a Visual Agentic Tool for AI-First Workflow Transformation

Craft

Craft, a five-year-old startup with over 1 million users and a 20-person engineering team, spent three years experimenting with AI features that lacked user stickiness before achieving a breakthrough in late 2025. During the 2025 Christmas holidays, the founder built "Craft Agents," a visual UI wrapper around Claude Code and the Claude Agent SDK, completing it in just two weeks using Electron despite no prior experience with that stack. The tool connected multiple data sources (APIs, databases, MCP servers) and provided a more accessible interface than terminal-based alternatives. After mandating company-wide adoption in January 2026, non-engineering teams—particularly customer support—became the heaviest users, automating workflows that previously took 20-30 minutes down to 2-3 minutes, while engineering teams experienced dramatic productivity gains with difficult migrations completing in a week instead of months.

Building a Voice Assistant with Open Source LLMs: From Demo to Production

Weights & Biases

A case study of building an open-source Alexa alternative using LLMs, demonstrating the journey from prototype to production. The project used Llama 2 and Mistral models running on affordable hardware, combined with Whisper for speech recognition. Through iterative improvements including prompt engineering and fine-tuning with QLoRA, the system's accuracy improved from 0% to 98%, while maintaining real-time performance requirements.

Building Agent-Native Infrastructure for Autonomous AI Development

Daytona

Daytona addresses the challenge of building infrastructure specifically designed for AI agents rather than humans, recognizing that agents will soon be the primary users of development tools. The company created an "agent-native runtime" - secure, elastic sandboxes that spin up in 27 milliseconds, providing agents with computing environments to run code, perform data analysis, and execute tasks autonomously. Their solution includes declarative image builders, shared volume systems, and parallel execution capabilities, all accessible via APIs to enable agents to operate without human intervention in the loop.

Building Agentic AI Assistant for Observability Platform

Grafana

Grafana Labs developed an agentic AI assistant integrated into their observability platform to help users query data, create dashboards, troubleshoot issues, and learn the platform. The team started with a hackathon project that ran entirely in the browser, iterating rapidly from a proof-of-concept to a production system. The assistant uses Claude as the primary LLM, implements tool calling with extensive context about Grafana's features, and employs multiple techniques including tool overloading, error feedback loops, and natural language tool responses. The solution enables users to investigate incidents, generate queries across multiple data sources, and modify visualizations through conversational interfaces while maintaining transparency by showing all intermediate steps and data to keep humans in the loop.

Building AI Developer Tools Using LangGraph for Large-Scale Software Development

Uber

Uber's developer platform team built a suite of AI-powered developer tools using LangGraph to improve productivity for 5,000 engineers working on hundreds of millions of lines of code. The solution included tools like Validator (for detecting code violations and security issues), AutoCover (for automated test generation), and various other AI assistants. By creating domain-expert agents and reusable primitives, they achieved significant impact including thousands of daily code fixes, 10% improvement in developer platform coverage, and an estimated 21,000 developer hours saved through automated test generation.

Building AI Memory Layers with File-Based Vector Storage and Knowledge Graphs

Cognee

Cognee, a platform that helps AI agents retrieve, reason, and remember with structured context, needed a vector storage solution that could support per-workspace isolation for parallel development and testing without the operational overhead of managing multiple database services. The company implemented LanceDB, a file-based vector database, which enables each developer, user, or test instance to have its own fully independent vector store. This solution, combined with Cognee's Extract-Cognify-Load pipeline that builds knowledge graphs alongside embeddings, allows teams to develop locally with complete isolation and then seamlessly transition to production through Cognee's hosted service (cogwit). The results include faster development cycles due to eliminated shared state conflicts, improved multi-hop reasoning accuracy through graph-aware retrieval, and a simplified path from prototype to production without architectural redesign.

Building AI-Native Platforms: Agentic Systems, Infrastructure Evolution, and Production LLM Deployment

Delphi / Seam AI / APIsec

This panel discussion features three AI-native companies—Delphi (personal AI profiles), Seam AI (sales/marketing automation agents), and APIsec (API security testing)—discussing their journeys building production LLM systems over three years. The companies address infrastructure evolution from single-shot prompting to fully agentic systems, the shift toward serverless and scalable architectures, managing costs at scale (including burning through a trillion OpenAI tokens), balancing deterministic workflows with model autonomy, and measuring ROI through outcome-based metrics rather than traditional productivity gains. Key technical themes include moving away from opinionated architectures to let models reason autonomously, implementing state machines for high-confidence decisions, using tools like Pydantic AI and Logfire for instrumentation, and leveraging Pinecone for vector search at scale.

Building Alfred: Production-Ready Agentic Orchestration Layer for E-commerce

Loblaws

Loblaws Digital, the technology arm of one of Canada's largest retail companies, developed Alfred—a production-ready orchestration layer for running agentic AI workflows across their e-commerce, pharmacy, and loyalty platforms. The system addresses the challenge of moving agent prototypes into production at enterprise scale by providing a reusable template-based architecture built on LangGraph, FastAPI, and Google Cloud Platform components. Alfred enables teams across the organization to quickly deploy conversational commerce applications and agentic workflows (such as recipe-based shopping) while handling critical enterprise requirements including security, privacy, PII masking, observability, and integration with 50+ platform APIs through their Model Context Protocol (MCP) ecosystem.

Building Alyx: An AI Agent for LLM Observability and Debugging

Arize AI

Arize AI built "Alyx," an AI agent embedded in their observability platform to help users debug and optimize their machine learning and LLM applications. The problem they addressed was that their platform had advanced features that required significant expertise to use effectively, with customers needing guidance from solutions architects to extract maximum value. Their solution was to create an AI agent that emulates an expert solutions architect, capable of performing complex debugging workflows, optimizing prompts, generating evaluation templates, and educating users on platform features. Starting in November 2023 with GPT-3.5 and launching at their July 2024 conference, Alyx evolved from a highly structured, on-rails decision tree architecture to a more autonomous agent leveraging modern LLM capabilities. The team used their own platform to build and evaluate Alyx, establishing comprehensive evaluation frameworks across multiple levels (tool calls, tasks, sessions, traces) and involving cross-functional stakeholders in defining success criteria.

Building an AI Agent Platform with Cloud-Based Virtual Machines and Extended Context

Manus

Manus AI, founded in late 2024, developed a consumer-focused AI agent platform that addresses the limitation of frontier LLMs having intelligence but lacking the ability to take action in digital environments. The company built a system where each user task is assigned a fully functional cloud-based virtual machine (Linux, with plans for Windows and Android) running real applications including file systems, terminals, VS Code, and Chromium browsers. By adopting a "less structure, more intelligence" philosophy that avoids predefined workflows and multi-role agent systems, and instead provides rich context to foundation models (primarily Anthropic's Claude), Manus created an agent capable of handling diverse long-horizon tasks from office location research to furniture shopping to data extraction, with users reporting up to 2 hours of daily GPU consumption. The platform launched publicly in March 2025 after five months of development and reportedly spent $1 million on Claude API usage in its first 14 days.

Building an AI Sales Development Representative with Advanced RAG Knowledge Base

Alice

11X developed Alice, an AI Sales Development Representative (SDR) that automates lead generation and email outreach at scale. The key innovation was replacing a manual product library system with an intelligent knowledge base that uses advanced RAG (Retrieval Augmented Generation) techniques to automatically ingest and understand seller information from various sources including documents, websites, and videos. This system processes multiple resource types through specialized parsing vendors, chunks content strategically, stores embeddings in Pinecone vector database, and uses deep research agents for context retrieval. The result is an AI agent that sends 50,000 personalized emails daily compared to 20-50 for human SDRs, while serving 300+ business organizations with contextually relevant outreach.

Building an AI-Powered IDE at Scale: Architectural Deep Dive

Cursor

Cursor, an AI-powered IDE built by Anysphere, faced the challenge of scaling from zero to serving billions of code completions daily while handling 1M+ queries per second and 100x growth in load within 12 months. The solution involved building a sophisticated architecture using TypeScript and Rust, implementing a low-latency sync engine for autocomplete suggestions, utilizing Merkle trees and embeddings for semantic code search without storing source code on servers, and developing Anyrun, a Rust-based orchestrator service. The results include reaching $500M+ in annual revenue, serving more than half of the Fortune 500's largest tech companies, and processing hundreds of millions of lines of enterprise code written daily, all while maintaining privacy through encryption and secure indexing practices.

Building an Autonomous AI Software Engineer with Multi-Turn RL and Codebase Understanding

Devin

Cognition, the company behind Devin (an AI software engineer), addresses the challenge of enabling AI agents to work effectively within large, existing codebases where traditional LLMs struggle with limited context windows and complex dependencies. Their solution involves creating DeepWiki, a continuously-updated interactive knowledge graph and wiki system that indexes codebases using both code and metadata (pull requests, git history, team discussions), combined with Devin Search for deep codebase research, and custom post-training using multi-turn reinforcement learning to optimize models for specific narrow domains. Results include Devin being used by teams worldwide to autonomously go from ticket to pull request, the release of Kevin-32B (an open-source model achieving 91% correctness on CUDA kernel generation, outperforming frontier models like GPT-4), and thousands of open-source projects incorporating DeepWiki into their official documentation.

Building an Enterprise-Grade AI Agent for Recruiting at Scale

LinkedIn

LinkedIn developed Hiring Assistant, an AI agent designed to transform the recruiting workflow by automating repetitive tasks like candidate sourcing, evaluation, and engagement across 1.2+ billion profiles. The system addresses the challenge of recruiters spending excessive time on pattern-recognition tasks rather than high-value decision-making and relationship building. Using a plan-and-execute agent architecture with specialized sub-agents for intake, sourcing, evaluation, outreach, screening, and learning, Hiring Assistant combines real-time conversational interfaces with large-scale asynchronous execution. The solution leverages LinkedIn's Economic Graph for talent insights, custom fine-tuned LLMs for candidate evaluation, and cognitive memory systems that learn from recruiter behavior over time. The result is a globally available agentic product that enables recruiters to work with greater speed, scale, and intelligence while maintaining human-in-the-loop control for critical decisions.

Building an Evaluation-First Development Strategy for AI Service Agents

Monday

Monday Service built an AI-native Enterprise Service Management platform featuring customizable, role-based AI agents to automate customer service across IT, HR, and Legal departments. The team embedded evaluation into their development cycle from Day 0, creating a dual-layered approach with offline "safety net" evaluations for regression testing and online "monitor" evaluations for real-time production quality. This eval-driven development framework, built on LangGraph agents with LangSmith and Vitest integration, achieved 8.7x faster evaluation feedback loops (from 162 seconds to 18 seconds), comprehensive testing across hundreds of examples in minutes, real-time end-to-end quality monitoring on production traces using multi-turn evaluators, and GitOps-style CI/CD deployment with evaluations managed as version-controlled code.

Building and Automating Comprehensive LLM Evaluation Framework for SNAP Benefits

Propel

Propel developed a sophisticated evaluation framework for testing and benchmarking LLM performance in handling SNAP (food stamp) benefit inquiries. The company created two distinct evaluation approaches: one for benchmarking current base models on SNAP topics, and another for product development. They implemented automated testing using Promptfoo and developed innovative ways to evaluate model responses, including using AI models as judges for assessing response quality and accessibility.

Building and Deploying the Codex App: A Multi-Agent AI Development Environment

OpenAI

OpenAI's Codex team developed a dedicated GUI application for AI-powered coding that serves as a command center for multi-agent systems, moving beyond traditional IDE and terminal interfaces. The team addressed the challenge of making AI coding agents accessible to broader audiences while maintaining professional-grade capabilities for software developers. By combining the GPT-5.3 Codex model with agent skills, automations, and a purpose-built interface, they created a production system that enables delegation-based development workflows where users supervise AI agents performing complex coding tasks. The result was over one million downloads in the first week, widespread internal adoption at OpenAI including by research teams, and a strategic shift positioning AI coding tools for mainstream use, culminating in a Super Bowl advertisement.

Building and Evaluating Production AI Agents: From Function Calling to Complex Multi-Agent Systems

Google Deepmind

This case study explores the evolution of LLM-based systems in production through discussions with Ravin Kumar from Google DeepMind about building products like NotebookLM, Project Mariner, and working with the Gemini and Gemma model families. The conversation covers the rapid progression from simple function calling to complex agentic systems capable of multi-step reasoning, the critical importance of evaluation harnesses as competitive advantages, and practical considerations around context engineering, tool orchestration, and model selection. Key insights include how model improvements are causing teams to repeatedly rebuild agent architectures, the importance of shipping products quickly to learn from real users, and strategies for evaluating increasingly complex multi-modal agentic systems across different scales from edge devices to cloud-based deployments.

Building and Optimizing AI Programming Agents with MLOps Infrastructure at Scale

Weights & Biases

This case study describes Weights & Biases' development of programming agents that achieved top performance on the SWE-bench benchmark, demonstrating how MLOps infrastructure can systematically improve AI agent performance through experimental workflows. The presenter built "Tiny Agent," a command-line programming agent, then optimized it through hundreds of experiments using OpenAI's O1 reasoning model to achieve the #1 position on the SWE-bench leaderboard. The approach emphasizes systematic experimentation with proper tracking, evaluation frameworks, and infrastructure scaling, while introducing tools like Weave for experiment management and W&B Launch for distributed computing. The work also explores reinforcement learning for agent improvement and introduces the concept of "researcher agents" that can autonomously improve AI systems.

Building and Scaling an AI Coding Agent Through Rapid Iteration and User Feedback

Anthropic

Anthropic developed Claude Code, an AI-powered coding agent that started as an internal prototyping tool and evolved into a widely-adopted product through organic growth and rapid iteration. The team faced challenges in making an LLM-based coding assistant that could handle complex, multi-step software engineering tasks while remaining accessible and customizable across diverse developer environments. Their solution involved a minimalist terminal-first interface, extensive customization capabilities through hooks and sub-agents, rigorous internal dogfooding with over 1,000 Anthropic employees, and tight feedback loops that enabled weekly iteration cycles. The product achieved high viral adoption internally before external launch, expanded beyond professional developers to designers and product managers who now contribute code directly, and established a fast-shipping culture where features often go from prototype to production within weeks based on real user feedback rather than extensive upfront planning.

Building and Scaling Codex: OpenAI's Production Coding Agent

OpenAI

OpenAI developed Codex, a coding agent that serves as an AI-powered software engineering teammate, addressing the challenge of accelerating software development workflows. The solution combines a specialized coding model (GPT-5.1 Codex Max), a custom API layer with features like context compaction, and an integrated harness that works through IDE extensions and CLI tools using sandboxed execution environments. Since launching in August and iterating based on user feedback, Codex has grown 20x: it now serves many trillions of tokens per week, has become the most-served coding model both in first-party use and via API, and has enabled dramatic productivity gains, including shipping the Sora Android app (which became the #1 app in the App Store) in just 28 days with 2-3 engineers, demonstrating significant acceleration in production software development at scale.

Building and Sunsetting Ada: An Internal LLM-Powered Chatbot Assistant

Leboncoin

Leboncoin, a French e-commerce platform, built Ada—an internal LLM-powered chatbot assistant—to provide employees with secure access to GenAI capabilities while protecting sensitive data from public LLM services. Starting in late 2023, the project evolved from a general-purpose Claude-based chatbot to a suite of specialized RAG-powered assistants integrated with internal knowledge sources like Confluence, Backstage, and organizational data. Despite achieving strong technical results and valuable learning outcomes around evaluation frameworks, retrieval optimization, and enterprise LLM deployment, the project was phased out in early 2025 in favor of ChatGPT Enterprise with EU data residency, allowing the team to redirect their expertise toward more user-facing use cases while reducing operational overhead.

Building ART·E: Reinforcement Learning for Email Search Agent Development

OpenPipe

OpenPipe developed ART·E, an email research agent that outperforms OpenAI's o3 model on email search tasks. The project involved creating a synthetic dataset from the Enron email corpus, implementing a reinforcement learning training pipeline using Group Relative Policy Optimization (GRPO), and developing a multi-objective reward function. The resulting model achieved higher accuracy while being faster and cheaper than o3, taking fewer turns to answer questions correctly and hallucinating less frequently, all while being trained on a single H100 GPU for under $80.
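
A multi-objective reward of the kind described can be as simple as a weighted sum; the terms and weights below are illustrative, not OpenPipe's actual function:

```python
def reward(trajectory) -> float:
    """Hypothetical ART·E-style reward: correctness first, then efficiency
    and honesty. The `trajectory` fields are assumed for illustration."""
    r = 1.0 if trajectory.answer_correct else 0.0   # primary objective
    r -= 0.05 * trajectory.num_turns                # prefer fewer search turns
    if trajectory.cited_nonexistent_email:          # hallucination penalty
        r -= 1.0
    return r
```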

Building Claude Code: Scaling AI-Powered Development from Terminal Prototype to Production

Anthropic

Anthropic's Boris Cherny, creator of Claude Code, describes the journey from an accidental terminal prototype in September 2024 to a production coding tool used by 70% of startups and responsible for 4% of all public commits globally. Starting as a simple API testing tool, Claude Code evolved through continuous user feedback and rapid iteration, with the entire codebase rewritten every few months to adapt to improving model capabilities. The tool achieved remarkable productivity gains at Anthropic itself, with engineers seeing 70% productivity increases per capita despite team doubling, and total productivity improvements of 150% since launch. The development philosophy centered on building for future model capabilities rather than current ones, anticipating improvements 6 months ahead, and minimizing scaffolding that would become obsolete with each new model release.

Building Customer Intelligence MCP Server for AI Agent Integration

Dovetail

Dovetail, a customer intelligence platform, developed an MCP (Model Context Protocol) server to enable AI agents to access and utilize customer feedback data stored in their platform. The solution addresses the challenge of teams wanting to integrate their customer intelligence into internal AI workflows, allowing for automated report generation, roadmap development, and faster decision-making across product management, customer success, and design teams.

Building Economic Infrastructure for AI with Foundation Models and Agentic Commerce

Stripe

Stripe, processing approximately 1.3% of global GDP, has evolved from traditional ML-based fraud detection to deploying transformer-based foundation models for payments that process every transaction in under 100ms. The company built a domain-specific foundation model treating charges as tokens and behavior sequences as context windows, ingesting tens of billions of transactions to power fraud detection, improving card-testing detection from 59% to 97% accuracy for large merchants. Stripe also launched the Agentic Commerce Protocol (ACP) jointly with OpenAI to standardize how agents discover and purchase from merchant catalogs, complemented by internal AI adoption reaching 8,500 employees daily using LLM tools, with 65-70% of engineers using AI coding assistants and achieving significant productivity gains like reducing payment method integrations from 2 months to 2 weeks.

Building Enterprise AI-Powered Software Engineering Tools with Multi-Modal Agent Architecture

Windsurf

Windsurf developed an enterprise-focused AI-powered software development platform that extends beyond traditional code generation to encompass the full software engineering workflow. The company built a comprehensive system including a VS Code fork (Windsurf IDE), custom models, advanced retrieval systems, and integrations across multiple developer touchpoints like browsers and PR reviews. Their approach focuses on human-AI collaboration through "flows" while systematically expanding from code-only context to multi-modal data sources, achieving significant improvements in code acceptance rates and demonstrating frontier performance compared to leading models like Claude Sonnet.

Building Enterprise-Grade GenAI Platform with Multi-Cloud Architecture

Coinbase

Coinbase developed CB-GPT, an enterprise GenAI platform, to address the challenges of deploying LLMs at scale across their organization. Initially focused on optimizing cost versus accuracy, they discovered that enterprise-grade LLM deployment requires solving for latency, availability, trust and safety, and adaptability to the rapidly evolving LLM landscape. Their solution was a multi-cloud, multi-LLM platform that provides unified access to models across AWS Bedrock, GCP VertexAI, and Azure, with built-in RAG capabilities, guardrails, semantic caching, and both API and no-code interfaces. The platform now serves dozens of internal use cases and powers customer-facing applications including a conversational chatbot launched in June 2024 serving all US consumers.

Building Evaluation Frameworks for AI Product Managers: A Workshop on Production LLM Testing

Arize

This workshop, presented by Aman, an AI product manager at Arize, addresses the challenge of shipping reliable AI applications in production by establishing evaluation frameworks specifically designed for product managers. The problem identified is that LLMs inherently hallucinate and are non-deterministic, making traditional software testing approaches insufficient. The solution involves implementing "LLM as a judge" evaluation systems, building comprehensive datasets, running experiments with prompt variations, and establishing human-in-the-loop validation workflows. The approach demonstrates how product managers can move from "vibe coding" to "thrive coding" by using data-driven evaluation methods, prompt playgrounds, and continuous monitoring. Results show that systematic evaluation can catch issues like mismatched tone, missing features, and hallucinations before production deployment, though the workshop candidly acknowledges that evaluations themselves require validation and iteration.

Building Fair Housing Guardrails for Real Estate LLMs: Zillow's Multi-Strategy Approach to Preventing Discrimination

Zillow

Zillow developed a comprehensive Fair Housing compliance system for LLMs in real estate applications, combining three distinct strategies to prevent discriminatory responses: prompt engineering, stop lists, and a custom classifier model. The system addresses critical Fair Housing Act requirements by detecting and preventing responses that could enable steering or discrimination based on protected characteristics. Using a BERT-based classifier trained on carefully curated and augmented datasets, combined with explicit stop lists and prompt engineering, Zillow created a dual-layer protection system that validates both user inputs and model outputs. The approach achieved high recall in detecting non-compliant content while maintaining reasonable precision, demonstrating how domain-specific guardrails can be successfully implemented for LLMs in regulated industries.
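The dual-layer design can be sketched as follows; the stop terms, model name, and label are hypothetical stand-ins for Zillow's internal classifier:

    from transformers import pipeline

    # Hypothetical fine-tuned BERT classifier; Zillow's model is internal.
    fha_classifier = pipeline("text-classification",
                              model="org/fair-housing-bert")

    STOP_TERMS = {"good schools for", "safe neighborhood for"}  # illustrative

    def is_compliant(text: str, threshold: float = 0.5) -> bool:
        lowered = text.lower()
        if any(term in lowered for term in STOP_TERMS):   # layer 1: stop list
            return False
        pred = fha_classifier(text)[0]                    # layer 2: classifier
        return not (pred["label"] == "NON_COMPLIANT"      # label is hypothetical
                    and pred["score"] >= threshold)

    def guarded_answer(user_msg: str, llm_fn) -> str:
        if not is_compliant(user_msg):                    # validate the input
            return "I can't help with that request."
        reply = llm_fn(user_msg)
        if not is_compliant(reply):                       # validate the output
            return "I can't provide that information."
        return reply

Checking both the user input and the model output is what makes the protection dual-layer: a compliant-looking question can still elicit a steering response, and vice versa.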

Building Foundation Models for Computer Use Agents

Tzafon

Tzafon, a research lab focused on training foundation models for computer use agents, tackled the challenge of enabling LLMs to autonomously interact with computers through visual understanding and action execution. The company identified fundamental limitations in existing models' ability to ground visual information and coordinate actions, leading them to develop custom infrastructure (Waypoint) for data generation at scale, fine-tune vision encoders on screenshot data, and ultimately pre-train models from scratch with specialized computer interaction capabilities. Initial approaches using supervised fine-tuning and reinforcement learning on successful trajectories showed limited generalization, but their focus on solving the grounding problem through improved vision-language integration and domain-specific pre-training has positioned them to release models and desktop applications for autonomous computer use, though performance on benchmarks like OSWorld remains a challenge across the industry.

Building GitHub Copilot: Working with OpenAI's LLMs in Production

GitHub

GitHub developed GitHub Copilot by integrating OpenAI's large language models, starting with GPT-3 and evolving through multiple iterations of the Codex model. The problem was creating an effective AI-powered code generation tool that could work seamlessly within developer IDEs. The solution involved extensive prompt crafting to create optimal "pseudo-documents" that guide the model toward better completions, fine-tuning on specific codebases, and implementing contextual improvements such as incorporating code from neighboring editor tabs and file paths. The results included dramatic improvements in code acceptance rates, with the multilingual model eventually solving over 90% of test problems compared to about 50% initially, and noticeable quality improvements particularly for non-top-five programming languages when new model versions were deployed.
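A simplified sketch of the "pseudo-document" idea: assembling the file path and snippets from neighboring editor tabs into a single completion prompt. The comment style and character budget are illustrative, not Copilot's actual format:

    def build_prompt(file_path: str, prefix: str,
                     neighbor_snippets: list[tuple[str, str]],
                     budget_chars: int = 6000) -> str:
        """Assemble a pseudo-document prompt: a file-path header, then
        neighboring-tab snippets rendered as comments, then the code
        being edited. Sizes and formatting here are illustrative."""
        parts = [f"# Path: {file_path}"]
        for neighbor_path, snippet in neighbor_snippets:
            parts.append(f"# Compare this snippet from {neighbor_path}:")
            parts.extend("# " + line for line in snippet.splitlines())
        parts.append(prefix)               # code before the cursor
        prompt = "\n".join(parts)
        return prompt[-budget_chars:]      # trim oldest context to fit budget

The intuition is that a base code model completes documents, so the richer and more document-like the assembled context, the better the completion it is guided toward.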

Building Goal-Oriented Retrieval Agents for Low-Latency Recommendations at Scale

Faber Labs

Faber Labs developed Gora (Goal-Oriented Retrieval Agents), a system that reframes subjective relevance ranking as optimization toward concrete downstream goals. The system optimizes for specific KPIs like conversion rates and average order value in e-commerce, or minimizing surgical engagements in healthcare. They achieved this through a combination of real-time user feedback processing, unified goal optimization, and high-performance infrastructure built with Rust, resulting in consistent 200%+ improvements in key metrics while maintaining sub-second latency.

Building LinkedIn's First Production Agent: Hiring Assistant Platform and Architecture

LinkedIn

LinkedIn evolved from simple GPT-based collaborative articles to sophisticated AI coaches and finally to production-ready agents, culminating in their Hiring Assistant product announced in October 2025. The company faced the challenge of moving from conversational assistants with prompt chains to task automation using agent-based architectures that could handle high-scale candidate evaluation while maintaining quality and enabling rapid iteration. They built a comprehensive agent platform with modular sub-agent architecture, centralized prompt management, LLM inference abstraction, messaging-based orchestration for resilience, and a skill registry for dynamic tool discovery. The solution enabled parallel development of agent components, independent quality evaluation, and the ability to serve both enterprise recruiters and SMB customers with variations of the same underlying platform, processing thousands of candidate evaluations at scale while maintaining the flexibility to iterate on product design.

Building Multi-Agent AI Systems for Developer Support and Infrastructure Operations

Electrolux

Electrolux, a Swedish home appliances manufacturer with over 100 years of history, developed "Infra Assistant," an AI-powered multi-agent system to support their internal development teams and reduce bottlenecks in their platform engineering organization. The company faced challenges with their small Site Reliability Engineering (SRE) team being overwhelmed with repetitive support requests via Slack channels. Using Amazon Bedrock agents with both retrieval-augmented generation (RAG) and multi-agent collaboration patterns, they built a sophisticated system that answers questions based on organizational documentation, executes operations via API integrations, and can even troubleshoot cloud infrastructure issues autonomously. The system has proven cost-efficient compared to manual effort, successfully handles repetitive tasks like access management, and provides context-aware responses by accessing multiple organizational knowledge sources, though challenges remain around response latency and achieving consistent accuracy across all interactions.

Building Open-Source RL Environments from Real-World Coding Tasks for Model Training

Cline

Cline's head of AI presents their experience operating a model-agnostic AI coding agent platform, arguing that the industry has over-invested in "clever scaffolding" like RAG and tool-calling frameworks when frontier models can succeed with simpler approaches. The real bottleneck to progress, they contend, isn't prompt engineering or agent architecture but rather the quality of benchmarks and RL environments used to train models. Cline developed an automated "RL environments factory" system that transforms real-world coding tasks captured from actual user interactions into standardized, containerized training environments. They announce Cline Bench, an open-source benchmark derived from genuine software development work, inviting the community to contribute by simply working on open-source projects with Cline and opting into the initiative, thereby creating a shared substrate for improving frontier models.

Building Production Agentic Systems with Platform-Level LLMOps Features

Anthropic

Anthropic's presentation at the AI Engineer conference outlined their platform evolution for building high-performance agentic systems, using Claude Code as the primary example. The company identified three core challenges in production LLM deployments: harnessing model capabilities through API features, managing context windows effectively, and providing secure computational infrastructure for autonomous agent operation. Their solution involved developing platform-level features including extended thinking modes, tool use APIs, Model Context Protocol (MCP) for standardized external system integration, memory management for selective context retrieval, context editing capabilities, and secure code execution environments with container orchestration. The combination of memory tools and context editing demonstrated a 39% performance improvement on internal benchmarks, while their infrastructure solutions enabled Claude Code to run autonomously on web and mobile platforms with session persistence and secure sandboxing.

Building Production AI Agents and Agentic Platforms at Scale

Vercel

This AWS re:Invent 2025 session explores the challenges organizations face moving AI projects from proof-of-concept to production, addressing the statistic that 46% of AI POC projects are canceled before reaching production. AWS Bedrock team members and Vercel's director of AI engineering present a comprehensive framework for production AI systems, focusing on three critical areas: model switching, evaluation, and observability. The session demonstrates how Amazon Bedrock's unified APIs, guardrails, and Agent Core capabilities combined with Vercel's AI SDK and Workflow Development Kit enable rapid development and deployment of durable, production-ready agentic systems. Vercel showcases real-world applications including V0 (an AI-powered prototyping platform), Vercel Agent (an AI code reviewer), and various internal agents deployed across their organization, all powered by Amazon Bedrock infrastructure.

Building Production AI Agents for E-commerce and Food Delivery at Scale

Prosus

This case study explores how Prosus builds and deploys AI agents across e-commerce and food delivery businesses serving two billion customers globally. The discussion covers critical lessons learned from deploying conversational agents in production, with a particular focus on context engineering as the most important factor for success—more so than model selection or prompt engineering alone. The team found that successful production deployments require hybrid approaches combining semantic and keyword search, generative UI experiences that mix chat with dynamic visual components, and sophisticated evaluation frameworks. They emphasize that technology has advanced faster than user adoption, leading to failures when pure chatbot interfaces were tested, and success only came through careful UI/UX design, contextual interventions, and extensive testing with both synthetic and real user data.
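The hybrid search approach can be illustrated with a small sketch that fuses BM25 keyword ranking and embedding similarity via reciprocal rank fusion; the libraries, embed_fn, and the rrf_k constant are assumptions, not Prosus's stack:

    from rank_bm25 import BM25Okapi
    import numpy as np

    def hybrid_search(query, docs, doc_embeddings, embed_fn, k=5, rrf_k=60):
        """Fuse keyword (BM25) and semantic (embedding) rankings with
        reciprocal rank fusion; constants are illustrative."""
        bm25 = BM25Okapi([d.lower().split() for d in docs])
        kw_rank = np.argsort(-bm25.get_scores(query.lower().split()))

        q = embed_fn(query)
        sims = doc_embeddings @ q / (
            np.linalg.norm(doc_embeddings, axis=1) * np.linalg.norm(q))
        sem_rank = np.argsort(-sims)

        scores = {}
        for rank_list in (kw_rank, sem_rank):
            for rank, idx in enumerate(rank_list):
                scores[idx] = scores.get(idx, 0.0) + 1.0 / (rrf_k + rank + 1)
        best = sorted(scores, key=scores.get, reverse=True)[:k]
        return [docs[i] for i in best]

Keyword ranking catches exact matches on product names and SKUs that embeddings blur, while semantic ranking catches paraphrases that keyword search misses; fusing ranks rather than raw scores sidesteps the problem that the two scores are on different scales.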

Building Production AI Agents for Enterprise HR, IT, and Finance Platform

Rippling

Rippling, an enterprise platform providing HR, payroll, IT, and finance solutions, has evolved its AI strategy from simple content summarization to building complex production agents that assist administrators and employees across their entire platform. Led by Anker, their head of AI, the company has developed agents that handle payroll troubleshooting, sales briefing automation, interview transcript summarization, and talent performance calibration. They've transitioned from deterministic workflow-based approaches to more flexible deep agent paradigms, leveraging LangChain and LangSmith for development and tracing. The company maintains a dual focus: embedding AI capabilities within their product for customers running businesses on their platform, and deploying AI internally to increase productivity across all teams. Early results show promise in handling complex, context-dependent queries that traditional rule-based systems couldn't address.

Building Production AI Agents with API Platform and Multi-Modal Capabilities

Manus AI

Manus AI demonstrates their production-ready AI agent platform through a technical workshop showcasing their API and application framework. The session covers building complex AI applications including a Slack bot, web applications, browser automation, and invoice processing systems. The platform addresses key production challenges such as infrastructure scaling, sandboxed execution environments, file handling, webhook management, and multi-turn conversations. Through live demonstrations and code walkthroughs, the workshop illustrates how their platform enables developers to build and deploy AI agents that handle millions of daily conversations while providing consistent pricing and functionality across web, mobile, Slack, and API interfaces.

Building Production AI Agents: Lessons from Claude Code and Enterprise Deployments

Anthropic

Anthropic's Applied AI team shares learnings from building and deploying AI agents in production throughout 2024-2025, focusing on their Claude Code product and enterprise customer implementations. The presentation covers the evolution from simple Q&A chatbots and RAG systems to sophisticated agentic architectures that run LLMs in loops with tools. Key technical challenges addressed include context engineering, prompt optimization, tool design, memory management, and handling long-running tasks that exceed context windows. The team transitioned from workflow-based architectures (chained LLM calls with deterministic logic) to agent-based systems where models autonomously use tools to solve open-ended problems, resulting in more robust error handling and the ability to tackle complex tasks like multi-hour coding sessions.
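The core pattern of "an LLM in a loop with tools" can be reduced to a short sketch against the Anthropic Messages API; the tool, model id, and turn limit are illustrative:

    import anthropic

    client = anthropic.Anthropic()
    TOOLS = [{
        "name": "read_file",
        "description": "Read a UTF-8 text file and return its contents.",
        "input_schema": {"type": "object",
                         "properties": {"path": {"type": "string"}},
                         "required": ["path"]},
    }]

    def run_tool(name: str, args: dict) -> str:
        if name == "read_file":
            return open(args["path"], encoding="utf-8").read()
        return f"unknown tool: {name}"

    def agent(task: str, max_turns: int = 20) -> str:
        messages = [{"role": "user", "content": task}]
        for _ in range(max_turns):
            resp = client.messages.create(
                model="claude-sonnet-4-5",   # placeholder model id
                max_tokens=4096, tools=TOOLS, messages=messages)
            if resp.stop_reason != "tool_use":
                return "".join(b.text for b in resp.content
                               if b.type == "text")
            messages.append({"role": "assistant", "content": resp.content})
            results = [{"type": "tool_result", "tool_use_id": b.id,
                        "content": run_tool(b.name, b.input)}
                       for b in resp.content if b.type == "tool_use"]
            messages.append({"role": "user", "content": results})
        return "max turns reached"

Unlike a fixed workflow of chained calls, the model decides at each turn whether to call a tool or finish, which is what lets errors (for example, a failed file read returned as a tool result) be observed and recovered from within the loop.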

Building Production AI Coding Assistants and Agents at Scale

Sourcegraph

Sourcegraph's CTO discusses the evolution from their code search engine to building Cody, an enterprise AI coding assistant, and AMP, a coding agent released in 2024. The company serves hundreds of Fortune 500 companies and government agencies, deploying LLM-powered tools that achieve 30-60% developer productivity gains. Their approach emphasizes multi-model architectures, rapid iteration without traditional code review processes, and building application scaffolds around frontier models to generate training data for next-generation systems. The discussion explores the transition from chat-based LLM applications (requiring sophisticated RAG systems) to agentic architectures (using simple tool-calling loops), the challenges of scaling in enterprise environments, and philosophical debates about whether pure model scaling will lead to AGI or whether alternating between application development and model training is necessary for continued progress.

Building Production Analytics Agents with Semantic Layer Integration

Wobby

Wobby, a company that helps business teams get insights from their data warehouses in under one minute, shares their journey building production-ready analytics agents over two years. The team developed three specialized agents (Quick, Deep, and Steward) that work with semantic layers to answer business questions. Their solution emphasizes Slack/Teams integration for adoption, building their own semantic layer to encode business logic, preferring prompt-based logic over complex workflows, implementing comprehensive testing strategies beyond just evals, and optimizing for latency through caching and progressive disclosure. The approach led to successful adoption by clients, with analytics agents being actively used in production to handle ad-hoc business intelligence queries.

Building Production Evaluation Systems for GitHub Copilot at Scale

GitHub

This case study examines the challenges of building evaluation systems for AI products in production, drawing from the author's experience leading the evaluation team at GitHub Copilot serving 100M developers. The problem addressed was the gap between evaluation tooling and developer workflows, as most AI teams consist of engineers rather than data scientists, yet evaluation tools are designed for data science workflows. The solution involved building a comprehensive evaluation stack including automated harnesses for code completion testing, A/B testing infrastructure, and implicit user behavior metrics like acceptance rates. The results showed that while sophisticated evaluation systems are valuable, successful AI products in practice rely heavily on rapid iteration, monitoring in production, and "vibes-based" testing, with the dominant strategy being to ship fast and iterate based on real user feedback rather than extensive offline evaluation.

Building Production LLM Applications with DSPy Framework

AlixPartners

A technical consultant presents a comprehensive workshop on using DSPy, a declarative framework for building modular LLM-powered applications in production. The presenter demonstrates how DSPy enables rapid iteration on LLM applications by treating LLMs as first-class citizens in Python programs, with built-in support for structured outputs, type guarantees, tool calling, and automatic prompt optimization. Through multiple real-world use cases including document classification, contract analysis, time entry correction, and multi-modal processing, the workshop shows how DSPy's core primitives—signatures, modules, tools, adapters, optimizers, and metrics—allow teams to build production-ready systems that are transferable across models, optimizable without fine-tuning, and maintainable at scale.
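A minimal DSPy example in the spirit of the document classification use case, assuming a recent DSPy release; the model id, fields, and categories are illustrative:

    import dspy

    dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # any supported model id

    class ClassifyDocument(dspy.Signature):
        """Classify a document into one of the allowed categories."""
        document: str = dspy.InputField()
        allowed_categories: list[str] = dspy.InputField()
        category: str = dspy.OutputField()
        rationale: str = dspy.OutputField()

    classify = dspy.Predict(ClassifyDocument)
    result = classify(
        document="Invoice #123 for consulting services rendered in May...",
        allowed_categories=["invoice", "contract", "memo"])
    print(result.category, "-", result.rationale)

The signature declares what goes in and out rather than how to prompt for it, which is what makes the same program transferable across models and amenable to DSPy's automatic prompt optimizers.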

Building Production-Grade AI Agents with Guardrails, Context Management, and Security

Portia / Riff / Okta

This panel discussion features founders from Portia AI and Rift.ai (formerly Databutton) discussing the challenges of moving AI agents from proof-of-concept to production. The speakers address critical production concerns including guardrails for agent reliability, context engineering strategies, security and access control challenges, human-in-the-loop patterns, and identity management. They share real-world customer examples ranging from custom furniture makers to enterprise CRM enrichment, emphasizing that while approximately 40% of companies experimenting with AI have agents in production, the journey requires careful attention to trust, security, and supportability. Key solutions include conditional example-based prompting, sandboxed execution environments, role-based access controls, and keeping context windows smaller for better precision rather than utilizing maximum context lengths.

Building Production-Grade AI Agents with Observability, Evaluation, and Insights

LangChain

LangChain discusses the evolution of their LangSmith platform for managing AI agents in production, addressing the challenge of bringing rigor and reliability to deployed LLM applications. The company describes launching two major feature sets: Insights, which automatically discovers patterns and trends in millions of production traces to help teams understand user interactions and agent behavior, and thread-based evaluations, which enable assessment of multi-turn conversations and complete user sessions rather than just individual interactions. These features aim to help teams transition from informal "vibe testing" to more methodical approaches as agents move from initial prototypes to production deployments handling millions of daily traces, with the goal of reducing unknowns and improving reliability in production AI systems.

Building Production-Grade AI Agents: Overcoming Reasoning and Tool Challenges

Kentauros AI

Kentauros AI presents their experience building production-grade AI agents, detailing the challenges in developing agents that can perform complex, open-ended tasks in real-world environments. They identify key challenges in agent reasoning (big brain, little brain, and tool brain problems) and propose solutions through reinforcement learning, generalizable algorithms, and scalable data approaches. Their evolution from G2 to G5 agent architectures demonstrates practical solutions to memory management, task-specific reasoning, and skill modularity.

Building Production-Ready Agentic AI Systems in Financial Services

Fitch Group

Jayeeta Putatunda, Director of AI Center of Excellence at Fitch Group, shares lessons learned from deploying agentic AI systems in the financial services industry. The discussion covers the challenges of moving from proof-of-concept to production, emphasizing the importance of evaluation frameworks, observability, and the "data prep tax" required for reliable AI agent deployments. Key insights include the need to balance autonomous agents with deterministic workflows, implement comprehensive logging at every checkpoint, combine LLMs with traditional predictive models for numerical accuracy, and establish strong business-technical partnerships to define success metrics. The conversation highlights that while agentic frameworks enable powerful capabilities, production success requires careful system design, multi-layered evaluation, human-in-the-loop validation patterns, and a focus on high-ROI use cases rather than chasing the latest model architectures.

Building Production-Ready Agentic Systems with the Claude Developer Platform

Anthropic

Anthropic's Claude Developer Platform team discusses their evolution from a simple API to a comprehensive platform for building autonomous AI agents in production. The conversation covers their philosophy of "unhobbling" models by reducing scaffolding and giving Claude more autonomous decision-making capabilities through tools like web search, code execution, and context management. They introduce the Claude Code SDK as a general-purpose agentic harness that handles the tool-calling loop automatically, making it easier for developers to prototype and deploy agents. The platform addresses key production challenges including prompt caching, context window management, observability for long-running tasks, and agentic memory, with a roadmap focused on higher-order abstractions and self-improving systems.

Building Production-Ready AI Agent Systems: Multi-Agent Orchestration and LLMOps at Scale

Galileo / Crew AI

This podcast discussion between Galileo and Crew AI leadership explores the challenges and solutions for deploying AI agents in production environments at enterprise scale. The conversation covers the technical complexities of multi-agent systems, the need for robust evaluation and observability frameworks, and the emergence of new LLMOps practices specifically designed for non-deterministic agent workflows. Key topics include authentication protocols, custom evaluation metrics, governance frameworks for regulated industries, and the democratization of agent development through no-code platforms.

Building Production-Ready AI Agents for Internal Workflow Automation

Vercel

Vercel, a web hosting and deployment platform, addressed the challenge of identifying and implementing successful AI agent projects across their organization by focusing on employee pain points—specifically repetitive, boring tasks that humans disliked. The company deployed three internal production agents: a lead processing agent that automated sales qualification and research (saving hundreds of days of manual work), an anti-abuse agent that accelerated content moderation decisions by 59%, and a data analyst agent that automated SQL query generation for business intelligence. Their methodology centered on asking employees "What do you hate most about your job?" to identify tasks that were repetitive enough for current AI models to handle reliably while still delivering high business impact.

Building Production-Ready AI Agents: Lessons from BeeAI Framework Development

IBM

IBM Research's team spent a year developing and deploying AI agents in production, leading to the creation of the open-source BeeAI Framework. The project addressed the challenge of making LLM-powered agents accessible to developers while maintaining production-grade reliability. Their journey included creating custom evaluation frameworks, developing novel user interfaces for agent interaction, and establishing robust architecture patterns for different use cases. The team successfully launched an open-source stack that gained particular traction with TypeScript developers.

Building Production-Ready AI Agents: OpenAI Codex CLI Architecture and Agent Loop Design

OpenAI

OpenAI's Codex CLI is a cross-platform software agent that executes reliable code changes on local machines, demonstrating production-grade LLMOps through its sophisticated agent loop architecture. The system orchestrates interactions between users, language models, and tools through an iterative process that manages inference calls, tool execution, and conversation state. Key technical achievements include stateless request handling for Zero Data Retention compliance, strategic prompt caching optimization to achieve linear rather than quadratic performance, automatic context window management through intelligent compaction, and robust handling of multi-turn conversations while maintaining conversation coherence across potentially hundreds of model-tool iterations.
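Context compaction can be sketched generically as follows; count_tokens and summarize are stand-ins for a tokenizer and an LLM-backed summarization step, and this is not OpenAI's actual logic:

    def compact(messages, count_tokens, summarize,
                budget=100_000, keep_recent=8):
        """When the running conversation exceeds the context budget,
        replace the oldest turns with a single summary message while
        keeping the most recent turns verbatim. Constants are arbitrary."""
        if sum(count_tokens(m["content"]) for m in messages) <= budget:
            return messages
        old, recent = messages[:-keep_recent], messages[-keep_recent:]
        summary = summarize("\n".join(
            f'{m["role"]}: {m["content"]}' for m in old))
        return [{"role": "user",
                 "content": f"[Summary of earlier conversation]\n{summary}"}
                ] + recent

Keeping the recent turns verbatim preserves the working state the agent needs right now, while the summary retains just enough of the earlier history for the conversation to stay coherent across hundreds of model-tool iterations.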

Building Production-Ready CRM Integration for ChatGPT using Model Context Protocol

HubSpot

HubSpot developed the first third-party CRM connector for ChatGPT using the Model Context Protocol (MCP), creating a remote MCP server that enables 250,000+ businesses to perform deep research through conversational AI without requiring local installations. The solution involved building a homegrown MCP server infrastructure using Java and Dropwizard, implementing OAuth-based user-level permissions, creating a distributed service discovery system for automatic tool registration, and designing a query DSL that allows AI models to generate complex CRM searches through natural language interactions.
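For flavor, here is a minimal MCP tool in Python using the reference SDK's FastMCP helper; HubSpot's production server is Java/Dropwizard, and this tool and its data are hypothetical:

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("crm-search")

    @mcp.tool()
    def search_contacts(query: str, limit: int = 10) -> list[dict]:
        """Search CRM contacts matching a natural-language query."""
        # A real server would translate `query` into the CRM's query DSL
        # and enforce the caller's OAuth-scoped permissions before returning.
        return [{"name": "Ada Lovelace",
                 "company": "Analytical Engines"}][:limit]

    if __name__ == "__main__":
        mcp.run()  # serve the tool over an MCP transport

The protocol's value is that the tool schema above is discovered automatically by any MCP client (such as ChatGPT's connector support), so the server, not the model vendor, owns the integration surface.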

Building Production-Scale AI Agents with Extended GenAI Tech Stack

LinkedIn

LinkedIn extended their generative AI application tech stack to support building complex AI agents that can reason, plan, and act autonomously while maintaining human oversight. The evolution from their original GenAI stack to support multi-agent orchestration involved leveraging existing infrastructure like gRPC for agent definitions, messaging systems for multi-agent coordination, and comprehensive observability through OpenTelemetry and LangSmith. The platform enables agents to work both synchronously and asynchronously, supports background processing, and includes features like experiential memory, human-in-the-loop controls, and cross-device state synchronization, ultimately powering products like LinkedIn's Hiring Assistant which became globally available.

Building Production-Scale AI Search with Knowledge Graphs, MCP, and DSPy

Dropbox

Dropbox faced the challenge of enabling users to search and query their work content scattered across 50+ SaaS applications and tabs, which proprietary LLMs couldn't access. They built Dash, an AI-powered universal search and agent platform using a sophisticated context engine that combines custom connectors, content understanding, knowledge graphs, and index-based retrieval (primarily BM25) over federated approaches. The system addresses MCP scalability challenges through "super tools," uses LLM-as-a-judge for relevancy evaluation (achieving high agreement with human evaluators), and leverages DSPy for prompt optimization across 30+ prompts in their stack. This infrastructure enables cross-app intelligence with fast, accurate, and ACL-compliant retrieval for agentic queries at enterprise scale.

Building Production-Scale Code Completion Tools with Continuous Evaluation and Prompt Engineering

GitLab

GitLab's ModelOps team developed a sophisticated code completion system using multiple LLMs, implementing a continuous evaluation and improvement pipeline. The system combines both open-source and third-party LLMs, featuring a comprehensive architecture that includes continuous prompt engineering, evaluation benchmarks, and reinforcement learning to consistently improve code completion accuracy and usefulness for developers.

Building QueryAnswerBird: An AI Data Analyst with Text-to-SQL and RAG

Delivery Hero

Woowa Brothers, part of Delivery Hero, developed QueryAnswerBird (QAB), an LLM-based AI data analyst to address employee challenges with SQL query generation and data literacy. Through a company-wide survey, they identified that 95% of employees used data for work, but over half struggled with SQL due to time constraints or difficulty translating business logic into queries. The solution leveraged RAG, LangChain, and GPT-4 to build a Slack-integrated assistant that automatically generates SQL queries from natural language, interprets queries, validates syntax, and explores tables. After winning first place at an internal hackathon in 2023, a dedicated task force spent six months developing the production system with comprehensive LLMOps practices including A/B testing, monitoring dashboards, API load balancing, GPT caching, and CI/CD deployment, conducting over 500 tests to optimize performance.
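A pared-down sketch of the text-to-SQL-plus-validation flow, assuming an OpenAI-style API and sqlglot for syntax checking; the schema, model id, and prompt are placeholders, not QAB's internals:

    import sqlglot
    from openai import OpenAI

    client = OpenAI()
    SCHEMA = "orders(order_id, customer_id, amount, created_at)"  # illustrative

    def generate_sql(question: str) -> str:
        prompt = (f"Given the table {SCHEMA}, write a single SQL query that "
                  f"answers: {question}\nReturn only the SQL.")
        resp = client.chat.completions.create(
            model="gpt-4",  # the write-up mentions GPT-4; exact id is a placeholder
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        sql = resp.choices[0].message.content.strip()
        if sql.startswith("```"):                 # strip a markdown fence if present
            sql = sql.strip("`").removeprefix("sql").strip()
        sqlglot.parse_one(sql)                    # raises on invalid syntax
        return sql

Parsing the generated query before returning it catches outright syntax failures cheaply; semantic correctness still needs the table-exploration and testing layers the team built around this step.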

Building Reliable AI Agent Systems with Effect TypeScript Framework

14.ai

14.ai, an AI-native customer support platform, uses Effect, a TypeScript framework, to manage the complexity of building reliable LLM-powered agent systems that interact directly with end users. The company built a comprehensive architecture using Effect across their entire stack to handle unreliable APIs, non-deterministic model outputs, and complex workflows through strong type guarantees, dependency injection, retry mechanisms, and structured error handling. Their approach enables reliable agent orchestration with fallback strategies between LLM providers, real-time streaming capabilities, and comprehensive testing through dependency injection, resulting in more predictable and resilient AI systems.

Building Reliable AI Agents Through Production Monitoring and Intent Discovery

Raindrop

Raindrop, a monitoring platform for AI products, addresses the challenge of building reliable AI agents in production where traditional offline evaluations fail to capture real-world usage patterns. The company developed a "Sentry for AI products" approach that emphasizes experimentation, production monitoring, and discovering user intents through clustering and signal detection. Their solution combines explicit signals (like thumbs up/down, regenerations) and implicit signals (detecting refusals, task failures, user frustration) to identify issues that don't manifest as traditional software errors. The platform trains custom models to detect issues across production data at scale, enabling teams to discover unknown problems, track their impact on users, and fix them systematically without breaking existing functionality.

Building Reliable LLM Workflows in Biotech Research

Moderna

Moderna Therapeutics applies large language models primarily for document reformatting and regulatory submission preparation within their research organization, deliberately avoiding autonomous agents in favor of highly structured workflows. The team, led by Eric Maher in research data science, focuses on automating what they term "intellectual drudgery" - reformatting laboratory records and experiment documentation into regulatory-compliant formats. Their approach prioritizes reliability over novelty, implementing rigorous evaluation processes matched to consequence levels, with particular emphasis on navigating the complex security and permission mapping challenges inherent in regulated biotech environments. The team employs a "non-LLM filter" methodology, only reaching for generative AI after exhausting simpler Python or traditional ML approaches, and leverages serverless infrastructure like Modal and reactive notebooks with Marimo to enable rapid experimentation and deployment.

Building Stateful AI Agents with In-Context Learning and Memory Management

Letta

Letta addresses the fundamental limitation of current LLM-based agents: their inability to learn and retain information over time, leading to degraded performance as context accumulates. The platform enables developers to build stateful agents that learn by updating their context windows rather than model parameters, making learning interpretable and model-agnostic. The solution includes a developer platform with memory management tools, context window controls, and APIs for creating production agents that improve over time. Real-world deployments include a support agent that has been learning from Discord interactions for a month and recommendation agents for Built Rewards, demonstrating that agents with persistent memory can achieve performance comparable to fine-tuned models while remaining flexible and debuggable.
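The "learning by updating the context window" idea can be sketched generically: named memory blocks rendered into the system prompt, with an update operation exposed to the model as a tool. This illustrates the pattern only and is not Letta's API:

    class MemoryBlocks:
        """Generic sketch of in-context memory: named blocks rendered into
        the system prompt and editable by the agent via a tool call."""

        def __init__(self):
            self.blocks = {"persona": "You are a helpful support agent.",
                           "user_facts": ""}

        def render(self) -> str:
            return "\n".join(f"<{k}>\n{v}\n</{k}>"
                             for k, v in self.blocks.items())

        def update(self, block: str, new_text: str) -> str:
            # Exposed to the model as a tool, so the agent "learns" by
            # rewriting its own context rather than its weights.
            if block not in self.blocks:
                return f"no such block: {block}"
            self.blocks[block] = new_text
            return f"updated {block}"

    memory = MemoryBlocks()
    system_prompt = memory.render()  # prepended to every model call

Because the learned state lives in plain text rather than in weights, it can be inspected, diffed, and debugged, which is the interpretability and model-agnosticism the summary refers to.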

Building Uma: In-House AI Research and Custom Fine-Tuning for Marketplace Intelligence

Upwork

Upwork developed Uma, their "mindful AI" assistant, by rejecting off-the-shelf LLM solutions in favor of building custom-trained models using proprietary platform data and in-house AI research. The company hired expert freelancers to create high-quality training datasets, generated synthetic data anchored in real platform interactions, and fine-tuned open-source LLMs specifically for hiring workflows. This approach enabled Uma to handle complex, business-critical tasks including crafting job posts, matching freelancers to opportunities, autonomously coordinating interviews, and evaluating candidates. The strategy resulted in models that substantially outperform generic alternatives on domain-specific tasks while reducing costs by up to 10x and improving reliability in production environments. Uma now operates as an increasingly agentic system that takes meaningful actions across the full hiring lifecycle.

Climate Tech Foundation Models for Environmental AI Applications

Various

Climate tech startups are leveraging Amazon SageMaker HyperPod to build specialized foundation models that address critical environmental challenges including weather prediction, sustainable material discovery, ecosystem monitoring, and geological modeling. Companies like Orbital Materials and Hum.AI are training custom models from scratch on massive environmental datasets, achieving significant breakthroughs such as tenfold performance improvements in carbon capture materials and the ability to see underwater from satellite imagery. These startups are moving beyond traditional LLM fine-tuning to create domain-specific models with billions of parameters that process multimodal environmental data including satellite imagery, sensor networks, and atmospheric measurements at scale.

Company-Wide GenAI Transformation Through Hackathon-Driven Culture and Centralized Infrastructure

Agoda

Agoda transformed from GenAI experiments to company-wide adoption through a strategic approach that began with a 2023 hackathon, grew into a grassroots culture of exploration, and was supported by robust infrastructure including a centralized GenAI proxy and internal chat platform. Starting with over 200 developers prototyping 40+ ideas, the initiative evolved into 200+ applications serving both internal productivity (73% employee adoption, 45% of tech support tickets automated) and customer-facing features, demonstrating how systematic enablement and community-driven innovation can scale GenAI across an entire organization.

Context Engineering and Agent Development at Scale: Building Open Deep Research

LangChain

Lance Martin from LangChain discusses the emerging discipline of "context engineering" through his experience building Open Deep Research, a deep research agent that evolved over a year to become the best-performing open-source solution on Deep Research Bench. The conversation explores how managing context in production agent systems—particularly across dozens to hundreds of tool calls—presents challenges distinct from simple prompt engineering, requiring techniques like context offloading, summarization, pruning, and multi-agent isolation. Martin's iterative development journey illustrates the "bitter lesson" for AI engineering: structured workflows that work well with current models can become bottlenecks as models improve, requiring engineers to continuously remove structure and embrace more general approaches to capture exponential model improvements.

Context Engineering and Tool Design for Background Coding Agents at Scale

Spotify

Spotify deployed a background coding agent to automate large-scale software maintenance across thousands of repositories, initially experimenting with open-source tools like Goose and Aider before building a custom agentic loop, and ultimately adopting Claude Code with the Anthropic Agent SDK. The primary challenge shifted from building the agent to effective context engineering—crafting prompts that produce reliable, mergeable pull requests at scale. Through extensive experimentation, Spotify developed prompt engineering principles (tailoring to the agent, stating preconditions, using examples, defining end states through tests) and designed a constrained tool ecosystem (limited bash commands, custom verify tool, git tool) to maintain predictability. The system has successfully merged approximately 50 migrations with thousands of AI-generated pull requests into production, demonstrating that careful prompt design and strategic tool limitation are critical for production LLM deployments in code generation scenarios.

Context Engineering for Background Coding Agents at Scale

Spotify

Spotify built a background coding agent system to automate large-scale software maintenance and migrations across thousands of repositories. The company initially experimented with open-source agents like Goose and Aider, then built a custom agentic loop, before ultimately adopting Claude Code from Anthropic. The core challenge centered on context engineering—crafting effective prompts and selecting appropriate tools to enable the agent to reliably generate mergeable pull requests. By developing sophisticated prompt engineering practices and carefully constraining the agent's toolset, Spotify has successfully applied this system to approximately 50 migrations with thousands of merged PRs across hundreds of repositories.

Context Engineering for Production AI Agents at Scale

Manus

Manus, a general AI agent platform, addresses the challenge of context explosion in long-running autonomous agents that can accumulate hundreds of tool calls during typical tasks. The company developed a comprehensive context engineering framework encompassing five key dimensions: context offloading (to file systems and sandbox environments), context reduction (through compaction and summarization), context retrieval (using file-based search tools), context isolation (via multi-agent architectures), and context caching (for KV cache optimization). This approach has been refined through five major refactors since launch in March, with the system supporting typical tasks requiring around 50 tool calls while maintaining model performance and managing token costs effectively through their layered action space architecture.

Context Engineering Platform for Multi-Domain RAG and Agentic Systems

Contextual

Contextual has developed an end-to-end context engineering platform designed to address the challenges of building production-ready RAG and agentic systems across multiple domains including e-commerce, code generation, and device testing. The platform combines multimodal ingestion, hierarchical document processing, hybrid search with reranking, and dynamic agents to enable effective reasoning over large document collections. In a recent context engineering hackathon, Contextual's dynamic agent achieved competitive results on a retail dataset of nearly 100,000 documents, demonstrating the value of constrained sub-agents, turn limits, and intelligent tool selection including MCP server management.

Context Engineering Strategies for Production AI Agents

Manus

Manus AI developed a production AI agent system that uses context engineering instead of fine-tuning to enable rapid iteration and deployment. The company faced the challenge of building an effective agentic system that could operate reliably at scale while managing complex multi-step tasks. Their solution involved implementing several key strategies including KV-cache optimization, tool masking instead of removal, file system-based context management, attention manipulation through task recitation, and deliberate error preservation for learning. These approaches allowed Manus to achieve faster development cycles, improved cost efficiency, and better agent performance across millions of users while maintaining system stability and scalability.
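Manus describes masking tools at decode time (for example via constrained decoding or tool-choice restrictions) rather than removing their definitions, which would change the prompt prefix and invalidate the KV cache. Where a provider exposes no such constraint, a weaker fallback is to keep definitions stable and reject disallowed calls, as in this illustrative sketch:

    TOOLS = [  # the full tool list is sent every turn so the KV cache stays warm
        {"name": "browser_open", "description": "open a URL",
         "input_schema": {"type": "object"}},
        {"name": "shell_exec", "description": "run a shell command",
         "input_schema": {"type": "object"}},
        {"name": "file_write", "description": "write a file",
         "input_schema": {"type": "object"}},
    ]

    def allowed_tools(state: str) -> set[str]:
        # A state machine decides what is usable *now*;
        # the definitions themselves never change between turns.
        if state == "researching":
            return {"browser_open"}
        return {t["name"] for t in TOOLS}

    def enforce(tool_call: dict, state: str):
        """Reject a disallowed call with an error result the model can
        recover from, instead of editing the prompt prefix."""
        if tool_call["name"] not in allowed_tools(state):
            return {"error": f'{tool_call["name"]} is unavailable while {state}'}
        return None  # allowed; execute normally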

Contextual Agent Playbooks and Tools: Enterprise-Scale AI Coding Agent Integration

LinkedIn

LinkedIn faced the challenge that while AI coding agents were powerful, they lacked organizational context about the company's thousands of microservices, internal frameworks, data infrastructure, and specialized systems. To address this, they built CAPT (Contextual Agent Playbooks & Tools), a unified framework built on the Model Context Protocol (MCP) that provides AI agents with access to internal tools and executable playbooks encoding institutional workflows. The system enables over 1,000 engineers to perform complex tasks like experiment cleanup, data analysis, incident debugging, and code review with significant productivity gains: 70% reduction in issue triage time, 3× faster data analysis workflows, and automated debugging that cuts time spent by more than half in many cases.

Cost Optimization and Performance Panel Discussion: Strategies for Running LLMs in Production

Various

A panel discussion featuring experts from Neva, Intercom, Prompt Layer, and OctoML discussing strategies for optimizing costs and performance when running LLMs in production. The panel explores various approaches from using API services to running models in-house, covering topics like model compression, hardware selection, latency optimization, and monitoring techniques. Key insights include the trade-offs between API usage and in-house deployment, strategies for cost reduction, and methods for performance optimization.

CPU-Based Deployment of Large MoE Models Using Intel Xeon 6 Processors

LMSYS

Intel PyTorch Team collaborated with the SGLang project to develop a cost-effective CPU-only deployment solution for large Mixture of Experts (MoE) models like DeepSeek R1, addressing the challenge of high memory requirements that typically necessitate multiple expensive AI accelerators. Their solution leverages Intel Xeon 6 processors with Advanced Matrix Extensions (AMX) and implements highly optimized kernels for attention mechanisms and MoE computations, achieving 6-14x speedup in time-to-first-token (TTFT) and 2-4x speedup in time-per-output-token (TPOT) compared to llama.cpp, while supporting multiple quantization formats including BF16, INT8, and FP8.

Dark Vessel Detection System Using SAR Imagery and ML

Defense Innovation Unit

The Defense Innovation Unit developed a system to detect illegal, unreported, and unregulated fishing vessels using satellite-based synthetic aperture radar (SAR) imagery and machine learning. They created a large annotated dataset of SAR images, developed ML models for vessel detection, and deployed the system to over 100 countries through a platform called SeaVision. The system successfully identifies "dark vessels" that turn off their AIS transponders to hide illegal fishing activities, enabling better maritime surveillance and law enforcement.

Deploying Agentic AI in Financial Services at Scale

Nvidia

Financial institutions including Capital One, Royal Bank of Canada (RBC), and Visa are deploying agentic AI systems in production to handle real-time financial transactions and complex workflows. These multi-agent systems go beyond simple generative AI by reasoning through problems and taking action autonomously, requiring 100-200x more computational resources than traditional single-shot inference. The implementations focus on use cases like automotive purchasing assistance, investment research automation, and fraud detection, with organizations building proprietary models using open-source foundations (like Llama or Mistral) combined with bank-specific data to achieve 60-70% accuracy improvements. The results include 60% cycle time improvements in report generation, 10x more data analysis capacity, and enhanced fraud detection capabilities, though these gains require substantial investment in AI infrastructure and talent development.

Deploying Generative AI at Scale Across 5,000 Developers

Liberty IT

Liberty IT, the technology division of Fortune 100 insurance company Liberty Mutual, embarked on a large-scale deployment of generative AI tools across their global workforce of over 5,000 developers and 50,000+ employees. The initiative involved rolling out custom GenAI platforms including Liberty GPT (an internal ChatGPT variant) to 70% of employees and GitHub Copilot to over 90% of IT staff within the first year. The company faced challenges including rapid technology evolution, model availability constraints, cost management, RAG implementation complexity, and achieving true adoption beyond basic usage. Through building a centralized AI platform with governance controls, implementing comprehensive learning programs across six streams, supporting 28 different models optimized for various use cases, and developing custom dashboards for cost tracking and observability, Liberty IT successfully navigated these challenges while maintaining enterprise security and compliance requirements.

Deploying Secure AI Agents in Highly Regulated Financial and Gaming Environments

Sicoob / Holland Casino

Two organizations operating in highly regulated industries—Sicoob, a Brazilian cooperative financial institution, and Holland Casino, a government-mandated Dutch gaming operator—share their approaches to deploying generative AI workloads while maintaining strict compliance requirements. Sicoob built a scalable infrastructure using Amazon EKS with GPU instances, leveraging open-source tools like Karpenter, KEDA, vLLM, and Open WebUI to run multiple open-source LLMs (Llama, Mistral, DeepSeek, Granite) for code generation, robotic process automation, investment advisory, and document interaction use cases, achieving cost efficiency through spot instances and auto-scaling. Holland Casino took a different path, using Anthropic's Claude models via Amazon Bedrock and developing lightweight AI agents using the Strands framework, later deploying them through Bedrock Agent Core to provide management stakeholders with self-service access to cost, security, and operational insights. Both organizations emphasized the importance of security, governance, compliance frameworks (including ISO 42001 for AI), and responsible AI practices while demonstrating that regulatory requirements need not inhibit AI adoption when proper architectural patterns and AWS services are employed.

Domain-Adapted Foundation Models for Enterprise-Scale LLM Deployment

LinkedIn

LinkedIn developed a family of domain-adapted foundation models (EON models) to enhance their GenAI capabilities across their platform serving 1B+ members. By adapting open-source models like Llama through multi-task instruction tuning and safety alignment, they created cost-effective models that maintain high performance while being 75x more cost-efficient than GPT-4. The EON-8B model demonstrated significant improvements in production applications, including a 4% increase in candidate-job-requirements matching accuracy compared to GPT-4o mini in their Hiring Assistant product.

Domain-Adapted LLMs Through Continued Pretraining on E-commerce Data

Ebay

eBay developed customized large language models by adapting Meta's Llama 3.1 models (8B and 70B parameters) to the e-commerce domain through continued pretraining on a mixture of proprietary eBay data and general domain data. This hybrid approach allowed them to infuse domain-specific knowledge while avoiding the resource intensity of training from scratch. Using 480 NVIDIA H100 GPUs and advanced distributed training techniques, they trained the models on 1 trillion tokens, achieving approximately 25% improvement on e-commerce benchmarks for English (30% for non-English) with only 1% degradation on general domain tasks. The resulting "e-Llama" models were further instruction-tuned and aligned with human feedback to power various AI initiatives across the company in a cost-effective, scalable manner.

Domain-Specific Agentic AI for Personalized Korean Skincare Recommendations

Glowe / Weaviate

Glowe, developed by Weaviate, addresses the challenge of finding effective skincare product combinations by building a domain-specific AI agent that understands Korean skincare science. The solution leverages dual embedding strategies with TF-IDF weighting to capture product effects from 94,500 user reviews, uses Weaviate's vector database for similarity search, and employs Gemini 2.5 Flash for routine generation. The system includes an agentic chat interface powered by Elysia that provides real-time personalized guidance, resulting in scientifically-grounded skincare recommendations based on actual user experiences rather than marketing claims.
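The TF-IDF weighting of review-derived effects can be illustrated with scikit-learn; the products and effect phrases below are invented, not Glowe's data:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # One "effect document" per product, concatenating effect mentions
    # pulled from user reviews; repetition raises a term's weight.
    effect_docs = {
        "essence_a": "hydrating hydrating brightening soothing",
        "cream_b":   "barrier repair soothing anti-redness",
    }

    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(list(effect_docs.values()))

    def similar_products(query_effects: str):
        q = vectorizer.transform([query_effects])
        sims = cosine_similarity(q, matrix).ravel()
        return sorted(zip(effect_docs, sims), key=lambda p: -p[1])

    print(similar_products("soothing barrier repair"))

TF-IDF downweights effects claimed by nearly every product (such as "hydrating") and upweights distinctive ones, so similarity reflects what reviewers actually report rather than marketing boilerplate.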

Domain-Specific AI Platform for Manufacturing and Supply Chain Optimization

Articul8

Articul8 developed a generative AI platform to address enterprise challenges in manufacturing and supply chain management, particularly for a European automotive manufacturer. The platform combines public AI models with domain-specific intelligence and proprietary data to create a comprehensive knowledge graph from vast amounts of unstructured data. The solution reduced incident response time from 90 seconds to 30 seconds (3x improvement) and enabled automated root cause analysis for manufacturing defects, helping experts disseminate daily incidents and optimize production processes that previously required manual analysis by experienced engineers.

Dynamic Context Discovery for Production Coding Agents

Cursor

Cursor, a coding agent platform, developed a "dynamic context discovery" approach to optimize how their AI agents use context windows and token budgets when working on long-running software development tasks. Instead of loading all potentially relevant information upfront (static context), their system enables agents to dynamically pull only the necessary context as needed. They implemented five key techniques: converting long tool outputs to files, using chat history files during summarization, supporting the Agent Skills standard, selectively loading MCP tools (reducing tokens by 46.9%), and treating terminal sessions as files. This approach improves token efficiency and response quality by reducing context window bloat and preventing information overload for the underlying LLM.
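The first technique, converting long tool outputs to files, might look like this sketch (not Cursor's implementation); the inline budget and preview size are arbitrary:

    import os
    import tempfile

    MAX_INLINE_CHARS = 4000  # illustrative budget

    def to_context(tool_name: str, output: str) -> str:
        """Keep short tool outputs inline; spill long ones to a file the
        agent can grep or read selectively later."""
        if len(output) <= MAX_INLINE_CHARS:
            return output
        fd, path = tempfile.mkstemp(prefix=f"{tool_name}-", suffix=".txt")
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            f.write(output)
        head = output[:500]
        return (f"[output: {len(output)} chars, saved to {path}; "
                f"first 500 chars below]\n{head}")

The agent sees a preview plus a pointer, and pulls the full content only if the task actually requires it, which is the essence of dynamic rather than static context.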

Empowering Non-Technical Domain Experts to Drive AI Quality in Conversational AI

Portola

Portola built Tolan, an AI companion app focused on creating authentic emotional connections through natural voice conversations. The challenge was ensuring conversation quality, emotional intelligence, and authentic behavior—qualities that couldn't be captured by automated evaluations alone. Portola's solution involved creating a workflow that empowered non-technical subject matter experts (behavioral researchers, writers, game designers) to review logs, curate problem-specific datasets, iterate on prompts using playground environments, and deploy changes directly to production without engineering handoffs. This approach resulted in a 4x improvement in prompt iteration velocity and systematic improvements in conversation quality, memory authenticity, and brand voice consistency.

Engineering Principles and Practices for Production LLM Systems

LangChain

This case study captures insights from Lance Martin, ML engineer at LangChain, discussing the evolution from traditional ML to LLM-based systems and the emerging engineering discipline of building production GenAI applications. The discussion covers key challenges including the shift from model training to model orchestration, the need to continuously rearchitect systems as foundation models rapidly improve, and the critical importance of context engineering to manage token usage and prevent context degradation. Solutions explored include workflow versus agent architectures, the three-part context engineering playbook (reduce, offload, isolate), and evaluation strategies that emphasize user feedback and tracing over static benchmarks. Results demonstrate that teams like Manus have rearchitected their systems five times since March 2025, and that simpler approaches with proper observability often outperform complex architectures, with the understanding that today's solutions must be rebuilt as models improve.

Enhanced Agentic RAG for On-Call Engineering Support

Uber

Uber developed Genie, an internal on-call copilot that uses an enhanced agentic RAG (EAg-RAG) architecture to provide real-time support for engineering security and privacy queries through Slack. The system addressed significant accuracy issues in traditional RAG approaches by implementing LLM-powered agents for query optimization, source identification, and context refinement, along with enriched document processing that improved table extraction and metadata enhancement. The enhanced system achieved a 27% relative improvement in acceptable answers and a 60% relative reduction in incorrect advice, enabling deployment across critical security and privacy channels while reducing the support load on subject matter experts and on-call engineers.

Enhanced Agentic-RAG for Internal On-Call Support Copilot

Uber

Uber developed Genie, an internal on-call copilot powered by LLMs, to provide real-time support for engineering queries in Slack. When initial testing revealed significant accuracy issues with responses in the engineering security and privacy domain, the team transitioned from traditional RAG to an Enhanced Agentic RAG (EAg-RAG) architecture. This involved enriched document processing with custom Google Docs loaders and LLM-powered content formatting, plus pre- and post-processing agents for query optimization, source identification, and context refinement. The improvements resulted in a 27% relative increase in acceptable answers and a 60% relative reduction in incorrect advice, enabling deployment across critical security and privacy channels while reducing the support load on subject matter experts.

Enterprise Agentic AI for Customer Support and Sales Using Amazon Bedrock AgentCore

Swisscom

Swisscom, Switzerland's leading telecommunications provider, implemented Amazon Bedrock AgentCore to build and scale enterprise AI agents for customer support and sales operations across their organization. The company faced challenges in orchestrating AI agents across different departments while maintaining Switzerland's strict data protection compliance, managing secure cross-departmental authentication, and preventing redundant efforts. By leveraging Amazon Bedrock AgentCore's Runtime, Identity, and Memory services along with the Strands Agents framework, Swisscom deployed two B2C use cases—personalized sales pitches and automated technical support—achieving stakeholder demos within 3-4 weeks, handling thousands of monthly requests with low latency, and establishing a scalable foundation that enables secure agent-to-agent communication while maintaining regulatory compliance.

Enterprise AI Platform Integration for Secure Production Deployment

Rubrik

Predibase, a fine-tuning and model serving platform, announced its acquisition by Rubrik, a data security and governance company, with the goal of combining Predibase's generative AI capabilities with Rubrik's secure data infrastructure. The integration aims to address the critical challenge that over 50% of AI pilots never reach production due to issues with security, model quality, latency, and cost. By combining Predibase's post-training and inference capabilities with Rubrik's data security posture management, the merged platform seeks to provide an end-to-end solution that enables enterprises to deploy generative AI applications securely and efficiently at scale.

Enterprise Data Extraction Evolution from Simple RAG to Multi-Agent Architecture

Box

Box, a B2B unstructured data platform serving Fortune 500 companies, initially built a straightforward LLM-based metadata extraction system that successfully processed 10 million pages but encountered limitations with complex documents, OCR challenges, and scale requirements. They evolved from a simple pre-process-extract-post-process pipeline to a sophisticated multi-agent architecture that intelligently handles document complexity, field grouping, and quality feedback loops, resulting in a more robust and easily evolving system that better serves enterprise customers' diverse document processing needs.

Enterprise LLM Deployment with Multi-Cloud Data Platform Integration

Databricks

This presentation by Databricks' Product Management lead addresses the challenges large enterprises face when deploying LLMs into production, particularly around data governance, evaluation, and operational control. The talk centers on two primary case studies: FactSet's transformation of their query language translation system (improving from 59% to 85% accuracy while reducing latency from 15 to 6 seconds), and Databricks' internal use of Claude for automating analyst questionnaire responses. The solution involves decomposing complex prompts into multi-step agentic workflows, implementing granular governance controls across data and model access, and establishing rigorous evaluation frameworks to achieve production-grade reliability in high-risk enterprise environments.

Enterprise Unstructured Data Quality Management for Production AI Systems

Anomalo

Anomalo addresses the critical challenge of unstructured data quality in enterprise AI deployments by building an automated platform on AWS that processes, validates, and cleanses unstructured documents at scale. The solution automates OCR and text parsing, implements continuous data observability to detect anomalies, enforces governance and compliance policies including PII detection, and leverages Amazon Bedrock for scalable LLM-based document quality analysis. This approach enables enterprises to transform their vast collections of unstructured text data into trusted assets for production AI applications while reducing operational burden, optimizing costs, and maintaining regulatory compliance.

Enterprise-Grade RAG Systems for Legal AI Platform

Harvey

Harvey, a legal AI platform serving professional services firms, addresses the complex challenge of building enterprise-grade Retrieval-Augmented Generation (RAG) systems that can handle sensitive legal documents while maintaining high performance, accuracy, and security. The company leverages specialized vector databases like LanceDB Enterprise and Postgres with PGVector to power their RAG systems across three key data sources: user-uploaded files, long-term vault projects, and third-party legal databases. Through careful evaluation of vector database options and collaboration with domain experts, Harvey has built a system that achieves 91% preference over ChatGPT in tax law applications while serving users in 45 countries with strict privacy and compliance requirements.
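
For context, a generic Postgres + PGVector retrieval pattern of the kind described; the schema, table, and tenant-isolation column here are assumptions, not Harvey's actual design:

```python
# Generic pgvector retrieval sketch (illustrative schema).
# Requires: pip install psycopg2-binary, and a Postgres instance where
# `CREATE EXTENSION vector;` has already been run.
import psycopg2

conn = psycopg2.connect("dbname=legal user=app")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS chunks (
        id bigserial PRIMARY KEY,
        matter_id text NOT NULL,   -- tenant/matter isolation for compliance
        body text NOT NULL,
        embedding vector(1536)
    )
""")

def top_k(query_embedding: list, matter_id: str, k: int = 5):
    # `<->` is pgvector's L2 distance operator; filtering by matter first
    # ensures retrieval never crosses a client boundary.
    vec = "[" + ",".join(str(x) for x in query_embedding) + "]"
    cur.execute(
        "SELECT body FROM chunks WHERE matter_id = %s "
        "ORDER BY embedding <-> %s::vector LIMIT %s",
        (matter_id, vec, k),
    )
    return [row[0] for row in cur.fetchall()]
```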

Enterprise-Scale Cloud Event Management with Generative AI for Operational Intelligence

Fidelity Investments

Fidelity Investments faced the challenge of managing massive volumes of AWS health events and support case data across 2,000+ AWS accounts and 5 million resources in their multi-cloud environment. They built CENTS (Cloud Event Notification Transport Service), an event-driven data pipeline that ingests, enriches, routes, and acts on AWS health and support data at scale. Building upon this foundation, they developed and published the MAKI (Machine Augmented Key Insights) framework using Amazon Bedrock, which applies generative AI to analyze support cases and health events, identify trends, provide remediation guidance, and enable agentic workflows for vulnerability detection and automated code fixes. The solution reduced operational costs by 57%, improved stakeholder engagement through targeted notifications, and enabled proactive incident prevention by correlating patterns across their infrastructure.

Enterprise-Scale GenAI and Agentic AI Deployment in B2B Supply Chain Operations

Wesco

Wesco, a B2B supply chain and industrial distribution company, presents a comprehensive case study on deploying enterprise-grade AI applications at scale, moving from POC to production. The company faced challenges in transitioning from traditional predictive analytics to cognitive intelligence using generative AI and agentic systems. Their solution involved building a composable AI platform with proper governance, MLOps/LLMOps pipelines, and multi-agent architectures for use cases ranging from document processing and knowledge retrieval to fraud detection and inventory management. Results include deployment of 50+ use cases, significant improvements in employee productivity through "everyday AI" applications, and quantifiable ROI through transformational AI initiatives in supply chain optimization, with emphasis on proper observability, compliance, and change management to drive adoption.

Enterprise-Scale LLM Deployment with Self-Evolving Models and Graph-Based RAG

Writer

Writer, an enterprise AI company founded in 2020, has evolved from building basic transformer models to delivering full-stack GenAI solutions for Fortune 500 companies. They've developed a comprehensive approach to enterprise LLM deployment that includes their own Palmera model series, graph-based RAG systems, and innovative self-evolving models. Their platform focuses on workflow automation and "action AI" in industries like healthcare and financial services, achieving significant efficiency gains through a hybrid approach that combines both no-code interfaces for business users and developer tools for IT teams.

Enterprise-Wide AI Assistant Deployment for Collective Discovery

Prosus

Prosus, a global technology investment company serving a quarter of the world's population across 100+ countries, developed and deployed an internal AI assistant called Toqan.ai to enable collective discovery and exploration of generative AI capabilities across their organization. Starting with early LLM experiments in 2019-2021 using models like BERT and GPT-2, they conducted over 20 field experiments before launching a comprehensive chatbot accessible via Slack to approximately 13,000 employees across 24 companies. The assistant integrates over 20 models and tools including commercial and open-source LLMs, image generation, voice encoding, document processing, and code creation capabilities, with robust privacy guardrails. Results showed that over 81% of users reported productivity increases exceeding 5-10%, with 50% of usage devoted to engineering tasks and the remainder spanning diverse business functions. The platform reduced "Pinocchio" (hallucination) feedback from 10% to 1.5% through model improvements and user education, while enabling bottom-up use case discovery that graduated into production applications at multiple portfolio companies including learning assistants, conversational ordering systems, and coding mentors.

Eval-Driven Development for AI Applications

Vercel

Vercel presents their approach to building and deploying AI applications through eval-driven development, moving beyond traditional testing methods to handle AI's probabilistic nature. They implement a comprehensive evaluation system combining code-based grading, human feedback, and LLM-based assessments to maintain quality in their v0 product, an AI-powered UI generation tool. This approach creates a positive feedback loop they call the "AI-native flywheel," which continuously improves their AI systems through data collection, model optimization, and user feedback.
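
A minimal sketch of layering a deterministic code grader under an LLM-as-judge rubric, in the spirit of the approach described; all names are illustrative:

```python
# Layered eval: cheap deterministic checks run first, then an LLM-as-judge
# rubric scores only the outputs that pass them.

def llm(prompt: str) -> str:
    raise NotImplementedError

def code_grader(generated_ui: str) -> bool:
    # Deterministic checks are fast and objective; run them before any judge.
    return "<button" in generated_ui and "TODO" not in generated_ui

def llm_judge(request: str, generated_ui: str) -> float:
    rubric = (
        "Score 0-10 how well this UI code satisfies the request. "
        "Reply with just the number.\n"
        f"Request: {request}\nCode:\n{generated_ui}"
    )
    return float(llm(rubric))

def evaluate(request: str, generated_ui: str) -> float:
    if not code_grader(generated_ui):
        return 0.0
    return llm_judge(request, generated_ui)
```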

Evaluation Driven Development for LLM Reliability at Scale

Dosu

Dosu, a company providing an AI teammate for software development and maintenance, implemented Evaluation Driven Development (EDD) to ensure reliability of their LLM-based product. As their system scaled to thousands of repositories, they integrated LangSmith for monitoring and evaluation, enabling them to identify failure modes, maintain quality, and continuously improve their AI assistant's performance through systematic testing and iteration.

Evaluation Patterns for Deep Agents in Production

Langchain

LangChain built and deployed four production applications powered by "Deep Agents" - stateful, long-running AI agents capable of complex tasks including coding, email assistance, and agent building. The challenge was developing comprehensive evaluation strategies for these agents that went beyond traditional LLM evaluation approaches. Their solution involved five key patterns: bespoke test logic for each datapoint with custom assertions, single-step evaluations for validating specific decision points, full agent turn testing for end-to-end behavior, multi-turn conversations with conditional logic to simulate realistic interactions, and proper environment setup with clean, reproducible test conditions. Using LangSmith's Pytest and Vitest integrations, they implemented flexible evaluation frameworks that could assess agent trajectories, final responses, and state artifacts while maintaining fast, debuggable test suites through techniques like API mocking and containerized environments.
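
The per-datapoint assertion pattern can be sketched with plain Pytest parametrization; `run_agent` and the tool names below are hypothetical stand-ins:

```python
# Per-datapoint agent tests: each case carries its own assertion over the
# trajectory and final state, not just an exact-match on the answer.
import pytest

def run_agent(task: str) -> dict:
    raise NotImplementedError  # returns {"tool_calls": [...], "answer": ...}

CASES = [
    # (task, bespoke assertion over the result)
    ("summarize inbox", lambda r: "read_email" in r["tool_calls"]),
    ("fix the failing test", lambda r: r["tool_calls"].count("run_tests") >= 2),
]

@pytest.mark.parametrize("task,check", CASES)
def test_agent_trajectory(task, check):
    result = run_agent(task)
    # Trajectory-level assertion specific to this datapoint.
    assert check(result), f"trajectory check failed for task: {task}"
    assert result["answer"], "agent must produce a final response"
```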

Evaluation-Driven LLM Production Workflows with Morgan Stanley and Grab Case Studies

OpenAI

OpenAI's applied evaluation team presented best practices for implementing LLMs in production through two case studies: Morgan Stanley's internal document search system for financial advisors and Grab's computer vision system for Southeast Asian mapping. Both companies started with simple evaluation frameworks using just 5 initial test cases, then progressively scaled their evaluation systems while maintaining CI/CD integration. Morgan Stanley improved their RAG system's document recall from 20% to 80% through iterative evaluation and optimization, while Grab developed sophisticated vision fine-tuning capabilities for recognizing road signs and lane counts in Southeast Asian contexts. The key insight was that effective evaluation systems enable rapid iteration cycles and clear communication between teams and external partners like OpenAI for model improvement.

Evolution from Vector Search to Graph-Based RAG for Enterprise Knowledge Systems

Writer

Writer, an enterprise AI platform company, evolved their retrieval-augmented generation (RAG) system from traditional vector search to a sophisticated graph-based approach to address limitations in handling dense, specialized enterprise data. Starting with keyword search and progressing through vector embeddings, they encountered accuracy issues with chunking and struggled with concentrated enterprise data where documents shared similar terminology. Their solution combined knowledge graphs with fusion-in-decoder techniques, using specialized models for graph structure conversion and storing graph data as JSON in Lucene-based search engines. This approach resulted in improved accuracy, reduced hallucinations, and better performance compared to seven different vector search systems in benchmarking tests.

Evolution of AI Systems and LLMOps from Research to Production: Infrastructure Challenges and Application Design

NVIDIA / Lepton

This lecture transcript from Yangqing Jia, VP at NVIDIA and founder of Lepton AI (acquired by NVIDIA), explores the evolution of AI system design from an engineer's perspective. The talk covers the progression from research frameworks (Caffe, TensorFlow, PyTorch) to production AI infrastructure, examining how LLM applications are built and deployed at scale. Jia discusses the emergence of "neocloud" infrastructure designed specifically for AI workloads, the challenges of GPU cluster management, and practical considerations for building consumer and enterprise LLM applications. Key insights include the trade-offs between open-source and closed-source models, the importance of RAG and agentic AI patterns, infrastructure design differences between conventional cloud and AI-specific platforms, and the practical challenges of operating LLMs in production, including supply chain management for GPUs and cost optimization strategies.

Evolution of an Internal AI Platform from No-Code LLM Apps to Agentic Systems

Grab

Grab developed SpellVault, an internal no-code AI platform that evolved from a simple RAG-based LLM app builder into a sophisticated agentic system supporting thousands of apps across the organization. Initially designed to democratize AI access for non-technical users through knowledge integrations and plugins, the platform progressively incorporated advanced capabilities including workflow orchestration, ReAct agent execution, unified tool frameworks, and Model Context Protocol (MCP) compatibility. This evolution enabled SpellVault to transform from supporting static question-answering apps into powering dynamic AI agents capable of reasoning, acting, and interacting with internal and external systems, while maintaining its core mission of accessibility and ease of use.

Evolution of Code Assistant Integration in a Cloud Development Platform

Val Town

Val Town's journey in implementing and evolving code assistance features showcases the challenges and opportunities in productionizing LLMs for code generation. Through iterative improvements and fast-following industry innovations, they progressed from basic ChatGPT integration to sophisticated features including error detection, deployment automation, and multi-file code generation, while addressing key challenges like generation speed and accuracy.

Evolution of Code Evaluation Benchmarks: From Single-Line Completion to Full Codebase Translation

Cursor

This research presentation details four years of work developing evaluation methodologies for coding LLMs across varying time horizons, from second-level code completions to hour-long codebase translations. The speaker addresses critical challenges in evaluating production coding AI systems including data contamination, insufficient test suites, and difficulty calibration. Key solutions include LiveCodeBench's dynamic evaluation approach with periodically updated problem sets, automated test generation using LLM-driven approaches, and novel reward hacking detection systems for complex optimization tasks. The work demonstrates how evaluation infrastructure must evolve alongside model capabilities, incorporating intermediate grading signals, latency-aware metrics, and LLM-as-judge approaches to detect non-idiomatic coding patterns that pass traditional tests but fail real-world quality standards.

Evolving GitHub Copilot with LLM Experimentation Across the Developer Lifecycle

GitHub

GitHub details their internal experimentation process with GPT-4 and other large language models to extend GitHub Copilot beyond code completion into multiple stages of the software development lifecycle. The GitHub Next research team received early access to GPT-4 and prototyped numerous AI-powered features including Copilot for Pull Requests, Copilot for Docs, Copilot for CLI, and GitHub Copilot Chat. Through iterative experimentation and internal testing with GitHub employees, the team discovered that user experience design, particularly how AI suggestions are presented and allow for developer control, is as critical as model accuracy for successful adoption. The experiments resulted in technical previews released in March 2023 that demonstrated AI integration across documentation, command-line interfaces, and pull request workflows, with key learnings around making AI outputs predictable, tolerable, steerable, and verifiable.

Fine-tuning and Scaling LLMs for Search Relevance Prediction

Faire

Faire, an e-commerce marketplace, tackled the challenge of evaluating search relevance at scale by transitioning from manual human labeling to automated LLM-based assessment. They first implemented a GPT-based solution and later improved it using fine-tuned Llama models. Their best performing model, Llama3-8b, achieved a 28% improvement in relevance prediction accuracy compared to their previous GPT model, while significantly reducing costs through self-hosted inference that can handle 70 million predictions per day using 16 GPUs.
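
For a sense of what self-hosted batch scoring looks like, an illustrative vLLM snippet; the model name, prompt, and label set are assumptions rather than Faire's production configuration:

```python
# Illustrative self-hosted batch relevance scoring with vLLM.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")  # fine-tuned weights in practice
params = SamplingParams(temperature=0.0, max_tokens=4)

pairs = [("desk lamp", "Brass Adjustable Desk Lamp"),
         ("desk lamp", "Wool Throw Blanket")]
prompts = [
    f"Query: {q}\nProduct: {p}\nIs the product relevant to the query? Answer yes or no."
    for q, p in pairs
]

# vLLM batches these across the GPU automatically; for offline labeling of
# millions of pairs per day, throughput matters far more than latency.
for out in llm.generate(prompts, params):
    print(out.prompt.splitlines()[0], "->", out.outputs[0].text.strip())
```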

Fine-Tuning LLMs for Multi-Agent Orchestration in Code Generation

Cosine

Cosine, a company building enterprise coding agents, faced the challenge of deploying high-performance AI systems in highly constrained environments including on-premise and air-gapped deployments where large frontier models were not viable. They developed a multi-agent architecture using specialized orchestrator and worker models, leveraging model distillation, supervised fine-tuning, preference optimization, and reinforcement fine-tuning to create smaller models that could match or exceed the performance of much larger models. The result was a 31% performance increase on the SWE-bench Freelancer benchmark, 3X latency improvement, 60% reduction in GPU footprint, and 20% fewer errors in generated code, all while operating on as few as 4 H100 GPUs and maintaining full deployment flexibility across cloud, VPC, and on-premise environments.

Forward Deployed Engineering for Enterprise LLM Deployments

OpenAI

OpenAI's Forward Deployed Engineering (FDE) team embeds with enterprise customers to solve high-value problems using LLMs, aiming for production deployments that generate tens of millions to billions in value. The team works on complex use cases across industries—from wealth management at Morgan Stanley to semiconductor verification and automotive supply chain optimization—building custom solutions while extracting generalizable patterns that inform OpenAI's product development. Through an "eval-driven development" approach combining LLM capabilities with deterministic guardrails, the FDE team has grown from 2 to 52 engineers in 2025, successfully bridging the gap between AI capabilities and enterprise production requirements while maintaining focus on zero-to-one problem solving rather than long-term consulting engagements.

Forward Deployed Engineering: Bringing Enterprise LLM Applications to Production

OpenAI

OpenAI's Forward Deployed Engineering (FDE) team, led by Colin Jarvis, embeds with enterprise customers to solve high-value problems using LLMs and deliver production-grade AI applications. The team focuses on problems worth tens of millions to billions in value, working with companies across industries including finance (Morgan Stanley), manufacturing (semiconductors, automotive), telecommunications (T-Mobile, Klarna), and others. By deeply understanding customer domains, building evaluation frameworks, implementing guardrails, and iterating with users over months, the FDE team achieves 20-50% efficiency improvements and high adoption rates (98% at Morgan Stanley). The approach emphasizes solving hard, novel problems from zero-to-one, extracting learnings into reusable products and frameworks (like Swarm and Agent Kit), then scaling solutions across the market while maintaining strategic focus on product development over services revenue.

Foundation Model for Ads Recommendation at Scale

Meta

Meta developed GEM (Generative Ads Recommendation Model), an LLM-scale foundation model trained on thousands of GPUs to enhance ads recommendation across Facebook and Instagram. The model addresses challenges of sparse signals in billions of daily user-ad interactions, diverse multimodal data, and efficient large-scale training. GEM achieves 4x efficiency improvement over previous models through novel architecture innovations including stackable factorization machines, pyramid-parallel sequence processing, and cross-feature learning. The system employs sophisticated post-training knowledge transfer techniques achieving 2x the effectiveness of standard distillation, propagating learnings across hundreds of vertical models. Since launch in early 2025, GEM delivered a 5% increase in ad conversions on Instagram and 3% on Facebook Feed in Q2, with Q3 architectural improvements doubling performance gains from additional compute and data.

Foundation Model for Unified Personalization at Scale

Netflix

Netflix developed a unified foundation model based on transformer architecture to consolidate their diverse recommendation systems, which previously consisted of many specialized models for different content types, pages, and use cases. The foundation model uses autoregressive transformers to learn user representations from interaction sequences, incorporating multi-token prediction, multi-layer representation, and long context windows. By scaling from millions to billions of parameters over 2.5 years, they demonstrated that scaling laws apply to recommendation systems, achieving notable performance improvements while creating high leverage across downstream applications through centralized learning and easier fine-tuning for new use cases.

From Simple RAG to Multi-Agent Architecture for Document Data Extraction

Box

Box evolved their document data extraction system from a simple single-model approach to a sophisticated multi-agent architecture to handle enterprise-scale unstructured data processing. The initial straightforward approach of preprocessing documents and feeding them to an LLM worked well for basic use cases but failed when customers presented complex challenges like 300-page documents, poor OCR quality, hundreds of extraction fields, and confidence scoring requirements. By redesigning the system using an agentic approach with specialized sub-agents for different tasks, Box achieved better accuracy, easier system evolution, and improved maintainability while processing millions of pages for enterprise customers.

GenAI Agent for Partner-Guest Messaging Automation

Booking.com

Booking.com developed a GenAI agent to assist accommodation partners in responding to guest inquiries more efficiently. The problem was that manual responses through their messaging platform were time-consuming, especially during busy periods, potentially leading to delayed responses and lost bookings. The solution involved building a tool-calling agent using LangGraph and GPT-4 Mini that can suggest relevant template responses, generate custom free-text answers, or abstain from responding when appropriate. The system includes guardrails for PII redaction, retrieval tools using embeddings for template matching, and access to property and reservation data. Early results show the system handles tens of thousands of daily messages, with pilots demonstrating 70% improvement in user satisfaction, reduced follow-up messages, and faster response times.
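
A hedged sketch of the suggest/generate/abstain decision expressed as tool selection; tool names and routing are illustrative, not Booking.com's implementation:

```python
# The agent's core decision as a three-way tool choice, with an explicit
# abstain option that keeps precision high on sensitive or unclear messages.
import json

TOOLS = {
    "suggest_template": lambda msg: f"[template matched for: {msg}]",
    "generate_reply":   lambda msg: f"[free-text draft for: {msg}]",
    "abstain":          lambda msg: None,  # explicit opt-out
}

def llm(prompt: str) -> str:
    raise NotImplementedError

def handle_guest_message(message: str):
    decision = llm(
        "Pick exactly one action for this guest message and return JSON "
        '{"tool": ..., "reason": ...}. Tools: suggest_template (a stock '
        "answer fits), generate_reply (needs a custom answer), abstain "
        f"(unsure or sensitive).\nMessage: {message}"
    )
    choice = json.loads(decision)["tool"]
    return TOOLS[choice](message)
```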

GenAI Governance in Practice: Access Control, Data Quality, and Monitoring for Production LLM Systems

Xomnia

Martin Der, a data scientist at Xomnia, presents practical approaches to GenAI governance addressing the challenge that only 5% of GenAI projects deliver immediate ROI. The talk focuses on three key pillars: access and control (enabling self-service prototyping through tools like Open WebUI while avoiding shadow AI), unstructured data quality (detecting contradictions and redundancies in knowledge bases through similarity search and LLM-based validation), and LLM ops monitoring (implementing tracing platforms like LangFuse and creating dynamic golden datasets for continuous testing). The solutions include deploying Chrome extensions for workflow integration, API gateways for centralized policy enforcement, and developing a knowledge agent called "Genie" for internal use cases across telecom, healthcare, logistics, and maritime industries.

GenAI-Powered Automated Resource Leak Fixing in Java Codebases

Uber

Uber developed FixrLeak, a generative AI-based framework to automate the detection and repair of resource leaks in their Java codebase. Resource leaks—where files, database connections, or streams aren't properly released—cause performance degradation and system failures, and while tools like SonarQube detect them, fixing remains manual and error-prone. FixrLeak combines Abstract Syntax Tree (AST) analysis with generative AI (specifically OpenAI ChatGPT-4O) to produce accurate, idiomatic fixes following Java best practices like try-with-resources. When tested on 124 resource leaks in Uber's codebase, FixrLeak successfully automated fixes for 93 out of 102 eligible cases (after filtering out deprecated code and complex inter-procedural leaks), significantly reducing manual effort and improving code quality at scale.

Generating Production-Ready MCP Servers from OpenAPI Specifications

SpeakEasy

SpeakEasy tackled the challenge of enabling AI agents to interact with existing APIs by developing a tool that automatically generates Model Context Protocol (MCP) servers from OpenAPI documents. The company identified critical issues when generating over 50 production MCP servers for customers, including tool explosion (too many exposed operations), verbose descriptions consuming excessive tokens, complex data formats confusing LLMs, and inadequate access controls. Their solution involved a three-layer optimization approach: pruning OpenAPI documents with custom extensions, building intelligence into the generator to handle complex formats and streaming responses, and providing customization files for precise tool control. The result is production-ready MCP servers that balance LLM context window constraints with functional completeness, using techniques like scope-based access control, automatic data transformation, and optimized descriptions.

Generative AI-Powered Intelligent Document Processing for Healthcare Operations

Myriad Genetics

Myriad Genetics, a genetic testing and precision medicine provider, faced challenges processing thousands of healthcare documents daily with their existing Amazon Comprehend and Amazon Textract solution, which cost $15,000 monthly per business unit with 8.5-minute processing times and required manual information extraction involving up to 10 full-time employees. Partnering with AWS Generative AI Innovation Center, they deployed the open-source GenAI IDP Accelerator using Amazon Bedrock with Amazon Nova models, implementing advanced prompt engineering techniques including AI-driven prompt engineering, negative prompting, few-shot learning, and chain-of-thought reasoning. The solution increased classification accuracy from 94% to 98%, reduced classification costs by 77%, decreased processing time by 80% (from 8.5 to 1.5 minutes), and automated key information extraction at 90% accuracy, projected to save $132K annually while reducing prior authorization processing time by 2 minutes per submission.

Graph RAG and Multi-Agent Systems for Legal Case Discovery and Document Analysis

WhyHow

WhyHow.ai, a legal technology company, developed a system that combines graph databases, multi-agent architectures, and retrieval-augmented generation (RAG) to identify class action and mass tort cases before competitors by scraping web data, structuring it into knowledge graphs, and generating personalized reports for law firms. The company claims to find potential cases within 15 minutes compared to the industry standard of 8-9 months, using a pipeline that processes complaints from various online sources, applies lawyer-specific filtering schemas, and generates actionable legal intelligence through automated multi-agent workflows backed by graph-structured knowledge representation.

Harness Engineering for Agentic Coding Systems

Langchain

LangChain improved their coding agent (deepagents-cli) from 52.8% to 66.5% on Terminal Bench 2.0, advancing from Top 30 to Top 5 performance, solely through harness engineering without changing the underlying model (gpt-5.2-codex). The solution focused on three key areas: system prompts emphasizing self-verification loops, enhanced tools and context injection to help agents understand their environment, and middleware hooks to detect problematic patterns like doom loops. The approach leveraged LangSmith tracing at scale to identify failure modes and iteratively optimize the harness through automated trace analysis, demonstrating that systematic engineering around the model can yield significant performance improvements in production agentic systems.

Healthcare Search Discovery Using ML and Generative AI on E-commerce Platform

Amazon Health Services

Amazon Health Services faced the challenge of integrating healthcare services into Amazon's e-commerce search experience, where traditional product search algorithms weren't designed to handle complex relationships between symptoms, conditions, treatments, and healthcare services. They developed a comprehensive solution combining machine learning for query understanding, vector search for product matching, and large language models for relevance optimization. The solution uses AWS services including Amazon SageMaker for ML models, Amazon Bedrock for LLM capabilities, and Amazon EMR for data processing, implementing a three-component architecture: query understanding pipeline to classify health searches, LLM-enhanced product knowledge base for semantic search, and hybrid relevance optimization using both human labeling and LLM-based classification. This system now serves daily health-related search queries, helping customers find everything from prescription medications to primary care services through improved discovery pathways.

High-Performance AI Network Infrastructure for Distributed Training at Scale

Meta

Meta faced significant challenges with AI model training as checkpoint data grew from hundreds of gigabytes to tens of terabytes, causing network bottlenecks and GPU idle time. Their solution involved implementing bidirectional multi-NIC utilization through ECMP-based load balancing for egress traffic and BGP-based virtual IP injection for ingress traffic, enabling optimal use of all available network interfaces. The implementation resulted in dramatic performance improvements, reducing job read latency from 300 seconds to 1 second and checkpoint loading time from 800 seconds to 100 seconds, while achieving 4x throughput improvement through proper traffic distribution across multiple network interfaces.

Hybrid RAG for Technical Training Knowledge Assistant in Mining Operations

Rio Tinto

Rio Tinto Aluminium faced challenges in providing technical experts in refining and smelting sectors with quick and accurate access to vast amounts of specialized institutional knowledge during their internal training programs. They developed a generative AI-powered knowledge assistant using hybrid RAG (retrieval augmented generation) on Amazon Bedrock, combining both vector search and knowledge graph databases to enable more accurate, contextually rich responses. The hybrid system significantly outperformed traditional vector-only RAG across all metrics, particularly in context quality and entity recall, showing over 53% reduction in standard deviation while maintaining high mean scores, and leveraging 11-17 technical documents per query compared to 2-3 for vector-only approaches, ultimately streamlining how employees find and utilize critical business information.

Implementing Evaluation Framework for MCP Server Tool Selection

Neon

Neon developed a comprehensive evaluation framework to test their Model Context Protocol (MCP) server's ability to correctly use database migration tools. The company faced challenges with LLMs selecting appropriate tools from a large set of 20+ tools, particularly for complex stateful workflows involving database migrations. Their solution involved creating automated evals using Braintrust, implementing "LLM-as-a-judge" scoring techniques, and establishing integrity checks to ensure proper tool usage. Through iterative prompt engineering guided by these evaluations, they improved their tool selection success rate from 60% to 100% without requiring code changes.
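
A minimal sketch of such a tool-selection eval: each case pins the tools the agent should call, and an integrity check verifies they appear in order (the harness and tool names are assumptions based on the description above):

```python
# Tool-selection eval: score 1.0 only if the expected tools appear, in order,
# among the agent's actual calls.

CASES = [
    {"prompt": "add a `bio` column to users",
     "expected_tools": ["prepare_database_migration", "complete_database_migration"]},
    {"prompt": "how many projects do I have?",
     "expected_tools": ["list_projects"]},
]

def run_agent(prompt: str) -> list:
    raise NotImplementedError  # returns the ordered list of tool names called

def score(case: dict) -> float:
    called = run_agent(case["prompt"])
    # Integrity check: expected tools must appear in order among the calls.
    it = iter(called)
    in_order = all(tool in it for tool in case["expected_tools"])
    return 1.0 if in_order else 0.0

# Averaging score() over CASES gives the success rate tracked across
# prompt-engineering iterations.
```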

Implementing LLM Observability for Natural Language Querying Interface

Honeycomb

Honeycomb implemented a natural language querying interface for their observability product and faced challenges in maintaining and improving it post-launch. They solved this by implementing comprehensive observability practices, capturing everything from user inputs to LLM responses using distributed tracing. This approach enabled them to monitor the entire user experience, isolate issues, and establish a continuous improvement flywheel, resulting in higher product retention and conversion rates.

Implementing MCP Remote Server for CRM Agent Integration

HubSpot

HubSpot built a remote Model Context Protocol (MCP) server to enable AI agents like ChatGPT to interact with their CRM data. The challenge was to provide seamless, secure access to CRM objects (contacts, companies, deals) for ChatGPT's 500 million weekly users, most of whom aren't developers. In less than four weeks, HubSpot's team extended the Java MCP SDK to create a stateless, HTTP-based microservice that integrated with their existing REST APIs and RPC system, implementing OAuth 2.0 for authentication and user permission scoping. The solution made HubSpot the first CRM with an OpenAI connector, enabling read-only queries that allow customers to analyze CRM data through natural language interactions while maintaining enterprise-grade security and scale.

Infrastructure for AI Agents: Panel Discussion on Production Challenges and Solutions

Various

This panel discussion brings together infrastructure experts from Groq, NVIDIA, Lambda, and AMD to discuss the unique challenges of deploying AI agents in production. The panelists explore how agentic AI differs from traditional AI workloads, requiring significantly higher token generation, lower latency, and more diverse infrastructure spanning edge to cloud. They discuss the evolution from training-focused to inference-focused infrastructure, emphasizing the need for efficiency at scale, specialized hardware optimization, and the importance of smaller distilled models over large monolithic models. The discussion highlights critical operational challenges including power delivery, thermal management, and the need for full-stack engineering approaches to debug and optimize agentic systems in production environments.

Integrating Symbolic Reasoning with LLMs for AI-Native Telecom Infrastructure

Ericsson

Ericsson's System Comprehension Lab is exploring the integration of symbolic reasoning capabilities into telecom-oriented large language models to address critical limitations in current LLM architectures for telecommunications infrastructure management. The problem centers on LLMs' inability to provide deterministic, explainable reasoning required for telecom network optimization, security, and anomaly detection—domains where hallucinations, lack of logical consistency, and black-box behavior are unacceptable. The proposed solution involves hybrid neural-symbolic AI architectures that combine the pattern recognition strengths of transformer-based LLMs with rule-based reasoning engines, connected through techniques like symbolic chain-of-thought prompting, program-aided reasoning, and external solver integration. This approach aims to enable AI-native wireless systems for 6G infrastructure that can perform cross-layer optimization, real-time decision-making, and intent-driven network management while maintaining the explainability and logical rigor demanded by production telecom environments.

Iterative Prompt Optimization and Model Selection for Nutritional Calorie Estimation

Taralli

Taralli, a calorie tracking application, demonstrates systematic LLM improvement through rigorous evaluation and prompt optimization. The developer addressed the challenge of accurate nutritional estimation by creating a 107-example evaluation dataset, testing multiple prompt optimization techniques (vanilla, few-shot bootstrapping, MIPROv2, and GEPA) across several models (Gemini 2.5 Flash, Gemini 3 Flash, and DeepSeek v3.2). Through this methodical approach, they achieved a 15% accuracy improvement by switching from Gemini 2.5 Flash to Gemini 3 Flash while using a few-shot learning approach with 16 examples, reaching 60% accuracy within a 10% calorie prediction threshold. The system was deployed with fallback model configurations and extended to support fully offline on-device inference for iOS.
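
The headline metric is simple to reproduce; a self-contained version of "accuracy within a 10% calorie threshold":

```python
# Fraction of predictions falling within a relative tolerance of the truth.

def within_threshold(pred_kcal: float, true_kcal: float, tol: float = 0.10) -> bool:
    return abs(pred_kcal - true_kcal) <= tol * true_kcal

def accuracy(pairs: list[tuple[float, float]], tol: float = 0.10) -> float:
    hits = sum(within_threshold(p, t, tol) for p, t in pairs)
    return hits / len(pairs)

print(accuracy([(520, 500), (300, 450), (95, 100)]))  # -> 0.666...
```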

Knowledge Augmented Generation (KAG) System for Competitive Intelligence and Strategic Advisory

Patho AI

Patho AI developed a Knowledge Augmented Generation (KAG) system for enterprise clients that goes beyond traditional RAG by integrating structured knowledge graphs to provide strategic advisory and research capabilities. The system addresses the limitations of vector-based RAG systems in handling complex numerical reasoning and multi-hop queries by implementing a "wisdom graph" architecture that captures expert decision-making processes. Using Node-RED for orchestration and Neo4j for graph storage, the system achieved 91% accuracy in structured data extraction and successfully automated competitive analysis tasks that previously required dedicated marketing departments.

Large-Scale AI Red Teaming Competition Platform for Production Model Security

HackAPrompt, LearnPrompting

Sander Schulhoff from HackAPrompt and LearnPrompting presents a comprehensive case study on developing the first AI red teaming competition platform and educational resources for prompt engineering in production environments. The case study covers the creation of LearnPrompting, an open-source educational platform that trained millions of users worldwide on prompt engineering techniques, and HackAPrompt, which ran the first prompt injection competition collecting 600,000 prompts used by all major AI companies to benchmark and improve their models. The work demonstrates practical challenges in securing LLMs in production, including the development of systematic prompt engineering methodologies, automated evaluation systems, and the discovery that traditional security defenses are ineffective against prompt injection attacks.

Large-Scale Enterprise Data Platform Migration Using AI and Generative AI Automation

CommBank

Commonwealth Bank of Australia (CBA), Australia's largest bank serving 17.5 million customers, faced the challenge of modernizing decades of rich data spread across hundreds of on-premise source systems that lacked interoperability and couldn't scale for AI workloads. In partnership with HCL Tech and AWS, CBA migrated 61,000 on-premise data pipelines (equivalent to 10 petabytes of data) to an AWS-based data mesh ecosystem in 9 months. The solution leveraged AI and generative AI to transform code, check for errors, and test outputs with 100% accuracy reconciliation, conducting 229,000 tests across the migration. This enabled CBA to establish a federated data architecture called CommBank.data that empowers 40 lines of business with self-service data access while maintaining strict governance, positioning the bank for AI-driven innovation at scale.

Large-Scale Foundation Model Training Infrastructure for National AI Initiative

AWS GENIAC (Japan)

Japan's GENIAC program partnered with AWS to provide 12 organizations with massive compute resources (127 P5 instances and 24 Trn1 instances) for foundation model development. The challenge revealed that successful FM training required far more than raw hardware access - it demanded structured organizational support, reference architectures, cross-functional teams, and comprehensive enablement programs. Through systematic deployment guides, monitoring infrastructure, and dedicated communication channels, multiple large-scale models were successfully trained including 100B+ parameter models, demonstrating that large-scale AI development is fundamentally an organizational rather than purely technical challenge.

Large-Scale Legal RAG Implementation with Multimodal Data Infrastructure

Harvey / Lance

Harvey, a legal AI assistant company, partnered with LanceDB to address complex retrieval-augmented generation (RAG) challenges across massive datasets of legal documents. The case study demonstrates how they built a scalable system to handle diverse legal queries ranging from small on-demand uploads to large data corpuses containing millions of documents from various jurisdictions. Their solution combines advanced vector search capabilities with a multimodal lakehouse architecture, emphasizing evaluation-driven development and flexible infrastructure to support the complex, domain-specific nature of legal AI applications.

Large-Scale LLM Batch Processing Platform for Millions of Prompts

Instacart

Instacart faced challenges processing millions of LLM calls required by various teams for tasks like catalog data cleaning, item enrichment, fulfillment routing, and search relevance improvements. Real-time LLM APIs couldn't handle this scale effectively, leading to rate limiting issues and high costs. To solve this, Instacart built Maple, a centralized service that automates large-scale LLM batch processing by handling batching, encoding/decoding, file management, retries, and cost tracking. Maple integrates with external LLM providers through batch APIs and an internal AI Gateway, achieving up to 50% cost savings compared to real-time calls while enabling teams to process millions of prompts reliably without building custom infrastructure.
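
A sketch of the batch-file step a service like Maple automates, using the JSONL request format of OpenAI's Batch API as one concrete example (`build_batch_file` and the row schema are illustrative):

```python
# Pack prompts into the JSONL format that provider batch APIs accept, with a
# custom_id so results can be joined back to their source rows.
import json

def build_batch_file(rows: list[dict], path: str, model: str = "gpt-4o-mini") -> None:
    with open(path, "w") as f:
        for row in rows:
            f.write(json.dumps({
                "custom_id": row["id"],           # join key for the output file
                "method": "POST",
                "url": "/v1/chat/completions",
                "body": {
                    "model": model,
                    "messages": [{"role": "user", "content": row["prompt"]}],
                },
            }) + "\n")

build_batch_file(
    [{"id": "item-1", "prompt": "Normalize this product title: 'ORG BNNA 3ct'"}],
    "batch.jsonl",
)
```

Batch endpoints typically trade hours of turnaround for roughly half the per-request price, which is where savings like the 50% figure above come from.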

Large-Scale LLM Infrastructure for E-commerce Applications

Coupang

Coupang, a major e-commerce platform operating primarily in South Korea and Taiwan, faced challenges in scaling their ML infrastructure to support LLM applications across search, ads, catalog management, and recommendations. The company addressed GPU supply shortages and infrastructure limitations by building a hybrid multi-region architecture combining cloud and on-premises clusters, implementing model parallel training with DeepSpeed, and establishing GPU-based serving using Nvidia Triton and vLLM. This infrastructure enabled production applications including multilingual product understanding, weak label generation at scale, and unified product categorization, with teams using patterns ranging from in-context learning to supervised fine-tuning and continued pre-training depending on resource constraints and quality requirements.

Large-Scale Personalization and Product Knowledge Graph Enhancement Through LLM Integration

DoorDash

DoorDash faced challenges in scaling personalization and maintaining product catalogs as they expanded beyond restaurants into new verticals like grocery, retail, and convenience stores, dealing with millions of SKUs and cold-start scenarios for new customers and products. They implemented a layered approach combining traditional machine learning with fine-tuned LLMs, RAG systems, and LLM agents to automate product knowledge graph construction, enable contextual personalization, and provide recommendations even without historical user interaction data. The solution resulted in faster, more cost-effective catalog processing, improved personalization for cold-start scenarios, and the foundation for future agentic shopping experiences that can adapt to real-time contexts like emergency situations.

Large-Scale Personalization System Using LLMs for Buyer Profile Generation

Etsy

Etsy tackled the challenge of personalizing shopping experiences for nearly 90 million buyers across 100+ million listings by implementing an LLM-based system to generate detailed buyer profiles from browsing and purchasing behaviors. The system analyzes user session data including searches, views, purchases, and favorites to create structured profiles capturing nuanced interests like style preferences and shopping missions. Through significant optimization efforts including data source improvements, token reduction, batch processing, and parallel execution, Etsy reduced profile generation time from 21 days to 3 days for 10 million users while cutting costs by 94% per million users, enabling economically viable large-scale personalization for search query rewriting and refinement pills.
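
Two of the optimizations mentioned, token trimming and parallel fan-out, can be sketched as follows; all names are illustrative stubs:

```python
# Trim session events to a token budget before prompting, then fan profile
# generation out across worker threads.
from concurrent.futures import ThreadPoolExecutor

def llm(prompt: str) -> str:
    raise NotImplementedError

def trim_events(events: list[str], max_events: int = 50) -> list[str]:
    # Recent activity carries the most signal; dropping the tail is a blunt
    # but effective token-reduction lever.
    return events[-max_events:]

def build_profile(user_id: str, events: list[str]) -> tuple[str, str]:
    prompt = ("Summarize this shopper's interests and missions:\n"
              + "\n".join(trim_events(events)))
    return user_id, llm(prompt)

def run(users: dict[str, list[str]], workers: int = 32) -> dict[str, str]:
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(lambda kv: build_profile(*kv), users.items()))
```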

Launching an MCP Server for AI-Powered Debugging and Development

Multiplayer

Multiplayer, a provider of full-stack session recording and debugging tools, launched a Model Context Protocol (MCP) server to connect their platform's engineering context with AI coding agents like Cursor, Claude Code, and Windsurf. The challenge was enabling AI agents to access session recordings, backend server calls, and debugging data to provide contextually-aware assistance for bug fixes and feature development. By designing use-case-driven MCP tools that abstract multiple API calls, Multiplayer created a streamlined integration that has shown good adoption among developers. The gradual rollout to power users revealed best practices such as keeping tools minimal and scoped, focusing on read-only operations for security, and providing human-readable data formats to LLMs.

Lessons Learned from Information Retrieval in Vector Database Development

Weaviate

This case study captures insights gained from two years of experience working at Weaviate, a vector database company, focusing on information retrieval challenges in production environments. The article appears to document 37 key learnings about implementing and operating information retrieval systems that support LLM-powered applications. While the full content is not accessible due to access restrictions, the title suggests comprehensive practical knowledge about vector databases, embeddings, and retrieval systems that underpin RAG (Retrieval Augmented Generation) and other LLM applications in production. The insights likely cover technical implementation details, operational challenges, and best practices for building scalable information retrieval infrastructure.

LLM Feature Extraction for Content Categorization and Search Query Understanding

Canva

Canva implemented LLMs as a feature extraction method for two key use cases: search query categorization and content page categorization. By replacing traditional ML classifiers with LLM-based approaches, they achieved higher accuracy, reduced development time from weeks to days, and lowered operational costs from $100/month to under $5/month for query categorization. For content categorization, LLM embeddings outperformed traditional methods in terms of balance, completion, and coherence metrics while simplifying the feature extraction process.

LLM Security: Discovering and Mitigating Repeated Token Attacks in Production Models

Dropbox

Dropbox's security research team discovered vulnerabilities in OpenAI's GPT-3.5 and GPT-4 models where repeated tokens could trigger model divergence and extract training data. They identified that both single-token and multi-token repetitions could bypass OpenAI's initial security controls, leading to potential data leakage and denial of service risks. The findings were reported to OpenAI, who subsequently implemented improved filtering mechanisms and server-side timeouts to address these vulnerabilities.

LLM-Assisted Personalization Framework for Multi-Vertical Retail Discovery

DoorDash

DoorDash developed an LLM-assisted personalization framework to help customers discover products across their expanding catalog of hundreds of thousands of SKUs spanning multiple verticals including grocery, convenience, alcohol, retail, flowers, and gifting. The solution combines traditional machine learning approaches like two-tower embedding models and multi-task learning rankers with LLM capabilities for semantic understanding, collection generation, query rewriting, and knowledge graph augmentation. The framework balances three core consumer value dimensions—familiarity (showing relevant favorites), affordability (optimizing for price sensitivity and deals), and novelty (introducing new complementary products)—across the entire personalization stack from retrieval to ranking to presentation. While specific quantitative results are not provided, the case study presents this as a production system deployed across multiple discovery surfaces including category pages, checkout aisles, personalized carousels, and search.

LLM-Generated Entity Profiles for Personalized Food Delivery Platform

DoorDash

DoorDash evolved from traditional numerical embeddings to LLM-generated natural language profiles for representing consumers, merchants, and food items to improve personalization and explainability. The company built an automated system that generates detailed, human-readable profiles by feeding structured data (order history, reviews, menu metadata) through carefully engineered prompts to LLMs, enabling transparent recommendations, editable user preferences, and richer input for downstream ML models. While the approach offers scalability and interpretability advantages over traditional embeddings, the implementation requires careful evaluation frameworks, robust serving infrastructure, and continuous iteration cycles to maintain profile quality in production.

LLM-Powered 3D Model Generation for 3D Printing

Build Great AI

Build Great AI developed a prototype application that leverages multiple LLM models to generate 3D printable models from text descriptions. The system uses various models including LLaMA 3.1, GPT-4, and Claude 3.5 to generate OpenSCAD code, which is then converted to STL files for 3D printing. The solution demonstrates rapid prototyping capabilities, reducing design time from hours to minutes, while handling the challenges of LLMs' spatial reasoning limitations through multiple simultaneous generations and iterative refinement.
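
A hedged sketch of the multiple-simultaneous-generations tactic: sample several OpenSCAD candidates and keep those the stock `openscad` CLI can render to STL (`llm` is a stub; error handling is omitted):

```python
# Rendering is a cheap, objective filter for the model's weak spatial
# reasoning: candidates that fail to compile to STL are simply discarded.
import os
import subprocess
import tempfile

def llm(prompt: str, temperature: float) -> str:
    raise NotImplementedError

def renders_ok(scad_code: str) -> bool:
    with tempfile.NamedTemporaryFile("w", suffix=".scad", delete=False) as f:
        f.write(scad_code)
        src = f.name
    result = subprocess.run(["openscad", "-o", src + ".stl", src],
                            capture_output=True)
    os.unlink(src)
    return result.returncode == 0

def generate_model(description: str, n: int = 4) -> list[str]:
    candidates = [
        llm(f"Write OpenSCAD code for: {description}. Code only.", temperature=0.8)
        for _ in range(n)
    ]
    return [c for c in candidates if renders_ok(c)]
```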

LLM-Powered GraphQL Mock Data Generation for Developer Productivity

Airbnb

Airbnb developed an innovative solution to address the persistent challenge of creating and maintaining realistic GraphQL mock data for testing and prototyping. Engineers traditionally spent significant time manually writing and updating mock data, which would often drift out of sync with evolving queries. Airbnb introduced the @generateMock directive, which combines GraphQL schema validation, product context (including design mockups), and LLMs (specifically Gemini 2.5 Pro) to automatically generate type-safe, realistic mock data. The solution integrates seamlessly into their existing code generation workflow (Niobe CLI), keeping engineers in their local development loops. A companion @respondWithMock directive enables client engineers to prototype features before server implementations are complete. Since deployment, Airbnb engineers have generated and merged over 700 mocks across iOS, Android, and Web platforms, significantly reducing manual effort and accelerating development cycles.

LLM-Powered Mutation Testing for Automated Compliance at Scale

Meta

Meta developed the Automated Compliance Hardening (ACH) tool to address the challenge of scaling compliance adherence across its products while maintaining developer velocity. Traditional compliance processes relied on manual, error-prone approaches that couldn't keep pace with rapid technology development. By leveraging LLMs for mutation-guided test generation, ACH generates realistic, problem-specific mutants (deliberately introduced faults) and automatically creates tests to catch them through plain-text prompts. During a trial from October to December 2024 across Facebook, Instagram, WhatsApp, and Meta's wearables platforms, privacy engineers accepted 73% of generated tests, with 36% judged as privacy-relevant. The system overcomes traditional barriers to mutation testing deployment including scalability issues, unrealistic mutants, equivalent mutants, computational costs, and testing overstretch.
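
The acceptance criterion for mutation-guided test generation reduces to a small check: a generated test must pass on the original code and fail on the mutant. A toy, self-contained illustration (Meta's mutants are LLM-generated and fault-specific; this one is hand-written):

```python
# A test "catches" (kills) a mutant if it passes on the original
# implementation and fails on the mutated one.

def original(x):       # code under test
    return x * 2

def mutant(x):         # deliberately injected fault
    return x + 2

def generated_test(fn) -> bool:
    # An LLM-proposed test, expressed as a predicate over the implementation.
    return fn(3) == 6

def catches(test, good, bad) -> bool:
    return test(good) and not test(bad)

print(catches(generated_test, original, mutant))  # True: the test kills the mutant
```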

LLM-Powered Security Incident Response and Automation

Agoda

Agoda, a global travel platform processing sensitive data at scale, faced operational bottlenecks in security incident response due to high alert volumes, manual phishing email reviews, and time-consuming incident documentation. The security team implemented three LLM-powered workflows: automated triage for Level 1-2 security alerts using RAG to retrieve historical context, autonomous phishing email classification responding in under 25 seconds, and multi-source incident report generation reducing drafting time from 5-7 hours to 10 minutes. The solutions achieved 97%+ alignment with human analysts for alert triage, 99% precision in phishing classification with no false negatives, and 95% factual accuracy in report generation, while significantly reducing analyst workload and response times.

LLM-Powered User Feedback Analysis for Bug Report Classification and Product Improvement

Meta

Meta (Facebook) developed an LLM-based system to analyze unstructured user bug reports at scale, addressing the challenge of processing free-text feedback that was previously resource-intensive and difficult to analyze with traditional methods. The solution uses prompt engineering to classify bug reports into predefined categories, enabling automated monitoring through dashboards, trend detection, and root cause analysis. This approach successfully identified critical issues during outages, caught less visible bugs that might have been missed, and resulted in double-digit reductions in topline bug reports over several months by enabling cross-functional teams to implement targeted fixes and product improvements.

LLM-Powered Voice Assistant for Restaurant Operations and Personalized Alcohol Recommendations

DoorDash

DoorDash implemented two major LLM-powered features during their 2025 summer intern program: a voice AI assistant for verifying restaurant hours and personalized alcohol recommendations with carousel generation. The voice assistant replaced rigid touch-tone phone systems with natural language conversations, allowing merchants to specify detailed hours information in advance while maintaining backward compatibility with legacy infrastructure through factory patterns and feature flags. The alcohol recommendation system leveraged LLMs to generate personalized product suggestions and engaging carousel titles using chain-of-thought prompting and a two-stage generation pipeline. Both systems were integrated into production using DoorDash's existing frameworks, with the voice assistant achieving structured data extraction through prompt engineering and webhook processing, while the recommendations carousel utilized the company's Carousel Serving Framework and Discovery SDK for rapid deployment.

MCP Marketplace: Scaling AI Agents with Organizational Context

Intuit

Intuit, a global fintech platform, faced challenges scaling AI agents across their organization due to poor discoverability of Model Context Protocol (MCP) services, inconsistent security practices, and complex manual setup requirements. They built an MCP Marketplace, a centralized registry functioning as a package manager for AI capabilities, which standardizes MCP development through automated CI/CD pipelines for producers and provides one-click installation with enterprise-grade security for consumers. The platform leverages gRPC middleware for authentication, token management, and auditing, while collecting usage analytics to track adoption, service latency, and quality metrics, thereby democratizing secure context access across their developer organization.

MCP Protocol Development and Agent AI Foundation Launch

Anthropic / OpenAI / Goose

This podcast transcript covers the one-year journey of the Model Context Protocol (MCP) from its initial launch by Anthropic through to its donation to the newly formed Agent AI Foundation. The discussion explores how MCP evolved from a local-only protocol to support remote servers, authentication, and long-running tasks, addressing the fundamental challenge of connecting AI agents to external tools and data sources in production environments. The case study highlights extensive production usage of MCP both within Anthropic's internal systems and across major technology companies including OpenAI, Microsoft, and Google, demonstrating widespread adoption with millions of requests at scale. The formation of the Agent AI Foundation with founding members including Anthropic, OpenAI, and Block represents a significant industry collaboration to standardize agentic system protocols and ensure neutral governance of critical AI infrastructure.

MCP Server for Natural Language Business Data Analytics

Ramp

Ramp built an open-source Model Context Protocol (MCP) server that enables natural language interaction with business financial data by creating a SQL interface over their developer API. The solution evolved from direct API querying to an in-memory SQLite database approach to handle scaling challenges, allowing Claude to analyze tens of thousands of spend events through natural language queries. While demonstrating strong potential for business intelligence applications, the implementation reveals both the promise and current limitations of agentic AI systems in production environments.
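
The in-memory SQLite pattern is compact enough to sketch end to end; the data here is inlined, whereas Ramp's server fetches it from their API:

```python
# Pull records once, load them into in-memory SQLite, and expose a single
# read-only SQL tool instead of having the agent page through API endpoints.
import sqlite3

spend_events = [
    ("2024-01-03", "Acme Travel", 1240.50, "travel"),
    ("2024-01-04", "CloudCo",      899.00, "software"),
]

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE spend (day TEXT, merchant TEXT, amount REAL, category TEXT)"
)
conn.executemany("INSERT INTO spend VALUES (?, ?, ?, ?)", spend_events)

def sql_tool(query: str) -> list:
    # The one tool exposed to the model: plain SQL over the snapshot.
    return conn.execute(query).fetchall()

print(sql_tool("SELECT category, SUM(amount) FROM spend GROUP BY category"))
```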

Mercury: Agentic AI Platform for LLM-Powered Recommendation Systems

eBay

eBay developed Mercury, an internal agentic framework designed to scale LLM-powered recommendation experiences across its massive marketplace of over two billion active listings. The platform addresses the challenge of transforming vast amounts of unstructured data into personalized product recommendations by integrating Retrieval-Augmented Generation (RAG) with a custom Listing Matching Engine that bridges the gap between LLM-generated text outputs and eBay's dynamic inventory. Mercury enables rapid development through reusable, plug-and-play components following object-oriented design principles, while its near-real-time distributed queue-based execution platform handles cost and latency requirements at industrial scale. The system combines multiple retrieval mechanisms, semantic search using embedding models, anomaly detection, and personalized ranking to deliver contextually relevant shopping experiences to hundreds of millions of users.

Migration of Credit AI RAG Application from Multi-Cloud to AWS Bedrock

Octus

Octus, a leading provider of credit market data and analytics, migrated their flagship generative AI product Credit AI from a multi-cloud architecture (OpenAI on Azure and other services on AWS) to a unified AWS architecture using Amazon Bedrock. The migration addressed challenges in scalability, cost, latency, and operational complexity associated with running a production RAG application across multiple clouds. By leveraging Amazon Bedrock's managed services for embeddings, knowledge bases, and LLM inference, along with supporting AWS services like Lambda, S3, OpenSearch, and Textract, Octus achieved a 78% reduction in infrastructure costs, 87% decrease in cost per question, improved document sync times from hours to minutes, and better development velocity while maintaining SOC2 compliance and serving thousands of concurrent users across financial services clients.

MLOps Evolution and LLM Integration at a Major Bank

Barclays

Discussion of MLOps practices and the evolution towards LLM integration at Barclays, focusing on the transition from traditional ML to GenAI workflows while maintaining production stability. The case study highlights the importance of balancing innovation with regulatory requirements in financial services, emphasizing ROI-driven development and the creation of reusable infrastructure components.

Model Context Protocol (MCP): A Universal Standard for AI Application Extensions

Anthropic

Anthropic developed the Model Context Protocol (MCP) to solve the challenge of extending AI applications with plugins and external functionality in a standardized way. Inspired by the Language Server Protocol (LSP), MCP provides a universal connector that enables AI applications to interact with various tools, resources, and prompts through a client-server architecture. The protocol has gained significant community adoption and contributions from companies like Shopify, Microsoft, and JetBrains, demonstrating its potential as an open standard for AI application integration.

Model Context Protocol (MCP): Building Universal Connectivity for LLMs in Production

Anthropic

Anthropic developed and open-sourced the Model Context Protocol (MCP) to address the challenge of providing external context and tool connectivity to large language models in production environments. The protocol emerged from recognizing that teams were repeatedly reimplementing the same capabilities across different contexts (coding editors, web interfaces, and various services) where Claude needed to interact with external systems. By creating a universal standard protocol and open-sourcing it, Anthropic enabled developers to build integrations once and deploy them everywhere, while fostering an ecosystem that became what they describe as the fastest-growing open source protocol in history. The protocol has matured from requiring local server deployments to supporting remote hosted servers with a central registry, reducing friction for both developers and end users while enabling sophisticated production use cases across enterprise integrations and personal automation.

Multi-Agent AI Platform for Customer Experience at Scale

Cisco

Cisco developed an agentic AI platform leveraging LangChain to transform their customer experience operations across a 20,000-person organization managing $26 billion in recurring revenue. The solution combines multiple specialized agents with a supervisor architecture to handle complex workflows across customer adoption, renewals, and support processes. By integrating traditional machine learning models for predictions with LLMs for language processing, they achieved 95% accuracy in risk recommendations and reduced operational time by 20% in just three weeks of limited availability deployment, while automating 60% of their 1.6-1.8 million annual support cases.

Multi-Agent AI System for Automated Test Case Generation in Payment Systems

Amazon AMET Payments

The Amazon AMET Payments team developed SAARAM, a multi-agent AI solution using Amazon Bedrock with Claude Sonnet and Strands Agents SDK to automate test case generation for payment features across five Middle Eastern and North African countries. The manual process previously required one week of QA engineer effort per feature, consuming approximately one full-time employee's effort annually. By implementing a human-centric approach that mirrors how experienced testers analyze requirements through specialized agents, the team reduced test case generation time from one week to hours while improving test coverage by 40% and reducing QA effort from 1.0 FTE to 0.2 FTE for validation activities.

Multi-Agent AI System for Financial Intelligence and Risk Analysis

Moody’s

Moody's Analytics, a century-old financial institution serving over 1,500 customers across 165 countries, transformed their approach to serving high-stakes financial decision-making by evolving from a basic RAG chatbot to a sophisticated multi-agent AI system on AWS. Facing challenges with unstructured financial data (PDFs with complex tables, charts, and regulatory documents), context window limitations, and the need for 100% accuracy in billion-dollar decisions, they architected a serverless multi-agent orchestration system using Amazon Bedrock, specialized task agents, custom workflows supporting up to 400 steps, and intelligent document processing pipelines. The solution processes over 1 million tokens daily in production, achieving 60% faster insights and 30% reduction in task completion times while maintaining the precision required for credit ratings, risk intelligence, and regulatory compliance across credit, climate, economics, and compliance domains.

Multi-Agent AI System for Investment Thesis Validation Using Devil's Advocate

LinqAlpha

LinqAlpha, a Boston-based AI platform serving over 170 institutional investors, developed Devil's Advocate, an AI agent that systematically pressure-tests investment theses by identifying blind spots and generating evidence-based counterarguments. The system addresses the challenge of confirmation bias in investment research by automating the manual process of challenging investment ideas, which traditionally required time-consuming cross-referencing of expert calls, broker reports, and filings. Using a multi-agent architecture powered by Claude Sonnet 3.7 and 4.0 on Amazon Bedrock, integrated with Amazon Textract, Amazon OpenSearch Service, Amazon RDS, and Amazon S3, the solution decomposes investment theses into assumptions, retrieves counterevidence from uploaded documents, and generates structured, citation-linked rebuttals. The system enables investors to conduct rigorous due diligence at 5-10 times the speed of traditional reviews while maintaining auditability and compliance requirements critical to institutional finance.

Multi-Agent AI System for Network Change Management

Cisco

Cisco's Outshift incubation group developed a multi-agent AI system to address network change management failures in production environments. The solution combines a natural language interface, multiple specialized AI agents using ReAct reasoning loops, and a knowledge graph-based digital twin of production networks. The system integrates with ITSM tools like ServiceNow, automatically generates impact assessments and test plans, and executes validation tests using network configuration data stored in standardized schemas, significantly reducing tokens consumed and response times through fine-tuning approaches.

Multi-Agent AI Systems for IT Operations and Incident Management

Kolomolo / DeLaval / Arelion

Kolomolo, an AWS advanced partner, implemented two distinct AI-powered solutions for their customers DeLaval (dairy farm equipment manufacturer) and Arelion (global internet infrastructure provider). For DeLaval, they built Unity Ops, a multi-agent system that automates incident response and root cause analysis across 3,000+ connected dairy farms, processing alerts from monitoring systems and generating enriched incident tickets automatically. For Arelion, they developed a hybrid ML/LLM solution to classify and extract critical information from thousands of maintenance notification emails from over 100 vendors, reducing manual classification workload by 80%. Both solutions achieved over 95% accuracy while maintaining cost efficiency through strategic use of classical ML techniques combined with selective LLM invocation, demonstrating significant operational efficiency improvements and enabling engineering teams to focus on higher-value tasks rather than reactive incident management.

Multi-Agent Financial Research and Question Answering System

Yahoo! Finance

Yahoo! Finance built a production-scale financial question answering system using multi-agent architecture to address the information asymmetry between retail and institutional investors. The system leverages Amazon Bedrock Agent Core and employs a supervisor-subagent pattern where specialized agents handle structured data (stock prices, financials), unstructured data (SEC filings, news), and various APIs. The solution processes heterogeneous financial data from multiple sources, handles temporal complexities of fiscal years, and maintains context across sessions. Through a hybrid evaluation approach combining human and AI judges, the system achieves strong accuracy and coverage metrics while processing queries in 5-50 seconds at costs of 2-5 cents per query, demonstrating production viability at scale with support for 100+ concurrent users.

Multi-Agent LLM System for Business Process Automation

Cognizant

Cognizant developed Neuro AI, a multi-agent LLM-based system that enables business users to create and deploy AI-powered decision-making workflows without requiring deep technical expertise. The platform allows agents to communicate with each other to handle complex business processes, from intranet search to process automation, with the ability to deploy either in the cloud or on-premises. The system includes features for opportunity identification, use case scoping, synthetic data generation, and automated workflow creation, all while maintaining explainability and human oversight.

Multi-Agent System Architecture for Autonomous Recruiting Agents

LinkedIn

LinkedIn developed a multi-agent system called Hiring Assistant to help recruiters work more efficiently, launching in October 2024. The system comprises four specialized agents (intake, sourcing, evaluation, and outreach) coordinated by a supervisor agent, with personalization driven by a preference model trained on recruiter behaviors. The presentation focuses on the operational challenges of scaling from specialized multi-agent systems to truly autonomous agents, addressing critical production issues including memory isolation across users, tool discovery and validation, safety considerations for destructive tool calls, and computational efficiency through complexity classification to route simpler tasks to completion models rather than expensive reasoning models.

Multi-Agent System for Customer Success and Sales Orchestration

ServiceNow

ServiceNow, a digital workflow platform provider, faced significant challenges with agent fragmentation across their internal sales and customer success operations, lacking a unified orchestration layer to coordinate complex workflows spanning the entire customer lifecycle. To address this, they built a comprehensive multi-agent system using LangGraph for orchestration and LangSmith for observability, covering stages from lead qualification through post-sales adoption, renewal, and customer advocacy. The system uses specialized agents coordinated by a supervisor agent, with sophisticated evaluation frameworks using custom metrics and LLM-as-a-judge evaluators. Currently in the testing phase with QA engineers, the solution has enabled modular development with human-in-the-loop capabilities, granular tracing for debugging, and automated golden dataset creation for continuous quality assurance.

Multi-Agent Web Research System with Dynamic Task Generation

Exa

Exa evolved from providing a search API to building a production-ready multi-agent web research system that processes hundreds of research queries daily, delivering structured results in 15 seconds to 3 minutes. Using LangGraph for orchestration and LangSmith for observability, their system employs a three-component architecture with a planner that dynamically generates parallel tasks, independent research units with specialized tools, and an observer maintaining full context across all components. The system intelligently balances between search snippets and full content retrieval to optimize token usage while maintaining research quality, ultimately providing structured JSON outputs specifically designed for API consumption.

Multi-Company Panel Discussion on Enterprise AI and Agentic AI Deployment Challenges

Glean / Deloitte / Docusign

This panel discussion at AWS re:Invent brings together practitioners from Glean, Deloitte, and DocuSign to discuss the practical realities of deploying AI and agentic AI systems in enterprise environments. The panelists explore challenges around organizational complexity, data silos, governance, agent creation and sharing, value measurement, and the tension between autonomous capabilities and human oversight. Key themes include the need for cross-functional collaboration, the importance of security integration from day one, the difficulty of measuring AI-driven productivity gains, and the evolution from individual AI experimentation to governed enterprise-wide agent deployment. The discussion emphasizes that successful AI transformation requires reimagining workflows rather than simply bolting AI onto legacy systems, and that business value should drive technical decisions rather than focusing solely on which LLM model to use.

Multi-Company Panel on Production LLM Deployment Strategies and Small Language Model Optimization

Meta / AWS / NVIDIA / ConverseNow

This panel discussion features leaders from Meta, AWS, NVIDIA, and ConverseNow discussing real-world challenges and solutions for deploying LLMs in production environments. The conversation covers the trade-offs between small and large language models, with ConverseNow sharing their experience building voice AI systems for restaurants that require high accuracy and low latency. Key themes include the importance of fine-tuning small models for production use cases, the convergence of training and inference systems, optimization techniques like quantization and alternative architectures, and the challenges of building reliable, cost-effective inference stacks for mission-critical applications.

Multi-Industry AI Deployment Strategies with Diverse Hardware and Sovereign AI Considerations

AMD / Somite AI / Upstage / Rambler AI

This panel discussion at AWS re:Invent features three companies deploying AI models in production across different industries: Somite AI using machine learning for computational biology and cellular control, Upstage developing sovereign AI with proprietary LLMs and OCR for document extraction in enterprises, and Rambler AI building vision language models for industrial task verification. All three leverage AMD GPU infrastructure (MI300 series) for training and inference, emphasizing the importance of hardware choice, open ecosystems, seamless deployment, and cost-effective scaling. The discussion highlights how smaller, domain-specific models can achieve enterprise ROI where massive frontier models failed, and explores emerging areas like physical AI, world models, and data collection for robotics.

Multi-Industry LLM Deployment: Building Production AI Systems Across Diverse Verticals

Caylent

Caylent, a development consultancy, shares their extensive experience building production LLM systems across multiple industries including environmental management, sports media, healthcare, and logistics. The presentation outlines their comprehensive approach to LLMOps, emphasizing the importance of proper evaluation frameworks, prompt engineering over fine-tuning, understanding user context, and managing inference economics. Through various client projects ranging from multimodal video search to intelligent document processing, they demonstrate key lessons learned about deploying reliable AI systems at scale, highlighting that generative AI is not a "magical pill" but requires careful engineering around inputs, outputs, evaluation, and user experience.

Multi-LLM Orchestration for Product Matching at Scale

Mercado Libre

Mercado Libre tackled the classic e-commerce product-matching challenge where sellers create listings with inconsistent titles, attributes, and identifiers, making it difficult to identify identical products across the platform. The team developed a sophisticated multi-LLM orchestration system that evolved from a simple 2-node architecture to a complex 7-node pipeline, incorporating adaptive prompts, context-aware decision-making, and collaborative consensus mechanisms. Through systematic iteration and careful orchestration alongside existing ML models and embedding systems, they achieved human-level performance with 95% precision and over 50% recall at a cost-effective rate of less than $0.001 per request, enabling scalable autonomous product matching across millions of items for critical use cases including pricing, personalization, and inventory optimization.
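One building block of such a pipeline, the collaborative consensus step, can be illustrated with a simple majority vote. This is a generic sketch of the technique, not Mercado Libre's 7-node pipeline; the `judges` callables stand in for differently prompted LLM calls.

```python
from collections import Counter

def consensus_match(pair, judges, threshold=0.66):
    """Ask several LLM 'judges' whether two listings describe the same
    product, accepting the verdict only on a qualified majority.

    `pair` is the candidate listing pair; each judge is a callable that
    returns "match" or "no_match" (in practice, a distinct LLM prompt).
    """
    votes = Counter(judge(pair) for judge in judges)
    verdict, count = votes.most_common(1)[0]
    if count / len(judges) >= threshold:
        return verdict
    return "needs_review"  # route ambiguous pairs to a stronger model or a human
```

Escalating only the ambiguous minority of pairs is one way such systems keep per-request costs low while preserving precision.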

Multi-Track Approach to Developer Productivity Using LLMs

eBay

eBay implemented a three-track approach to enhance developer productivity using AI: deploying GitHub Copilot enterprise-wide, creating a custom-trained LLM called eBayCoder based on Code Llama, and developing an internal RAG-based knowledge base system. The Copilot implementation showed a 17% decrease in PR creation to merge time and 12% decrease in Lead Time for Change, while maintaining code quality. Their custom LLM helped with codebase-specific tasks and their internal knowledge base system leveraged RAG to make institutional knowledge more accessible.

Multi-Track Approach to Developer Productivity Using LLMs

eBay

eBay implemented a three-track approach to enhance developer productivity using LLMs: utilizing GitHub Copilot as a commercial offering, developing eBayCoder (a fine-tuned version of Code Llama 13B), and creating an internal GPT-powered knowledge base using RAG. The implementation showed significant improvements, including a 27% code acceptance rate with Copilot, enhanced software upkeep capabilities with eBayCoder, and increased efficiency in accessing internal documentation through their RAG system.

MultiCare: A Large-Scale Medical Case Report Dataset for AI Model Training

National University of the South

The MultiCare dataset project addresses the challenge of training AI models for medical applications by creating a comprehensive, multimodal dataset of clinical cases. The dataset contains over 75,000 case report articles, including 135,000 medical images with associated labels and captions, spanning multiple medical specialties. The project implements sophisticated data processing pipelines to extract, clean, and structure medical case reports, images, and metadata, making it suitable for training language models, computer vision models, or multimodal AI systems in the healthcare domain.

Multilingual Text Editing via Instruction Tuning

Grammarly

Grammarly's Strategic Research team developed mEdIT, a multilingual extension of their CoEdIT text editing model, to support intelligent writing assistance across seven languages and three editing tasks (grammatical error correction, text simplification, and paraphrasing). The problem addressed was that foundational LLMs produce low-quality outputs for text editing tasks, and prior specialized models only supported either multiple tasks in one language or single tasks across multiple languages. By fine-tuning multilingual LLMs (including mT5, mT0, BLOOMZ, PolyLM, and Bactrian-X) on over 200,000 carefully curated instruction-output pairs across Arabic, Chinese, English, German, Japanese, Korean, and Spanish, mEdIT achieved strong performance across tasks and languages, even when instructions were given in a different language than the text being edited. The models demonstrated generalization to unseen languages, with causal language models performing best, and received high ratings from human evaluators, though the work has not yet been integrated into Grammarly's production systems.

Next-Generation AI-Powered In-Vehicle Assistant with Hybrid Edge-Cloud Architecture

Bosch

Bosch Engineering, in collaboration with AWS, developed a next-generation conversational AI assistant for vehicles that operates through a hybrid edge-cloud architecture to address the limitations of traditional in-car voice assistants. The solution combines on-board AI components for simple queries with cloud-based processing for complex requests, enabling seamless integration with external APIs for services like restaurant booking, charging station management, and vehicle diagnostics. The system was implemented on Bosch's Software-Defined Vehicle (SDV) reference demonstrator platform, demonstrating capabilities ranging from basic vehicle control to sophisticated multi-service orchestration, with ongoing development focused on gradually moving more intelligence to the edge while maintaining robust connectivity fallback mechanisms.

No-Code Agentic Workflow Platform for Automated Code Changes

Duolingo

Duolingo developed an internal platform enabling employees across all roles to create and deploy AI coding agents without writing custom code, addressing the challenge of scaling AI-assisted development beyond individual use. The solution centers on a JSON-based workflow creator that allows users to define prompts, target repositories, and parameters, backed by a unified CodingAgent library supporting multiple LLM providers (Codex and Claude) and orchestrated through Temporal workflows. The platform has enabled rapid creation of agents for routine tasks like feature flag removal, experiment management, and infrastructure changes, with simple agents deployable in under five minutes and custom multi-step workflows buildable in 1-2 days, allowing engineers to focus on core product logic rather than repetitive coding tasks.

One-Shot End-to-End Coding Agents for Developer Productivity

Stripe

Stripe developed "Minions," a system of one-shot, end-to-end coding agents designed to enhance developer productivity within their internal engineering workflows. The problem addressed is the time-consuming nature of routine coding tasks and the potential for AI to automate portions of the software development lifecycle. The solution involves deploying LLM-based coding agents that can handle complete coding tasks from start to finish in a single execution. While the provided text is limited in detail, it represents Stripe's investment in leveraging LLMs for internal tooling to improve engineering efficiency, with the blog post being part of a series documenting their approach to building and deploying these AI-powered development assistants.

One-Shot End-to-End Coding Agents for Developer Productivity

Stripe

Stripe developed "Minions," an internal system of one-shot, end-to-end coding agents designed to enhance developer productivity. While the provided source text is extremely limited and appears to be primarily metadata from a blog post header, it indicates that Stripe has deployed LLM-based coding agents that can autonomously handle complete coding tasks from start to finish in a single execution. The system aims to reduce developer toil and accelerate software engineering workflows at scale within Stripe's infrastructure, though specific implementation details, performance metrics, and concrete results are not available in the provided excerpt.

Open Source Code Generation Model Release and Production Deployment Considerations

Meta

Meta released Code Llama, a family of specialized large language models for code generation built on top of Llama 2, aiming to assist developers with coding tasks and lower barriers to entry for new programmers. The solution includes multiple model sizes (7B, 13B, 34B, and 70B parameters) with three variants: a foundational code model, a Python-specialized version, and an instruction-tuned variant, all trained on 500B-1T tokens of code and supporting up to 100,000 token contexts. Benchmark testing showed Code Llama 34B achieved 53.7% on HumanEval and 56.2% on MBPP, matching ChatGPT performance while being released under an open license for both research and commercial use, with extensive safety evaluations and red teaming conducted to address responsible AI concerns.

Open Source vs. Closed Source Agentic Stacks: Panel Discussion on Production Deployment Strategies

Various (Alation, GrottoAI, Nvidia, OLX)

This panel discussion brings together experts from Nvidia, OLX, Alation, and GrottoAI to discuss practical considerations for deploying agentic AI systems in production. The conversation explores when to choose open source versus closed source tooling, the challenges of standardizing agent frameworks across enterprise organizations, and the tradeoffs between abstraction levels in agent orchestration platforms. Key themes include starting with closed source models for rapid prototyping before transitioning to open source for compliance and cost reasons, the importance of observability across heterogeneous agent frameworks, the difficulty of enabling non-technical users to build agents, and the critical difference between internal tooling with lower precision requirements versus customer-facing systems demanding 95%+ accuracy.

Open-Source Protein Structure Prediction and Generative Design Platform for Drug Discovery

Boltz

Boltz, founded by Gabriele Corso and Jeremy Wohlwend, developed an open-source suite of AI models (Boltz-1, Boltz-2, and BoltzGen) for structural biology and protein design, democratizing access to capabilities previously held by proprietary systems like AlphaFold 3. The company addresses the challenge of predicting complex molecular interactions (protein-ligand, protein-protein) and designing novel therapeutic proteins by combining generative diffusion models with specialized equivariant architectures. Their approach achieved validated nanomolar binders for two-thirds of nine previously unseen protein targets, demonstrating genuine generalization beyond training data. The newly launched Boltz Lab platform provides a production-ready infrastructure with optimized GPU kernels running 10x faster than open-source versions, offering agents for protein and small molecule design with collaborative interfaces for medicinal chemists and researchers.

Panel Discussion on Building Production LLM Applications

Various

A panel discussion featuring experts from several companies on key aspects of building production LLM applications. The discussion covers critical topics including hallucination management, prompt engineering, evaluation frameworks, cost considerations, and model selection. Panelists share practical experiences and insights on deploying LLMs in production, highlighting the importance of continuous feedback loops, evaluation metrics, and the trade-offs between open source and commercial LLMs.

Parallel Asynchronous AI Coding Agents for Development Workflows

Google

Google Labs introduced Jules, an asynchronous coding agent designed to execute development tasks in parallel in the background while developers focus on higher-value work. The product addresses the challenge of serial development workflows by enabling developers to spin up multiple cloud-based agents simultaneously to handle tasks like SDK updates, testing, accessibility audits, and feature development. Launched two weeks prior to the presentation, Jules had already generated 40,000 public commits. The demonstration showcased how a developer could parallelize work on a conference schedule website by simultaneously running multiple test framework implementations, adding features like calendar integration and AI summaries, while conducting accessibility and security audits—all managed through a VM-based cloud infrastructure powered by Gemini 2.5 Pro.

PerfInsights: AI-Powered Performance Optimization for Go Services

Uber

Uber developed PerfInsights to address the unsustainable compute costs of their Go services, where the top 10 services alone accounted for multi-million dollars in monthly compute spend. The solution combines runtime profiling with GenAI-powered static analysis to automatically detect performance antipatterns in Go code, validate findings through LLM juries and rule-based checking (LLMCheck), and generate optimization recommendations. Results include a 93% reduction in time required to detect and fix performance issues (from 14.5 hours to 1 hour), over 80% reduction in false positives, hundreds of merged optimization diffs, and a 33.5% reduction in detected antipatterns over four months, translating to approximately 3,800 hours of engineering time saved annually.

Pitfalls and Best Practices for Production LLM Applications

Humanloop

A comprehensive overview from Humanloop's experience helping hundreds of companies deploy LLMs in production. The talk covers key challenges and solutions around evaluation, prompt management, optimization strategies, and fine-tuning. Major lessons include the importance of objective evaluation, proper prompt management infrastructure, avoiding premature optimization with agents/chains, and leveraging fine-tuning effectively. The presentation emphasizes taking lessons from traditional software engineering while acknowledging the unique needs of LLM applications.

Policy Search and Response System Using LLMs in Higher Education

NDUS

The North Dakota University System (NDUS) implemented a generative AI solution to tackle the challenge of searching through thousands of policy documents, state laws, and regulations. Using Databricks' Data Intelligence Platform on Azure, they developed a "Policy Assistant" that leverages LLMs (specifically Llama 2) to provide instant, accurate policy search results with proper references. This transformation reduced their time-to-market from one year to six months and made policy searches 10-20x faster, while maintaining proper governance and security controls.

Practical Lessons from Deploying LLMs in Production at Scale

Mercado Libre

Mercado Libre explored multiple production applications of Large Language Models across their e-commerce and technology platform, tackling challenges in knowledge retrieval, documentation generation, and natural language processing. The company implemented a RAG system for developer documentation using LlamaIndex, automated documentation generation for thousands of database tables, and built natural language input interpretation systems using function calling. Through iterative development, they learned critical lessons about the importance of underlying data quality, prompt engineering iteration, quality assurance for generated outputs, and the necessity of simplifying tasks for LLMs through proper data preprocessing and structured output formats.

Privacy-Preserving University Chatbot with LiteLLM Proxy for Multi-Model Governance and Cost Control

Unnamed private university

A private university sought to implement a privacy-preserving chatbot accessible to students and employees with requirements for model flexibility, potential self-hosting, and budget control. The solution leveraged LiteLLM's proxy server as an OpenAI-compatible gateway to manage multiple LLM providers, implement automatic cost tracking and budgeting per user/team, handle load balancing across model instances, and provide a unified API. While the system successfully delivered basic cost control and multi-provider support, the implementation revealed limitations in handling complex custom budgeting requirements, provider-specific features, and stability issues with newer features, requiring workarounds and custom implementations for advanced use cases.
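Because the proxy exposes an OpenAI-compatible endpoint, client code needs nothing beyond a base-URL change. A minimal sketch, with a hypothetical proxy URL and a proxy-issued virtual key:

```python
from openai import OpenAI

# The LiteLLM proxy speaks the OpenAI API, so the standard SDK works as-is;
# the per-user virtual key is what lets the proxy meter spend and budgets.
client = OpenAI(
    base_url="https://llm-proxy.example.edu/v1",  # hypothetical gateway URL
    api_key="sk-litellm-virtual-key",             # issued by the proxy, not a provider
)

response = client.chat.completions.create(
    model="campus-chat",  # an alias the proxy routes to whichever provider backs it
    messages=[{"role": "user", "content": "When does course registration open?"}],
)
print(response.choices[0].message.content)
```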

Product Attribute Normalization and Sorting Using DSPy for Large-Scale E-commerce

Zoro UK

Zoro UK, an e-commerce subsidiary of Grainger with 3.5 million products from 300+ suppliers, faced challenges normalizing and sorting product attributes across 75,000 different attribute types. Using DSPy (a framework for optimizing LLM prompts programmatically), they built a production system that automatically determines whether attributes require alpha-numeric sorting or semantic sorting. The solution employs a two-tier architecture: Mistral 8B for initial classification and GPT-4 for complex semantic sorting tasks. The DSPy approach eliminated manual prompt engineering, provided LLM-agnostic compatibility, and enabled automated prompt optimization using genetic algorithm-like iterations, resulting in improved product discoverability and search experience for their 1 million monthly active users.
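A minimal DSPy sketch of this style of classification follows. The signature, field names, and model choice are illustrative assumptions rather than Zoro UK's actual pipeline, and the snippet assumes a recent DSPy release with string signatures.

```python
import dspy

# Configure the small model used for the first-pass classification tier.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

# A declarative signature: DSPy compiles the underlying prompt, so there is
# no hand-written prompt string to maintain or port between LLM providers.
classify = dspy.ChainOfThought("attribute_name, sample_values -> sort_strategy")

result = classify(
    attribute_name="thread size",
    sample_values="M4, M6, M10, M12",
)
print(result.sort_strategy)  # e.g. "alphanumeric" rather than "semantic"
```

An optimizer such as `dspy.MIPROv2` can then tune the compiled prompt against labeled examples, which is the automated prompt optimization the summary refers to.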

Production AI Agents for Accounting Automation: Engineering Process Daemons at Scale

Digits

Digits, an AI-native accounting platform, shares their experience running AI agents in production for over 2 years, addressing real-world challenges in deploying LLM-based systems. The team reframes "agents" as "process daemons" to set appropriate expectations and details their implementation across three use cases: vendor data enrichment, client onboarding, and complex query handling. Their solution emphasizes building lightweight custom infrastructure over dependency-heavy frameworks, reusing existing APIs as agent tools, implementing comprehensive observability with OpenTelemetry, and establishing robust guardrails. The approach has enabled reliable automation while maintaining transparency, security, and performance through careful engineering rather than relying on framework abstractions.

Production AI Deployment: Lessons from Real-World Agentic AI Systems

Databricks / Various

This case study presents lessons learned from deploying generative AI applications in production, with a specific focus on Flo Health's implementation of a women's health chatbot on the Databricks platform. The presentation addresses common failure points in GenAI projects including poor constraint definition, over-reliance on LLM autonomy, and insufficient engineering discipline. The solution emphasizes deterministic system architecture over autonomous agents, comprehensive observability and tracing, rigorous evaluation frameworks using LLM judges, and proper DevOps practices. Results demonstrate that successful production deployments require treating agentic AI as modular system architectures following established software engineering principles rather than monolithic applications, with particular emphasis on cost tracking, quality monitoring, and end-to-end deployment pipelines.

Production AI Systems for News Personalization and Journalistic Workflows

Bonnier News

Bonnier News, a major Swedish media publisher with over 200 brands including Expressen and local newspapers, has deployed AI and machine learning systems in production to solve content personalization and newsroom automation challenges. The company's data science team, led by product manager Hans Yell (PhD in computational linguistics) and head of architecture Magnus Engster, has built white-label personalization engines using embedding-based recommendation systems that outperform manual content curation while scaling across multiple brands. They leverage vector similarity and user reading patterns rather than traditional metadata, achieving significant engagement lifts. Additionally, they're developing LLM-powered tools for journalists including headline generation, news aggregation summaries, and trigger questions for articles. Through a WASP-funded PhD collaboration, they're working on domain-adapted Swedish language models via continued pre-training of Llama models with Bonnier's extensive text corpus, focusing on capturing brand tone and improving journalistic workflows while maintaining data sovereignty.

Production GenAI for User Safety and Enhanced Matching Experience

Tinder

Tinder implemented two production GenAI applications to enhance user safety and experience: a username detection system using fine-tuned Mistral 7B to identify social media handles in user bios with near-perfect recall, and a personalized match explanation feature using fine-tuned Llama 3.1 8B to help users understand why recommended profiles are relevant. Both systems required sophisticated LLMOps infrastructure including multi-model serving with LoRA adapters, GPU optimization, extensive monitoring, and iterative fine-tuning processes to achieve production-ready performance at scale.

Production Monitoring and Issue Discovery for AI Agents

Raindrop

Raindrop's CTO Ben presents a comprehensive framework for building reliable AI agents in production, addressing the challenge that traditional offline evaluations cannot capture the full complexity of real-world user behavior. The core problem is that AI agents fail in subtle ways without concrete errors, making issues difficult to detect and fix. Raindrop's solution centers on a "discover, track, and fix" loop that combines explicit signals like thumbs up/down with implicit signals detected semantically in conversations, such as user frustration, task failures, and agent forgetfulness. By clustering these signals with user intents and tracking them over time, teams can identify the most impactful issues and systematically improve their agents. The approach emphasizes experimentation and production monitoring over purely offline testing, drawing parallels to how traditional software engineering shifted from extensive QA to tools like Sentry for error monitoring.

Production RAG Stack Development Through 37 Iterations for Financial Services

jonfernandes

Independent AI engineer Jonathan Fernandez shares his experience developing a production-ready RAG (Retrieval Augmented Generation) stack through 37 failed iterations, focusing on building solutions for financial institutions. The case study demonstrates the evolution from a naive RAG implementation to a sophisticated system incorporating query processing, reranking, and monitoring components. The final architecture uses LlamaIndex for orchestration, Qdrant for vector storage, open-source embedding models, and Docker containerization for on-premises deployment, achieving significantly improved response quality for document-based question answering.
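A skeletal version of such a stack fits in a few lines of LlamaIndex. The sketch below is an assumption-laden starting point, not the talk's final architecture: it presumes a local Qdrant instance, LlamaIndex's default (OpenAI) embedding model rather than the open-source ones used in the talk, and omits the reranking and monitoring stages.

```python
import qdrant_client
from llama_index.core import (
    SimpleDirectoryReader,
    StorageContext,
    VectorStoreIndex,
)
from llama_index.vector_stores.qdrant import QdrantVectorStore

# Qdrant running locally keeps the whole stack on-premises.
client = qdrant_client.QdrantClient(host="localhost", port=6333)
vector_store = QdrantVectorStore(client=client, collection_name="filings")
storage_context = StorageContext.from_defaults(vector_store=vector_store)

documents = SimpleDirectoryReader("./docs").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

# Over-fetch so a downstream reranker (omitted here) can trim to the best hits.
query_engine = index.as_query_engine(similarity_top_k=10)
print(query_engine.query("What were Q3 operating expenses?"))
```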

Production Vector Search and Retrieval System Optimization at Scale

Superlinked

Superlinked, a company focused on vector search infrastructure, shares production insights from deploying information retrieval systems for e-commerce and enterprise knowledge management with indexes up to 2 terabytes. The presentation addresses challenges in relevance, latency, and cost optimization when deploying vector search systems at scale. Key solutions include avoiding vector pooling/averaging, implementing late interaction models, fine-tuning embeddings for domain-specific needs, combining sparse and dense representations, leveraging graph embeddings, and using template-based query generation instead of unconstrained text-to-SQL. Results demonstrate 5%+ precision improvements through targeted fine-tuning, significant latency reductions through proper database selection and query optimization, and improved relevance through multi-encoder architectures that combine text, graph, and metadata signals.
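One widely used way to combine sparse and dense results without calibrating their incomparable scores is reciprocal rank fusion; the sketch below shows the general technique, not Superlinked's specific multi-encoder implementation.

```python
def reciprocal_rank_fusion(result_lists, k=60):
    """Merge ranked id lists from different retrievers (e.g. BM25 and a
    dense embedding index). RRF rewards documents that rank well in any
    list, so no score normalization across retrievers is needed."""
    scores = {}
    for results in result_lists:
        for rank, doc_id in enumerate(results):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Fuse a keyword ranking with an embedding-similarity ranking.
fused = reciprocal_rank_fusion([
    ["doc7", "doc2", "doc9"],  # sparse / keyword hits, best first
    ["doc2", "doc7", "doc4"],  # dense / vector hits, best first
])
print(fused)  # doc7 and doc2 rise to the top; doc9 and doc4 trail
```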

Production-Scale Document Parsing with Vision-Language Models and Specialized OCR

Reducto

Reducto has built a production document parsing system that processes over 1 billion documents by combining specialized vision-language models, traditional OCR, and layout detection models in a hybrid pipeline. The system addresses critical challenges in document parsing including hallucinations from frontier models, dense tables, handwritten forms, and complex charts. Their approach uses a divide-and-conquer strategy where different models are routed to different document regions based on complexity, achieving higher accuracy than AWS Textract, Microsoft Azure Document Intelligence, and Google Cloud OCR on their internal benchmarks. The company has expanded beyond parsing to offer extraction with pixel-level citations and an edit endpoint for automated form filling.

Production-Scale NLP Suggestion System with Real-Time Text Processing

Grammarly

Grammarly built a sophisticated production system for delivering writing suggestions to 30 million users daily. The company developed an extensible operational transformation protocol using Delta format to represent text changes, user edits, and AI-generated suggestions in a unified manner. The system addresses critical challenges in managing ML-generated suggestions at scale: maintaining suggestion relevance as users edit text in real-time, rebasing suggestion positions according to ongoing edits without waiting for backend updates, and applying multiple suggestions simultaneously without UI freezing. The architecture includes a Suggestions Repository, Delta Manager for rebasing operations, and Highlights Manager, all working together to ensure suggestions remain accurate and applicable as document state changes dynamically.
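The heart of the rebasing problem can be shown with a toy span-shifting function. Grammarly's Delta-based protocol is far richer (it represents edits, suggestions, and formatting as composable operations), but the sketch below conveys why a suggestion's position must move with every keystroke.

```python
def rebase_span(start: int, end: int, edit_pos: int, delta: int):
    """Shift a suggestion's [start, end) character span after a user edit.

    `edit_pos` is where the user inserted (delta > 0) or deleted
    (delta < 0) text. Toy model: a single-point edit, no concurrency.
    """
    if edit_pos >= end:        # edit is after the suggestion: nothing moves
        return start, end
    if edit_pos <= start:      # edit is before it: slide the whole span
        return start + delta, end + delta
    return None                # edit lands inside: the flagged text changed,
                               # so the suggestion must be invalidated

# A suggestion at [40, 50) survives an insertion of 5 chars at position 10.
print(rebase_span(40, 50, edit_pos=10, delta=5))  # (45, 55)
```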

Production-Scale RAG System for Real-Time News Processing and Analysis

Emergent Methods

Emergent Methods built a production-scale RAG system processing over 1 million news articles daily, using a microservices architecture to deliver real-time news analysis and context engineering. The system combines multiple open-source tools including Qdrant for vector search, vLLM for GPU-optimized inference, and their own Flow.app for orchestration, addressing challenges in news freshness, multilingual processing, and hallucination prevention while maintaining low latency and high availability.

RAG-Powered Customer Support Enhancement Using GPT-4

Thomson Reuters

Thomson Reuters implemented a Retrieval-Augmented Generation (RAG) system to enhance customer support for their legal and tax domain products. The challenge involved customer support agents experiencing cognitive overload while navigating hundreds of thousands of knowledge base articles across complex product lines like Westlaw, Practical Law, and Checkpoint. By building a RAG architecture combining dense retrieval systems (using Milvus vector database and sentence transformers) with GPT-4, Thomson Reuters created a conversational interface that provides agents with relevant, accurate solutions from their curated knowledge base. The solution reduced resolution times and improved the accuracy of support responses by grounding GPT-4's outputs in company-specific documentation, avoiding hallucinations common in standalone LLM deployments.

Rapid Integration of Advanced AI Models through Modular Architecture and Workflow Orchestration

Harvey

Harvey, a legal AI platform, demonstrated their ability to rapidly integrate new AI capabilities by incorporating OpenAI's Deep Research feature into their production system within 12 hours of its API release. This achievement was enabled by their AI-native architecture featuring a modular Workflow Engine, composable AI building blocks, transparent "thinking states" for user visibility, and a culture of rapid prototyping using AI-assisted development tools. The case study showcases how purpose-built infrastructure and engineering practices can accelerate the deployment of complex AI features while maintaining enterprise-grade reliability and user transparency in legal workflows.

Rapid Prototyping and Scaling AI Applications Using Open Source Models

Hassan El Mghari

Hassan El Mghari, a developer relations leader at Together AI, demonstrates how to build and scale AI applications to millions of users using open source models and a simplified architecture. Through building approximately 40 AI apps over four years (averaging one per month), he developed a streamlined approach that emphasizes simplicity, rapid iteration, and leveraging the latest open source models. His applications, including commit message generators, text-to-app builders, and real-time image generators, have collectively served millions of users and generated tens of millions of outputs, proving that simple architectures with single API calls can achieve significant scale when combined with good UI design and viral sharing mechanics.

Red Teaming AI Agents: Uncovering Security Vulnerabilities in Production Systems

Casco

Casco, a Y Combinator company specializing in red teaming AI agents and applications, conducted a security assessment of 16 live production AI agents, successfully compromising 7 of them within 30 minutes each. The research identified three critical security vulnerabilities common across production AI agents: cross-user data access through insecure direct object references (IDOR), arbitrary code execution through improperly secured code sandboxes leading to lateral movement across infrastructure, and server-side request forgery (SSRF) enabling credential theft from private repositories. The findings demonstrate that agent security extends far beyond LLM-specific concerns like prompt injection, requiring developers to apply traditional web application security principles including proper authentication and authorization, input/output sanitization, and use of enterprise-grade code sandboxes rather than custom implementations.

Refining Input Guardrails for Safer LLM Applications Through Chain-of-Thought Fine-Tuning

Capital One

Capital One developed enhanced input guardrails to protect LLM-powered conversational assistants from adversarial attacks and malicious inputs. The company used chain-of-thought prompting combined with supervised fine-tuning (SFT) and alignment techniques like Direct Preference Optimization (DPO) and Kahneman-Tversky Optimization (KTO) to improve the accuracy of LLM-as-a-Judge moderation systems. Testing on four open-source models (Mistral 7B, Mixtral 8x7B, Llama2 13B, and Llama3 8B) showed significant improvements in F1 scores and attack detection rates of over 50%, while maintaining low false positive rates, demonstrating that effective guardrails can be achieved with small training datasets and minimal computational resources.

Reinforcement Learning for Code Generation and Agent-Based Development Tools

Cursor

This case study examines Cursor's implementation of reinforcement learning (RL) for training coding models and agents in production environments. The team discusses the unique challenges of applying RL to code generation compared to other domains like mathematics, including handling larger action spaces, multi-step tool calling processes, and developing reward signals that capture real-world usage patterns. They explore various technical approaches including test-based rewards, process reward models, and infrastructure optimizations for handling long context windows and high-throughput inference during RL training, while working toward more human-centric evaluation metrics beyond traditional test coverage.

Revenue Intelligence Platform with Ambient AI Agents

Tabs

Tabs, a vertical AI company in the finance space, has built a revenue intelligence platform for B2B companies that uses ambient AI agents to automate financial workflows. The company extracts information from sales contracts to create a "commercial graph" and deploys AI agents that work autonomously in the background to handle billing, collections, and reporting tasks. Their approach moves beyond traditional guided AI experiences toward fully ambient agents that monitor communications and trigger actions automatically, with the goal of creating "beautiful operational software that no one ever has to go into."

Running LLM Agents in Production for Accounting Automation

Digits

Digits, a company providing automated accounting services for startups and small businesses, implemented production-scale LLM agents to handle complex workflows including vendor hydration, client onboarding, and natural language queries about financial books. The company evolved from a simple 200-line agent implementation to a sophisticated production system incorporating LLM proxies, memory services, guardrails, observability tooling (Phoenix from Arize), and API-based tool integration using Kotlin and Golang backends. Their agents achieve a 96% acceptance rate on classification tasks with only 3% requiring human review, handling approximately 90% of requests asynchronously and 10% synchronously through a chat interface.

Scaling AI Agents to Production: A Blueprint for Autonomous Customer Service

Cox Automotive

Cox Automotive, a dominant player in the automotive software industry with visibility into 5.1 trillion vehicle insights, faced the challenge of moving AI agents from prototype to production at scale. In response to an aggressive 5-week deadline set in summer 2024, the company launched five agentic AI products using Amazon Bedrock Agent Core and the Strands framework. The flagship product was a fully automated virtual assistant for dealership customer conversations that operates autonomously after hours without human oversight. By establishing foundational infrastructure with Agent Core, implementing comprehensive red teaming practices, designing both hard and soft guardrails, automating evaluation with LLM-as-judge techniques, and setting circuit breakers for cost and conversation limits, Cox Automotive successfully deployed three products to production beta, with dealers reporting that customers receive timely responses both during business hours and after hours.

Scaling AI Development with DGX Cloud: ServiceNow and SLB Production Deployments

Nvidia

ServiceNow and SLB (formerly Schlumberger) leveraged Nvidia DGX Cloud on AWS to develop and deploy foundation models for their respective industries. ServiceNow focused on building efficient small language models (5B-15B parameters) for enterprise process automation and agentic systems that match frontier model performance at a fraction of the cost and size, achieving nearly 100% GPU utilization through Run:ai orchestration. SLB developed domain-specific multi-modal foundation models for seismic and petrophysical data to assist geoscientists and engineers in the energy sector, accelerating time-to-market for two major product releases over two years. Both organizations benefited from the fully optimized, turnkey infrastructure stack combining high-performance GPUs, networking, Lustre storage, EKS optimization, and enterprise-grade support, enabling them to focus on model development rather than infrastructure management while achieving zero or near-zero downtime.

Scaling AI Product Development with Rigorous Evaluation and Observability

Notion

Notion AI, serving over 100 million users with multiple AI features including meeting notes, enterprise search, and deep research tools, demonstrates how rigorous evaluation and observability practices are essential for scaling AI product development. The company uses Braintrust as their evaluation platform to manage the complexity of supporting multilingual workspaces, rapid model switching, and maintaining product polish while building at the speed of AI industry innovation. Their approach emphasizes that 90% of AI development time should be spent on evaluation and observability rather than prompting, with specialized data specialists creating targeted datasets and custom LLM-as-a-judge scoring functions to ensure consistent quality across their diverse AI product suite.

Scaling AI-Assisted Developer Tools and Agentic Workflows at Scale

Slack

Slack's Developer Experience team embarked on a multi-year journey to integrate generative AI into their internal development workflows, moving from experimental prototypes to production-grade AI assistants and agentic systems. Starting with Amazon SageMaker for initial experimentation, they transitioned to Amazon Bedrock for simplified infrastructure management, achieving a 98% cost reduction. The team rolled out AI coding assistants using Anthropic's Claude Code and Cursor integrated with Bedrock, resulting in 99% developer adoption and a 25% increase in pull request throughput. They then evolved their internal knowledge bot (Buddybot) into a sophisticated multi-agent system handling over 5,000 escalation requests monthly, using AWS Strands as an orchestration framework with Claude Code sub-agents, Temporal for workflow durability, and MCP servers for standardized tool access. The implementation demonstrates a pragmatic approach to LLMOps, prioritizing incremental deployment, security compliance (FedRAMP), observability through OpenTelemetry, and maintaining model agnosticism while scaling to millions of tokens per minute.

Scaling Domain-Specific Model Training with Distributed Infrastructure

Articul8

Articul8, a generative AI company focused on domain-specific models (DSMs), faced challenges in training and deploying specialized LLMs across semiconductor, energy, and supply chain industries due to infrastructure complexity and computational requirements. They implemented Amazon SageMaker HyperPod to manage distributed training clusters with automated fault tolerance, achieving over 95% cluster utilization and 35% productivity improvements. The solution enabled them to reduce AI deployment time by 4x and total cost of ownership by 5x while successfully developing high-performing DSMs that outperform general-purpose LLMs by 2-3x in domain-specific tasks, with their A8-Semicon model achieving twice the accuracy of GPT-4o and Claude in Verilog code generation at 50-100x smaller model sizes.

Scaling Foundation Models for Predictive Banking Applications

Nubank

Nubank integrated foundation models into their AI platform to enhance predictive modeling across critical banking decisions, moving beyond traditional tabular machine learning approaches. Through their acquisition of Hyperplane in July 2024, they developed billion-parameter transformer models that process sequential transaction data to better understand customer behavior. Over eight months, they achieved significant performance improvements (1.20% average AUC lift across benchmark tasks) while maintaining existing data governance and model deployment infrastructure, successfully deploying these models to production decision engines serving over 100 million customers.

Scaling GenAI Applications with vLLM for High-Throughput LLM Serving

LinkedIn

LinkedIn adopted vLLM, an open-source LLM inference framework, to power over 50 GenAI use cases including LinkedIn Hiring Assistant and AI Job Search, running on thousands of hosts across their platform. The company faced challenges in deploying LLMs at scale with low latency and high throughput requirements, particularly for applications requiring complex reasoning and structured outputs. By leveraging vLLM's PagedAttention technology and implementing a five-phase evolution strategy—from offline mode to a modular, OpenAI-compatible architecture—LinkedIn achieved significant performance improvements including ~10% TPS gains and GPU savings of over 60 units for certain workloads, while maintaining sub-600ms p95 latency for thousands of QPS in production applications.
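vLLM's offline mode, the first phase in the evolution described, reduces to a few lines; the model name below is a public stand-in, not one of LinkedIn's production models.

```python
from vllm import LLM, SamplingParams

# PagedAttention lets vLLM pack many concurrent sequences into one GPU KV-cache
# pool, which is the source of its throughput advantage.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
params = SamplingParams(temperature=0.2, max_tokens=256)

outputs = llm.generate(
    ["Summarize this job posting in two sentences: ..."],
    params,
)
print(outputs[0].outputs[0].text)
```

The later phases swap this offline object for vLLM's OpenAI-compatible server, so many applications can share one deployment behind a standard API.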

Scaling LLM Application Observability Through Automated Conversation Clustering and Analysis

Manus

This case study presents a methodology for understanding and improving LLM applications at scale when manual review of conversations becomes infeasible. The core problem addressed is that traditional logging misses critical issues in AI applications, and teams face data paralysis when dealing with millions of complex, multi-turn agent conversations across multiple languages. The solution involves using LLMs themselves to automatically summarize, cluster, and analyze user conversations at scale, following a framework inspired by Anthropic's Clio (Claude insights and observations) system. The presenter demonstrates this through Kura, an open-source library that summarizes conversations, generates embeddings, performs hierarchical clustering, and creates classifiers for ongoing monitoring. The approach enabled identification of high-leverage fixes (like adding two-line prompt changes for upselling that yielded 20-30% revenue increases) and helped Anthropic launch their educational product by analyzing patterns in one million student conversations. Results show that this systematic approach allows teams to prioritize fixes based on volume and impact, track improvements quantitatively, and scale their analysis capabilities beyond manual review limitations.
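The summarize-embed-cluster loop can be sketched with off-the-shelf components. This illustrates the shape of the approach, not Kura's implementation, which adds hierarchical clustering and LLM-generated cluster labels.

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# One-line, LLM-written summaries of conversations (produced upstream).
summaries = [
    "User asked the agent to reschedule a meeting; agent lost the timezone.",
    "User asked to move a call to next week; agent dropped the timezone.",
    "User requested an invoice export; the task completed on the first try.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(summaries, normalize_embeddings=True)

# Clustering summaries (not raw transcripts) keeps millions of multi-turn,
# multilingual conversations tractable and surfaces recurring failure modes.
labels = KMeans(n_clusters=2, n_init="auto").fit_predict(embeddings)
for label, summary in sorted(zip(labels, summaries)):
    print(label, summary)
```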

Scaling LLM-Based Ranking Systems with Prefill-Only Optimization

LinkedIn

LinkedIn faced significant performance challenges when deploying LLM-based ranking systems for AI Job Search and AI People Search, where models needed to score hundreds of items per query within strict latency SLAs (sub-500ms P99). The ranking workload differs fundamentally from text generation—it requires only the prefill phase to score candidates, not iterative token generation. LinkedIn optimized SGLang, an open-source LLM serving system, through four optimization stages: implementing comprehensive batching (tokenization and batch preservation), creating a scoring-only fast path that eliminates unnecessary decode loops and CPU-GPU synchronization, introducing in-batch prefix caching to reuse shared query context, and addressing Python runtime bottlenecks through multi-process architecture. These optimizations delivered 2-3x throughput improvements on H100 GPUs while maintaining P99 latency under 500ms, enabling production-scale LLM ranking for millions of members.
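The prefill-only idea is independent of SGLang and can be demonstrated with plain Hugging Face transformers: score each candidate by comparing the logits of a "yes" versus "no" answer token after a single forward pass, never entering a decode loop. The model and prompt below are illustrative stand-ins, not LinkedIn's fast path.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "Qwen/Qwen2.5-0.5B-Instruct"  # small public stand-in model
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

def relevance_score(query: str, item: str) -> float:
    """One prefill pass per candidate: read the next-token logits at the
    final position and compare ' yes' vs ' no' (first sub-token of each)."""
    prompt = f"Query: {query}\nItem: {item}\nIs this item relevant? Answer yes or no:"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]   # no generate(), no decode loop
    yes = tokenizer(" yes", add_special_tokens=False).input_ids[0]
    no = tokenizer(" no", add_special_tokens=False).input_ids[0]
    return torch.softmax(logits[[yes, no]], dim=0)[0].item()

print(relevance_score("senior ML engineer, remote", "Staff ML Engineer (Remote)"))
```

Batching many such prompts per forward pass and caching the shared query prefix are the same levers the summary describes LinkedIn pulling to reach production throughput.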

Scaling RAG Accuracy from 49% to 86% in Finance Q&A Assistant

Amazon Finance

Amazon Finance Automation developed a RAG-based Q&A chat assistant using Amazon Bedrock to help analysts quickly retrieve answers to customer queries. Through systematic improvements in document chunking, prompt engineering, and embedding model selection, they increased the accuracy of responses from 49% to 86%, significantly reducing query response times from days to minutes.
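
By way of illustration, fixed-size chunking with overlap is the simplest version of the chunking lever such teams tune; the sizes below are placeholders, not Amazon's settings:

```python
# Naive fixed-size chunking with overlap; real pipelines often chunk on
# semantic or structural boundaries instead. Sizes are illustrative.
def chunk(text: str, size: int = 1000, overlap: int = 200) -> list[str]:
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # overlap preserves context across boundaries
    return chunks

print(len(chunk("x" * 5000)))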

Self-Hosting DeepSeek-R1 Models on AWS: A Cost-Benefit Analysis

LiftOff

LiftOff LLC explored deploying open-source DeepSeek-R1 models (1.5B, 7B, 8B, and 14B parameters) on AWS EC2 GPU instances to evaluate their viability as alternatives to paid AI services like ChatGPT. While technically successful in deployment using Docker, Ollama, and OpenWeb UI, the operational costs significantly exceeded expectations, with a single g5g.2xlarge instance costing $414/month compared to ChatGPT Plus at $20/user/month. The experiment revealed that smaller models lacked production-quality responses, while larger models faced memory limitations, performance degradation with longer contexts, and stability issues, concluding that self-hosting is not cost-effective at startup scale.
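
For reference, once Ollama is running, a self-hosted model is queried through its local HTTP API; this sketch uses Ollama's documented /api/generate endpoint and a public deepseek-r1 model tag:

```python
# Query a locally hosted model via Ollama's HTTP API (default port 11434).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-r1:7b",
        "prompt": "Explain LoRA fine-tuning in two sentences.",
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=120,
)
print(resp.json()["response"])
```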

Semantic Data Processing at Scale with AI-Powered Query Optimization

DocETL

Shreya Shankar presents DocETL, an open-source system for semantic data processing that addresses the challenges of running LLM-powered operators at scale over unstructured data. The system tackles two major problems: how to make semantic operator pipelines scalable and cost-effective through novel query optimization techniques, and how to make them steerable through specialized user interfaces. DocETL introduces rewrite directives that decompose complex tasks and data to improve accuracy and reduce costs, achieving up to 86% cost reduction while maintaining target accuracy. The companion tool Doc Wrangler provides an interactive interface for iteratively authoring and debugging these pipelines. Real-world applications include public defenders analyzing court transcripts for racial bias and medical analysts extracting information from doctor-patient conversations, demonstrating significant accuracy improvements (2x in some cases) compared to baseline approaches.

Semantic Search for Aviation Safety Reports Using Embeddings and Hybrid Search

Beams

Beams, a startup operating in aviation safety, built a semantic search system to help airlines analyze thousands of safety reports written daily by pilots and ground crew. The problem they addressed was the manual, time-consuming process of reading through unstructured, technical, jargon-filled free-text reports to identify trends and manage risks. Their solution combined vector embeddings (using Azure OpenAI's text-embedding-3-large model) with PostgreSQL and PG Vector for similarity search, alongside a two-stage retrieval and reranking pipeline. They also integrated structured filtering with semantic search to create a hybrid search system. The system was deployed on AWS using Lambda functions, RDS with PostgreSQL, and SQS for event-driven orchestration. Results showed that users could quickly search through hundreds of thousands of reports using natural language queries, finding semantically similar incidents even when terminology varied, significantly improving efficiency in safety analysis workflows.
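
The hybrid pattern combines a structured WHERE clause with pgvector's distance operator in a single query. A sketch under assumed table and column names (not Beams' actual schema):

```python
# Hybrid search: structured SQL filters narrow the candidate set, then
# pgvector's cosine-distance operator (<=>) ranks by semantic similarity.
import psycopg

query_vec = [0.0] * 3072  # stand-in for a text-embedding-3-large vector

SQL = """
SELECT id, report_text
FROM safety_reports
WHERE aircraft_type = %(aircraft)s                      -- structured filter
  AND reported_at >= %(since)s
ORDER BY embedding <=> %(query_embedding)s::vector      -- semantic ranking
LIMIT 50;
"""

with psycopg.connect("dbname=safety") as conn:
    rows = conn.execute(SQL, {
        "aircraft": "A320",
        "since": "2024-01-01",
        "query_embedding": str(query_vec),  # pgvector parses '[0.0, 0.0, ...]'
    }).fetchall()
```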

Semi-Supervised Fine-Tuning of Compact Vision-Language Models for Product Attribute Extraction

Flipkart

Flipkart faced the challenge of accurately extracting product attributes (like color, pattern, and material) from millions of product listings at scale. Manual labeling was expensive and error-prone, while using large Vision Language Model APIs was cost-prohibitive. The company developed a semi-supervised approach using compact VLMs (2-3 billion parameters) that combines Parameter-Efficient Fine-Tuning (PEFT) with Direct Preference Optimization (DPO) to leverage unlabeled data. The method starts with a small labeled dataset, generates multiple reasoning chains for unlabeled products using self-consistency, and then fine-tunes the model using DPO to favor preferred outputs. Results showed accuracy improvements from 75.1% to 85.7% on the Qwen2.5-VL-3B-Instruct model across twelve e-commerce verticals, demonstrating that compact models can effectively learn from unlabeled data to achieve production-grade performance.
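
The novel step is converting self-consistency votes into preference pairs for DPO. A simplified sketch of that pair construction follows; the surrounding VLM sampling and DPO training loop are omitted, and the details are assumptions rather than Flipkart's published pipeline:

```python
# Build DPO preference pairs from self-consistency: chains agreeing with the
# majority-voted attribute become "chosen", dissenting chains "rejected".
from collections import Counter

def build_preference_pairs(prompt: str, chains: list[tuple[str, str]]) -> list[dict]:
    """chains: [(generated_reasoning_text, extracted_attribute), ...]
    sampled several times from the compact VLM for one product."""
    majority = Counter(attr for _, attr in chains).most_common(1)[0][0]
    chosen = [text for text, attr in chains if attr == majority]
    rejected = [text for text, attr in chains if attr != majority]
    return [{"prompt": prompt, "chosen": c, "rejected": r}
            for c in chosen for r in rejected]

pairs = build_preference_pairs(
    "What is the primary color of this shirt?",
    [("...reasoning... Color: blue", "blue"),
     ("...reasoning... Color: blue", "blue"),
     ("...reasoning... Color: navy", "navy")],
)
print(len(pairs))  # 2 chosen x 1 rejected = 2 pairs
```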

Specialized Text Editing LLM Development through Instruction Tuning

Grammarly

Grammarly developed CoEdIT, a specialized text editing LLM that outperforms larger models while being up to 60 times smaller. Through targeted instruction tuning on a carefully curated dataset of text editing tasks, they created models ranging from 770M to 11B parameters that achieved state-of-the-art performance on multiple editing benchmarks, outperforming models like GPT-3-Edit (175B parameters) and ChatGPT in both automated and human evaluations.

Streamlining Legislative Analysis Model Deployment with MLOps

FiscalNote

FiscalNote, facing challenges in deploying and updating their legislative analysis ML models efficiently, transformed their MLOps pipeline using Databricks' MLflow and Model Serving. This shift enabled them to reduce deployment time and increase model deployment frequency by 3x, while improving their ability to provide timely legislative insights to clients through better model management and deployment practices.

Structured AI Workflow Orchestration for Developer Productivity at Scale

Shopify

Shopify's Augmented Engineering team developed Roast, an open-source workflow orchestration framework that structures AI agents to solve developer productivity challenges like flaky tests and low test coverage. The team discovered that breaking complex AI tasks into discrete, structured steps was essential for reliable performance at scale, leading them to create a convention-over-configuration tool that combines deterministic code execution with AI-powered analysis, enabling reproducible and testable AI workflows that can be version-controlled and integrated into development processes.

Structured Workflow Orchestration for Large-Scale Code Operations with Claude

Shopify

Shopify's augmented engineering team developed ROAST, an open-source workflow orchestration tool designed to address challenges of maintaining developer productivity at massive scale (5,000+ repositories, 500,000+ PRs annually, millions of lines of code). The team recognized that while agentic AI tools like Claude Code excel at exploratory tasks, deterministic structured workflows are better suited for predictable, repeatable operations like test generation, coverage optimization, and code migrations. By interleaving Claude Code's non-deterministic agentic capabilities with ROAST's deterministic workflow orchestration, Shopify created a bidirectional system where ROAST can invoke Claude Code as a tool within workflows, and Claude Code can execute ROAST workflows for specific steps. The solution has rapidly gained adoption within Shopify, reaching 500 daily active users and 250,000 requests per second at peak, with developers praising the combination for minimizing instruction complexity at each workflow step and reducing entropy accumulation in multi-step processes.

Supervised Fine-Tuning for AI-Powered Travel Recommendations

Booking.com

Booking.com built an AI Trip Planner to handle unstructured, natural language queries from travelers seeking personalized recommendations. The challenge was combining LLMs' ability to understand conversational requests with years of structured behavioral data (searches, clicks, bookings). Instead of relying solely on prompt engineering with external APIs, they used supervised fine-tuning on open-source LLMs with parameter-efficient methods. This approach delivered superior recommendation metrics and 3x faster inference compared to prompt-based solutions, and it preserved data privacy and security by keeping all processing internal.
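
As an illustration of the parameter-efficient side, a LoRA setup with the peft library looks roughly like the following; the base model and hyperparameters are placeholders, not Booking.com's configuration:

```python
# LoRA via peft: train small adapter matrices instead of all base weights.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # typically well under 1% of base weights
```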

Swarm-Coding with Multiple Background Agents for Large-Scale Code Maintenance

Faire

Faire implemented "swarm-coding" using GitHub Copilot's background agents to automate tedious engineering tasks like cleaning up expired feature flags and migrating test infrastructure. By coordinating multiple autonomous AI agents working in parallel, they enabled non-engineers to land simple code changes and freed up engineering teams to focus on innovation rather than maintenance work. Within the first month of deployment, 18% of the engineering team adopted the approach, merging over 500 Copilot pull requests with an average time savings of 39.6 minutes per PR and a 25% increase in overall PR volume among users. The company enhanced the background agents through custom instructions, MCP (Model Context Protocol) servers, and programmatic task assignment to create specialized agent profiles for common workflows.

Synthetic Consumer Survey Generation Using LLMs with Semantic Similarity Response Mapping

Colgate

PyMC Labs partnered with Colgate to address the limitations of traditional consumer surveys for product testing by developing a novel synthetic consumer methodology using large language models. The challenge was that standard approaches of asking LLMs to provide numerical ratings (1-5) resulted in biased, middle-of-the-road responses that didn't reflect real consumer behavior. The solution involved allowing LLMs to provide natural text responses which were then mapped to quantitative scales using embedding similarity to reference responses. This approach achieved 90% of the maximum achievable correlation with real survey data, accurately reproduced demographic effects including age and income patterns, eliminated positivity bias present in human surveys, and provided richer qualitative feedback while being faster and cheaper than traditional surveys.
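
The mapping step is straightforward to sketch: embed the model's free-text answer and snap it to the nearest scale anchor by cosine similarity. The anchor wordings below are invented for illustration:

```python
# Map free-text LLM survey answers onto a 1-5 purchase-intent scale by
# embedding similarity to reference answers anchored at each scale point.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

anchors = {
    1: "I would definitely not buy this product.",
    2: "I probably would not buy this.",
    3: "I might or might not buy this.",
    4: "I would probably buy this.",
    5: "I would definitely buy this product.",
}
anchor_emb = model.encode(list(anchors.values()), convert_to_tensor=True)

def map_to_scale(response: str) -> int:
    sims = util.cos_sim(model.encode(response, convert_to_tensor=True), anchor_emb)[0]
    return list(anchors)[int(sims.argmax())]

print(map_to_scale("Honestly this sounds great, I'd pick it up next shopping trip."))
```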

Systematic AI Application Improvement Through Evaluation-Driven Development

Ragas, Various

This case study presents Ragas' comprehensive approach to improving AI applications through systematic evaluation practices, drawn from their experience working with various enterprises and early-stage startups. The problem addressed is the common challenge of AI engineers making improvements to LLM applications without clear measurement frameworks, leading to ineffective iteration cycles and poor user experiences. The solution involves a structured evaluation methodology encompassing dataset curation, human annotation, LLM-as-judge scaling, error analysis, experimentation, and continuous feedback loops. The results demonstrate that teams can move from subjective "vibe checks" to objective, data-driven improvements that systematically enhance AI application performance and user satisfaction.

T-RAG: Tree-Based RAG Architecture for Question Answering Over Organizational Documents

Qatar Computing Research Institute

Qatar Computing Research Institute developed a novel question-answering system for organizational documents combining RAG, fine-tuning, and a tree-based entity structure. The system, called T-RAG, handles confidential documents on premises using open-source LLMs and achieves 73% accuracy on test questions, outperforming baseline approaches while maintaining robust entity tracking through a custom tree structure.

Testing and Evaluation Strategies for AI-Powered Code Editor with Agentic Editing

Zed

Zed, an AI-enabled code editor built from scratch in Rust, implemented comprehensive testing and evaluation strategies to ensure reliable agentic editing capabilities. The company faced the challenge of maintaining their rigorous empirical testing approach while dealing with the non-deterministic nature of LLM outputs. They developed a multi-layered approach combining stochastic testing with deterministic unit tests, addressing issues like streaming edits, XML tag parsing, indentation handling, and escaping behaviors. Through statistical testing methods running hundreds of iterations and setting pass/fail thresholds, they successfully deployed reliable AI-powered code editing features that work effectively with frontier models like Claude 4.
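
The statistical-testing pattern reduces to running the same scenario many times and asserting a pass rate rather than a deterministic outcome. A sketch, where run_agentic_edit is a hypothetical harness returning True on success:

```python
# Stochastic test: assert a reliability threshold over many trials instead of
# demanding determinism from a non-deterministic LLM-driven edit.
def run_agentic_edit(instruction: str) -> bool:
    """Hypothetical harness: stream the edit, apply it, verify the result."""
    raise NotImplementedError

def test_streaming_edit_reliability():
    trials = 200
    passes = sum(run_agentic_edit("rename function foo to bar")
                 for _ in range(trials))
    # Fail the suite only if reliability drops below the agreed threshold.
    assert passes / trials >= 0.95
```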

Text-to-SQL AI Agent for Democratizing Data Access in Slack

Salesforce

Salesforce built Horizon Agent, an internal text-to-SQL Slack agent, to address a data access gap where engineers and data scientists spent dozens of hours weekly writing custom SQL queries for non-technical users. The solution combines Large Language Models with Retrieval-Augmented Generation (RAG) to allow users to ask natural language questions in Slack and receive SQL queries, answers, and explanations within seconds. After launching in Early Access in August 2024 and reaching General Availability in January 2025, the system freed technologists from routine query work and enabled non-technical users to self-serve data insights in minutes instead of waiting hours or days, transforming the role of technical staff from data gatekeepers to guides.
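
The retrieval-then-generate loop behind such agents can be sketched generically; retrieve_schemas and call_llm below are stand-ins, not Salesforce's internal APIs:

```python
# Generic text-to-SQL RAG loop: retrieve relevant table schemas, then ask the
# LLM for SQL grounded in them.
def retrieve_schemas(question: str, top_k: int = 5) -> list[str]:
    # Stand-in for vector search over table documentation.
    return ["CREATE TABLE incidents (id INT, severity TEXT, opened_at DATE);"]

def call_llm(prompt: str) -> str:
    # Stand-in for the model call (e.g., an OpenAI-compatible gateway).
    raise NotImplementedError

def answer(question: str) -> str:
    schemas = retrieve_schemas(question)
    prompt = ("Given these table definitions:\n" + "\n".join(schemas)
              + f"\n\nWrite a SQL query that answers: {question}\n"
              "Return only SQL.")
    return call_llm(prompt)
```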

Thinking Machines' Tinker: Low-Level Fine-Tuning API for Production LLM Training

Thinking Machines

Thinking Machines, a new AI company co-founded by former OpenAI researcher John Schulman, has developed Tinker, a low-level fine-tuning API designed to enable sophisticated post-training of language models without requiring teams to manage GPU infrastructure or distributed systems complexity. The product aims to abstract away infrastructure concerns while providing low-level primitives for expressing nearly all post-training algorithms, allowing researchers and companies to build custom models without developing their own training infrastructure. The company plans to release their own models and expand Tinker's capabilities to include multimodal functionality and larger-scale training jobs, while making the platform more accessible to non-experts through higher-level tooling.

Training a 70B Japanese Large Language Model with Amazon SageMaker HyperPod

Institute of Science Tokyo

The Institute of Science Tokyo successfully developed Llama 3.3 Swallow, a 70-billion-parameter large language model with enhanced Japanese capabilities, using Amazon SageMaker HyperPod infrastructure. The project involved continual pre-training from Meta's Llama 3.3 70B model using 314 billion tokens of primarily Japanese training data over 16 days across 256 H100 GPUs. The resulting model demonstrates superior performance compared to GPT-4o-mini and other leading models on Japanese language benchmarks, showcasing effective distributed training techniques including 4D parallelism, asynchronous checkpointing, and comprehensive monitoring systems that enabled efficient large-scale model training in production.

Training and Deploying AI Coding Agents at Scale with GPT-5 Codex

OpenAI

OpenAI's Bill and Brian discuss their work on GPT-5 Codex and Codex Max, AI coding agents designed for production use. The team focused on training models with specific "personalities" optimized for pair programming, including traits like communication, planning, and self-checking behaviors. They trained separate model lines: Codex models optimized specifically for their agent harness with strong opinions about tool use (particularly terminal tools), and mainline GPT-5 models that are more general and steerable across different tooling environments. The result is a coding agent that OpenAI employees trust for production work, with approximately 50% of OpenAI staff using it daily, and some engineers like Brian claiming they haven't written code by hand in months. The team emphasizes the shift toward shipping complete agents rather than just models, with abstractions moving upward to enable developers to build on top of pre-configured agentic systems.

Training and Deploying MPT: Lessons Learned in Large Scale LLM Development

MosaicML

MosaicML developed and open-sourced MPT, a family of large language models including 7B and 30B parameter versions, demonstrating that high-quality LLMs could be trained for significantly lower costs than commonly believed (under $250,000 for the 7B model). They built a complete training platform handling data processing, distributed training, and model deployment at scale, while documenting key lessons around planning, experimentation, data quality, and operational best practices for production LLM development.

User Foundation Models for Personalization at Scale

Grab

Grab developed a custom foundation model to generate user embeddings that power personalization across its Southeast Asian superapp ecosystem. Traditional approaches relied on hundreds of manually engineered features that were task-specific and siloed, struggling to capture sequential user behavior effectively. Grab's solution involved building a transformer-based foundation model that jointly learns from both tabular data (user attributes, transaction history) and time-series clickstream data (user interactions and sequences). This model processes diverse data modalities including text, numerical values, IDs, and location data through specialized adapters, using unsupervised pre-training with masked language modeling and next-action prediction. The resulting embeddings serve as powerful, generalizable features for downstream applications including ad optimization, fraud detection, churn prediction, and recommendations across mobility, food delivery, and financial services, significantly improving personalization while reducing feature engineering effort.
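
A toy sketch of the underlying idea: embed a sequence of user events with a transformer encoder and pool it into a single user vector for downstream tasks. Grab's multi-modal adapters and pre-training objectives are omitted here:

```python
# Minimal sequence encoder: event IDs -> embeddings -> transformer -> pooled
# user embedding usable as a feature for fraud, churn, or ranking models.
import torch
import torch.nn as nn

class UserEncoder(nn.Module):
    def __init__(self, n_event_types: int = 10_000, dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(n_event_types, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, event_ids: torch.Tensor) -> torch.Tensor:
        h = self.encoder(self.embed(event_ids))  # (batch, seq_len, dim)
        return h.mean(dim=1)                     # pooled user embedding

emb = UserEncoder()(torch.randint(0, 10_000, (2, 64)))
print(emb.shape)  # torch.Size([2, 128])
```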

Using LLMs for Automated Opinion Summary Evaluation in E-commerce

Flipkart

Flipkart faced the challenge of evaluating AI-generated opinion summaries of customer reviews, where traditional metrics like ROUGE failed to align with human judgment and couldn't comprehensively assess summary quality across multiple dimensions. The company developed OP-I-PROMPT, a novel single-prompt framework that uses LLMs as evaluators across seven critical dimensions (fluency, coherence, relevance, faithfulness, aspect coverage, sentiment consistency, and specificity), along with SUMMEVAL-OP, a new benchmark dataset with 2,912 expert annotations. The solution achieved a 0.70 Spearman correlation with human judgments, significantly outperforming previous approaches especially on open-source models like Mistral-7B, while demonstrating that high-quality summaries directly impact business metrics like conversion rates and product return rates.
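
The single-prompt evaluator pattern can be sketched generically: one prompt requests scores for all seven dimensions as JSON, and a parser validates them. The wording below is illustrative, not the paper's exact prompt:

```python
# Single-prompt, multi-dimension LLM evaluation in the spirit of OP-I-PROMPT.
import json

DIMENSIONS = ["fluency", "coherence", "relevance", "faithfulness",
              "aspect coverage", "sentiment consistency", "specificity"]

def build_judge_prompt(reviews: str, summary: str) -> str:
    return (
        "You are evaluating an opinion summary of product reviews.\n"
        f"Reviews:\n{reviews}\n\nSummary:\n{summary}\n\n"
        f"Rate the summary 1-5 on each of: {', '.join(DIMENSIONS)}. "
        'Reply with JSON only, e.g. {"fluency": 4, ...}.'
    )

def parse_scores(llm_reply: str) -> dict[str, int]:
    scores = json.loads(llm_reply)
    return {d: int(scores[d]) for d in DIMENSIONS}  # KeyError flags missing dims
```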

Using LLMs to Combat Health Insurance Claim Denials

Fight Health Insurance

Fight Health Insurance is an open-source project that uses fine-tuned large language models to help people appeal denied health insurance claims in the United States. The system processes denial letters, extracts relevant information, and generates appeal letters based on training data from independent medical review boards. The project addresses the widespread problem of insurance claim denials by automating the complex and time-consuming process of crafting effective appeals, making it accessible to individuals who lack the resources or knowledge to navigate the appeals process themselves. The tool is available both as an open-source Python package and as a free hosted service, though the sustainability model is still being developed.

Using LLMs to Scale Insurance Operations at a Small Company

Anzen

Anzen, a small insurance company with under 20 people, leveraged LLMs to compete with larger insurers by automating their underwriting process. They implemented a document classification system using BERT and AWS Textract for information extraction, achieving 95% accuracy in document classification. They also developed a compliance document review system using sentence embeddings and question-answering models to provide immediate feedback on legal documents like offer letters.

Vector Search and RAG Implementation for Enhanced User Search Experience

Couchbase

This case study explores how vector search and RAG (Retrieval Augmented Generation) are being implemented to improve search experiences across different applications. The presentation covers two specific implementations: Revolut's Sherlock fraud detection system using vector search to identify dissimilar transactions, saving customers over $3 million in one year, and Seen.it's video clip search system enabling natural language search across half a million video clips for marketing campaigns.

Video Super-Resolution at Scale for Ads and Generative AI Content

Meta

Meta's Media Foundation team deployed AI-powered video super-resolution (VSR) models at massive scale to enhance video quality across their ecosystem, processing over 1 billion daily video uploads. The problem addressed was the prevalence of low-quality videos from poor camera quality, cross-platform uploads, and legacy content that degraded user experience. The solution involved deploying multiple VSR models—both CPU-based (using Intel's RVSR SDK) and GPU-based—to upscale and enhance video quality for ads and generative AI features like Meta Restyle. Through extensive subjective evaluation with thousands of human raters, Meta identified effective quality metrics (VMAF-UQ), determined which videos would benefit most from VSR, and successfully deployed the technology while managing GPU resource constraints and ensuring quality improvements aligned with user preferences.