ZenML

Industry: Consulting

23 tools in this industry


AI Agent System for Automated Travel Itinerary Generation

Aimpoint Digital

Aimpoint Digital developed an AI agent system to automate travel itinerary generation, addressing the time-consuming nature of trip planning. The solution combines multiple RAG frameworks with vector search for up-to-date information about places, restaurants, and events, using parallel processing and optimized prompts to generate personalized itineraries within seconds. The system employs Databricks' Vector Search and LLM capabilities, with careful attention to evaluation metrics and prompt optimization.
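
The parallel-processing idea described above can be sketched with stdlib concurrency: fan a user query out to several retrievers at once and merge the results before prompting the model. The retriever functions and index names below are illustrative stand-ins, not Aimpoint Digital's actual code.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical retrievers standing in for separate RAG indexes
# (places, restaurants, events).
def retrieve_places(query):
    return [f"place result for {query!r}"]

def retrieve_restaurants(query):
    return [f"restaurant result for {query!r}"]

def retrieve_events(query):
    return [f"event result for {query!r}"]

def gather_context(query):
    """Query every index concurrently and merge the results."""
    retrievers = [retrieve_places, retrieve_restaurants, retrieve_events]
    with ThreadPoolExecutor(max_workers=len(retrievers)) as pool:
        futures = [pool.submit(r, query) for r in retrievers]
    results = []
    for f in futures:
        results.extend(f.result())
    return results

context = gather_context("48 hours in Lisbon")
```

Running the retrievers concurrently rather than sequentially is what keeps end-to-end latency in the seconds range when several indexes must be consulted per itinerary.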

AI Managed Services and Agent Operations at Enterprise Scale

PricewaterhouseCoopers

PricewaterhouseCoopers (PwC) addresses the challenge of deploying and maintaining AI systems in production through their managed services practice focused on data analytics and AI. The organization has developed frameworks for deploying AI agents in enterprise environments, particularly in healthcare and back-office operations, using their Agent OS framework built on Python. Their approach emphasizes process standardization, human-in-the-loop validation, continuous model tuning, and comprehensive measurement through evaluations to ensure sustainable AI operations at scale. Results include successful deployments in healthcare pre-authorization processes and the establishment of specialized AI managed services teams comprising MLOps engineers and data scientists who continuously optimize production models.

AI-Augmented Cybersecurity Triage Using Graph RAG for Cloud Security Operations

Deloitte

Deloitte developed a Cybersecurity Intelligence Center to help SecOps engineers manage the overwhelming volume of security alerts generated by cloud security platforms like Wiz and CrowdStrike. Using AWS's open-source Graph RAG Toolkit, Deloitte built "AI for Triage," a human-in-the-loop system that combines long-term organizational memory (stored in hierarchical lexical graphs) with short-term operational data (document graphs) to generate AI-assisted triage records. The solution reduced 50,000 security issues across 7 AWS domains to approximately 1,300 actionable items, converting them into over 6,500 nodes and 19,000 relationships for contextual analysis. This approach enables SecOps teams to make informed remediation decisions based on organizational policies, historical experiences, and production system context, while maintaining human accountability and creating automation recipes rather than brittle code-based solutions.
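
To make the node/relationship figures concrete, here is a toy sketch of turning triage records into graph nodes and typed relationships, in the spirit of the lexical and document graphs described above. The record fields and relationship names are illustrative assumptions, not the Graph RAG Toolkit's actual schema.

```python
# Each security issue becomes a node linked to the policies that
# govern it and the production assets it affects.
def build_triage_graph(records):
    nodes, edges = {}, []
    for rec in records:
        issue_id = f"issue:{rec['id']}"
        nodes[issue_id] = {"type": "issue", "title": rec["title"]}
        for policy in rec["policies"]:
            pid = f"policy:{policy}"
            nodes.setdefault(pid, {"type": "policy"})
            edges.append((issue_id, "GOVERNED_BY", pid))
        for asset in rec["assets"]:
            aid = f"asset:{asset}"
            nodes.setdefault(aid, {"type": "asset"})
            edges.append((issue_id, "AFFECTS", aid))
    return nodes, edges

records = [
    {"id": 1, "title": "Public S3 bucket", "policies": ["data-at-rest"],
     "assets": ["s3://logs"]},
    {"id": 2, "title": "Open security group", "policies": ["network"],
     "assets": ["sg-123"]},
]
nodes, edges = build_triage_graph(records)
```

Because policies and assets are shared nodes rather than free text, a retrieval step can traverse from a new alert to every historical issue governed by the same policy, which is what gives the triage LLM its organizational context.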

Automated ESG Reporting with Agentic AI for Enterprise Sustainability Compliance

Gardenia Technologies

Gardenia Technologies partnered with AWS to develop Report GenAI, an automated ESG reporting solution that helps organizations reduce sustainability reporting time by up to 75%. The system uses agentic AI on Amazon Bedrock to automatically pre-fill ESG disclosure reports by integrating data from corporate databases, document stores, and web searches, while maintaining human oversight for validation and refinement. Omni Helicopters International successfully reduced their CDP reporting time from one month to one week using this solution.

Automated Reasoning Checks in Amazon Bedrock Guardrails for Responsible AI Deployment

PwC

PwC and AWS collaborated to develop Automated Reasoning checks in Amazon Bedrock Guardrails to address the challenge of deploying generative AI solutions while maintaining accuracy, security, and compliance in regulated industries. The solution combines mathematical verification with LLM outputs to provide verifiable trust and rapid deployment capabilities. Three key use cases were implemented: EU AI Act compliance for financial services risk management, pharmaceutical content review through the Regulated Content Orchestrator (RCO), and utility outage management for real-time decision support, all demonstrating enhanced accuracy and compliance verification compared to traditional probabilistic methods.

Building an AI Co-Pilot Application: Patterns and Best Practices

Thoughtworks

Thoughtworks built Boba, an experimental AI co-pilot for product strategy and ideation, to learn about building generative AI experiences beyond chat interfaces. The team implemented several key patterns including templated prompts, structured responses, real-time progress streaming, context management, and external knowledge integration. The case study provides detailed insights into practical LLMOps patterns for building production LLM applications with enhanced user experiences.
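
Two of the patterns named here, templated prompts and structured responses, can be sketched in a few lines. The prompt text, field names, and stubbed model reply below are illustrative, not Boba's actual prompts or schema.

```python
import json
from string import Template

# Templated prompt: user-facing inputs are interpolated into a fixed
# instruction that also pins down the response format.
IDEATION_PROMPT = Template(
    "You are a product strategist. Generate $n ideas for: $brief\n"
    'Respond as JSON: {"ideas": [{"title": "...", "rationale": "..."}]}'
)

def parse_ideas(raw: str):
    """Structured response: validate the model's JSON before it
    reaches the UI, so malformed output fails loudly."""
    data = json.loads(raw)
    if "ideas" not in data:
        raise ValueError("response missing 'ideas' key")
    return data["ideas"]

prompt = IDEATION_PROMPT.substitute(n=3, brief="loyalty app for cafes")
# Stubbed model reply, standing in for the actual LLM call:
reply = '{"ideas": [{"title": "Punch-card rewards", "rationale": "..."}]}'
ideas = parse_ideas(reply)
```

Keeping the format contract in the prompt and the validation in one parser is what lets the rest of the application treat the LLM as a typed component rather than a free-text source.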

Building an AI Co-pilot for Product Strategy with LLM Integration Patterns

Thoughtworks

Thoughtworks built Boba, an experimental AI co-pilot for product strategy and ideation, to explore effective patterns for LLM-powered applications beyond simple chat interfaces. The team developed and documented key patterns including templated prompts, structured responses, real-time progress streaming, context management, and external knowledge integration. The case study provides detailed implementation insights for building sophisticated LLM applications with better user experiences.

Building Production AI Agents with Vector Databases and Automated Data Collection

Devin Kearns

Over 18 months, a company built and deployed autonomous AI agents for business automation, focusing on lead generation and inbox management. They developed a comprehensive approach using vector databases (Pinecone), automated data collection, structured prompt engineering, and custom tools through n8n for deployment. Their solution emphasizes the importance of up-to-date data, proper agent architecture, and tool integration, resulting in scalable AI agent teams that can effectively handle complex business workflows.
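
The vector-database lookup at the heart of this kind of agent memory can be sketched with a cosine-similarity top-k query. The toy 3-d "embeddings" and record texts below are illustrative; a real deployment would use a model's embeddings and a hosted service such as Pinecone instead of an in-memory list.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec, index, k=2):
    """Return the k stored items most similar to the query vector."""
    scored = sorted(index, key=lambda item: cosine(query_vec, item["vec"]),
                    reverse=True)
    return [item["text"] for item in scored[:k]]

index = [
    {"text": "lead from SaaS webinar", "vec": [0.9, 0.1, 0.0]},
    {"text": "invoice follow-up email", "vec": [0.0, 0.2, 0.9]},
    {"text": "lead from cold outreach", "vec": [0.8, 0.3, 0.1]},
]
hits = top_k([1.0, 0.0, 0.0], index, k=2)
```

The case study's emphasis on up-to-date data amounts to keeping this index continuously refreshed by the automated collection pipeline, so the agent never retrieves stale context.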

Building Production LLM Applications with DSPy Framework

AlixPartners

A technical consultant presents a comprehensive workshop on using DSPy, a declarative framework for building modular LLM-powered applications in production. The presenter demonstrates how DSPy enables rapid iteration on LLM applications by treating LLMs as first-class citizens in Python programs, with built-in support for structured outputs, type guarantees, tool calling, and automatic prompt optimization. Through multiple real-world use cases including document classification, contract analysis, time entry correction, and multi-modal processing, the workshop shows how DSPy's core primitives—signatures, modules, tools, adapters, optimizers, and metrics—allow teams to build production-ready systems that are transferable across models, optimizable without fine-tuning, and maintainable at scale.

Cross-Industry ROI Survey of Enterprise AI and Agent Adoption

Super AI

Super AI, an AI planning platform company, conducted a comprehensive ROI survey collecting self-reported data from over 1,000 organizations about their AI and agent deployments in production. The study aimed to address the lack of systematic information about real-world ROI from enterprise AI adoption, particularly as traditional impact metrics struggle to capture AI's value. The survey collected approximately 3,500 use cases across eight impact categories (time savings, increased output, quality improvement, new capabilities, decision-making, cost savings, revenue increase, and risk reduction). Results showed that 44.3% of organizations reported modest ROI and 37.6% reported high ROI, with only 5% experiencing negative ROI. The study revealed that time savings dominated initial use cases (35%), but organizations pursuing automation and agentic workflows, as well as those implementing AI systematically across multiple functions, reported significantly higher transformational impact. Notably, 42% of billion-dollar companies now have production agents deployed (up from 11% in Q1), and CEO expectations for ROI realization have shifted dramatically from 3-5 years to 1-3 years.

Custom RAG Implementation for Enterprise Technology Research and Knowledge Management

Trace3

Trace3's Innovation Team developed Innovation-GPT, a custom solution to streamline their technology research and knowledge management processes. The system uses LLMs and RAG architecture to automate the collection and analysis of data about enterprise technology companies, combining web scraping, structured data generation, and natural language querying capabilities. The solution addresses the challenges of managing large volumes of company research data while maintaining human oversight for quality control.

Data Engineering Challenges and Best Practices in LLM Production

QuantumBlack

Data engineers from QuantumBlack discuss the evolving landscape of data engineering with the rise of LLMs, highlighting key challenges in handling unstructured data, maintaining data quality, and ensuring privacy. They share experiences dealing with vector databases, data freshness in RAG applications, and implementing proper guardrails when deploying LLM solutions in enterprise settings.

Document Metadata Extraction at Scale Using Generative AI for Healthcare and Financial Services

AArete

AArete, a management and technology consulting firm serving healthcare payers and financial services, developed Doxy AI to extract structured metadata from complex business documents like provider and vendor contracts. The company evolved from manual document processing (100 documents per week per person) through rules-based approaches (50-60% accuracy) to a generative AI solution built on AWS Bedrock using Anthropic's Claude models. The production system achieved 99% accuracy while processing up to 500,000 documents per week, resulting in a 97% reduction in manual effort and $330 million in client savings through improved contract analysis, claims overpayment identification, and operational efficiency.
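
A key production detail in systems like this is validating the model's structured output before it feeds downstream claims analysis. The schema below is an illustrative assumption; AArete's actual field set is not public.

```python
import json

# Hypothetical required fields for a contract-metadata extraction.
REQUIRED = {"party_name": str, "effective_date": str, "rate": float}

def validate_extraction(raw: str) -> dict:
    """Parse the model's JSON output and enforce the expected schema,
    so malformed extractions are caught before reaching downstream
    claims-analysis systems."""
    data = json.loads(raw)
    for field, typ in REQUIRED.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], typ):
            raise TypeError(f"{field} should be {typ.__name__}")
    return data

reply = ('{"party_name": "Acme Health", '
         '"effective_date": "2023-01-01", "rate": 125.5}')
record = validate_extraction(reply)
```

At 500,000 documents per week, failures of this check are the natural trigger for retries or human review, which is how a 99% accuracy figure stays measurable in production.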

Enterprise AI Agent Development: Lessons from Production Deployments

IBM, The Zig, Augmented AI Labs

This panel discussion features three companies, IBM, The Zig, and Augmented AI Labs, sharing their experiences building and deploying AI agents in enterprise environments. The panelists discuss the challenges of scaling AI agents, including cost management, accuracy requirements, human-in-the-loop implementations, and the gap between prototype demonstrations and production realities. They emphasize the importance of conservative approaches, proper evaluation frameworks, and the need for human oversight in high-stakes environments, while exploring emerging standards like agent communication protocols and the evolving landscape of enterprise AI adoption.


Fine-tuning LLMs for Market Research Product Description Matching

Kantar Worldpanel

Kantar Worldpanel, a market research company, needed to modernize their product description matching system to better link paper receipt descriptions with product barcode names. They leveraged Databricks Mosaic AI to experiment with various LLMs (including Llama, Mistral, and GPT models) to generate high-quality training data, achieving 94% accuracy in matching product descriptions. This automated approach generated 120,000 training pairs in just hours, allowing them to fine-tune smaller models for production use while freeing up human resources for more complex tasks.
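
The "generate training pairs with a large model, fine-tune a smaller one" workflow boils down to packaging (receipt description, product name) pairs into a fine-tuning file. The JSONL chat shape below is a common convention, not Kantar's exact schema, and the pairs are illustrative.

```python
import json

pairs = [
    ("CHKN BRST 500G", "Fresh Chicken Breast Fillets 500g"),
    ("SEMI SKMD MLK 2L", "Semi Skimmed Milk 2 Litre"),
]

def to_jsonl(pairs):
    """Emit one chat-format training example per (receipt, product) pair."""
    lines = []
    for receipt, product in pairs:
        lines.append(json.dumps({
            "messages": [
                {"role": "user",
                 "content": f"Match this receipt line to a product name: {receipt}"},
                {"role": "assistant", "content": product},
            ]
        }))
    return "\n".join(lines)

jsonl = to_jsonl(pairs)
```

Generating 120,000 such pairs with a strong LLM and then fine-tuning a smaller model on them trades one-time generation cost for cheap, fast inference at matching time.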

GenAI Governance in Practice: Access Control, Data Quality, and Monitoring for Production LLM Systems

Xomnia

Martin Der, a data scientist at Xomnia, presents practical approaches to GenAI governance addressing the challenge that only 5% of GenAI projects deliver immediate ROI. The talk focuses on three key pillars: access and control (enabling self-service prototyping through tools like Open WebUI while avoiding shadow AI), unstructured data quality (detecting contradictions and redundancies in knowledge bases through similarity search and LLM-based validation), and LLMOps monitoring (implementing tracing platforms like Langfuse and creating dynamic golden datasets for continuous testing). The solutions include deploying Chrome extensions for workflow integration, API gateways for centralized policy enforcement, and developing a knowledge agent called "Genie" for internal use cases across telecom, healthcare, logistics, and maritime industries.
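
The redundancy-detection pillar can be sketched as a pairwise similarity sweep over document embeddings: pairs above a threshold are flagged for an LLM judge to check for contradiction or duplication. The 2-d "embeddings", document IDs, and threshold below are illustrative assumptions.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def find_redundant(docs, threshold=0.95):
    """Flag document pairs whose embeddings are near-duplicates;
    flagged pairs would then go to an LLM judge for contradiction
    and redundancy checks."""
    flagged = []
    for i in range(len(docs)):
        for j in range(i + 1, len(docs)):
            if cosine(docs[i]["vec"], docs[j]["vec"]) >= threshold:
                flagged.append((docs[i]["id"], docs[j]["id"]))
    return flagged

docs = [
    {"id": "kb-1", "vec": [0.9, 0.1]},
    {"id": "kb-2", "vec": [0.91, 0.11]},  # near-duplicate of kb-1
    {"id": "kb-3", "vec": [0.1, 0.9]},
]
dupes = find_redundant(docs)
```

Cheap vector filtering first and expensive LLM validation second keeps the quality sweep affordable even over large knowledge bases.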

LLM Production Case Studies: Consulting Database Search, Automotive Showroom Assistant, and Banking Development Tools

Globant

A collection of LLM implementation case studies detailing challenges and solutions in various industries. Key cases include: a consulting firm's semantic search implementation for financial data, requiring careful handling of proprietary data and similarity definitions; an automotive company's showroom chatbot facing challenges with data consistency and hallucination control; and a bank's attempt to create a custom code copilot, highlighting the importance of clear requirements and technical understanding in LLM projects.

MLOps Maturity Levels and Enterprise Implementation Challenges

Various

The case study explores MLOps maturity levels (0-2) in enterprise settings, discussing how organizations progress from manual ML deployments to fully automated systems. It covers the challenges of implementing MLOps across different team personas (data scientists, ML engineers, DevOps), highlighting key considerations around automation, monitoring, compliance, and business value metrics. The study particularly emphasizes the differences between traditional ML and LLM deployments, and how organizations need to adapt their MLOps practices for each.

Multi-Agent DBT Development Workflow for Data Engineering Consulting

Mammoth Growth

Mammoth Growth, a boutique data consultancy specializing in marketing and customer data, developed a multi-agent AI system to automate DBT development workflows in response to data teams struggling to deliver analytics at the speed of business. The solution employs a team of specialized AI agents (orchestrator, analyst, architect, and analytics engineer) that leverage the DBT Model Context Protocol (MCP) to autonomously write, document, and test production-grade DBT code from detailed specifications. The system enabled the delivery of a complete enterprise-grade data lineage with 15 data models and two gold-layer models in just 3 weeks for a pilot client, compared to an estimated 10 weeks using traditional manual development approaches, while maintaining code quality standards through human-led requirements gathering and mandatory code review before production deployment.

Multi-Agent LLM Systems: Implementation Patterns and Production Case Studies

Nimble Gravity, Hiflylabs

A research study conducted by Nimble Gravity and Hiflylabs examining GenAI adoption patterns across industries, revealing that approximately 28-30% of GenAI projects successfully transition from assessment to production. The study explores various multi-agent LLM architectures and their implementation in production, including orchestrator-based, agent-to-agent, and shared message pool patterns, demonstrating practical applications like automated customer service systems that achieved significant cost savings.
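
The orchestrator-based pattern mentioned here can be reduced to a router that classifies a request and dispatches it to a specialist agent. The agent functions and the keyword routing rule below are illustrative stand-ins for LLM-backed components.

```python
def billing_agent(msg):
    return f"billing: resolved {msg!r}"

def shipping_agent(msg):
    return f"shipping: tracked {msg!r}"

AGENTS = {"billing": billing_agent, "shipping": shipping_agent}

def orchestrate(msg):
    """Route by keyword; a production system would use an LLM
    classifier here, but the dispatch structure is the same."""
    topic = "billing" if "invoice" in msg.lower() else "shipping"
    return AGENTS[topic](msg)

reply = orchestrate("Where is my invoice for March?")
```

The alternative patterns the study names differ mainly in who holds control: agent-to-agent designs let specialists hand off directly, while a shared message pool lets every agent read and post to a common context instead of going through the router.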

Multi-Industry LLM Deployment: Building Production AI Systems Across Diverse Verticals

Caylent

Caylent, a development consultancy, shares their extensive experience building production LLM systems across multiple industries including environmental management, sports media, healthcare, and logistics. The presentation outlines their comprehensive approach to LLMOps, emphasizing the importance of proper evaluation frameworks, prompt engineering over fine-tuning, understanding user context, and managing inference economics. Through various client projects ranging from multimodal video search to intelligent document processing, they demonstrate key lessons learned about deploying reliable AI systems at scale, highlighting that generative AI is not a "magical pill" but requires careful engineering around inputs, outputs, evaluation, and user experience.

Practical LLM Deployment: From Evaluation to Fine-tuning

Parlance Labs

A comprehensive discussion of LLM deployment challenges and solutions across multiple industries, focusing on practical aspects like evaluation, fine-tuning, and production deployment. The case study covers experiences from GitHub's Copilot development, real estate CRM implementation, and consulting work at Parlance Labs, highlighting the importance of rigorous evaluation, data inspection, and iterative development in LLM deployments.

Specialized Language Models for Contact Center Transformation

Accenture

Accenture partnered with Databricks to transform a client's customer contact center by implementing specialized language models (SLMs) that go beyond simple prompt engineering. The client faced challenges with high call volumes, impersonal service, and missed revenue opportunities. Using Databricks' MLOps platform and GPU infrastructure, they developed and deployed fine-tuned language models that understand industry-specific context, cultural nuances, and brand styles, resulting in improved customer experience and operational efficiency. The solution includes real-time monitoring and multimodal capabilities, setting a new standard for AI-driven customer service operations.