Mendix, a low-code platform provider, faced the challenge of integrating advanced generative AI capabilities into their development environment while maintaining security and scalability. They implemented Amazon Bedrock to provide their customers with seamless access to various AI models, enabling features like text generation, summarization, and multimodal image generation. The solution included custom model training, robust security measures through AWS services, and cost-effective model selection capabilities.
Mendix is a low-code application development platform owned by Siemens, recognized as an industry leader by both Gartner and Forrester. The company has been helping enterprises build and deploy applications since 2005, and since 2016 has maintained a strategic collaboration with AWS for cloud infrastructure. This case study describes their integration of generative AI capabilities through Amazon Bedrock to enhance both their platform’s development experience and to enable their customers to build AI-powered applications.
It’s worth noting that this case study is published on the AWS blog and is co-authored by both a Mendix employee and an AWS employee, which means the perspective is understandably promotional. The claims about benefits should be considered in this light, though the technical integration details provide useful insights into how a platform company approaches LLMOps.
Mendix identified the rise of generative AI as both an opportunity and a challenge for their low-code platform. The company wanted to achieve two primary objectives: enhancing the development experience within their own platform, and enabling their customers to build AI-powered applications of their own.
The challenge was complex: integrating advanced AI capabilities into a low-code environment requires solutions that are simultaneously innovative, scalable, secure, and easy to use. This is particularly important for enterprise customers who have stringent security and compliance requirements.
Mendix selected Amazon Bedrock as their foundation for generative AI integration. Bedrock provides access to multiple foundation models from various providers, including Amazon (the Titan family), Anthropic, AI21 Labs, Cohere, Meta, and Stability AI. This multi-model approach is significant from an LLMOps perspective, as it allows model selection based on specific use case requirements and cost considerations.
The unified API provided by Bedrock is highlighted as a key advantage, simplifying experimentation with different models and reducing the effort required for upgrades and model swaps. This abstraction layer is valuable for production deployments where model flexibility and future-proofing are important considerations.
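The value of that abstraction layer can be sketched with Bedrock's provider-agnostic Converse message format: the same request shape works across providers, so swapping models becomes a configuration change. A minimal illustration follows; the helper name and inference settings are our own, and only the model IDs and payload shape follow Bedrock's conventions.

```python
def build_converse_request(model_id: str, prompt: str, max_tokens: int = 512) -> dict:
    """Build a request in Bedrock's provider-agnostic Converse message format."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens, "temperature": 0.2},
    }

# Swapping providers is a one-line change of model identifier:
titan_request = build_converse_request(
    "amazon.titan-text-express-v1", "Summarize this release note."
)
claude_request = build_converse_request(
    "anthropic.claude-3-haiku-20240307-v1:0", "Summarize this release note."
)

# In production, these keyword arguments would be passed to
# boto3.client("bedrock-runtime").converse(**titan_request)
```

Because the payload shape is identical across providers, experimentation and model swaps reduce to changing one string, which is the flexibility the case study highlights.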
A concrete output of this integration is the Mendix AWS Bedrock Connector, available through the Mendix Marketplace. This connector serves as a pre-built integration that eliminates what the case study describes as “traditional complexities” in AI integration. The connector approach is a common pattern in LLMOps where platform vendors create abstraction layers to simplify AI capability consumption for their users.
The connector is accompanied by documentation, samples, and blog posts to guide implementation. This supporting ecosystem is an important aspect of productizing AI capabilities, as raw model access without guidance often leads to poor implementations.
The case study mentions several specific AI use cases that the integration supports, including text generation, summarization, and multimodal image generation.
While these use cases are described at a high level, they represent the breadth of applications that foundation models through Bedrock can enable. The emphasis on personalization and context-awareness suggests integration with user data systems, which has implications for data privacy and security.
The case study also mentions ongoing research using the Mendix Extensibility framework to explore more advanced, deeply integrated AI capabilities.
These experimental capabilities, demonstrated in a video referenced in the original post, suggest a direction toward AI-assisted low-code development where the AI helps build applications, not just power features within them. However, these are described as “nascent concepts” still being experimented with, so they represent future potential rather than current production capabilities.
The security implementation described is substantial and addresses key enterprise concerns around AI adoption. The architecture includes guarantees around how customer data is handled by the models, along with private network connectivity between Mendix environments and Bedrock.
This security architecture addresses a common concern in enterprise LLMOps: the fear that proprietary data sent to AI models might be used for training or could be exposed. The use of PrivateLink for private connectivity is particularly relevant for organizations with strict network security requirements.
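The PrivateLink piece of this architecture amounts to provisioning an interface VPC endpoint so Bedrock traffic never traverses the public internet. A hypothetical sketch of the endpoint parameters is below; the VPC and subnet IDs are placeholders, while the service name follows AWS's `com.amazonaws.<region>.bedrock-runtime` naming pattern.

```python
# Hypothetical sketch: parameters for an interface VPC endpoint that keeps
# Bedrock API traffic on the AWS network via PrivateLink.
privatelink_endpoint = {
    "VpcEndpointType": "Interface",
    # PrivateLink service name for the Bedrock runtime in a given region
    "ServiceName": "com.amazonaws.us-east-1.bedrock-runtime",
    "VpcId": "vpc-0123456789abcdef0",          # placeholder VPC ID
    "SubnetIds": ["subnet-0123456789abcdef0"],  # placeholder subnet ID
    # Resolve the public Bedrock hostname to the private endpoint inside the VPC
    "PrivateDnsEnabled": True,
}

# These parameters would be applied with
# boto3.client("ec2").create_vpc_endpoint(**privatelink_endpoint)
```

With private DNS enabled, existing SDK code needs no changes: the standard Bedrock endpoint hostname simply resolves to the private address inside the VPC.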
The multi-model approach through Bedrock is a notable LLMOps pattern. Rather than locking into a single model, Mendix and their customers can select models based on the requirements of each use case and on cost.
This flexibility is important for production systems where the optimal model may change over time or vary by use case.
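In practice, this kind of flexibility is often implemented as a routing table keyed on task and priority. A minimal sketch, assuming a table-driven design: the routing entries are illustrative and not Mendix's actual configuration, though the model identifiers are real Bedrock model IDs.

```python
# Hypothetical cost-aware routing table: (task, priority) -> Bedrock model ID.
ROUTING_TABLE = {
    ("summarization", "low_cost"): "amazon.titan-text-express-v1",
    ("summarization", "high_quality"): "anthropic.claude-3-sonnet-20240229-v1:0",
    ("image_generation", "default"): "stability.stable-diffusion-xl-v1",
}

DEFAULT_MODEL = "amazon.titan-text-express-v1"  # cheap fallback

def select_model(task: str, priority: str = "low_cost") -> str:
    """Pick a model ID for a task, falling back to an inexpensive default."""
    return ROUTING_TABLE.get((task, priority), DEFAULT_MODEL)
```

Centralizing the mapping means the optimal model for a use case can change over time without touching application code, which is exactly the kind of evolution the paragraph above anticipates.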
The case study notes that Amazon Bedrock provides “continual updates and support for the available models,” giving users access to the latest advancements. From an LLMOps perspective, this managed model lifecycle is valuable as it reduces the operational burden of keeping AI capabilities current. However, it also introduces potential risks if model behavior changes unexpectedly, a consideration not explicitly addressed in the case study.
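One lightweight mitigation for silently updated models is a golden-case regression check run whenever a model version rolls forward. A minimal sketch, with illustrative cases not drawn from the case study; `invoke` stands in for any callable that maps a prompt to a model response, such as a wrapper around a Bedrock client.

```python
# Illustrative golden cases; a real suite would cover the application's
# actual prompts and expected behaviors.
GOLDEN_CASES = [
    {"prompt": "What is 2 + 2?", "must_contain": "4"},
    {"prompt": "What is the capital of France?", "must_contain": "Paris"},
]

def passes_regression(invoke, cases=GOLDEN_CASES) -> bool:
    """Return True if every golden prompt's response contains its expected substring."""
    return all(case["must_contain"] in invoke(case["prompt"]) for case in cases)

# Example with a stubbed model in place of a live Bedrock call:
stub_responses = {
    "What is 2 + 2?": "The answer is 4.",
    "What is the capital of France?": "Paris is the capital of France.",
}
assert passes_regression(stub_responses.get)
```

Substring checks are crude; production teams typically layer on scored evaluations, but even this level of gating catches gross behavioral drift after a managed model update.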
The mention of anticipation for new Bedrock features announced at AWS re:Invent 2023, specifically Amazon Bedrock Agents and Amazon Bedrock Knowledge Bases, suggests plans for more sophisticated agentic AI and retrieval-augmented generation (RAG) implementations.
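Knowledge Bases implement the retrieval-augmented generation pattern: retrieve the passages most relevant to a query, then ground the model's prompt in them. The core loop can be sketched without any AWS dependency; the naive keyword-overlap scorer below is a stand-in for the vector search a managed Knowledge Base performs.

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval; a Knowledge Base would use vector search."""
    terms = set(query.lower().split())
    return sorted(
        documents,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )[:k]

def build_rag_prompt(query: str, documents: list[str]) -> str:
    """Assemble a grounded prompt from the top retrieved passages."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Mendix connectors are distributed through the Mendix Marketplace.",
    "Low-code platforms target citizen developers.",
    "Amazon Bedrock exposes multiple foundation models behind one API.",
]
prompt = build_rag_prompt("Where are Mendix connectors distributed?", docs)
```

In the managed version, retrieval and prompt assembly are handled by a single Bedrock Knowledge Bases call, but the shape of the pipeline is the same.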
The case study briefly mentions that diverse model offerings allow selection of “cost-effective large language models based on your use case.” This highlights cost optimization as a key LLMOps concern, though specific cost data or strategies are not provided.
While this case study provides useful insights into integrating generative AI into a platform product, several aspects warrant critical consideration: the co-authored, promotional framing noted earlier; the absence of quantitative results or cost data; and the fact that the most ambitious AI-assisted development capabilities remain experimental.
Despite these limitations, the case study illustrates a real-world approach to integrating LLM capabilities into an enterprise software platform, with particular attention to security architecture that is often overlooked in AI adoption discussions.
Predibase, a fine-tuning and model serving platform, announced its acquisition by Rubrik, a data security and governance company, with the goal of combining Predibase's generative AI capabilities with Rubrik's secure data infrastructure. The integration aims to address the critical challenge that over 50% of AI pilots never reach production due to issues with security, model quality, latency, and cost. By combining Predibase's post-training and inference capabilities with Rubrik's data security posture management, the merged platform seeks to provide an end-to-end solution that enables enterprises to deploy generative AI applications securely and efficiently at scale.
Codeium's journey in building their AI-powered development tools showcases how investing early in enterprise-ready infrastructure, including containerization, security, and comprehensive deployment options, enabled them to scale from individual developers to large enterprise customers. Their "go slow to go fast" approach in building proprietary infrastructure for code completion, retrieval, and agent-based development culminated in Windsurf IDE, demonstrating how thoughtful early architectural decisions can create a more robust foundation for AI tools in production.
New Relic, a major observability platform processing 7 petabytes of data daily, implemented GenAI both internally for developer productivity and externally in their product offerings. They achieved a 15% increase in developer productivity through targeted GenAI implementations, while also developing sophisticated AI monitoring capabilities and natural language interfaces for their customers. Their approach balanced cost, accuracy, and performance through a mix of RAG, multi-model routing, and classical ML techniques.