Elevated Intelligence: Hero AI Now Powered by 3x Larger Model
We’re excited to announce a major upgrade to Hero AI, the collection of generative and agentic AI capabilities within the Swimlane Turbine AI automation platform. We’ve integrated a new, more powerful large language model (LLM) into the multi-model Swimlane AI architecture. As of the 25.2 Swimlane Turbine release, many Hero AI features are backed by Mistral Small 3.1, one of the most capable open-source LLMs available. This cutting-edge LLM significantly enhances Turbine’s ability to support your SecOps workflows with faster, smarter, and more context-aware AI automation.
Introducing the Smarter, Faster, Better Swimlane LLM
With 24 billion parameters, a 128,000-token context window, and multimodal support for text and image understanding, Mistral Small 3.1 is one of the top-performing open-source LLMs in its class. Those parameters, which function like internal “neurons,” allow the model to learn patterns, relationships, and knowledge across massive amounts of data, bringing significant value to real-world SecOps. Its impressive 128k-token context window allows it to process and retain significantly more information at once, whether it’s long incident reports, threat intel feeds, or extended analyst queries.
It’s designed for complex reasoning, multi-step decision-making, and keeping track of large volumes of context, making it perfect for SecOps tasks such as summarizing alerts and generating playbook recommendations. It also supports function calling, which means it can utilize tools, such as triggering automation workflows or fetching enrichment data, when needed, a significant advantage for automation.
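For readers curious what function calling looks like in practice, here is a minimal sketch using the OpenAI-compatible chat completions interface that self-hosted model servers commonly expose. The endpoint URL, model name, and `lookup_ip_reputation` tool are illustrative placeholders, not part of Turbine’s actual integration.

```python
# Minimal sketch of LLM function calling via an OpenAI-compatible endpoint.
# The base_url, model name, and tool definition below are placeholders for
# illustration only; they are not Swimlane's or Turbine's actual API.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm.example.internal/v1",  # hypothetical self-hosted endpoint
    api_key="not-needed-internally",
)

# Describe a tool the model may choose to call, e.g. an enrichment lookup.
tools = [{
    "type": "function",
    "function": {
        "name": "lookup_ip_reputation",
        "description": "Fetch threat-intel reputation data for an IP address.",
        "parameters": {
            "type": "object",
            "properties": {
                "ip": {"type": "string", "description": "IPv4 or IPv6 address"}
            },
            "required": ["ip"],
        },
    },
}]

response = client.chat.completions.create(
    model="mistral-small-3.1",
    messages=[{
        "role": "user",
        "content": "Is 203.0.113.42 associated with known malware C2 activity?",
    }],
    tools=tools,
)

# When the model decides a tool is needed, it returns a structured call rather
# than free text; the caller executes the tool and feeds the result back.
tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    print(tool_calls[0].function.name, tool_calls[0].function.arguments)
```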
Consistently Current Intelligence for Current Threats
At Swimlane, we know that staying current with the latest LLM advancements is essential to maintaining the highest levels of automation accuracy, decision support, and threat detection in today’s threat landscape. Frequent model updates enable Turbine to incorporate improvements in reasoning and contextual understanding, directly benefiting customers through more informed recommendations and reduced false positives. As the capabilities of AI models grow rapidly, staying aligned with these advancements ensures that our users always have access to the most capable and reliable AI engine available.
Commitment to Trust and Transparency
One of the most important advantages of this upgrade is that Mistral is fully self-hosted within our infrastructure. For cybersecurity platforms, this is more than just a deployment choice; it’s a commitment to data sovereignty, security, and customer trust. Self-hosting ensures that sensitive telemetry, threat indicators, and user-provided context never leave the trusted environment. There is no dependency on third-party cloud APIs, no risk of exposure from external LLM telemetry logging, and full control over model updates and performance optimization. Swimlane is one of a small group of leading companies that has attained the ISO 42001 certification, and this upgrade demonstrates our continued commitment and strategic vision to provide customers with responsible, transparent, and trustworthy AI directly in Turbine.
If you’re curious to learn more about the LLM, we recommend checking out the wealth of resources available from Mistral AI. You can read their Mistral Small 3.1 model introduction or check out the model card on Hugging Face for additional insights. As always, we remain committed to evolving our platform to meet the needs of modern security teams, and this model update is just the beginning. To explore how Hero AI can benefit your organization, visit swimlane.com/demo.

Swimlane Turbine Demo
See how Swimlane Turbine can help you and your SecOps team hyperautomate by requesting a demo below.