Mistral Voxtral is an Open-Weights Competitor to OpenAI Whisper and Other ASR Tools

Mistral has released Voxtral, a large language model aimed at automatic speech recognition (ASR) applications that need LLM-based capabilities going beyond simple transcription. Mistral has released the weights for two variants of the model, Voxtral Mini (3B) and Voxtral Small (24B), under the Apache 2.0 license.
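Because the weights are openly available, they can be pulled directly from the Hugging Face Hub for local use. The following is a minimal sketch using huggingface_hub; the repository ID shown is an assumption based on Mistral's usual naming, so check the Hub for the exact name.

```python
# Minimal sketch: fetching the open Voxtral weights with huggingface_hub.
# The repo_id below is an assumption; verify the exact repository name on the Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="mistralai/Voxtral-Mini-3B-2507",  # assumed repository name
    local_dir="./voxtral-mini-3b",
)
print(f"Weights downloaded to {local_dir}")
```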

According to Mistral, Voxtral closes the gap between classic ASR systems, which deliver cost-efficient transcription but lack semantic understanding, and more advanced LLM-based models, which combine transcription with language understanding. While this is similar to what solutions such as GPT-4o mini Transcribe and Gemini 2.5 Flash provide, Voxtral stands out by making its model weights openly available, which improves deployment flexibility and enables a different cost model.

Besides being available for local deployment, the new models can be accessed via Mistral’s API, which also offers a custom version of Voxtral Mini optimized for transcription, helping reduce inference cost and latency.
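As a rough illustration, a transcription request to the hosted API could look like the sketch below. It assumes an OpenAI-style /v1/audio/transcriptions endpoint, a "voxtral-mini-latest" model alias, and the field names shown; consult Mistral's API reference for the exact schema.

```python
# Minimal sketch of a transcription call against Mistral's hosted API.
# Endpoint path, model alias, and response field are assumptions.
import os
import requests

API_KEY = os.environ["MISTRAL_API_KEY"]

with open("meeting.mp3", "rb") as audio_file:  # placeholder file name
    response = requests.post(
        "https://api.mistral.ai/v1/audio/transcriptions",  # assumed endpoint
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"file": audio_file},
        data={"model": "voxtral-mini-latest"},  # assumed transcription-optimized alias
    )

response.raise_for_status()
print(response.json()["text"])  # assumed response field
```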

Voxtral has a 32K-token context window, which enables it to process audio of up to 30 minutes for transcription, or 40 minutes for understanding. Being LLM-based, it naturally lends itself to tasks like Q&A and summarization over audio content without requiring an ASR system to be chained with a language model. Additionally, it can trigger backend functions, workflows, or API calls based on spoken user intent. As usual for Mistral models, Voxtral is natively multilingual and supports automatic language detection, with optimized performance for European languages. Finally, Voxtral retains the text-only capabilities of its base model and can be used as a text-only LLM.
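An audio-understanding request, such as asking a question about a recording in a single call, could be shaped roughly as follows. This is a sketch only: the "input_audio" content part, the model alias, and the file name are assumptions, so the exact payload schema should be taken from Mistral's documentation.

```python
# Minimal sketch of audio understanding (Q&A over a recording) via a chat-style
# request. The content-part schema and model alias are assumptions.
import base64
import os
import requests

API_KEY = os.environ["MISTRAL_API_KEY"]

with open("earnings_call.mp3", "rb") as f:  # placeholder file name
    audio_b64 = base64.b64encode(f.read()).decode()

payload = {
    "model": "voxtral-small-latest",  # assumed model alias
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "input_audio", "input_audio": audio_b64},  # assumed content part
                {"type": "text", "text": "Summarize the key decisions discussed in this recording."},
            ],
        }
    ],
}

response = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```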

For transcription-only use cases, Mistral claims both cost and performance advantages over other solutions such as OpenAI Whisper, ElevenLabs Scribe, and Gemini 2.5 Flash.

According to the company, Voxtral comprehensively outperforms Whisper large-v3, the current leading open-source speech transcription model; beats GPT-4o mini Transcribe and Gemini 2.5 Flash across all tasks; and achieves state-of-the-art results on English short-form and Mozilla Common Voice benchmarks, surpassing ElevenLabs Scribe and demonstrating strong multilingual capabilities.

When it comes to audio understanding, Voxtral can answer questions directly from speech thanks to its LLM foundation. This approach is distinct from that of other LLM-based speech recognition models. For instance, NVIDIA NeMo Canary-Qwen-2.5B and IBM's Granite Speech have two distinct modes, ASR and LLM, that can be combined at different stages, for example using the LLM to summarize the textual output generated by the ASR step, as sketched below.
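For contrast, here is a minimal sketch of that traditional two-stage pipeline: an ASR model (openai-whisper in this example) produces a transcript, which is then handed to a separate text-only LLM for summarization. The file name and prompt are placeholders; with Voxtral, the same question could be answered in a single audio-aware call instead.

```python
# Two-stage pipeline: ASR first, then a text LLM on the transcript.
import os
import requests
import whisper  # pip install openai-whisper

# Stage 1: speech-to-text with a classic ASR model.
asr_model = whisper.load_model("large-v3")
transcript = asr_model.transcribe("interview.wav")["text"]  # placeholder file name

# Stage 2: language understanding on the transcript with a text-only LLM.
response = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "mistral-small-latest",
        "messages": [
            {"role": "user", "content": f"Summarize this interview:\n\n{transcript}"}
        ],
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```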

According to Mistral’s own benchmarking, Voxtral Small is competitive with GPT-4o-mini and Gemini 2.5 Flash across several tasks, and outperforms both in speech translation.

Besides offering Voxtral for download and via its API, Mistral also provides additional features specifically aimed at enterprise customers, including private deployment at production scale, domain-specific fine-tuning, and advanced use cases such as speaker identification, emotion detection, and diarization.
