Mixtral 8x22B: Cheaper, Better, Faster, Stronger
mistral.ai
April 17
Mixtral 8x22B is our latest open model. It sets a new standard for performance and efficiency within the AI community. It is a sparse Mixture-of-Experts (SMoE) model that uses only 39B active parameters out of 141B, offering unparalleled cost efficiency for its size.
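To illustrate why a sparse Mixture-of-Experts model activates only a fraction of its parameters per token, here is a minimal routing sketch. It is not Mistral's implementation: the expert count, top-k value, and dimensions are hypothetical round numbers chosen for readability; the point is that each token runs through only its top-k experts, not all of them.

```python
# Illustrative SMoE routing sketch (hypothetical sizes, not Mixtral's actual code).
import numpy as np

rng = np.random.default_rng(0)

d_model, n_experts, top_k = 64, 8, 2          # hypothetical sizes for illustration
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts))

def smoe_layer(x: np.ndarray) -> np.ndarray:
    """Route each token to its top-k experts; only those experts are computed."""
    logits = x @ router                        # (tokens, n_experts) routing scores
    top = np.argsort(logits, axis=-1)[:, -top_k:]
    out = np.zeros_like(x)
    for t, token in enumerate(x):
        chosen = logits[t, top[t]]
        weights = np.exp(chosen - chosen.max())
        weights /= weights.sum()               # softmax over the k selected experts
        for w, e in zip(weights, top[t]):
            out[t] += w * (token @ experts[e]) # k matmuls per token, not n_experts
    return out

tokens = rng.standard_normal((4, d_model))
print(smoe_layer(tokens).shape)                # (4, 64)
```

Because only 2 of the 8 experts run per token in this sketch, compute scales with the active parameters rather than the total, which is the same reason Mixtral 8x22B uses 39B active parameters out of 141B.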
Mixtral 8x22B comes with the following strengths:
- It is fluent in English, French, Italian, German, and Spanish
- It has strong mathematics and coding capabilities
- It is natively capable of function calling; together with the constrained output mode implemented on la Plateforme, this enables application development and tech stack modernisation at scale (see the sketch after this list)
- Its 64K-token context window allows precise information recall from large documents
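Below is a hedged sketch of a function-calling request to la Plateforme. The endpoint, the model identifier ("open-mixtral-8x22b"), the `tools`/`tool_choice` fields, and the `get_invoice_status` tool are assumptions made for illustration; consult the current API reference before relying on them.

```python
# Sketch of a function-calling request (endpoint, model name, and tool are assumptions).
import os
import requests

tools = [{
    "type": "function",
    "function": {
        "name": "get_invoice_status",          # hypothetical tool, for illustration only
        "description": "Look up the payment status of an invoice.",
        "parameters": {
            "type": "object",
            "properties": {"invoice_id": {"type": "string"}},
            "required": ["invoice_id"],
        },
    },
}]

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "open-mixtral-8x22b",
        "messages": [{"role": "user", "content": "Is invoice INV-42 paid?"}],
        "tools": tools,
        "tool_choice": "auto",                 # let the model decide whether to call the tool
    },
    timeout=30,
)
print(resp.json()["choices"][0]["message"])    # may contain a tool_calls entry to execute
```

When the model decides a tool is needed, the returned message carries a structured call (tool name plus JSON arguments) that the application executes and feeds back as a follow-up message.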