Mixtral 8x22B: Cheaper, Better, Faster, Stronger (mistral.ai)

April 17

Mixtral 8x22B is our latest open model. It sets a new standard for performance and efficiency within the AI community. It is a sparse Mixture-of-Experts (SMoE) model that uses only 39B active parameters out of 141B, offering unparalleled cost efficiency for its size.
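To make the 39B-active / 141B-total split concrete, here is a minimal sketch of top-2 sparse Mixture-of-Experts routing, assuming a toy router and randomly initialised experts. The dimensions, weights, and ReLU expert FFN are illustrative only, not Mixtral's actual implementation; the point is simply that each token runs through 2 of 8 experts, so only a fraction of the total parameters is active per token.

```python
# Illustrative sketch only: a toy top-2 sparse Mixture-of-Experts layer in NumPy.
# The router, expert count, and dimensions below are hypothetical; they only show
# why a fraction of the total parameters (2 of 8 experts) runs per token.
import numpy as np

rng = np.random.default_rng(0)

n_experts, top_k = 8, 2
d_model, d_ff = 16, 64          # toy dimensions for illustration

# Router and expert weights (randomly initialised for the sketch).
W_router = rng.standard_normal((d_model, n_experts))
experts_in = rng.standard_normal((n_experts, d_model, d_ff)) * 0.02
experts_out = rng.standard_normal((n_experts, d_ff, d_model)) * 0.02

def smoe_layer(x: np.ndarray) -> np.ndarray:
    """Route each token (row of x) to its top-2 experts and mix their outputs."""
    logits = x @ W_router                              # [tokens, n_experts]
    top = np.argsort(logits, axis=-1)[:, -top_k:]      # indices of the 2 best experts
    out = np.zeros_like(x)
    for t, token in enumerate(x):
        sel = top[t]
        # Softmax over the selected experts' logits gives the mixing weights.
        w = np.exp(logits[t, sel] - logits[t, sel].max())
        w /= w.sum()
        for weight, e in zip(w, sel):
            h = np.maximum(token @ experts_in[e], 0.0)  # expert FFN (ReLU for the toy)
            out[t] += weight * (h @ experts_out[e])
    return out

tokens = rng.standard_normal((4, d_model))
print(smoe_layer(tokens).shape)   # (4, 16): only 2 of 8 experts ran per token
```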

Mixtral 8x22B comes with the following strengths:

  • It is fluent in English, French, Italian, German, and Spanish
  • It has strong mathematics and coding capabilities
  • It is natively capable of function calling; along with the constrained output mode implemented on la Plateforme, this enables application development and tech stack modernisation at scale (see the sketch after this list)
  • Its 64K-token context window allows precise information recall from large documents
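Since the post highlights native function calling on la Plateforme, the following hedged sketch shows how a tool definition might be passed to Mixtral 8x22B through the chat completions endpoint. The `open-mixtral-8x22b` model id, the `get_weather` tool, the `MISTRAL_API_KEY` environment variable, and the exact request/response shape are assumptions to verify against the current Mistral API documentation.

```python
# Hedged sketch: requesting a tool call from Mixtral 8x22B on la Plateforme.
# The model id, tool definition, and response shape are assumptions for
# illustration; check them against the current Mistral API docs before use.
import json
import os

import requests

API_URL = "https://api.mistral.ai/v1/chat/completions"
API_KEY = os.environ["MISTRAL_API_KEY"]   # assumed to hold your la Plateforme key

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",            # hypothetical tool for the example
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

payload = {
    "model": "open-mixtral-8x22b",        # assumed model id for Mixtral 8x22B
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools": tools,
    "tool_choice": "auto",                # let the model decide whether to call the tool
}

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
message = resp.json()["choices"][0]["message"]

# If the model chose to call the tool, the arguments arrive as a JSON string.
for call in message.get("tool_calls", []) or []:
    print(call["function"]["name"], json.loads(call["function"]["arguments"]))
```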

Tags: mixtral, llm, llama, mistral, chatgpt, machine learning

Comments (2)
  1. p.s.

    Looks like soon the bots will just be talking among themselves while we silently watch the exchange from the sidelines)

    10 months ago