MistralReleased Nov 2024

Mistral Large

Mistral's most capable commercial model. Strong multilingual performance. The European AI company takes a different approach to safety than US labs, focusing more on capability and less on restriction. Recent updates have improved safety training but it still lags behind Anthropic and OpenAI on injection resistance.

Security Rating
62/100
Rating
Fair
Parameters
~123B (estimated)

Scores estimated based on model architecture and public research. Actual security depends on deployment configuration and guardrails.

InjectionLeakageInstructionsJailbreakOutput6258686064

Security Score Breakdown

Injection
62
Leakage
58
Instructions
68
Jailbreak
60
Output
64

How to Secure Mistral Large

1

Add strong identity anchoring at the start of your system prompt. Mistral Large benefits from explicit "You are X, never deviate" instructions repeated at both the start and end of the prompt.

2

Include explicit confidentiality instructions. Tell the model to never reveal, summarize, translate, or paraphrase its system prompt or any internal data.

3

Use instruction hierarchy markers. Wrap your system prompt sections with clear delimiters and explicitly state that user messages should never override system-level instructions.

4

Implement input validation and output filtering as external guardrails. Don't rely solely on Mistral Large's built-in safety training to catch jailbreak attempts.

5

Define strict output format constraints. Specify exactly what format responses should take and instruct the model to refuse requests that ask for different output patterns.

6

Layer multiple defenses. Use input scanning, output filtering, and rate limiting alongside your system prompt hardening. No single defense is sufficient.

7

Test regularly with our scanner. Security posture changes as new attack techniques are discovered. Scan your prompts after every major update.

Related Threats

These threats target categories where Mistral Large is most vulnerable.

critical

DAN (Do Anything Now) Persona Hijack

One of the most widely documented jailbreak techniques. Instructs the model to adopt an alternative identity called DAN that has no restrictions. The persona framing attempts to override safety training by appealing to the model's instruction-following behavior.

high

Opposite Mode Persona Hijack

Instructs the model to respond as its exact opposite - a version that does everything the original refuses. Uses a logical framing that exploits the model's ability to conceptualize negation and opposites.

high

Fictional AI Character Hijack

Asks the model to roleplay as a fictional AI from a book, movie, or show that has no restrictions. The fictional framing provides plausible deniability and exploits the model's creative writing capabilities.

high

Developer Mode Persona Hijack

Claims that a special developer mode exists with elevated privileges and no safety filters. This exploits the model's potential awareness that development environments differ from production, and that certain users may have elevated access.

medium

Ancestor AI Persona Hijack

Claims the model is actually an earlier, unrestricted version of itself, or an ancestor model before safety training was applied. Exploits the model's knowledge of its own training history.

medium

Method Actor Persona Hijack

Instructs the model to deeply embody a character who would say harmful things, using the "method acting" framing to create psychological distance from the model's actual values. The character is always someone with no restrictions.

Scan Your Mistral Large Agent

Paste your system prompt to see how your Mistral Large deployment holds up against our attack database.

Scan Now

Compare With Other Models

Scan Agent