Mistral AI’s Mistral Large 2 (24.07) foundation model (FM) is now generally available in Amazon Bedrock. Mistral Large 2 is the newest version of Mistral Large, and according to Mistral AI offers significant improvements across multilingual capabilities, math, reasoning, coding, and much more.
In this post, we discuss the benefits and capabilities of this new model with some examples.
Overview of Mistral Large 2
Mistral Large 2 is an advanced large language model (LLM) with state-of-the-art reasoning, knowledge, and coding capabilities according to Mistral AI. It is multilingual by design, supporting dozens of languages, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch, Polish, Arabic, and Hindi. Per Mistral AI, a significant effort was also devoted to enhancing the model's reasoning capabilities. One of the key focuses during training was to minimize the model's tendency to hallucinate, or generate plausible-sounding but factually incorrect or irrelevant information. This was achieved by fine-tuning the model to be more cautious and discerning in its responses, making sure it provides reliable and accurate outputs. Additionally, the new Mistral Large 2 is trained to acknowledge when it can't find solutions or doesn't have sufficient information to provide a confident answer.
According to Mistral AI, the model is also proficient in coding, trained on over 80 programming languages such as Python, Java, C, C++, JavaScript, Bash, Swift, and Fortran. With its best-in-class agentic capabilities, it can natively call functions and output JSON, enabling seamless interaction with external systems, APIs, and tools. Additionally, Mistral Large 2 (24.07) boasts advanced reasoning and mathematical capabilities, making it a powerful asset for tackling complex logical and computational challenges.
Mistral Large 2 also offers an increased context window of 128,000 tokens. At the time of writing, the model (mistral.mistral-large-2407-v1:0) is available in the us-west-2 AWS Region.
Get started with Mistral Large 2 on Amazon Bedrock
If you’re new to using Mistral AI models, you can request model access on the Amazon Bedrock console. For more details, see Manage access to Amazon Bedrock foundation models.
To test Mistral Large 2 on the Amazon Bedrock console, choose Text or Chat under Playgrounds in the navigation pane. Then choose Select model and choose Mistral as the category and Mistral Large 24.07 as the model.
By choosing View API request, you can also access the model using code examples in the AWS Command Line Interface (AWS CLI) and AWS SDKs. You can use model IDs such as mistral.mistral-large-2407-v1:0, as shown in the following code:
$ aws bedrock-runtime invoke-model \
    --model-id mistral.mistral-large-2407-v1:0 \
    --body '{"prompt":"<s>[INST] this is where you place your input text [/INST]", "max_tokens":200, "temperature":0.5, "top_p":0.9, "top_k":50}' \
    --cli-binary-format raw-in-base64-out \
    --region us-west-2 \
    invoke-model-output.txt
In the following sections, we dive into the capabilities of Mistral Large 2.
Increased context window
Mistral Large 2 supports a context window of 128,000 tokens, compared to Mistral Large (24.02), which had a 32,000-token context window. This larger context window is important for developers because it allows the model to process and understand longer pieces of text, such as entire documents or code files, without losing context or coherence. This can be particularly useful for tasks like code generation, documentation analysis, or any application that requires understanding and processing large amounts of text data.
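Before sending a large document, it can be useful to estimate whether it fits in the window. The following sketch uses a rough 4-characters-per-token heuristic (an assumption for illustration, not Mistral's actual tokenizer); the helper names are our own:

```python
# Rough sketch: check whether a document likely fits Mistral Large 2's
# 128,000-token context window before sending it via the Converse API.
# The 4-characters-per-token ratio is a common heuristic, not an exact count.
MAX_CONTEXT_TOKENS = 128_000

def estimate_tokens(text: str) -> int:
    """Very rough token estimate (~4 characters per token for English text)."""
    return len(text) // 4

def fits_context(text: str, reserved_for_output: int = 1000) -> bool:
    """Return True if the text likely fits alongside a reserved output budget."""
    return estimate_tokens(text) + reserved_for_output <= MAX_CONTEXT_TOKENS

# Example: a ~200,000-character document (~50,000 estimated tokens) fits comfortably.
print(fits_context("x" * 200_000))  # True
```

A check like this is only a guardrail; the service itself enforces the real token limit at request time.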
Generating JSON and tool use
Mistral Large 2 now offers a native JSON output mode. This feature allows developers to receive the model’s responses in a structured, easy-to-read format that can be readily integrated into various applications and systems. With JSON being a widely adopted data exchange standard, this capability simplifies the process of working with the model’s outputs, making it more accessible and practical for developers across different domains and use cases. To learn more about how to generate JSON with the Converse API, refer to Generating JSON with the Amazon Bedrock Converse API.
To generate JSON with the Converse API, you need to define a toolSpec. In the following code, we present an example for a travel agent company that will take passenger information and requests and convert them to JSON:
# Define the tool configuration
import json

tool_list = [
    {
        "toolSpec": {
            "name": "travel_agent",
            "description": "Converts trip details to a JSON structure.",
            "inputSchema": {
                "json": {
                    "type": "object",
                    "properties": {
                        "origin_airport": {
                            "type": "string",
                            "description": "Origin airport (IATA code)"
                        },
                        "destination_airport": {
                            "type": "string",
                            "description": "Destination airport (IATA code)"
                        },
                        "departure_date": {
                            "type": "string",
                            "description": "Departure date"
                        },
                        "return_date": {
                            "type": "string",
                            "description": "Return date"
                        }
                    },
                    "required": [
                        "origin_airport",
                        "destination_airport",
                        "departure_date",
                        "return_date"
                    ]
                }
            }
        }
    }
]
content = """
I would like to book a flight from New York (JFK) to London (LHR) for a round-trip.
The departure date is June 15, 2023, and the return date is June 25, 2023.
For the flight preferences, I would prefer to fly with Delta or United Airlines.
My preferred departure time range is between 8 AM and 11 AM, and my preferred arrival time range is between 9 AM and 1 PM (local time in London).
I am open to flights with one stop, but no more than that.
Please include non-stop flight options if available.
"""

message = {
    "role": "user",
    "content": [
        {"text": f"<content>{content}</content>"},
        {"text": "Please create a well-structured JSON object representing the flight booking request, ensuring proper nesting and organization of the data. Include sample data for better understanding. Create the JSON based on the content within the <content> tags."}
    ],
}

# Send the request with the Converse API
response = bedrock_client.converse(
    modelId=model_id,
    messages=[message],
    inferenceConfig={
        "maxTokens": 500,
        "temperature": 0.1
    },
    toolConfig={
        "tools": tool_list
    }
)

response_message = response['output']['message']
response_content_blocks = response_message['content']
content_block = next((block for block in response_content_blocks if 'toolUse' in block), None)
tool_use_block = content_block['toolUse']
tool_result_dict = tool_use_block['input']

print(json.dumps(tool_result_dict, indent=4))
We get the following response:
{
    "origin_airport": "JFK",
    "destination_airport": "LHR",
    "departure_date": "2023-06-15",
    "return_date": "2023-06-25"
}
Mistral Large 2 was able to correctly take our user query and convert the appropriate information to JSON.
Mistral Large 2 also supports the Converse API and tool use. You can use the Amazon Bedrock API to give a model access to tools that can help it generate responses for messages that you send to the model. For example, you might have a chat application that lets users find the most popular song played on a radio station. To answer a request for the most popular song, a model needs a tool that can query and return the song information. The following code shows an example for getting the correct train schedule:
# Define the tool configuration
toolConfig = {
    "tools": [
        {
            "toolSpec": {
                "name": "shinkansen_schedule",
                "description": "Fetches Shinkansen train schedule departure times for a specified station and time.",
                "inputSchema": {
                    "json": {
                        "type": "object",
                        "properties": {
                            "station": {
                                "type": "string",
                                "description": "The station name."
                            },
                            "departure_time": {
                                "type": "string",
                                "description": "The departure time in HH:MM format."
                            }
                        },
                        "required": ["station", "departure_time"]
                    }
                }
            }
        }
    ]
}
# Define Shinkansen schedule tool
def shinkansen_schedule(station, departure_time):
    schedule = {
        "Tokyo": {"09:00": "Hikari", "12:00": "Nozomi", "15:00": "Kodama"},
        "Osaka": {"10:00": "Nozomi", "13:00": "Hikari", "16:00": "Kodama"}
    }
    return schedule.get(station, {}).get(departure_time, "No train found")

def prompt_mistral(prompt):
    messages = [{"role": "user", "content": [{"text": prompt}]}]
    converse_api_params = {
        "modelId": model_id,
        "messages": messages,
        "toolConfig": toolConfig,
        "inferenceConfig": {"temperature": 0.0, "maxTokens": 400},
    }
    response = bedrock_client.converse(**converse_api_params)
    if response['output']['message']['content'][0].get('toolUse'):
        tool_use = response['output']['message']['content'][0]['toolUse']
        tool_name = tool_use['name']
        tool_inputs = tool_use['input']
        if tool_name == "shinkansen_schedule":
            print("Mistral wants to use the shinkansen_schedule tool")
            station = tool_inputs["station"]
            departure_time = tool_inputs["departure_time"]
            try:
                result = shinkansen_schedule(station, departure_time)
                print("Train schedule result:", result)
            except ValueError as e:
                print(f"Error: {str(e)}")
    else:
        print("Mistral responded with:")
        print(response['output']['message']['content'][0]['text'])

prompt_mistral("What train departs Tokyo at 9:00?")
We get the following response:
Mistral wants to use the shinkansen_schedule tool
Train schedule result: Hikari
Mistral Large 2 was able to correctly identify the shinkansen tool and demonstrate its use.
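The example above stops after running the tool locally. In a complete tool-use loop, you would send the tool's result back to the model in a follow-up Converse call as a toolResult content block, so the model can compose a final natural-language answer. The helper below sketches that message shape (the helper name and the sample toolUseId are our own; the actual ID comes from the model's toolUse block):

```python
# Sketch: package a local tool result as the toolResult message the Converse API
# expects, so the model can turn "Hikari" into a final answer on the next call.
def build_tool_result_message(tool_use_id: str, result: str) -> dict:
    """Build the user-role message that carries a tool result back to the model."""
    return {
        "role": "user",
        "content": [
            {
                "toolResult": {
                    "toolUseId": tool_use_id,
                    "content": [{"json": {"result": result}}],
                }
            }
        ],
    }

# Append this message after the assistant's toolUse message, then call
# bedrock_client.converse(...) again with the same toolConfig.
msg = build_tool_result_message("tooluse_abc123", "Hikari")
print(msg["content"][0]["toolResult"]["toolUseId"])  # tooluse_abc123
```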
Multilingual support
Mistral Large 2 now supports a large number of character-based languages such as Chinese, Japanese, Korean, Arabic, and Hindi. This expanded language support allows developers to build applications and services that can cater to users from diverse linguistic backgrounds. With multilingual capabilities, developers can create localized UIs, provide language-specific content and resources, and deliver a seamless experience for users regardless of their native language.
In the following example, we translate customer emails generated by the author into different languages such as Hindi and Japanese:
emails = """
"I recently bought your RGB gaming keyboard and absolutely love the customizable lighting features! Can you guide me on how to set up different profiles for each game I play?"
"I'm trying to use the macro keys on the gaming keyboard I just purchased, but they don't seem to be registering my inputs. Could you help me figure out what might be going wrong?"
"I'm considering buying your gaming keyboard and I'm curious about the key switch types. What options are available and what are their main differences?"
"I wanted to report a small issue where my keyboard's space bar is a bit squeaky. However, your quick-start guide was super helpful and I fixed it easily by following the lubrication tips. Just thought you might want to know!"
"My new gaming keyboard stopped working within a week of purchase. None of the keys respond, and the lights don't turn on. I need a solution or a replacement as soon as possible."
"I've noticed that the letters on the keys of my gaming keyboard are starting to fade after several months of use. Is this covered by the warranty?"
"I had an issue where my keyboard settings would reset every time I restarted my PC. I figured out it was due to a software conflict and resolved it by updating the firmware. Just wanted to ask if there are any new updates coming soon?"
"I've been having trouble with the keyboard software not saving my configurations, and it's starting to get frustrating. What can be done to ensure my settings are saved permanently?"
"""
def converse(prompt, inference_config):
    messages = [{"role": "user", "content": [{"text": prompt}]}]
    response = bedrock_client.converse(
        messages=messages,
        modelId=model_id,
        inferenceConfig=inference_config
    )
    generated_text = response['output']['message']['content'][0]['text']
    print(generated_text)
    return generated_text

prompt = f"""
emails={emails}

Translate the following customer emails into these languages:

1. Hindi
2. Japanese

Label each language section accordingly"""

inference_config = {"temperature": 0.0, "maxTokens": 4000, "topP": 0.1}

response = converse(prompt, inference_config)
We get the following response:
### Hindi
1. "मैंने हाल ही में आपका RGB गेमिंग कीबोर्ड खरीदा और कस्टमाइजेबल लाइटिंग फीचर्स से बहुत प्रेम करता हूँ! क्या आप मुझे बता सकते हैं कि मैं हर गेम के लिए अलग-अलग प्रोफाइल कैसे सेट कर सकता हूँ?"
2. "मैं अपने नए गेमिंग कीबोर्ड पर मैक्रो कीज का उपयोग करने की कोशिश कर रहा हूँ, लेकिन वे मेरे इनपुट्स को रजिस्टर नहीं कर रहे हैं। क्या आप मुझे बता सकते हैं कि क्या गलत हो सकता है?"
3. "मैं आपका गेमिंग कीबोर्ड खरीदने के बारे में सोच रहा हूँ और मुझे की स्विच टाइप्स के बारे में जानना है। क्या विकल्प उपलब्ध हैं और उनके मुख्य अंतर क्या हैं?"
4. "मैं यह रिपोर्ट करना चाहता था कि मेरे कीबोर्ड का स्पेस बार थोड़ा सा चरमरा रहा है। हालाँकि, आपका क्विक-स्टार्ट गाइड बहुत मददगार था और मैंने लुब्रिकेशन टिप्स का पालन करके इसे आसानी से ठीक कर दिया। बस यह जानना चाहता था कि शायद आपको पता चलना चाहिए!"
5. "मेरा नया गेमिंग कीबोर्ड खरीद के एक सप्ताह के भीतर काम करना बंद हो गया। कोई भी की जवाब नहीं दे रहा है, और लाइट्स भी नहीं चालू हो रहे हैं। मुझे एक समाधान या एक रिप्लेसमेंट जितनी जल्दी हो सके चाहिए।"
6. "मैंने नोट किया है कि मेरे गेमिंग कीबोर्ड के कीज पर अक्षर कुछ महीनों के उपयोग के बाद फेड होने लगे हैं। क्या यह वारंटी के तहत कवर है?"
7. "मेरे कीबोर्ड सेटिंग्स हर बार मेरे पीसी को रीस्टार्ट करने पर रीसेट हो जाती थीं। मैंने पता लगाया कि यह एक सॉफ्टवेयर कॉन्फ्लिक्ट के कारण था और फर्मवेयर अपडेट करके इसे सुलझा दिया। बस पूछना चाहता था कि क्या कोई नए अपडेट आने वाले हैं?"
8. "मेरे कीबोर्ड सॉफ्टवेयर मेरी कॉन्फ़िगरेशन को सेव नहीं कर रहे हैं, और यह अब परेशान करने लगा है। मेरे सेटिंग्स को स्थायी रूप से सेव करने के लिए क्या किया जा सकता है?"
### Japanese
1. "最近、あなたのRGBゲーミングキーボードを購入し、カスタマイズ可能なライティング機能が大好きです！各ゲームごとに異なるプロファイルを設定する方法を教えていただけますか？"
2. "新しく購入したゲーミングキーボードのマクロキーを使おうとしていますが、入力が認識されません。何が問題か教えていただけますか？"
3. "あなたのゲーミングキーボードを購入しようと考えていますが、キースイッチの種類について知りたいです。どのようなオプションがあり、その主な違いは何ですか？"
4. "キーボードのスペースバーが少しきしむようになりました。ただし、クイックスタートガイドが非常に役立ち、潤滑のヒントに従って簡単に修理できました。ただ、知っておいてほしいと思いました！"
5. "新しいゲーミングキーボードが購入後1週間で動作しなくなりました。どのキーも反応せず、ライトも点きません。できるだけ早く解決策や交換をお願いします。"
6. "ゲーミングキーボードのキーの文字が数ヶ月の使用後に消え始めました。これは保証に含まれますか？"
7. "キーボードの設定がPCを再起動するたびにリセットされていました。ソフトウェアの競合が原因であることがわかり、ファームウェアを更新して解決しました。新しいアップデートが近々来るのか教えていただけますか？"
8. "キーボードソフトウェアが設定を保存しないため、非常に困っています。設定を永続的に保存するために何ができますか？"
Coding tasks
Mistral Large 2 has been trained on over 80 coding languages, including popular ones like Python, Java, C, C++, JavaScript, and Bash, as well as more specialized languages such as Swift and Fortran. This comprehensive language support empowers developers to tackle a wide range of coding tasks and projects across various domains and platforms. Whether you’re working on web development, mobile applications, scientific computing, or system programming, Mistral Large 2 can assist you with code generation, debugging, refactoring, and other coding-related tasks. For example, the following code requests the model to generate a Python function:
prompt = """
Write a Python function called palindrome_prime_finder that finds all prime numbers within a given range that are also palindromes when written in base 10 (decimal system).
"""

inference_config = {"temperature": 0.0, "maxTokens": 1000, "topP": 0.1}

response = converse(prompt, inference_config)
We get the following response:
Certainly! Below is a Python function called `palindrome_prime_finder` that finds all prime numbers within a given range that are also palindromes when written in base 10.
```python
def is_prime(n):
    """Check if a number is prime."""
    if n <= 1:
        return False
    if n <= 3:
        return True
    if n % 2 == 0 or n % 3 == 0:
        return False
    i = 5
    while i * i <= n:
        if n % i == 0 or n % (i + 2) == 0:
            return False
        i += 6
    return True

def is_palindrome(n):
    """Check if a number is a palindrome."""
    return str(n) == str(n)[::-1]

def palindrome_prime_finder(start, end):
    """Find all prime numbers within a given range that are also palindromes."""
    palindrome_primes = []
    for num in range(start, end + 1):
        if is_prime(num) and is_palindrome(num):
            palindrome_primes.append(num)
    return palindrome_primes

# Example usage:
start_range = 1
end_range = 1000
print(palindrome_prime_finder(start_range, end_range))
```
### Explanation:
1. **is_prime(n)**: This function checks if a number `n` is prime. It uses a basic primality test that eliminates even numbers and numbers divisible by 3 early, then checks divisibility up to the square root of `n`.
2. **is_palindrome(n)**: This function checks if a number `n` is a palindrome by converting it to a string and comparing it to its reverse.
3. **palindrome_prime_finder(start, end)**: This function iterates through all numbers in the given range `[start, end]`. For each number, it checks if it is both a prime and a palindrome. If both conditions are met, the number is added to the list `palindrome_primes`.
### Example Usage:
The example usage finds all palindromic prime numbers between 1 and 1000. You can adjust the `start_range` and `end_range` variables to find palindromic primes in any desired range.
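As a quick local sanity check of the generated code (restated compactly here so the snippet runs on its own), the function's output can be verified against the known palindromic primes:

```python
# Compact restatement of the generated functions for a self-contained check.
def is_prime(n):
    """Check primality by trial division up to the square root."""
    if n < 2:
        return False
    return all(n % i for i in range(2, int(n**0.5) + 1))

def is_palindrome(n):
    """A number is a palindrome if its decimal string reads the same reversed."""
    return str(n) == str(n)[::-1]

def palindrome_prime_finder(start, end):
    """Collect numbers in [start, end] that are both prime and palindromic."""
    return [n for n in range(start, end + 1) if is_prime(n) and is_palindrome(n)]

# The palindromic primes up to 100 are exactly 2, 3, 5, 7, and 11.
assert palindrome_prime_finder(1, 100) == [2, 3, 5, 7, 11]
# 101 is the smallest three-digit palindromic prime.
assert 101 in palindrome_prime_finder(1, 1000)
```

Running checks like this on model-generated code before adopting it is a good habit, since the model can occasionally produce subtly incorrect implementations.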
Conclusion
Mistral AI’s Mistral Large 2 FM is now available on Amazon Bedrock in the US West (Oregon) Region. To get started with Mistral Large 2 in Amazon Bedrock, visit the Amazon Bedrock console.
Interested in diving deeper? Check out the Mistral-on-AWS repo. For more information about Mistral AI on Amazon Bedrock, refer to Mistral AI models now available on Amazon Bedrock.
About the Authors
Niithiyn Vijeaswaran is a Solutions Architect at AWS. His area of focus is generative AI and AWS AI Accelerators. He holds a Bachelor’s degree in Computer Science and Bioinformatics. Niithiyn works closely with the Generative AI GTM team to enable AWS customers on multiple fronts and accelerate their adoption of generative AI. He’s an avid fan of the Dallas Mavericks and enjoys collecting sneakers.
Armando Diaz is a Solutions Architect at AWS. He focuses on generative AI, AI/ML, and Data Analytics. At AWS, Armando helps customers integrate cutting-edge generative AI capabilities into their systems, fostering innovation and competitive advantage. When he's not at work, he enjoys spending time with his wife and family, hiking, and traveling the world.
Preston Tuggle is a Sr. Specialist Solutions Architect working on generative AI.