
    Amazon Bedrock Prompt Management is now available in GA

    November 7, 2024

    Today we are announcing the general availability of Amazon Bedrock Prompt Management, with new features that provide enhanced options for configuring your prompts and enabling seamless integration for invoking them in your generative AI applications.

    Amazon Bedrock Prompt Management simplifies the creation, evaluation, versioning, and sharing of prompts to help developers and prompt engineers get better responses from foundation models (FMs) for their use cases. In this post, we explore the key capabilities of Amazon Bedrock Prompt Management and show examples of how to use these tools to help optimize prompt performance and outputs for your specific use cases.

    New features in Amazon Bedrock Prompt Management

    Amazon Bedrock Prompt Management offers new capabilities that simplify the process of building generative AI applications:

    • Structured prompts – Define system instructions, tools, and additional messages when building your prompts
    • Converse and InvokeModel API integration – Invoke your cataloged prompts directly from the Amazon Bedrock Converse and InvokeModel API calls

    To showcase the new additions, let’s walk through an example of building a prompt that summarizes financial documents.

    Create a new prompt

    Complete the following steps to create a new prompt:

    1. On the Amazon Bedrock console, in the navigation pane, under Builder tools, choose Prompt management.
    2. Choose Create prompt.
    3. Provide a name and description, and choose Create.

    Build the prompt

    Use the prompt builder to customize your prompt:

    1. For System instructions, define the model’s role. For this example, we enter the following:
      You are an expert financial analyst with years of experience in summarizing complex financial documents. Your task is to provide clear, concise, and accurate summaries of financial reports.
    2. Add the text prompt in the User message box.

    You can create variables by enclosing a name with double curly braces. You can later pass values for these variables at invocation time, which are injected into your prompt template. For this post, we use the following prompt:

    Summarize the following financial document for {{company_name}} with ticker symbol {{ticker_symbol}}:
    Please provide a brief summary that includes
    1.	Overall financial performance
    2.	Key numbers (revenue, profit, etc.)
    3.	Important changes or trends
    4.	Main points from each section
    5.	Any future outlook mentioned
    6.	Current Stock price
    Keep it concise and easy to understand. Use bullet points if needed.
    Document content: {{document_content}}
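To illustrate how the double-curly-brace placeholders are filled in at invocation time, here is a minimal local sketch of the substitution. This is not Bedrock's implementation (the service performs the substitution server-side), and the variable values are hypothetical:

```python
import re

# A shortened version of the prompt template above
template = (
    "Summarize the following financial document for {{company_name}} "
    "with ticker symbol {{ticker_symbol}}:\n"
    "Document content: {{document_content}}"
)

def fill_template(template: str, variables: dict) -> str:
    """Replace each {{name}} placeholder with its value from the variables dict."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: variables[m.group(1)],
        template,
    )

# Hypothetical example values
prompt = fill_template(template, {
    "company_name": "AnyCompany",
    "ticker_symbol": "ANY",
    "document_content": "Q3 revenue grew 12% year over year.",
})
print(prompt)
```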

    3. Configure tools in the Tools setting section for function calling.

    You can define tools with names, descriptions, and input schemas to enable the model to interact with external functions and expand its capabilities. Provide a JSON schema that includes the tool information.

    When using function calling, an LLM doesn’t directly use tools; instead, it indicates the tool and parameters needed to use it. Users must implement the logic to invoke tools based on the model’s requests and feed results back to the model. Refer to Use a tool to complete an Amazon Bedrock model response to learn more.

    4. Choose Save to save your settings.
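To make the schema format concrete, the following is a sketch of a tool definition in the shape the Converse API expects; the get_stock_price tool, its description, and its fields are hypothetical illustrations, not part of this walkthrough:

```python
# Sketch of a Converse API tool configuration. The tool name and
# input schema below are hypothetical examples.
tool_config = {
    "tools": [
        {
            "toolSpec": {
                "name": "get_stock_price",
                "description": "Return the current stock price for a ticker symbol.",
                "inputSchema": {
                    "json": {
                        "type": "object",
                        "properties": {
                            "ticker_symbol": {
                                "type": "string",
                                "description": "The stock ticker symbol, e.g. AMZN.",
                            }
                        },
                        "required": ["ticker_symbol"],
                    }
                },
            }
        }
    ]
}
print(tool_config["tools"][0]["toolSpec"]["name"])
```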

    Compare prompt variants

    You can create and compare multiple versions of your prompt to find the best one for your use case. This process is manual and customizable.

    1. Choose Compare variants.
    2. The original variant is already populated. You can manually add new variants by specifying the number you want to create.
    3. For each new variant, you can customize the user message, system instruction, tools configuration, and additional messages.
    4. You can create different variants for different models. Choose Select model to choose the specific FM for testing each variant.
    5. Choose Run all to compare outputs from all prompt variants across the selected models.
    6. If a variant performs better than the original, you can choose Replace original prompt to update your prompt.
    7. On the Prompt builder page, choose Create version to save the updated prompt.

    This approach allows you to fine-tune your prompts for specific models or use cases and makes it straightforward to test and improve your results.

    Invoke the prompt

    To invoke the prompt from your applications, you can now include the prompt identifier and version as part of the Amazon Bedrock Converse API call. The following code is an example using the AWS SDK for Python (Boto3):

    import boto3
    
    # Set up the Bedrock Runtime client
    bedrock = boto3.client('bedrock-runtime')
    
    # Example API call: the model ID is the prompt's ARN, and the
    # prompt variables are passed as a dictionary
    response = bedrock.converse(
        modelId='<<insert prompt arn>>',
        promptVariables={
            'company_name': {'text': '<<insert company name>>'},
            'ticker_symbol': {'text': '<<insert ticker symbol>>'},
            'document_content': {'text': '<<insert document content>>'},
        },
    )
    
    # The Converse API returns a parsed dictionary; print the model's reply
    print(response['output']['message']['content'][0]['text'])

    We passed the prompt's Amazon Resource Name (ARN) as the model ID and the prompt variables as a separate parameter, and Amazon Bedrock loads the specified prompt version directly from the prompt management library to run the invocation, without added latency. This approach simplifies the workflow by enabling direct prompt invocation through the Converse or InvokeModel APIs, eliminating manual retrieval and formatting. It also allows teams to reuse and share prompts and track different versions.

    For more information on using these features, including necessary permissions, see the documentation.

    You can also invoke the prompts in other ways:

    • Use the Amazon Bedrock prompt flow. Include any prompt version from Amazon Bedrock Prompt Management using the prompt node in your flow.
    • Use the Amazon Bedrock SDK and the get_prompt API operation. This allows you to integrate the prompt information into your application code or, if preferred, to use open source frameworks such as LangChain or LlamaIndex. Refer to Integrate Amazon Bedrock Prompt Management in LangChain applications for more details.
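As a sketch of the SDK route, the request below shows the shape of a get_prompt call; the prompt identifier is a placeholder, and the call itself is left commented out because it requires AWS credentials:

```python
# Build the request parameters for the Bedrock Agent get_prompt operation.
# The prompt identifier below is a placeholder, not a real value.
params = {
    "promptIdentifier": "<<insert prompt id or arn>>",
    "promptVersion": "1",  # optional; omit it to retrieve the draft version
}

# With AWS credentials configured, the call would be:
#   client = boto3.client("bedrock-agent")
#   response = client.get_prompt(**params)
# The response includes the prompt's variants, each with its template
# text, inference configuration, and model ID.
print(params["promptVersion"])
```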

    Now available

    Amazon Bedrock Prompt Management is now generally available in the US East (N. Virginia), US West (Oregon), Europe (Paris), Europe (Ireland), Europe (Frankfurt), Europe (London), South America (São Paulo), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), and Canada (Central) AWS Regions. For pricing information, see Amazon Bedrock Pricing.

    Conclusion

    The general availability of Amazon Bedrock Prompt Management introduces powerful capabilities that enhance the development of generative AI applications. By providing a centralized platform to create, customize, and manage prompts, developers can streamline their workflows and work towards improving prompt performance. The ability to define system instructions, configure tools, and compare prompt variants empowers teams to craft effective prompts tailored to their specific use cases. With seamless integration into the Amazon Bedrock Converse API and support for popular frameworks, organizations can now effortlessly build and deploy AI solutions that are more likely to generate relevant output.


    About the Authors

    Dani Mitchell is a Generative AI Specialist Solutions Architect at AWS. He is focused on computer vision use cases and helping accelerate EMEA enterprises on their ML and generative AI journeys with Amazon SageMaker and Amazon Bedrock.

    Ignacio Sánchez is a Spatial and AI/ML Specialist Solutions Architect at AWS. He combines his skills in extended reality and AI to help businesses improve how people interact with technology, making it accessible and more enjoyable for end-users.

