
    Harnessing the Power of AWS Bedrock through CloudFormation

    August 20, 2024

    The rapid advancement of artificial intelligence (AI) has led to the development of foundational models that form the bedrock of numerous AI applications. AWS Bedrock is Amazon Web Services’ comprehensive solution that leverages these models to provide robust AI and machine learning (ML) capabilities. This blog delves into the essentials of AI foundational models in AWS Bedrock, highlighting their significance and applications.

    What are AI Foundational Models?

    AI foundational models are pre-trained models designed to serve as the basis for various AI applications. These models are trained on extensive datasets and can be fine-tuned for specific tasks, such as natural language processing (NLP), image recognition, and more. The primary advantage of using these models is that they significantly reduce the time and computational resources required to develop AI applications from scratch.

    AWS Bedrock: A Comprehensive AI Solution

    AWS Bedrock provides a suite of foundational models that are easily accessible and deployable. These models are integrated into the AWS ecosystem, allowing users to leverage the power of AWS’s infrastructure and services. AWS Bedrock offers several key benefits:

    Scalability: AWS Bedrock models can scale to meet the demands of large and complex applications. The AWS infrastructure ensures that models can handle high volumes of data and traffic without compromising performance.
    Ease of Use: With AWS Bedrock, users can access pre-trained models via simple API calls. This ease of use allows developers to integrate AI capabilities into their applications quickly and efficiently.
    Cost-Effectiveness: Utilizing pre-trained models reduces the need for extensive computational resources and time-consuming training processes, leading to cost savings.

    Key Components of AWS Bedrock

    AWS Bedrock comprises several key components designed to facilitate the development and deployment of AI applications:

    Pre-trained Models: These models are the cornerstone of AWS Bedrock. They are trained on vast datasets and optimized for performance. Users can select models tailored to specific tasks, such as text analysis, image classification, and more.
    Model Customization: AWS Bedrock allows users to fine-tune pre-trained models to meet their specific needs. This customization ensures that the models can achieve high accuracy for specialized applications.
    Integration with AWS Services: Bedrock models seamlessly integrate with other AWS services, such as AWS Lambda, Amazon S3, and Amazon SageMaker. This integration simplifies the deployment and management of AI applications.

Amazon Bedrock supports a wide range of foundation models from industry-leading providers, including Amazon, Anthropic, AI21 Labs, Cohere, Meta, Mistral AI, and Stability AI, so you can choose the model best suited to your goals.

Note: Before Bedrock can be used, an account user with the appropriate IAM permissions must manually enable access to the desired Bedrock foundation models (FMs). Once model access is granted in a particular region, those models can be used to build and scale applications.
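Once access has been granted, the available models can be confirmed programmatically. The sketch below assumes the response shape of the Bedrock control-plane `ListFoundationModels` API (a `modelSummaries` list whose entries carry `modelId` and `providerName`); the sample data is illustrative, not a real API response.

```python
# Sketch: confirm which foundation models are available after access is granted.
# In a real environment the summaries would come from the Bedrock control plane:
#   import boto3
#   summaries = boto3.client("bedrock").list_foundation_models()["modelSummaries"]
# Here we use an illustrative sample shaped like that response.

def models_by_provider(summaries, provider):
    """Return the model IDs offered by the given provider, in listing order."""
    return [s["modelId"] for s in summaries if s["providerName"] == provider]

sample_summaries = [
    {"modelId": "amazon.titan-text-premier-v1:0", "providerName": "Amazon"},
    {"modelId": "amazon.titan-embed-text-v1", "providerName": "Amazon"},
    {"modelId": "anthropic.claude-v2", "providerName": "Anthropic"},
]

print(models_by_provider(sample_summaries, "Amazon"))
# → ['amazon.titan-text-premier-v1:0', 'amazon.titan-embed-text-v1']
```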

Using AWS Bedrock requires specific IAM permissions so that users and applications can interact with the service securely and effectively. The permission categories typically needed cover basic Bedrock access, model training and deployment, inference and usage, data management, compute resource management, security and identity management, and monitoring and logging.

The cost parameters for AWS Bedrock include compute, storage, data transfer, and model usage, the last of which is billed per input and output token. Understanding these parameters helps estimate the cost of deploying and running AI models on Bedrock. For precise cost calculations, AWS provides the AWS Pricing Calculator and detailed pricing information on its official website.
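To make the token-based component concrete, here is a back-of-the-envelope estimate. The per-token rates are placeholders invented for illustration, not real Bedrock prices; always consult the AWS Pricing Calculator for current figures.

```python
# Rough monthly cost sketch for on-demand text-model usage.
# The per-1K-token rates below are PLACEHOLDERS, not actual Bedrock prices.

INPUT_RATE_PER_1K = 0.0005   # hypothetical $ per 1,000 input tokens
OUTPUT_RATE_PER_1K = 0.0015  # hypothetical $ per 1,000 output tokens

def monthly_token_cost(input_tokens, output_tokens):
    """Estimate the model-usage cost for a month's worth of tokens."""
    return (input_tokens / 1000) * INPUT_RATE_PER_1K \
         + (output_tokens / 1000) * OUTPUT_RATE_PER_1K

# e.g. 10M input tokens and 2M output tokens in a month:
print(f"${monthly_token_cost(10_000_000, 2_000_000):.2f}")
```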

Let’s implement one of these foundation models (for example, Titan) using the AWS CloudFormation service.

    Amazon Titan in Amazon Bedrock

The Amazon Titan series of models is exclusive to Amazon Bedrock and benefits from Amazon’s 25 years of innovation in AI and machine learning. Through a fully managed API, the Titan foundation models (FMs) offer a range of high-performance options for text, image, and multimodal workloads. Built by AWS and pre-trained on extensive datasets, Titan models are powerful and versatile across a wide range of applications while promoting responsible AI usage. They can be used as-is or customized privately with your own data.

Titan models fall into three categories: embeddings, text generation, and image generation. Here, we will focus on the Amazon Titan text generation models: Amazon Titan Text G1 – Premier, Amazon Titan Text G1 – Express, and Amazon Titan Text G1 – Lite. We will implement “Titan Text G1 – Premier”.

    Amazon Titan Text G1 – Premier

Amazon Titan Text G1 – Premier is a large language model (LLM) for text generation. It integrates with Amazon Bedrock Knowledge Bases and Amazon Bedrock Agents, is well suited to tasks such as summarization, code generation, and open-ended or context-based question answering, and supports custom fine-tuning in preview.

    ID – amazon.titan-text-premier-v1:0

    Max tokens – 32,000

    Language – English only

Use cases – 32k context window, context-based question answering, open-ended text generation, Knowledge Base support, Agents support, chain of thought, rewriting, brainstorming, summarization, code generation, table creation, data formatting, paraphrasing, extraction, Q&A, chat, model customization (preview).

    Inference parameters – Temperature, Top P (Default: Temperature = 0.7, Top P = 0.9)
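These inference parameters are passed in the request body alongside the prompt. The sketch below builds such a body following the Titan text models’ documented `inputText`/`textGenerationConfig` schema, using the default temperature and top-p values above; the `max_tokens` value is an illustrative choice.

```python
import json

# Build the JSON request body for a Titan text-generation invocation,
# using the documented defaults (temperature 0.7, top-p 0.9).
def build_titan_body(prompt, temperature=0.7, top_p=0.9, max_tokens=512):
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "temperature": temperature,
            "topP": top_p,
            "stopSequences": [],
        },
    })

print(build_titan_body("Hello, how are you?"))
```

This is the string that would be passed as the `body` argument of a `bedrock-runtime` `invoke_model` call.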

    While implementing this using CloudFormation, we will first need to create a stack for it. Creating a stack template for AWS CloudFormation involves defining your AWS infrastructure and resources using either JSON or YAML format.

Let’s implement a Python AWS Lambda function that uses AWS Bedrock’s Titan model to generate text from an input prompt, provisioned through a YAML-based CloudFormation template.

The CloudFormation template defines resources for an IAM role and a Lambda function that invokes a model from AWS Bedrock, providing text generation capabilities through the specified Titan model.
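A minimal sketch of such a template is shown below. The resource names, timeout, and inline handler are illustrative choices, not the article’s exact template; the foundation-model ARN format and the `bedrock:InvokeModel` / `bedrock:ListFoundationModels` actions follow AWS’s documented conventions.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Sketch of an IAM role and Lambda function invoking a Bedrock Titan model.

Resources:
  BedrockInvokeRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      Policies:
        - PolicyName: BedrockInvokePolicy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action: bedrock:InvokeModel
                Resource: !Sub arn:aws:bedrock:${AWS::Region}::foundation-model/amazon.titan-text-premier-v1:0
              - Effect: Allow
                Action: bedrock:ListFoundationModels
                Resource: '*'

  TitanTextFunction:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.12
      Handler: index.lambda_handler
      Role: !GetAtt BedrockInvokeRole.Arn
      Timeout: 60
      Code:
        ZipFile: |
          import json, boto3
          def lambda_handler(event, context):
              client = boto3.client("bedrock-runtime")
              resp = client.invoke_model(
                  modelId="amazon.titan-text-premier-v1:0",
                  body=json.dumps({"inputText": event.get("prompt", "Hello")}),
              )
              result = json.loads(resp["body"].read())
              return {"statusCode": 200,
                      "body": result["results"][0]["outputText"]}
```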

    IAM Role

Allows Lambda to assume the role and invoke the Bedrock model.
Grants permission to invoke the specific Bedrock model (amazon.titan-text-premier-v1:0) and to list available models.

    Lambda Function

Python function that uses Boto3 to invoke the Bedrock model amazon.titan-text-premier-v1:0.
Sends a JSON payload to the model with a specified text generation configuration and returns the model’s response as the HTTP response.

If we open the deployed Lambda function in the console, its “index.py” file contains the handler code.

    This AWS Lambda function interacts with the AWS Bedrock service to generate text based on an input prompt. It creates a client for the Bedrock runtime, invokes a specific text generation model with given configurations, processes the response to extract the generated text, and returns this text in an HTTP response. This setup allows for the automation of text generation tasks using AWS Bedrock’s capabilities.
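The handler described above can be sketched as follows. It is a reconstruction under stated assumptions, not the article’s exact `index.py`: the response parsing (a `results` list with `outputText`) follows the Titan text models’ documented response format, and the default prompt and generation config are illustrative.

```python
import json

# Sketch of the "index.py" Lambda handler described above.

def extract_output_text(response_body):
    """Pull the generated text out of a parsed Titan response body (a dict)."""
    return response_body["results"][0]["outputText"]

def lambda_handler(event, context):
    import boto3  # imported lazily so the module loads without the AWS SDK
    client = boto3.client("bedrock-runtime")
    body = json.dumps({
        "inputText": event.get("prompt", "Hello, how are you?"),
        "textGenerationConfig": {
            "maxTokenCount": 512,  # illustrative values
            "temperature": 0.7,
            "topP": 0.9,
        },
    })
    response = client.invoke_model(
        modelId="amazon.titan-text-premier-v1:0", body=body)
    payload = json.loads(response["body"].read())
    return {"statusCode": 200, "body": extract_output_text(payload)}
```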

    Execution Results

As seen in the Response window, for the input “Hello, how are you?” the function returned the output text “Hello! I’m doing well, thank you. How can I assist you today?”.

In this manner, AWS Bedrock’s Amazon Titan Text G1 – Premier model can handle a wide range of natural language processing (NLP) tasks thanks to its advanced capabilities and large context window.
