
    Automate Amazon RDS for PostgreSQL major or minor version upgrade using AWS Systems Manager and Amazon EC2

    May 29, 2025

    Managing the lifecycle of your PostgreSQL database is essential for maintaining optimal performance, security, and feature access. Amazon Relational Database Service (Amazon RDS) for PostgreSQL simplifies operations by allowing you to perform major or minor version upgrades with the push of a button. However, you are still required to plan these maintenance activities and test the new versions thoroughly. A number of things can go wrong during an upgrade, and managing upgrades across multiple environments in a large-scale fleet of thousands of RDS for PostgreSQL instances can be particularly challenging. Using the AWS Management Console to upgrade RDS instances can introduce extended downtime and potential human errors, particularly when managing multiple instances, both of which can significantly disrupt application stability and business operations.

    Automation can help address these challenges. By using AWS Command Line Interface (AWS CLI) commands within a Unix shell script, you can automate the upgrade process, including prerequisite checks and instance maintenance updates. To scale this approach for multiple instances, you can integrate with AWS Systems Manager. Using RDS resource tags, you can manage upgrades across an entire fleet of RDS instances and control which environments should be upgraded.
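    As a minimal illustration of this tag-driven approach (not the solution's actual script), the following shell sketch lists RDS for PostgreSQL instances and prints the ones tagged UpgradeDB=Y; it assumes the AWS CLI is configured with permissions to describe instances and list tags.

      #!/bin/bash
      # Hedged sketch: identify upgrade candidates by the UpgradeDB=Y tag.
      for DB_ID in $(aws rds describe-db-instances \
        --query 'DBInstances[].DBInstanceIdentifier' --output text); do
        DB_ARN=$(aws rds describe-db-instances --db-instance-identifier "$DB_ID" \
          --query 'DBInstances[0].DBInstanceArn' --output text)
        TAG_VALUE=$(aws rds list-tags-for-resource --resource-name "$DB_ARN" \
          --query "TagList[?Key=='UpgradeDB'].Value" --output text)
        if [ "$TAG_VALUE" = "Y" ]; then
          echo "Upgrade candidate: $DB_ID"
        fi
      done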

    In this post, we guide you through setting up automation for pre-upgrade checks and upgrading a fleet of RDS instances.

    Solution overview

    In this solution, we use Systems Manager to automate the Amazon RDS upgrade job. We tag all candidate RDS instances so that Systems Manager can identify them. In the Systems Manager automation document, we define a Unix shell script that calls psql and AWS CLI commands from an Amazon Elastic Compute Cloud (Amazon EC2) instance. To clearly separate responsibilities, we built two reusable modules: PREUPGRADE and UPGRADE. You can choose PREUPGRADE to perform pre-upgrade activities, such as readiness checks and database preparation to identify potential issues, or choose UPGRADE to execute the actual database upgrade.
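    As a rough sketch only (the actual logic lives in the rds_psql_patch.sh script in the repository), such a script might dispatch between the two modules as follows; the function names here are hypothetical placeholders:

      #!/bin/bash
      # Hypothetical structure: the automation document passes PREUPGRADE or UPGRADE.
      MODE="$1"
      case "$MODE" in
        PREUPGRADE) run_preupgrade_checks ;;  # readiness checks, VACUUM, replication slot review
        UPGRADE)    run_upgrade ;;            # snapshot, engine version change, post-upgrade tasks
        *) echo "Usage: $0 {PREUPGRADE|UPGRADE}" >&2; exit 1 ;;
      esac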

    We have added logging and notification features to make the upgrade process easier to monitor and troubleshoot. At job completion, the log files are sent to a predefined Amazon Simple Storage Service (Amazon S3) bucket, and users who have subscribed to the Amazon Simple Notification Service (Amazon SNS) topic are notified by email of the job status.
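    For example, the logging and notification steps can be expressed with standard AWS CLI calls along these lines (placeholders only; the solution's script defines the exact file names and destinations):

      # Upload the job's log files to the predefined S3 bucket...
      aws s3 cp "${LOGS_DIR}/" "s3://<s3bucket-name>/logs/" --recursive
      # ...then notify subscribers of the SNS topic about the job status.
      aws sns publish \
        --topic-arn "<sns-topic-arn>" \
        --subject "RDS for PostgreSQL upgrade job status" \
        --message "Upgrade job completed. Logs: s3://<s3bucket-name>/logs/"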

    In this post, we use major version upgrades to demonstrate our approach, though the same solution can be applied to automate minor version upgrades across your entire fleet during scheduled maintenance windows. Throughout this post, we use placeholder values – make sure you replace these with your actual resource ARNs (Amazon Resource Names) or IDs when implementing the solution.

    The solution has been tested and validated in a single VPC and region environment for one AWS account. Organizations requiring multi-VPC or cross-account/cross-region deployments can extend this architecture, though additional considerations for networking, security (including IAM roles and permissions), replication, and data consistency will need to be addressed.

    The following diagram illustrates the high-level architecture and workflow:

    The workflow consists of the following steps:

    1. The user signs in to the Systems Manager console and runs the automation job.
    2. The job connects to Amazon S3 and downloads the upgrade shell script to the EC2 instance.
    3. The job connects to the EC2 instance and identifies RDS for PostgreSQL instances based on the input tag key/value pair. In our example, we use the tag key UpgradeDB with the value Y. For each RDS for PostgreSQL instance identified, the job also modifies the instance to publish the PostgreSQL log and upgrade log to Amazon CloudWatch Logs if they are not already configured.
    4. The job uses RDS instance tags to locate and retrieve credentials from AWS Secrets Manager (see the AWS CLI sketch after this list). Each RDS instance must have a tag where:
      • Tag key: rds-maintenance-user-secret
      • Tag value: <DB instance ID>-maintenance-user-secret
    5. The job executes either pre-upgrade checks or the actual upgrade based on user input.
    6. The job pushes log files to Amazon S3.
    7. The job sends an email notification.
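    The following is a hedged AWS CLI sketch of steps 3 and 4 (the automation's shell script performs the equivalent logic); the identifiers are placeholders:

      # Step 3: publish the PostgreSQL and upgrade logs to CloudWatch Logs if needed.
      aws rds modify-db-instance \
        --db-instance-identifier <DB-instance-ID> \
        --cloudwatch-logs-export-configuration '{"EnableLogTypes":["postgresql","upgrade"]}' \
        --apply-immediately

      # Step 4: read the secret name from the instance tag and fetch the credentials.
      SECRET_NAME=$(aws rds list-tags-for-resource \
        --resource-name <RDS-instance-ARN> \
        --query "TagList[?Key=='rds-maintenance-user-secret'].Value" --output text)
      aws secretsmanager get-secret-value --secret-id "$SECRET_NAME" \
        --query SecretString --output text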

    Prerequisites

    To implement this solution, complete the following prerequisite steps:

    1. Familiarize yourself with the Amazon RDS upgrade process by referring to How to perform a major version upgrade for RDS for PostgreSQL.
    2. Have an AWS Identity and Access Management (IAM) user and appropriate permissions to manage Amazon RDS, Amazon EC2, Amazon S3, Amazon SNS, AWS Secrets Manager, and AWS Systems Manager.
    3. Clone the GitHub repository to your local device:
      git clone https://github.com/aws-samples/rds-postgres-upgrade.git
    4. Prepare RDS PostgreSQL instance:
      1. Create a database user with the proper privileges using psql.
        CREATE USER rds_maintenance_user WITH PASSWORD 'mypassword';
        GRANT rds_superuser TO rds_maintenance_user;
      2. Create a secret in AWS Secrets Manager for the database user.
        The secret name must follow this format: <DB-Instance-ID>-maintenance-user-secret
        aws secretsmanager create-secret \
        --name "<DB-instance-ID>-maintenance-user-secret" \
        --description "Maintenance user credentials for PostgreSQL RDS instance" \
        --secret-string '{"username":"rds_maintenance_user","password":"mypassword"}'
      3. Add required tags to your RDS instance
        aws rds add-tags-to-resource \
        --resource-name <RDS-instance-ARN> \
        --tags Key=UpgradeDB,Value=Y \
        Key=rds-maintenance-user-secret,Value=<DB-instance-ID>-maintenance-user-secret

        Note: replace <DB-instance-ID> and <RDS-instance-ARN> with the actual values.
        If you have many RDS instances, you can automate the tagging with a Unix shell and AWS CLI-based script, a Python script, or a CI/CD pipeline (see the bulk-tagging sketch just before the Implementation steps section).

    5. Create an S3 bucket to store the shell script and log files.
    6. Create an SNS topic and subscription for email notifications.
    7. Create the required IAM policy and IAM role. Attach policies to the role.
      1. Create a custom policy using AWS CLI:
        aws iam create-policy \
        --policy-name <RDSUpgradeInstancePolicyname> \
        --policy-document file://rdsupgradeinstancepolicy.json

        Where rdsupgradeinstancepolicy.json is a JSON file saved in the current directory with the following content:

        {
            "Statement": [
                {
                    "Action": [
                        "rds:DescribeDBEngineVersions",
                        "rds:DescribeOrderableDBInstanceOptions"
                    ],
                    "Effect": "Allow",
                    "Resource": "*"
                },
                {
                    "Action": [
                        "rds:DescribeDBInstances",
                        "rds:ModifyDBInstance",
                        "rds:RebootDBInstance",
                        "rds:CreateDBSnapshot",
                        "rds:DescribeDBSnapshots",
                        "rds:DescribeDBEngineVersions",
                        "rds:DescribeOrderableDBInstanceOptions",
                        "rds:DescribeDBParameterGroups",
                        "rds:CreateDBParameterGroup",
                        "rds:ModifyDBParameterGroup",
                        "rds:DescribeDBParameters",
                        "rds:DescribePendingMaintenanceActions",
                        "rds:ApplyPendingMaintenanceAction",
                        "rds:AddTagsToResource",
                        "rds:ListTagsForResource"
                    ],
                    "Effect": "Allow",
                    "Resource": [
                        "arn:aws:rds:<aws-region>:<account-id>:db:*",
                        "arn:aws:rds:<aws-region>:<account-id>:pg:*",
                        "arn:aws:rds:<aws-region>:<account-id>:snapshot:*"
                    ]
                },
                {
                    "Action": [
                        "s3:PutObject",
                        "s3:GetObject",
                        "s3:ListBucket"
                    ],
                    "Effect": "Allow",
                    "Resource": [
                        "arn:aws:s3:::<s3bucket-name>",
                        "arn:aws:s3:::<s3bucket-name>/*"
                    ]
                },
                {
                    "Action": [
                        "sns:Publish"
                    ],
                    "Effect": "Allow",
                    "Resource": "<sns-topic-arn>"
                },
                {
                    "Action": [
                        "secretsmanager:GetSecretValue"
                    ],
                    "Effect": "Allow",
                    "Resource": [
                        "arn:aws:secretsmanager:<aws-region>:<account-id>:secret:rds*"
                    ]
                }
            ],
            "Version": "2012-10-17"
        }
      2. Create an IAM role with trust policy using the AWS CLI:
        aws iam create-role \
        --role-name <InstanceRolename> \
        --assume-role-policy-document file://trust-policy.json

        Where trust-policy.json is a JSON file saved in the current directory with the following content:

        {
            "Statement": [
                {
                    "Action": "sts:AssumeRole",
                    "Effect": "Allow",
                    "Principal": {
                        "Service": "ec2.amazonaws.com"
                    }
                }
            ],
            "Version": "2012-10-17"
        }
      3. Attach the AmazonSSMManagedInstanceCore AWS managed policy to the role:
        aws iam attach-role-policy \
        --role-name <InstanceRolename> \
        --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
      4. Attach the custom policy created in this section to the role:
        aws iam attach-role-policy \
        --role-name <InstanceRolename> \
        --policy-arn <arn-of-custom-policy>
    8. Launch a new EC2 instance in the same VPC as your RDS instances. If you have RDS instances that need to be upgraded across different VPCs, you can launch the EC2 instance in its own VPC and then set up VPC peering. For more details, refer to “Connect VPCs using VPC peering”. This EC2 instance hosts and executes the upgrade script and requires the AWS CLI, the PostgreSQL client, bc, and jq (the AWS CLI and jq are pre-installed on the Amazon Linux 2023 AMI). To install the required software on Amazon Linux, use the following commands:
      sudo yum update -y
      sudo yum install -y epel-release
      sudo yum install -y bc jq awscli
      sudo dnf install -y postgresql15
    9. Create an instance profile.
      aws iam create-instance-profile --instance-profile-name <EC2InstanceProfilename>
    10. Add the role created in step 7 to the instance profile.
      aws iam add-role-to-instance-profile \
      --instance-profile-name <EC2InstanceProfilename> \
      --role-name <InstanceRoleName>
    11. Associate the EC2 instance with the instance profile.
      aws ec2 associate-iam-instance-profile \
      --instance-id <EC2instanceID> \
      --iam-instance-profile Name=<EC2InstanceProfile>
    12. Configure the connection between the database and the EC2 instance.
      Allow inbound PostgreSQL traffic (port 5432) from the EC2 instance to your database by updating the RDS security group. For more details, refer to “Manually connect an instance to an RDS database by creating security groups”.
      Note: If you don’t have an existing environment, you can use the included CloudFormation template (create_rds_psql_instance_cfn.yaml) to create the required resources, including the secret for rds_maintenance_user. After you create the stack, first retrieve the passwords from Secrets Manager for both the master user and rds_maintenance_user. Then connect to the RDS instance with psql as the master user:
      psql -h <RDS-endpoint> -p 5432 -d <database-name> -U <masteruser> -W

      Once connected, run the following statements:

      CREATE USER rds_maintenance_user WITH PASSWORD '<retrieved-password-for-rds-maintenance-user>';
      GRANT rds_superuser TO rds_maintenance_user;
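
    If you have many instances to tag (prerequisite step 4), a minimal bulk-tagging sketch might look like the following; it assumes a file named db_list.txt containing one DB instance identifier per line, which is not part of the solution itself:

      # Hedged sketch: apply the UpgradeDB and secret-name tags to a list of instances.
      while read -r DB_ID; do
        DB_ARN=$(aws rds describe-db-instances --db-instance-identifier "$DB_ID" \
          --query 'DBInstances[0].DBInstanceArn' --output text)
        aws rds add-tags-to-resource --resource-name "$DB_ARN" \
          --tags Key=UpgradeDB,Value=Y \
                 Key=rds-maintenance-user-secret,Value="${DB_ID}-maintenance-user-secret"
      done < db_list.txt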

    Implementation steps

    1. Review the README file in the rds-postgres-upgrade directory after you clone the Git repository. It provides complete setup instructions for both existing environments and new deployments. For complete instructions on setting up a new environment, refer to the Testing section of the README.
    2. Upload the shell script to your S3 bucket.
      cd rds-postgres-upgrade
      aws s3 cp rds_psql_patch.sh s3://<your-bucket>/<folder>/
    3. Create an AWS Systems Manager automation document using the provided YAML file as a template. You can create a stack from the CloudFormation console or by executing the following AWS CLI command:
      aws cloudformation create-stack \
      --stack-name create-ssm-automation-document \
      --template-body file://create_ssm_rds_patch_automation_document.yaml \
      --capabilities CAPABILITY_NAMED_IAM

      Note: To create and manage the SSM automation document, you need the following permissions:

      - ssm:CreateDocument
      - ssm:UpdateDocument
      - ssm:DeleteDocument
      - ssm:GetDocument
      - ssm:ListDocuments

      The CloudFormation stack creates two key components: the required IAM role and the Systems Manager automation document. You can find the role ARN and name of the automation document by navigating to the Outputs tab of your CloudFormation stack, as shown in the following screenshot:

      Now you’re ready to execute the automation document from Systems Manager.

    4. On the Systems Manager console, choose Documents under Change management tools in the left navigation pane.
    5. Choose Owned by me.
    6. Choose the document you created in step 3 to open it.
    7. Choose Execute automation.
    8. Leave the Execution mode at its default value of Simple execution.
    9. Enter the required input parameters:
      1. For EC2InstanceId, select the EC2 instance that you configured in the Prerequisites section.
      2. For AssumeIAMRole, leave as default.
      3. For SnsTopicArnEmail, enter the SNS topic ARN for notifications.
      4. For DBTagValue, enter Y.
      5. For S3BucketName, enter the name of the S3 bucket containing the sh script.
      6. For TargetEngineVersion, enter a valid target RDS for PostgreSQL version that you want to upgrade to.
      7. For RunPreUpgradeTasks, choose PREUPGRADE to run pre-upgrade checks or choose UPGRADE to perform the actual upgrade. We strongly recommend running the pre-upgrade checks first, reviewing the logs, and resolving any issues before choosing UPGRADE for the actual upgrade.
      8. For ScriptsDIROnEC2, enter the directory on the EC2 instance where the shell script will be saved.
      9. For DBTagKey, enter UpgradeDB.
      10. For S3ScriptKey, enter the full S3 key (folder path and file name) of the shell script.
    10. Choose Execute. (An equivalent AWS CLI invocation is sketched after this list.)
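    If you prefer to start the automation from the AWS CLI instead of the console, a hedged equivalent looks like the following. The parameter names match those listed above, but verify them (and the document name) against your deployed automation document before running; AssumeIAMRole is omitted so it keeps its default value, and the S3ScriptKey and ScriptsDIROnEC2 values are illustrative placeholders:

      aws ssm start-automation-execution \
        --document-name "<automation-document-name>" \
        --parameters '{
          "EC2InstanceId": ["<EC2instanceID>"],
          "SnsTopicArnEmail": ["<sns-topic-arn>"],
          "DBTagKey": ["UpgradeDB"],
          "DBTagValue": ["Y"],
          "S3BucketName": ["<s3bucket-name>"],
          "S3ScriptKey": ["<folder>/rds_psql_patch.sh"],
          "ScriptsDIROnEC2": ["<scripts-directory-on-ec2>"],
          "TargetEngineVersion": ["<target-engine-version>"],
          "RunPreUpgradeTasks": ["PREUPGRADE"]
        }'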

    The following screenshot shows the input parameters in the demo:

    After submitting your execution job, you will see the status initially set to In progress. If all steps complete successfully for all instances, the status changes to Success. If any RDS database instance fails during the fleet upgrade, the automation job status is marked as Failed. The logs clearly indicate which specific instances failed, allowing you to investigate and retry just those instances.

    To view the upgrade details, choose the link under Step ID. An example of the output is shown in the following screenshot.

    Monitoring and notifications

    The shell script provides comprehensive logging throughout the upgrade process. During the pre-upgrade phase, logs are generated for general pre-upgrade checks, including replication slot status and VACUUM operations. For systems with replication slots, the script adds guidance in the master log. You must remove these replication slots before performing a major version upgrade.
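    For example, you can review the slots with psql and, once you have confirmed a slot is no longer needed, drop it (illustrative statements; follow the guidance in the master log before removing anything):

      -- List replication slots that would block a major version upgrade.
      SELECT slot_name, slot_type, active FROM pg_replication_slots;

      -- Drop a slot after confirming it is safe to remove.
      SELECT pg_drop_replication_slot('<slot_name>');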

    During the upgrade phase, the script logs configuration backups, replication slot status, ANALYZE operations, PostgreSQL extensions, and general upgrade progress. All logs are stored in LOGS_DIR on the EC2 instance and uploaded to S3 upon completion. There is one exception: the database upgrade log. Because the script configures the RDS instances to publish the upgrade log to CloudWatch Logs before the actual upgrade, you will find the database upgrade log in CloudWatch.
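    To review the database upgrade log, you can query CloudWatch Logs directly; the following assumes AWS CLI v2 and the standard RDS log group naming, so confirm the log group name in your account:

      aws logs tail "/aws/rds/instance/<DB-instance-ID>/upgrade" --since 1h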

    Summary of logs for Pre-Upgrade:

    Log File Type | Sample File Name | Directory Path | Frequency | Purpose
    Master Log | PREUPGRADE-master-20250321-21-55-51.log | <script-dir>/logs | Each run | General information on pre-upgrade job tasks
    Pre-upgrade Status Log | PREUPGRADE-status | <script-dir>/logs/<dbname> | Each run | Pre-upgrade job status
    Pre-upgrade Execution Log | PREUPGRADE-20250321-21-51-48.log | <script-dir>/logs/<dbname> | Each run | Detailed view of all pre-upgrade tasks
    Freeze Task Log | PREUPGRADE-run_db_task_freeze-20250321-21-55-52.log | <script-dir>/logs/<dbname> | Each run | Log of the VACUUM FREEZE task
    Replication Slot Log | PREUPGRADE-replication_slot_20250321-21-55-52.log | <script-dir>/logs/<dbname> | Major version upgrade only | Current replication slot status and recommended actions before a major version upgrade

    Note: <script-dir> is the directory where rds_psql_patch.sh is saved.

    Summary of logs for Upgrade:

    Log File Type | Sample File Name | Directory Path | Frequency | Purpose/Error information
    Master Log | UPGRADE-master-20250321-22-16-11.log | <script-dir>/logs | Each run | General information on upgrade tasks
    Upgrade Status Log | UPGRADE-status | <script-dir>/logs/<dbname> | Each run | Upgrade job status
    Upgrade Execution Log | UPGRADE-20250321-22-16-11.log | <script-dir>/logs/<dbname> | Each run | Detailed view of all upgrade tasks
    Current DB Configuration Backup | db_current_config_backup_postgres15-20250321-22-16-12.txt | <script-dir>/logs/<dbname> | Each run | Backup of current DB configuration
    Replication Slot Log | UPGRADE-replication_slot_20250321-22-16-12.log | <script-dir>/logs/<dbname> | Major version upgrade only | Current replication slot status and recommended actions before a major version upgrade
    Extension Update Log | UPGRADE-update_db_extensions_20250321-22-16-12.log | <script-dir>/logs/<dbname> | Each run | Log of PostgreSQL extension updates
    Analyze Task Log | UPGRADE-run_db_task_analyze-20250321-22-16-12.log | <script-dir>/logs/<dbname> | Each run | Log of ANALYZE command execution

    Note: <script-dir> is the directory where rds_psql_patch.sh is saved.

    You can download the logs from the S3 bucket, as shown in the following screenshot.

    The script also sends email notifications through SNS to subscribed users. Here’s an example of the completion notification:

    Clean up

    To remove test resources:

    Empty the S3 bucket through the S3 console, then delete the CloudFormation stack through the CloudFormation console.
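    If you prefer the AWS CLI, a hedged equivalent of the cleanup is shown below (destructive; double-check the bucket and stack names first):

      aws s3 rm s3://<s3bucket-name> --recursive
      aws cloudformation delete-stack --stack-name create-ssm-automation-document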

    Conclusion

    While Amazon RDS simplifies database upgrades with its one-click functionality, managing upgrades across a large fleet of instances can still be challenging. This solution eliminates the need to coordinate and validate each upgrade individually. It streamlines both pre-upgrade checks and upgrade processes for Amazon RDS for PostgreSQL, significantly reducing manual effort and potential errors. With logging and email notifications, it provides real-time visibility and comprehensive tracking. This cost-effective solution ensures logs are readily available for analysis, audits, and compliance purposes.

    Your feedback is highly appreciated. Leave your thoughts and questions in the comments.


    About the Authors

    Rajeev Pardipuram is a Delivery Consultant at Amazon Web Services (AWS) specializing in database technologies and modernization initiatives. With expertise across relational and NoSQL platforms, he helps customers navigate complex data migrations while leveraging Generative AI tools for faster solution delivery. Driven by Customer Obsession and Bias for Action, Rajeev helps organizations optimize their data infrastructure and accelerate their cloud journey.

    Anand Komandooru is a Principal Cloud Architect at AWS. He helps customers build and operate cloud-native applications on AWS cloud. He has over 20 years of experience building software, and his favorite Amazon leadership principle is “Leaders are right a lot.”

    Li Liu is a Principal Cloud Architect at AWS. Specializing in database migration and modernization, Li has been helping customers innovate and transform complex legacy systems into modern, scalable database solutions since joining AWS. She is passionate about leveraging cutting-edge database technologies to solve real-world business challenges.

    Dmytro Protsiv is a Cloud Application Architect with Amazon Web Services. He is passionate about helping customers solve their business challenges around application modernization.

    Curtis Stewart is a Cloud Infrastructure Architect at AWS. He is focused on helping customers build enterprise cloud platforms and perform cloud migrations through large-scale automation capabilities.

