
    How to Upload Large Objects to S3 with AWS CLI Multipart Upload

    July 31, 2025

    Uploading large files to S3 using traditional single-request methods can be quite challenging. If you're transferring a 5GB database backup and a network interruption happens, you have to restart the entire upload, wasting bandwidth and time. This approach becomes increasingly unreliable as file sizes grow.

    With a single PUT operation, you can upload an object of up to 5GB. Anything larger requires Amazon S3's Multipart Upload feature, and AWS recommends multipart uploads for any object over 100MB anyway.

    Multipart upload makes it easier for you to upload larger files and objects by segmenting them into smaller, independent chunks that upload separately and reassemble on S3.

    In this guide, you’ll learn how to implement multipart uploads using AWS CLI.

    Table of Contents

    • Prerequisites

    • How Multipart Uploads Work

    • Getting started

      • Step 1: Download the AWS CLI

      • Step 2: Configure AWS IAM credentials

    • Step 1: Split Object

    • Step 2: Create an Amazon S3 bucket

    • Step 3: Initiate Multipart Upload

    • Step 4: Upload split files to S3 Bucket

    • Step 5: Create a JSON File to Compile ETag Values

    • Step 6: Complete Multipart Upload to S3

    • Conclusion

    Prerequisites

    To follow this guide, you should have:

    • An AWS account.

    • Knowledge of AWS and the S3 service.

    • The AWS CLI installed on your local machine.

    How Multipart Uploads Work

    In a multipart upload, large file transfers are segmented into smaller chunks that get uploaded separately to Amazon S3. After all segments complete their upload process, S3 reassembles them into the complete object.

    For example, a 160GB file broken into 1GB segments generates 160 individual upload operations to S3. Each segment receives a distinct identifier while preserving sequence information to guarantee proper file reconstruction.

    The system supports configurable retry logic for failed segments and allows upload suspension/resumption functionality. Here’s a diagram that shows what the multipart upload process looks like:

    AWS multipart upload process

    Getting Started

    Before you get started with this guide, make sure that you have the AWS CLI installed on your machine. If you don’t already have that installed, follow the steps below.

    Step 1: Download the AWS CLI

    To download the CLI, visit the CLI download documentation. Then, download the CLI based on your operating system (Windows, Linux, macOS). Once the CLI is installed, the next step is to configure your AWS IAM credentials in your terminal.

    Step 2: Configure AWS IAM credentials

    To configure your AWS credentials, navigate to your terminal and run the command below:

    aws configure
    

    This command prompts you for your credentials: an AWS Access Key ID and an AWS Secret Access Key. To obtain these, create a new IAM user in your AWS account by following the steps below. (You can skip these steps if you already have an IAM user and security credentials.)

    1. Sign in to your AWS dashboard.

    2. Click on the search bar above your dashboard and search “IAM”.

    3. Click on IAM.

    4. In the left navigation pane, navigate to Access management > Users.

    5. Click Create user.

    6. During IAM user creation, attach a policy directly by selecting Attach policies directly in step 2: Set permissions.

    7. Give the user admin access by searching “admin” in the permission policies search bar and selecting AdministratorAccess. (This is for simplicity; in production, prefer a least-privilege policy that grants only the S3 actions you need.)

    8. On the next page, click Create user.

    9. Click on the created user in the Users section and navigate to Security credentials.

    10. Scroll down and click Create access key.

    11. Select the Command Line Interface (CLI) use case.

    12. On the next page, click Create access key.

    You will now see your access keys. Please keep these safe and do not expose them publicly or share them with anyone.

    You can now copy these access keys into your terminal after running the aws configure command.

    You will be prompted to include the following details:

    • AWS Access Key ID: obtained from the IAM user credentials you created. See the steps above.

    • AWS Secret Access Key: obtained from the IAM user credentials you created. See the steps above.

    • Default region name: default AWS region name, for example, us-east-1.

    • Default output format: None.

    Now we’re done with the CLI configuration.

    To confirm that you’ve successfully installed the CLI, run the command below:

    aws --version
    

    You should see the CLI version in your terminal as shown below:

    Image of AWS CLI version

    Now, you are ready for the following main steps for multipart uploads 🙂

    Step 1: Split Object

    The first step is to split the object you intend to upload. For this guide, we’ll be splitting a 188MB video file into smaller chunks.

    Image of object size

    Note that this process also works for much larger files.
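    It does, within S3's limits: a multipart upload can contain at most 10,000 parts, and every part except the last must be at least 5MiB, so your chunk size determines the largest object a single multipart upload can hold. Here's a quick sanity check using the 30MiB chunk size this guide settles on below:

```shell
# S3 multipart limits: at most 10,000 parts per upload; each part except
# the last must be at least 5 MiB. The chunk size therefore caps the
# maximum object size for one multipart upload.
CHUNK=31457280                        # 30 MiB, the chunk size used in this guide
FILE_SIZE=$((188 * 1024 * 1024))      # our ~188 MB example file
PARTS=$(( (FILE_SIZE + CHUNK - 1) / CHUNK ))   # ceiling division
echo "parts needed: $PARTS"                    # 7 parts for our file
echo "max object at this chunk size: $((10000 * CHUNK)) bytes"   # ~293 GiB
```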

    Next, locate the object you intend to upload in your system. You can use the cd command to locate the object in its stored folder using your terminal.

    Then run the split command below:

    split -b <SIZE>M <filename>
    

    Replace <SIZE> with your desired chunk size in mebibytes (for example, 100, 150, 200). Keep in mind that every part except the last must be at least 5MB.

    For this use case, we'll split our 188MB video file into 30MB chunks, specified in bytes (30 × 1,024 × 1,024 = 31,457,280). Here's the command:

    
    split -b 31457280 videoplayback.mp4
    

    Next, run the ls -lh command on your terminal. You should get the output below:

    Image of split object

    Here, you can see that the 188MB file has been split into seven parts: six 30MB chunks and a final 7.9MB chunk. When you go to the folder where the object is saved in your system files, you will see additional files with names that look like this:

    • xaa

    • xab

    • xac

    and so on. These files represent the different parts of your object. For example, xaa is the first part of your file, which will be uploaded first to S3. More on this later in the guide.
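    If you want to double-check that splitting is lossless before uploading anything, you can reassemble the parts locally and compare the result with the original. A self-contained sketch using a small dummy file (swap in your real file and chunk size in practice):

```shell
# Split a file and verify the parts reassemble byte-for-byte.
# Uses a 1 MiB dummy file and 300 KiB chunks so it runs anywhere;
# for the real upload you'd use your object and -b 31457280 instead.
cd "$(mktemp -d)"
head -c $((1024 * 1024)) /dev/urandom > demo.bin
split -b $((300 * 1024)) demo.bin       # produces xaa, xab, xac, xad
cat x* > reassembled.bin                # globbing sorts parts in order
cmp demo.bin reassembled.bin && echo "parts reassemble cleanly"
```

    This works because split names its output files in lexicographic order (xaa, xab, and so on), which is exactly the order the parts must be uploaded in.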

    Step 2: Create an Amazon S3 Bucket

    If you don’t already have an S3 bucket created, follow the steps in the AWS Get Started with Amazon S3 documentation to create one.

    Step 3: Initiate Multipart Upload

    The next step is to initiate a multipart upload. To do this, execute the command below:

    
    aws s3api create-multipart-upload --bucket DOC-EXAMPLE-BUCKET --key large_test_file
    

    In this command:

    • DOC-EXAMPLE-BUCKET is your S3 bucket name.

    • large_test_file is the name of the file, for example, videoplayback.mp4.

    You’ll get a JSON response in your terminal, providing you with the UploadId. The response looks like this:

    
    {
        "ServerSideEncryption": "AES256",
        "Bucket": "s3-multipart-uploads",
        "Key": "videoplayback.mp4",
        "UploadId": "************************************"
    }
    

    Keep the UploadId somewhere safe in your local machine, as you will need it for later steps.

    Step 4: Upload Split Files to S3 Bucket

    Remember those extra files saved as xaa, xab, and so on? Well, now it’s time to upload them to your S3 bucket. To do that, execute the command below:

    aws s3api upload-part --bucket DOC-EXAMPLE-BUCKET --key large_test_file --part-number 1 --body large_test_file.001 --upload-id exampleTUVGeKAk3Ob7qMynRKqe3ROcavPRwg92eA6JPD4ybIGRxJx9R0VbgkrnOVphZFK59KCYJAO1PXlrBSW7vcH7ANHZwTTf0ovqe6XPYHwsSp7eTRnXB1qjx40Tk
    
    • DOC-EXAMPLE-BUCKET is your S3 bucket name.

    • large_test_file is the name of the file, for example, videoplayback.mp4

    • large_test_file.001 is the name of the file part, for example, xaa.

    • For --upload-id, replace the example ID with the UploadId you saved in the previous step.

    The command returns a response that contains an ETag value for the part of the file that you uploaded.

    
    {
        "ServerSideEncryption": "aws:kms",
        "ETag": "\"7f9b8c3e2a1d5f4e8c9b2a6d4e8f1c3a\"",
        "ChecksumCRC64NVME": "mK9xQpD2WnE="
    }
    

    Copy the ETag value and save it somewhere on your local machine, as you’ll need it later as a reference.

    Continue uploading the remaining file parts by repeating the command above, incrementing both the part number and file name for each subsequent upload. For example: xaa becomes xab, and --part-number 1 becomes --part-number 2, and so forth.
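    Rather than typing each upload-part command by hand, you can loop over the split files and increment the part number automatically. A sketch, where BUCKET, KEY, and UPLOAD_ID are placeholders for your own values; the leading echo prints each command so you can review it, and removing the echo runs the uploads for real:

```shell
# Generate (or run) one upload-part call per split file, numbering
# the parts in the order the shell sorts them (xaa, xab, ...).
BUCKET=DOC-EXAMPLE-BUCKET      # placeholder: your bucket name
KEY=videoplayback.mp4          # placeholder: your object key
UPLOAD_ID=example-upload-id    # placeholder: the UploadId you saved
n=1
for part in x*; do
  echo aws s3api upload-part \
    --bucket "$BUCKET" --key "$KEY" \
    --part-number "$n" --body "$part" \
    --upload-id "$UPLOAD_ID"
  n=$((n + 1))
done
```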

    Note that total upload time depends on the size of the object and your network bandwidth.

    To confirm that all the file parts have been uploaded successfully, run the command below:

    aws s3api list-parts --bucket s3-multipart-uploads --key videoplayback.mp4 --upload-id p0NU3agC3C2tOi4oBmT8lHLebUYqYXmWhEYYt8gc8jXlCStEZYe1_kSx1GjON2ExY_0T.4N4E6pjzPlNcji7VDT6UomtNYUhFkyzpQ7IFKrtA5Dov8YdC20c7UE20Qf0
    

    Replace the example upload ID with your actual upload ID.

    You should get a JSON response like this:

    
    {
        "Parts": [
            {
                "PartNumber": 1,
                "LastModified": "2025-07-27T14:22:18+00:00",
                "ETag": "\"f7b9c8e4d3a2f6e8c9b5a4d7e6f8c2b1\"",
                "Size": 26214400
            },
            {
                "PartNumber": 2,
                "LastModified": "2025-07-27T14:25:42+00:00",
                "ETag": "\"a8e5d2c7f9b4e6a3c8d5f2e9b7c4a6d3\"",
                "Size": 26214400
            },
            {
                "PartNumber": 3,
                "LastModified": "2025-07-27T14:28:15+00:00",
                "ETag": "\"c4f8e2b6d9a3c7e5f8b2d6a9c3e7f4b8\"",
                "Size": 26214400
            },
            {
                "PartNumber": 4,
                "LastModified": "2025-07-27T14:31:03+00:00",
                "ETag": "\"e9c3f7a5d8b4e6c9f2a7d4b8c6e3f9a2\"",
                "Size": 26214400
            },
            {
                "PartNumber": 5,
                "LastModified": "2025-07-27T14:33:47+00:00",
                "ETag": "\"b6d4a8c7f5e9b3d6a2c8f4e7b9c5d8a6\"",
                "Size": 26214400
            },
            {
                "PartNumber": 6,
                "LastModified": "2025-07-27T14:36:29+00:00",
                "ETag": "\"d7e3c9f6a4b8d2e5c7f9a3b6d4e8c2f5\"",
                "Size": 26214400
            },
            {
                "PartNumber": 7,
                "LastModified": "2025-07-27T14:38:52+00:00",
                "ETag": "\"f2a6d8c4e7b3f6a9c2d5e8b4c7f3a6d9\"",
                "Size": 15728640
            }
        ]
    }
    

    This is how you verify that all parts have been uploaded.

    Step 5: Create a JSON File to Compile ETag Values

    This file maps each uploaded part number to its ETag, so S3 knows which uploaded parts to stitch together and in what order. Gather the ETag values from each uploaded file part and organize them into a JSON structure.

    Sample JSON format:

    
    {
        "Parts": [{
            "ETag": "example8be9a0268ebfb8b115d4c1fd3",
            "PartNumber": 1
        },

        ....

        {
            "ETag": "example246e31ab807da6f62802c1ae8",
            "PartNumber": 4
        }]
    }
    

    Save the created JSON file in the same folder as your object and name it multipart.json. You can use any IDE of your choice to create and save this document.
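    Copying ETags by hand is tedious and error-prone. If you have jq installed (an extra tool, not part of the AWS CLI), you can generate multipart.json directly from the list-parts response you saw in Step 4. The sketch below runs the transformation on a canned response so you can see what it does; in practice you would pipe the real aws s3api list-parts output through the same jq filter:

```shell
# Build multipart.json from a list-parts response using jq (assumed installed).
# Real usage:
#   aws s3api list-parts --bucket <bucket> --key <key> --upload-id <id> \
#     | jq '{Parts: [.Parts[] | {ETag, PartNumber}]}' > multipart.json
cd "$(mktemp -d)"
echo '{
  "Parts": [
    {"PartNumber": 1, "LastModified": "2025-07-27T14:22:18+00:00",
     "ETag": "\"f7b9c8e4d3a2f6e8c9b5a4d7e6f8c2b1\"", "Size": 26214400},
    {"PartNumber": 2, "LastModified": "2025-07-27T14:25:42+00:00",
     "ETag": "\"a8e5d2c7f9b4e6a3c8d5f2e9b7c4a6d3\"", "Size": 26214400}
  ]
}' | jq '{Parts: [.Parts[] | {ETag, PartNumber}]}' > multipart.json
cat multipart.json   # keeps only the ETag and PartNumber fields
```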

    Step 6: Complete Multipart Upload to S3

    To complete the multipart upload, run the command below:

    aws s3api complete-multipart-upload --multipart-upload file://multipart.json --bucket DOC-EXAMPLE-BUCKET --key large_test_file --upload-id exampleTUVGeKAk3Ob7qMynRKqe3ROcavPRwg92eA6JPD4ybIGRxJx9R0VbgkrnOVphZFK59KCYJAO1PXlrBSW7vcH7ANHZwTTf0ovqe6XPYHwsSp7eTRnXB1qjx40Tk
    

    Here, file://multipart.json points to the JSON file you created in the previous step. As before, replace DOC-EXAMPLE-BUCKET, large_test_file, and the example upload ID with your own values.

    You should get an output like this:

    
    {
        "ServerSideEncryption": "AES256",
        "Location": "https://s3-multipart-uploads.s3.eu-west-1.amazonaws.com/videoplayback.mp4",
        "Bucket": "s3-multipart-uploads",
        "Key": "videoplayback.mp4",
        "ETag": "\"78298db673a369adf33dd8054bb6bab7-7\"",
        "ChecksumCRC64NVME": "d1UPkm73mAE=",
        "ChecksumType": "FULL_OBJECT"
    }
    

    Now, when you go to your S3 bucket and hit refresh, you should see the uploaded object.

    Image of object successfully uploaded to AWS using multipart upload

    Here, you can see the complete file, file name, type, and size.
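    Notice the -7 suffix on the final ETag: for multipart uploads, S3's ETag is not the MD5 of the whole object but the MD5 of the concatenated binary MD5 digests of the parts, followed by a dash and the part count. When KMS encryption isn't in play, you can recompute it locally from your split files as an end-to-end integrity check. A sketch (assumes the md5sum and xxd tools are available; demonstrated on two tiny stand-in parts, whereas your real parts would be xaa, xab, and so on):

```shell
# Recompute a multipart ETag locally: md5(concatenated raw md5 digests
# of each part) followed by "-<part count>". Demo parts stand in for
# the real xaa, xab, ... files produced by split.
cd "$(mktemp -d)"
printf 'part one' > xaa
printf 'part two' > xab
digests=""
count=0
for p in x*; do
  digests="$digests$(md5sum "$p" | cut -d' ' -f1)"
  count=$((count + 1))
done
etag="$(printf '%s' "$digests" | xxd -r -p | md5sum | cut -d' ' -f1)-$count"
echo "$etag"   # compare against the ETag returned by complete-multipart-upload
```

    If the locally computed value matches the ETag in the completion response (ignoring the surrounding quotes), every byte of every part arrived intact.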

    Conclusion

    Multipart uploads transform large file transfers to Amazon S3 from fragile, all-or-nothing operations into robust, resumable processes. By segmenting files into manageable chunks, you gain retry capabilities, better performance, and the ability to handle objects exceeding S3’s 5GB single-upload limit.

    This approach is essential for production environments dealing with database backups, video files, or any large assets. With the AWS CLI techniques covered in this guide, you’re now equipped to handle S3 transfers confidently, regardless of file size or network conditions.

    Check out the AWS Knowledge Center documentation on multipart uploads with the AWS CLI to learn more.

    Source: freeCodeCamp Programming Tutorials: Python, JavaScript, Git & More 
