2024 marked a significant year for Amazon DynamoDB, with advancements in security, performance, cost-effectiveness, and integration capabilities. This year-in-review post highlights key developments that have enhanced the DynamoDB experience for our customers.
Key highlights and launches of 2024 include:
- Significant price reductions for on-demand throughput and global tables
- Introduction of warm throughput for improved performance management and configurable maximum throughput in on-demand mode
- Multi-Region strong consistency public preview for global tables
- Enhanced security features including AWS PrivateLink support, resource-based policies, and attribute-based access control
- Zero-ETL integrations with Amazon Redshift and Amazon SageMaker Lakehouse
These improvements, along with numerous other updates, reflect our commitment to making DynamoDB more resilient, flexible, and cost-effective for businesses of all sizes. In the following sections, we dive deeper into each category of updates, exploring how they can benefit your applications and workflows.
Whether you’re a long-time DynamoDB user or just getting started, this post will guide you through the most impactful changes of 2024 and how they can help you build more reliable, faster, and more secure applications. The post is organized by feature area in alphabetical order, with releases listed in reverse chronological order within each area. Some announcements appear under more than one feature area; the first occurrence carries the full details.
Over the course of 2024, we’ve also overhauled areas of the official DynamoDB documentation, so be sure to check out the new and modified pages and update your browser bookmarks. Contact us at @DynamoDB or on AWS re:Post if you have questions, comments, or feature requests.
Amazon DynamoDB Accelerator (DAX)
- August 1 – Amazon DynamoDB Accelerator (DAX) is now available in additional AWS Regions. DAX expanded its availability to the Europe (Spain) and Europe (Stockholm) Regions in 2024. The expansion allows more customers to take advantage of DAX’s benefits, including support for Amazon Elastic Compute Cloud (Amazon EC2) R5 and T3 instance types in these new Regions. For DAX Regional availability information, refer to the Service endpoints section in Amazon DynamoDB endpoints and quotas. Pricing details are available on the DynamoDB pricing page. To get started with DAX, refer to Developing with the DynamoDB Accelerator (DAX) Client, or see the client sketch after this list.
- May 22 – Spring 2024 SOC reports now available with 177 services in scope. One of the additional six services in scope for the Spring 2024 SOC report was DynamoDB Accelerator (DAX). To learn more about our compliance and security programs, see AWS Compliance Programs.
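If you’re evaluating DAX in one of the newly added Regions, the sketch below shows the general shape of swapping a standard DynamoDB client for the DAX client in Python. This is a minimal sketch, assuming the amazon-dax-client package and an existing DAX cluster; the endpoint URL, table name, and key are hypothetical placeholders.

```python
# Minimal sketch: read through a DAX cluster instead of directly from DynamoDB.
# Assumes `pip install amazon-dax-client` and a provisioned DAX cluster;
# the endpoint URL, table name, and key below are hypothetical placeholders.
from amazondax import AmazonDaxClient

dax = AmazonDaxClient(
    # Encrypted cluster endpoints use the daxs:// scheme.
    endpoint_url="daxs://my-cluster.abc123.dax-clusters.eu-south-2.amazonaws.com",
)

# The DAX client mirrors the low-level DynamoDB client interface, so calls
# look identical but are served from the cache when possible.
response = dax.get_item(
    TableName="MyTable",
    Key={"pk": {"S": "user#123"}},
)
print(response.get("Item"))
```

Because the interface matches the low-level boto3 client, switching between DAX and direct DynamoDB access during testing is usually a one-line change.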
Application integration
- December 3 – Amazon DynamoDB zero-ETL integration with Amazon SageMaker Lakehouse automates extracting and loading data from a DynamoDB table into SageMaker Lakehouse, an open and secure lakehouse. You can run analytics and machine learning (ML) workloads on your DynamoDB data using SageMaker Lakehouse, which provides integrated access control and open source Apache Iceberg for data interoperability and collaboration. With this launch, you now have the option to enable analytics workloads using SageMaker Lakehouse, in addition to the previously available Amazon OpenSearch Service and Amazon Redshift zero-ETL integrations. To learn more, refer to DynamoDB integrations, read the documentation for the DynamoDB zero-ETL integration with Amazon SageMaker Lakehouse, or read the Amazon SageMaker Lakehouse documentation.
- November 12 – Amazon Managed Service for Apache Flink now supports Amazon DynamoDB Streams as a source. The new connector, contributed by AWS to the Apache Flink open source project, adds Amazon DynamoDB Streams as a new source for Flink. Flink connectors are software components that move data into and out of an Amazon Managed Service for Apache Flink application. You can use the new connector to read data from a DynamoDB stream starting with Flink version 1.19. With Amazon Managed Service for Apache Flink, there are no servers or clusters to manage and no compute or storage infrastructure to set up. For the AWS connectors repository for Flink, refer to Amazon DynamoDB Connector. For detailed documentation and setup instructions, see DynamoDB Streams and Apache Flink.
- October 15 – AWS announces general availability of Amazon DynamoDB zero-ETL integration with Amazon Redshift. This integration allows you to perform high-performance analytics on DynamoDB data without impacting production workloads or building complex extract, transform, and load (ETL) pipelines. As data is written to DynamoDB, it becomes immediately available in Amazon Redshift, enabling holistic insights across applications, breaking data silos, and providing cost savings. You can take advantage of Amazon Redshift capabilities such as high-performance SQL, built-in ML, Spark integrations, and data sharing to enhance your analysis of DynamoDB data. To learn more, see DynamoDB zero-ETL integration with Amazon Redshift.
- September 18 – AWS Cost Management now provides purchase recommendations for Amazon DynamoDB reserved capacity. This new feature analyzes your DynamoDB usage and suggests optimal reserved capacity purchases for 1- or 3-year terms. You can customize recommendation parameters to align with your financial goals. This addition expands the recommendation capabilities of AWS Cost Explorer to include seven reservation models across various AWS services. The feature is available in most AWS Regions where DynamoDB operates, except China (Beijing, operated by Sinnet), China (Ningxia, operated by NWCD), and the AWS GovCloud (US) Regions. For more information, or to get started with DynamoDB reserved capacity recommendations, refer to Accessing reservation recommendations, or see the API sketch after this list.
- March 27 – Amazon DynamoDB Import from S3 now supports up to 50,000 Amazon S3 objects in a single bulk import. With the increased default service quota for import from Amazon Simple Storage Service (Amazon S3), customers who need to bulk import a large number of S3 objects can now run a single import to ingest up to 50,000 S3 objects, removing the need to consolidate S3 objects before running a bulk import. The new Import from S3 quotas are now effective in all Regions, including the AWS GovCloud (US) Regions. Start taking advantage of the new quotas by using the DynamoDB console, the AWS Command Line Interface (AWS CLI), or AWS APIs (see the import sketch after this list). For more information about DynamoDB quotas, refer to Service, account, and table quotas in Amazon DynamoDB.
- February 2 – Amazon DynamoDB zero-ETL integration with Amazon Redshift is now available in the US East (N. Virginia) Region.
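Two of the items above are exposed directly through APIs. First, for the reserved capacity recommendations item, Cost Explorer’s GetReservationPurchaseRecommendation call returns the same suggestions shown in the console. A minimal sketch, assuming 'Amazon DynamoDB' is the service identifier Cost Explorer expects for this recommendation type:

```python
import boto3

# Sketch: fetch DynamoDB reserved capacity purchase recommendations.
# The Service string is an assumption; check the Cost Explorer docs for
# the exact identifier used for DynamoDB reserved capacity.
ce = boto3.client("ce")

recs = ce.get_reservation_purchase_recommendation(
    Service="Amazon DynamoDB",
    LookbackPeriodInDays="SIXTY_DAYS",
    TermInYears="ONE_YEAR",
)
for rec in recs.get("Recommendations", []):
    print(rec.get("RecommendationSummary"))
```

Second, for the Import from S3 item, the ImportTable API creates a new table and loads it from S3 in one call. The bucket, prefix, and schema below are hypothetical placeholders; a single import can now cover up to 50,000 objects under the prefix:

```python
import boto3

dynamodb = boto3.client("dynamodb")

response = dynamodb.import_table(
    S3BucketSource={
        "S3Bucket": "my-export-bucket",  # hypothetical bucket
        "S3KeyPrefix": "exports/2024/",  # may match up to 50,000 objects
    },
    InputFormat="DYNAMODB_JSON",
    InputCompressionType="GZIP",
    TableCreationParameters={
        "TableName": "ImportedTable",
        "AttributeDefinitions": [{"AttributeName": "pk", "AttributeType": "S"}],
        "KeySchema": [{"AttributeName": "pk", "KeyType": "HASH"}],
        "BillingMode": "PAY_PER_REQUEST",
    },
)
print(response["ImportTableDescription"]["ImportArn"])
```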
Developer and user experience
- October 17 – Amazon DynamoDB announces a user experience enhancement to help you organize your tables. Choose the favorites icon on the console’s tables page to view the tables you’ve marked as favorites, giving you a faster, more efficient way to find and work with the tables you often monitor, manage, and explore. The favorites console experience is now available in all Regions at no additional cost. Get started with creating a DynamoDB table from the AWS Management Console.
- May 28 – Amazon DynamoDB local supports configurable maximum throughput for on-demand tables. You can use this feature for predictable cost management, protection against accidental surges in consumed resources and excessive use, and safeguarding downstream fixed-capacity services from potential overloading and performance bottlenecks. With DynamoDB local, you can develop and test your application against a configured maximum on-demand table throughput, making it easier to validate the supported API actions before releasing code to production (see the sketch after this list). To get started with the latest version, refer to Deploying DynamoDB locally on your computer. Learn more in the documentation by referring to Setting Up DynamoDB Local (Downloadable Version).
- April 24 – NoSQL Workbench for Amazon DynamoDB launches a revamped operation builder user interface. The revamped operation builder interface gives you more space to explore and visualize your data, lets you manage tables with one click, and allows direct item manipulation right from the results pane. We’ve added a copy feature for quick item creation, streamlined the Query and Scan filtering process, and added seamless DynamoDB local integration. For those who prefer a different look or need better accessibility, there’s a new dark mode too. All of these features come at no extra cost, no matter which Region you’re using. Get started with the latest version of NoSQL Workbench by downloading it from Download NoSQL Workbench for DynamoDB. For more information about the latest updates, refer to Exploring datasets and building operations with NoSQL Workbench.
- March 14 – Amazon DynamoDB local upgrades to Jetty 12 and JDK 17. DynamoDB local (the downloadable version of DynamoDB) version 2.3.0 migrates from the Jetty 11 server to Jetty 12, and from JDK 11 to JDK 17. With this update, developers using Spring Boot 3.2.x with Jetty 12 support can use DynamoDB local to develop and test their Spring applications when working with DynamoDB. With DynamoDB local, you can develop and test applications by running DynamoDB in your local development environment without incurring any costs.
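As referenced in the May 28 item above, here’s a minimal sketch of exercising configurable maximum throughput against DynamoDB local. It assumes DynamoDB local is already running on port 8000; the table name and limits are placeholders. Requests beyond the configured maximums get throttled just as they would in the cloud, so your application’s throttling path can be tested before deployment.

```python
import boto3

# Point the client at a locally running DynamoDB local instance.
# The SDK requires credentials and a Region, but DynamoDB local
# does not validate them, so placeholder values are fine.
local = boto3.client(
    "dynamodb",
    endpoint_url="http://localhost:8000",
    region_name="us-east-1",
    aws_access_key_id="dummy",
    aws_secret_access_key="dummy",
)

# Create an on-demand table with a configured maximum throughput.
local.create_table(
    TableName="ThrottleTest",
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    OnDemandThroughput={
        "MaxReadRequestUnits": 100,  # cap reads at 100 request units/second
        "MaxWriteRequestUnits": 50,  # cap writes at 50 request units/second
    },
)
```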
Global tables
- December 2 – Amazon DynamoDB global tables previews multi-Region strong consistency. This new capability enables you to build highly available multi-Region applications with a Recovery Point Objective (RPO) of zero. Multi-Region strong consistency ensures applications can always read the latest version of data from any Region in a global table, eliminating the need for manual cross-Region consistency management (see the read sketch after this list). This feature is particularly beneficial for global applications with strict consistency requirements, such as user profile management, inventory tracking, and financial transaction processing. The preview is currently available in the US East (N. Virginia), US East (Ohio), and US West (Oregon) Regions, with the existing global tables pricing. For more information, refer to Multi-Region strong consistency and Global tables – multi-Region replication for DynamoDB.
- November 14 – Amazon DynamoDB reduces prices for on-demand throughput and global tables. We have made DynamoDB even more cost-effective by reducing prices for on-demand throughput by 50% and global tables by up to 67%. To learn more, see New – Amazon DynamoDB lowers pricing for on-demand throughput and global tables.
- April 30 – Amazon DynamoDB now supports an AWS FIS action to pause global table replication. This feature enhances the fully managed global tables service in DynamoDB, which automatically replicates tables across selected Regions for fast, local read and write performance. The new AWS Fault Injection Service (FIS) action enables you to simulate and observe your application’s response to a pause in Regional replication, allowing you to fine-tune monitoring and recovery processes for improved resiliency and availability. Global tables can now be tested more thoroughly to maintain proper application behavior during Regional interruptions. You can integrate this new action into your continuous integration and release testing processes by creating experiment templates in FIS, and combine it with other FIS actions for comprehensive scenario testing. The DynamoDB Pause Replication action is now available across AWS commercial Regions where FIS is available. To learn more, see Amazon DynamoDB actions.
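As mentioned in the multi-Region strong consistency item above, here’s a minimal sketch of the read pattern the preview enables. It assumes a hypothetical 'Orders' global table created with multi-Region strong consistency across two of the preview Regions; the key point is that a strongly consistent read in one Region reflects a write just made in another, with no manual reconciliation.

```python
import boto3

# Two clients pointed at two replica Regions of a global table that was
# created with multi-Region strong consistency (preview). The table name
# and item are hypothetical placeholders.
use1 = boto3.client("dynamodb", region_name="us-east-1")
usw2 = boto3.client("dynamodb", region_name="us-west-2")

# Write in one Region...
use1.put_item(
    TableName="Orders",
    Item={"pk": {"S": "order#42"}, "status": {"S": "SHIPPED"}},
)

# ...then read the latest version from another Region with a strongly
# consistent read; no cross-Region consistency management is required.
response = usw2.get_item(
    TableName="Orders",
    Key={"pk": {"S": "order#42"}},
    ConsistentRead=True,
)
print(response["Item"]["status"]["S"])  # "SHIPPED"
```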
Security
- December 13 – Amazon DynamoDB announces support for FIPS 140-3 interface VPC and Streams endpoints. FIPS-compliant endpoints help companies contracting with the federal government meet the FIPS security requirement to encrypt sensitive data in supported Regions. The new capability is available in Regions in the US and Canada, and the AWS GovCloud (US) Regions. To learn more about AWS FIPS 140-3, refer to Federal Information Processing Standard (FIPS) 140-3.
- November 18 – Amazon DynamoDB announces general availability of attribute-based access control. DynamoDB now supports ABAC for tables and indexes. ABAC is an authorization strategy that defines access permissions based on tags attached to users, roles, and AWS resources. ABAC uses tag-based conditions in your AWS Identity and Access Management (IAM) policies or other policies to allow or deny actions on your tables or indexes when the IAM principal’s tags match the tags on the table (see the example condition after this list). Using tag-based conditions, you can also set more granular access permissions based on your organizational structures. As your organization grows, ABAC automatically applies tag-based permissions to new employees and changing resource structures without requiring you to rewrite policies. To learn more, see Using attribute-based access control with DynamoDB and Using attribute-based access control for tag-based access authorization with Amazon DynamoDB.
- September 3 – Amazon DynamoDB announces support for attribute-based access control (preview). ABAC for DynamoDB is available in limited preview in the US East (Ohio), US East (N. Virginia), and US West (N. California) Regions.
- May 28 – Amazon DynamoDB now supports resource-based policies in the AWS GovCloud (US) Regions. This extends the resource-based policy support launched in commercial Regions in March to customers in AWS GovCloud (US).
- March 20 – Amazon DynamoDB now supports resource-based policies. This feature allows you to specify IAM principals and their allowed actions on tables, streams, and indexes. It simplifies cross-account access control and integrates with AWS IAM Access Analyzer and Block Public Access capabilities, providing greater flexibility and security in managing access to DynamoDB resources across AWS accounts. Resource-based policies are available in AWS commercial Regions at no additional cost. Get started by using the console, AWS CLI, AWS SDKs, AWS CDK, or AWS CloudFormation (a PutResourcePolicy sketch follows this list). To learn more, see Using resource-based policies for DynamoDB.
- March 19 – Amazon DynamoDB now supports AWS PrivateLink, allowing you to connect to DynamoDB over a private network without using public IP addresses. This helps you maintain compliance for your DynamoDB workloads and eliminates the need to configure firewall rules or an internet gateway. AWS PrivateLink for DynamoDB is available in AWS commercial Regions, and there is an additional cost to use the feature. To learn more, see Simplify private connectivity to Amazon DynamoDB with AWS PrivateLink, or see the endpoint sketch after this list.
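To ground the security items above, here are three minimal sketches. The first shows the general shape of an ABAC condition in an identity policy: access is allowed only when the caller’s principal tag matches the table’s resource tag. The tag key, actions, account ID, and policy name are hypothetical.

```python
import json
import boto3

# Sketch of an ABAC-style identity policy. The 'Project' tag key and the
# resource ARN are hypothetical; the condition grants access only when the
# principal's tag value matches the table's tag value.
abac_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem"],
            "Resource": "arn:aws:dynamodb:*:111122223333:table/*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/Project": "${aws:PrincipalTag/Project}"
                }
            },
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="DynamoDBProjectAbac",
    PolicyDocument=json.dumps(abac_policy),
)
```

The second sketch attaches a resource-based policy directly to a table with the PutResourcePolicy API, granting a role in another account read access to one table; the ARNs are placeholders.

```python
import json
import boto3

dynamodb = boto3.client("dynamodb")

table_arn = "arn:aws:dynamodb:us-east-1:111122223333:table/Orders"
table_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::444455556666:role/AnalyticsRole"},
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": table_arn,
        }
    ],
}

dynamodb.put_resource_policy(ResourceArn=table_arn, Policy=json.dumps(table_policy))
```

The third sketch creates an AWS PrivateLink interface endpoint for DynamoDB; the VPC, subnet, and security group IDs are placeholders, and the service name follows the usual com.amazonaws.<region>.dynamodb pattern.

```python
import boto3

ec2 = boto3.client("ec2")

# Create an interface VPC endpoint so DynamoDB traffic stays on the
# private network; all resource IDs below are hypothetical.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)
```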
Serverless
- November 14 – Amazon DynamoDB reduces prices for on-demand throughput and global tables.
- November 13 – Amazon DynamoDB introduces warm throughput for tables and indexes. The warm throughput value provides visibility into the number of read and write operations your DynamoDB tables can readily handle, and pre-warming lets you proactively increase the value to meet future traffic demands. Warm throughput values are available for provisioned and on-demand tables and indexes at no cost; pre-warming your table’s throughput incurs a charge (see the sketch after this list). Refer to the DynamoDB pricing page for pricing details. This capability is now available in AWS commercial Regions. Refer to Understanding DynamoDB warm throughput to learn more.
- May 3 – Amazon DynamoDB introduces configurable maximum throughput for on-demand tables. You can now optionally configure maximum read or write (or both) throughput for individual on-demand DynamoDB tables and associated secondary indexes, simplifying the balance of cost and performance. Throughput requests in excess of the maximum table throughput are automatically throttled, but you can modify the table-specific maximum as needed based on your application requirements (see the sketch after this list). On-demand throughput is available in all Regions. Refer to the DynamoDB pricing page for on-demand pricing. To learn more, see DynamoDB throughput capacity.
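As referenced in the warm throughput and maximum throughput items above, both settings are plain UpdateTable parameters. A minimal sketch, with a hypothetical table name and placeholder numbers; remember that raising warm throughput (pre-warming) is the operation that incurs a charge.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Pre-warm a table ahead of a known traffic spike (this incurs a charge).
dynamodb.update_table(
    TableName="Orders",
    WarmThroughput={
        "ReadUnitsPerSecond": 30000,
        "WriteUnitsPerSecond": 10000,
    },
)

# Wait for the table to return to ACTIVE before issuing another update.
dynamodb.get_waiter("table_exists").wait(TableName="Orders")

# Cap the on-demand maximum so unexpected traffic gets throttled
# instead of translating into unbounded cost.
dynamodb.update_table(
    TableName="Orders",
    OnDemandThroughput={
        "MaxReadRequestUnits": 5000,
        "MaxWriteRequestUnits": 1000,
    },
)

# Both settings are reported back by DescribeTable (at no cost).
table = dynamodb.describe_table(TableName="Orders")["Table"]
print(table.get("WarmThroughput"), table.get("OnDemandThroughput"))
```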
Documentation
- December 26 – Application integration – Published a new topic on integrating Amazon Managed Streaming for Apache Kafka (Amazon MSK) with DynamoDB. Learn how Amazon MSK integrates with DynamoDB by reading data from Apache Kafka topics and storing it in DynamoDB. For more information, refer to Integrating DynamoDB with Amazon Managed Streaming for Apache Kafka.
- November 18 – Security – Added two new permissions to the AmazonDynamoDBReadOnlyAccess managed policy: dynamodb:GetAbacStatus and dynamodb:UpdateAbacStatus. These permissions allow you to view the attribute-based access control (ABAC) status and enable ABAC for your AWS account in the current Region. For more information, refer to AWS managed policy: AmazonDynamoDBReadOnlyAccess.
- October 16 – Billing – Published two new topics regarding billing for global tables and billing for backups. For more information, refer to Understanding Amazon DynamoDB billing for global tables and Understanding Amazon DynamoDB billing for backups.
- October 11 – Generative AI – Published a new topic that provides information about using generative AI with DynamoDB, including examples of generative AI use cases for DynamoDB.
- September 3 – Application integration – Added documentation for account-based endpoints and the ACCOUNT_ID_ENDPOINT_MODE setting for SDK clients. For more information, refer to SDK support of AWS account-based endpoints.
- July 31 – Developer and user experience – Overhauled the Getting started with DynamoDB pages. We combined the AWS CLI and AWS SDK instructions into the same page as the AWS Management Console instructions, so new users can choose the medium through which they interact with DynamoDB.
- July 2 – Developer and user experience – Restructured and consolidated the DynamoDB backup and restore documentation in the DynamoDB Developer Guide. For more information, refer to Backup and restore for DynamoDB.
- June 3 – DynamoDB Accelerator – Published a new best practices topic that provides you with comprehensive insights for using DAX effectively. This topic covers performance optimization, cost management, and operational best practices. For more information, refer to Prescriptive guidance to integrate DAX with DynamoDB applications.
- May 29 – Developer and user experience – Added a new topic on migrating DynamoDB tables from one account to another. For more information, refer to Migrating a DynamoDB table from one account to another.
- May 7 – Security – Updated the DynamoDB preventative security best practices pages.
- March 6 – Developer and user experience – Added a programming guide for the AWS SDK for JavaScript. Learn about the AWS SDK for JavaScript, abstraction layers, configuring connections, handling errors, defining retry policies, managing keep-alive, and more. For more information, refer to Programming Amazon DynamoDB with JavaScript.
- March 5 – Developer and user experience – Created a new programming guide for AWS SDK for Java 2.x that goes in depth about high-level, low-level, and document interfaces, HTTP clients and their configuration, and error handling, and addresses the most common configuration settings that you should consider when using the SDK for Java 2.x. For more information, refer to Programming DynamoDB with AWS SDK for Java 2.x.
- February 26 – Developer and user experience – Allowed developers to use NoSQL Workbench to copy or clone tables between development environments and Regions (DynamoDB Local and DynamoDB web). For more information, refer to Cloning tables with NoSQL Workbench.
- January 11 – Developer and user experience – Created a new guide that goes in depth about both high-level and low-level libraries and addresses the most common configuration settings that you should consider when using the Python SDK. For more information, refer to Programming Amazon DynamoDB with Python and Boto3.
- January 3 – Developer and user experience – Updated the Using time to live (TTL) in DynamoDB documentation, with updated code samples in Java, Python, and JavaScript (see the TTL sketch after this list).
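As referenced in the TTL item above, here’s a minimal Python sketch of the pattern those samples cover: enable TTL by naming the attribute that holds the expiry time, then write items whose expiry attribute is a Unix epoch timestamp in seconds. The table and attribute names are hypothetical placeholders.

```python
import time
import boto3

dynamodb = boto3.client("dynamodb")

# Enable TTL on the table, naming the attribute that holds the expiry time.
dynamodb.update_time_to_live(
    TableName="Sessions",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expireAt"},
)

# Write an item that DynamoDB will delete automatically (at no extra cost)
# roughly a day from now. The TTL attribute must be a Number holding a
# Unix epoch time in seconds.
dynamodb.put_item(
    TableName="Sessions",
    Item={
        "pk": {"S": "session#abc"},
        "expireAt": {"N": str(int(time.time()) + 86400)},
    },
)
```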
Summary
2024 marked a year of considerable advancement for DynamoDB, with major strides in security, cost optimization, and seamless integrations. Key developments like multi-Region strong consistency, reduced pricing for on-demand throughput, and zero-ETL integrations with services like Amazon Redshift and SageMaker Lakehouse have empowered customers to build more resilient, cost-effective, and data-driven applications.
As we look towards 2025, we’re excited to see how you use these new capabilities. Whether you’re an experienced DynamoDB user or just starting your journey, there’s never been a better time to explore what’s possible. We encourage you to try out our new features, and share your experiences with us at @DynamoDB or on AWS re:Post.
Get started with the Amazon DynamoDB Developer Guide and the DynamoDB getting started guide, and join the millions of customers that push the boundaries of what’s possible with DynamoDB.
About the Author
Michael Shao is a Senior Developer Advocate on the Amazon DynamoDB team. Michael has spent over 8 years as a software engineer at Amazon Retail, with a background in designing and building scalable, resilient, and high-performing software systems. His expertise in exploratory, iterative development helps drive his passion for helping AWS customers understand complex technical topics. With a strong technical foundation in system design and building software from the ground up, Michael’s goal is to empower and accelerate the success of the next generation of developers.