Amazon Aurora is a fully managed relational database service designed to provide the performance, scalability, and availability of high-end commercial databases while offering the simplicity and cost-effectiveness of open-source databases. Amazon Aurora MySQL-Compatible Edition is wire-compatible with MySQL, making it an attractive choice for businesses already using MySQL technology.
Storage in Amazon Aurora MySQL is managed differently from traditional MySQL databases. In this post, we explore the different types of storage available in Amazon Aurora MySQL, how Aurora uses those storage types, and how to monitor storage consumption. We also explore some of the database queries and Amazon CloudWatch metrics for Aurora that you can use to estimate Aurora storage billing.
Storage types
Aurora has two types of storage:
Cluster volume storage – Amazon Aurora MySQL uses a shared storage layer that is distributed across three Availability Zones within an AWS Region to provide durability, fault tolerance, data redundancy, and high availability. It stores the InnoDB tables and indexes, database metadata, stored objects such as functions and procedures, and other persistent data including binary logs and relay logs.
Local storage – Each Aurora MySQL instance in the cluster is equipped with local storage volumes that are backed by Amazon Elastic Block Store (Amazon EBS). You use these local volumes to store non-persistent temporary files and non-InnoDB temporary tables, sort large datasets, and store different engine logs like error, audit, and general logs. For more information, refer to Temporary storage limits for Aurora MySQL.
In the following sections, we discuss common sources of storage utilization in the Aurora MySQL cluster and how you can use database metrics and metadata to check storage consumption.
User tables, indexes, and tablespaces
User tables and indexes occupy most of the persistent storage space in a relational database system. In MySQL, storage engines are the components that handle SQL operations for different table types. InnoDB is the default, general-purpose storage engine in MySQL and the only storage engine that Amazon Aurora MySQL supports for persistent database tables; therefore, we discuss storage utilization in the context of the InnoDB storage engine only.
In traditional MySQL, tables that use the InnoDB storage engine are stored in data files called tablespaces, signified by the ".ibd" file extension. For more information, refer to InnoDB: Tablespace Space Management.
Although Amazon Aurora doesn't use traditional files and block-based filesystems for InnoDB data storage, the high-level concepts remain the same. Aurora follows the concept of InnoDB tablespaces, but those tablespaces exist as objects in Aurora's custom-designed storage volume rather than files on a block storage device.
The InnoDB storage engine stores tables in file-per-table tablespaces by default. This behavior is controlled by the innodb_file_per_table parameter.
With innodb_file_per_table=ON, the engine behaves as follows:
Each table has its own tablespace (equivalent to an .ibd file in traditional MySQL terms).
When a tablespace is removed (dropped), the freed database pages are released to the storage volume and are available for reuse for new data
Aurora can dynamically reclaim those free pages over time in order to shrink the storage volume, which increases the amount of space available in the volume and reduces your storage cost.
There are several database operations that can result in a tablespace being dropped, and in turn, free up pages that Aurora can reclaim. This also applies to table partitions because each partition uses a separate tablespace:
Dropping tables or schemas will cause the underlying tablespaces to be removed.
Truncating a table will replace the existing tablespace with an empty one, which is technically the same as dropping and recreating it.
Optimizing a table (either through OPTIMIZE or ALTER) will build a new tablespace and remove the old one.
After such an operation is performed, the volume size doesn't decrease immediately. Aurora gradually reclaims the free space in the background at a rate of up to 10 TB per day. For more information on dynamic resizing, see Storage scaling.
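For illustration, the following statements (each shown independently, using a hypothetical table mydb.orders) are examples of operations that drop or rebuild a tablespace and make its pages eligible for reclamation:

DROP TABLE mydb.orders;      -- removes the table and its tablespace
TRUNCATE TABLE mydb.orders;  -- replaces the existing tablespace with an empty one
OPTIMIZE TABLE mydb.orders;  -- rebuilds the table in a new tablespace and removes the old one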
With innodb_file_per_table=OFF, the behavior is as follows:
Tables don't have their own individual tablespaces, but rather the table data resides under the system tablespace.
If you drop, truncate, or optimize a table, the associated pages are freed within the system tablespace, but the size of the system tablespace itself does not shrink. As a result, Aurora's dynamic volume resizing cannot reclaim the space occupied by those pages.
To calculate the space used by tablespaces, you can use the INFORMATION_SCHEMA.FILES table. The INFORMATION_SCHEMA.FILES table reports metadata for InnoDB tablespace types, including file-per-table tablespaces, system tablespace, global temporary tablespace, and undo tablespaces. For more information about InnoDB tablespaces, refer to Tablespaces.
You can use the following query to list tablespace names along with their size:
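A minimal version of such a query is shown below as a sketch; it reports the allocated size of each tablespace in megabytes, and you can adjust the selected columns as needed:

SELECT TABLESPACE_NAME,
       FILE_NAME,
       ROUND((TOTAL_EXTENTS * EXTENT_SIZE) / 1024 / 1024, 2) AS size_mb  -- allocated size in MB
FROM INFORMATION_SCHEMA.FILES
ORDER BY size_mb DESC;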
This query will work for both Amazon Aurora MySQL version 2 (compatible with MySQL 5.7) and Amazon Aurora MySQL version 3 (compatible with MySQL 8.0).
Note that tablespaces have a certain minimum size, even when empty. With innodb_file_per_table set to ON, even an empty table or partition (with no rows in it) will occupy a small amount of storage, on the order of a few megabytes. This is typically not an issue, unless you're planning to store tens of millions of tables in a single Aurora cluster. We recommend using the default ON setting for innodb_file_per_table if at all possible.
You should also consider using the INFORMATION_SCHEMA.FILES table instead of (or in addition to) INFORMATION_SCHEMA.TABLES to calculate the storage space used by tables, indexes, and schemas, because the INFORMATION_SCHEMA.TABLES table contains cached statistics that might be stale if you don't analyze the tables before reading the metadata. The information_schema_stats_expiry system variable (which applies to Aurora MySQL version 3) defines the period of time before cached statistics expire automatically. The default is 86,400 seconds (24 hours). To force an update of the cached values for a given table, run the ANALYZE TABLE command and then check the statistics in INFORMATION_SCHEMA.TABLES afterwards. Note that the accuracy of ANALYZE TABLE depends on the configuration of the innodb_stats_persistent and innodb_stats_transient_sample_pages parameters.
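For example, assuming a hypothetical table mydb.orders, you could refresh its statistics and then read the cached size estimate as follows:

ANALYZE TABLE mydb.orders;
SELECT TABLE_SCHEMA, TABLE_NAME,
       ROUND((DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024, 2) AS approx_size_mb  -- estimated data plus index size in MB
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'orders';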
Temporary tables and temporary tablespaces
Before we discuss temporary tablespaces, we first need to understand temporary tables, when they're used, and the differences in temporary table handling between Amazon Aurora MySQL version 2 and version 3.
There are two types of temporary tables in Aurora MySQL:
Internal (or implicit) temporary tables – These tables are created by the database engine itself to handle operations such as sorting, aggregation, derived tables, and common table expressions (CTEs). Database users have no direct control over these tables. For more details about internal temporary tables in MySQL 5.7, refer to Internal Temporary Table Use in MySQL and for MySQL 8.0, refer to Internal Temporary Table Use in MySQL.
User-created (or explicit) temporary tables – These tables are created by the database client using the CREATE TEMPORARY TABLE statement. Explicit temporary tables are visible only within the database session (connection) that created them, and they're dropped automatically when the session is closed. These tables are useful for storing intermediate data while running complex SQL processes, and the data isn't something you need to store persistently. For more details about these tables in MySQL 5.7, refer to CREATE TEMPORARY TABLE and for MySQL 8.0, refer to CREATE TEMPORARY TABLE.
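For example, an explicit temporary table (with hypothetical names) can be created as follows; it is visible only to the current session and is dropped automatically when the session closes:

CREATE TEMPORARY TABLE tmp_recent_orders (
  order_id BIGINT PRIMARY KEY,
  total    DECIMAL(10,2)
);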
There are differences in how temporary tables are stored in Amazon Aurora MySQL 2 compared to Aurora MySQL 3. While some of these differences affect performance, others influence storage consumption. In the next section, we briefly explain the storage-related differences. For additional details, refer to Use the TempTable storage engine on Amazon RDS for MySQL and Amazon Aurora MySQL.
Aurora version 2 (compatible with MySQL 5.7)
In Aurora version 2, internal temporary tables can be held in memory and handled by the MEMORY storage engine. If an internal temporary table created in-memory becomes too large, MySQL automatically converts it to a disk-based table. In some cases, the database might use a disk-based table right from the beginning, such as when the query involves data types not supported by the MEMORY engine (for example, BLOB or TEXT). The storage engine for on-disk internal temporary tables is either InnoDB (default) or MyISAM depending on the internal_tmp_disk_storage_engine setting.
For InnoDB temporary tables in MySQL 5.7, the engine uses a single temporary tablespace. It's a shared temporary tablespace with an auto-extending size, called ibtmp1. For more information, refer to The Temporary Tablespace.
To check the size of the temporary tablespace, you can query the INFORMATION_SCHEMA.FILES table using the following query:
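One way to do this is the following sketch; the innodb_temporary tablespace name and its ibtmp1 data file are the MySQL defaults:

SELECT FILE_NAME, TABLESPACE_NAME,
       ROUND((TOTAL_EXTENTS * EXTENT_SIZE) / 1024 / 1024, 2) AS size_mb
FROM INFORMATION_SCHEMA.FILES
WHERE TABLESPACE_NAME = 'innodb_temporary';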
By default, the temporary tablespace data file is auto-extending and increases in size as necessary to accommodate on-disk temporary tables.
To reclaim disk space occupied by the temporary tablespace, restart the writer instance of the Aurora cluster. Restarting the writer instance removes and recreates the temporary tablespace data file.
By default, in Aurora version 2, InnoDB on-disk internal temporary tables (inside the temporary tablespace) reside on the Aurora cluster volume. Non-InnoDB temporary tables reside on the local storage provided by the Amazon Aurora MySQL instance.
Aurora version 3 (compatible with MySQL 8.0)
In Aurora version 3, internal temporary tables can be held in memory and handled by the TempTable storage engine (default) or the MEMORY engine. The TempTable storage engine limits and storage allocation behaviors are controlled by configuration parameters such as tmp_table_size, temptable_max_ram, temptable_use_mmap, and temptable_max_mmap. If an in-memory internal temporary table grows beyond the limits governed by these parameters, MySQL converts it to an on-disk temporary table. In Aurora MySQL versions earlier than 3.x, you can define the storage engine used for on-disk internal temporary tables as InnoDB or MyISAM. From Aurora MySQL version 3 onward, only the InnoDB storage engine is used for on-disk internal temporary tables.
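To see the values of these parameters in effect on an instance, you can inspect the corresponding system variables, for example:

SHOW GLOBAL VARIABLES
WHERE Variable_name IN ('tmp_table_size', 'temptable_max_ram', 'temptable_use_mmap', 'temptable_max_mmap');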
In MySQL 8.0, and thus in Aurora MySQL version 3, InnoDB uses two types of temporary tablespaces:
Session temporary tablespaces – These tablespaces store user-created temporary tables and on-disk internal temporary tables when InnoDB is configured as the storage engine for on-disk internal temporary tables. Session temporary tablespaces are allocated to a session from a pool of temporary tablespaces. When a session disconnects, its temporary tablespaces are truncated and released back to the pool. In previous releases, temporary tables were created in the global temporary tablespace (ibtmp1), which did not return disk space to the operating system after temporary tables were truncated or dropped.
The global temporary tablespace – The global temporary tablespace (ibtmp1) now stores rollback segments for changes made to user-created temporary tables. This temporary tablespace data file is auto-extending and increases in size as necessary. You can use the same query listed above (for Aurora version 2) to check the size of this global temporary tablespace.
To reclaim disk space occupied by the global temporary tablespace data file, you need to restart the writer instance of your Aurora cluster. Restarting the writer instance removes and recreates the global temporary tablespace data file.
In Aurora version 3, by default, InnoDB on-disk internal temporary tables and temporary tablespaces files reside on the Aurora cluster volume, and non-InnoDB temporary tables and files reside on the local storage provided by the Amazon Aurora MySQL instance.
Binary logs
The binary log (or binlog) contains events that describe database changes such as table creation operations or changes to table data.
In Aurora MySQL, binary logs are useful for:
Replicating from Aurora MySQL to another MySQL-compatible database.
Replicating from Aurora MySQL to a non-MySQL database using change-data-capture (CDC) tools such as AWS Database Migration Service (AWS DMS).
Extracting CDC records from Aurora MySQL for various purposes, such as establishing integration between Aurora and a downstream message/event-based system.
Binary logging is disabled by default in Amazon Aurora MySQL (log_bin = OFF). You can enable binary logs by setting binlog_format to Mixed, Statement, or Row in the cluster-level parameter group.
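After the parameter change takes effect, you can confirm the binary logging status on an instance with a quick check such as:

SELECT @@log_bin, @@binlog_format;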
If binlogs are enabled on the cluster, the space they consume in the cluster volume depends on various factors including:
The configured binary log retention period.
The volume of database changes being generated, such as table creation operations or changes to table data.
In some cases, issues with attached binary log replicas can cause binlog space to increase on the cluster volume. For example, if you use binlog-based cross-Region read replicas and a read replica falls behind in applying the binlogs for any reason, Aurora might need to temporarily store more binlogs on the source than it otherwise would.
You can check the binary log retention setting by running the mysql.rds_show_configuration stored procedure:
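-- Shows the current binlog retention setting, among other configuration values
CALL mysql.rds_show_configuration;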
The retention period is expressed in hours. If required, you can change the binlog retention period by using the mysql.rds_set_configuration stored procedure. In the following example, we set the retention period of binary logs to 24 hours:
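-- Sets the binlog retention period to 24 hours
CALL mysql.rds_set_configuration('binlog retention hours', 24);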
You can run the SHOW BINARY LOGS command at the primary instance to see the binary logs present along with the size of each one:
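SHOW BINARY LOGS;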
You can use the following CloudWatch metrics at the cluster level in Amazon Aurora MySQL to monitor the number of binary logs and their size:
SumBinaryLogSize – The total size of the binary logs in bytes
NumBinaryLogFiles – The number of binary logs stored in the cluster
Relay logs
During binary log replication in MySQL, the I/O receiver thread on the replica server connects to a primary server, reads binary log events from the primary, and copies them over to a local log called the relay log. The SQL applier thread reads events from the relay log and applies them as fast as possible. In other words, the relay log is a copy of the binary log waiting to be applied on the replica.
Once the SQL thread processes the events from a given relay log file, it automatically deletes that file since it's no longer needed.
In certain cases, you might see storage space being occupied by relay logs even though your Aurora cluster isn't actively replicating. For example, you might have configured your Aurora cluster as the replica of another MySQL server in the past, but you stopped replication without resetting it completely. In such cases, there might still be relay logs present on the Aurora MySQL cluster volume that are occupying storage space.
To verify this, you can run the SHOW REPLICA STATUS command (or SHOW SLAVE STATUS in 5.7-compatible versions) to check whether replication is configured even if it's not actively running. If the command produces an empty output, it means replication is not configured, and thus the cluster should not contain any relay logs.
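For example:

-- Aurora MySQL version 3 (MySQL 8.0-compatible)
SHOW REPLICA STATUS\G
-- Aurora MySQL version 2 (MySQL 5.7-compatible)
SHOW SLAVE STATUS\G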
If you see an output showing some replication configuration (as shown in the following example) and other replication status counters, it means the replication might have been stopped but replication metadata is still there, and the cluster could still contain relay logs that are taking up space on the Aurora cluster volume.
*************************** 1. row ***************************
Slave_IO_State:
Master_Host: 10.136.6.91
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysqld@sam_prd2-bin.000712
Read_Master_Log_Pos: 418325560
Relay_Log_File: relaylog.066015
Relay_Log_Pos: 4
Relay_Master_Log_File: mysqld@sam_prd2-bin.000712
Slave_IO_Running: No
Slave_SQL_Running: No
….
**********************************************************
Unfortunately, MySQL doesn't provide granular metadata about the relay log files, so you can't check the exact number or size of relay logs stored in the cluster.
To clear the replication metadata and delete the relay log files, you can call the following stored procedures on the writer instance of your Aurora cluster:
For Amazon Aurora MySQL version 2, use the mysql.rds_reset_external_master procedure
For Amazon Aurora MySQL version 3, use the mysql.rds_reset_external_source procedure
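For example, run the following on the writer instance:

-- Aurora MySQL version 2
CALL mysql.rds_reset_external_master;
-- Aurora MySQL version 3
CALL mysql.rds_reset_external_source;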
Aurora clones
Aurora cloning is a quick and cost-effective way to create a duplicate of an Aurora cluster volume and its data.
Aurora uses a copy-on-write protocol to create cluster clones. When a clone is created, Aurora creates a new standalone cluster storage volume, but it doesn't copy all the original data pages to the new volume. Instead, as long as the pages remain unmodified, a cloned page is simply a pointer to the original page. When a page is modified on either side, Aurora creates a copy of that page on the clone, hence the term "copy-on-write." This makes clones fast to create and more space-efficient than physically copying the data using other techniques, such as restoring a snapshot, Point-in-Time Restore (PITR), or logical dump-and-restore. You can also create clone chains (clones of clones) and even clone clusters across different AWS accounts.
Clones use a small amount of additional space to instantiate a volume. Outside of that small overhead, additional storage is allocated only when changes are made to either the source or the cloned cluster(s). Clones are independent from the original cluster and from one another in terms of data replication and performance. Aurora doesn't automatically replicate data between these entities, and the workloads on the clones (or the original cluster) don't affect one another.
Since the cloned cluster storage volume originally shares the vast majority of its pages with the source volume, those pages are only included in the VolumeBytesUsed metric of the source volume. In a cloned cluster, the VolumeBytesUsed metric is initially near zero. We discuss metrics later in this post. The clone volume’s VolumeBytesUsed metric only includes the additional storage that was allocated due to data changes made after the clone is created.
If the source cluster is deleted, the pages that were still shared are billed to the remaining active (live) clones, which means VolumeBytesUsed can increase significantly even without substantive writes to the cloned clusters.
The deletion or creation of more clones in that chain will cause another redistribution of pages to the remaining clones. For more information, refer to How Aurora cloning works.
Therefore, if you observe a large discrepancy between the VolumeBytesUsed of the clusters and actual tablespace sizes, it’s worth checking whether your cluster is part of a clone chain.
CloudWatch metrics
The following are useful CloudWatch metrics for observing local storage and cluster volume utilization:
FreeLocalStorage – This metric lets you monitor the local storage space associated with each Aurora instance. The amount of local storage is tied to the instance class, so if you need more local storage, you need to use a larger instance. For more information, refer to What is stored in Aurora MySQL-Compatible local storage, and how can I troubleshoot local storage issues.
VolumeBytesUsed – This metric provides the amount of billed storage used by your Aurora DB cluster. As noted previously, the cluster volume usage includes InnoDB tablespaces, binary logs, and relay logs. Note that this is a billing metric that might not always reflect the amount of data actually present in the cluster, such as when using copy-on-write clones.
AuroraVolumeBytesLeftTotal – This metric shows the remaining available space for the cluster volume, out of the 128 TB maximum. As the cluster volume grows, this value decreases. If it reaches zero, the cluster reports an out-of-space error. Note that this metric includes storage used for internal housekeeping and other allocations that don't affect your storage billing. As a result, the metric value won't be exactly equal to "128 TB minus VolumeBytesUsed".
For more details on these metrics, refer to Amazon CloudWatch metrics for Amazon Aurora.
Conclusion
In this post, we discussed common reasons for space utilization in an Aurora MySQL cluster volume and what queries and configurations you can use to find the root cause of space utilization and understand Aurora storage billing costs.
To learn more about Amazon Aurora MySQL, refer to Working with Amazon Aurora MySQL. For more information about the Aurora storage volume, refer to Introducing the Aurora Storage Engine.
About the Author
Raunak Gupta is a Cloud Support Engineer at AWS. He has been with AWS for over 4 years. He focuses primarily on Amazon RDS for MySQL and Amazon Aurora MySQL and is an Aurora subject matter expert. He works with enterprise customers, providing technical assistance on database operational performance and sharing database best practices.