If you work in the IT industry, you have probably come across the question of how well your network storage, or even your local storage, is performing. There are many reasons for looking into storage performance, including but not limited to:
- You are evaluating a new storage solution to deploy in your organization.
- You are trying to find the bottlenecks in your existing storage solution.
- You are a product engineer working on performance testing of a storage product.
Storage Performance Testing
Measuring the performance of a storage device can be easy, but finding the bottleneck in a storage solution can be a cumbersome task because there can be many hardware and software layers in a given solution. Let's take the example of a NAS storage product. The diagram below shows what a typical NAS storage product architecture looks like. When it comes to storage performance testing, you might be testing the complete solution or any single layer or component shown in the diagram. Regardless of the component you are testing, storage performance is measured using the following key metrics. Before we start digging into the details, please read the article on I/O size to better understand how I/O size can impact performance results.
IOPS (Input/Output Operations Per Second)
IOPS represents the number of I/O operations per second a storage device can perform. The four most common operations on a storage device are Read, Write, Re-Read, and Re-Write. All four can be further classified into two access patterns: sequential and random. As you know, data is read from or written to a storage device in blocks. A sequential read/write operation is one where the data blocks being accessed are right next to each other; this is normally the case when you copy large files. A random read/write operation is one where the blocks being accessed are located at different places on the storage device. I strongly suggest that you read Getting Hang of IOPS, which explains hard disk drive internals in detail. IOPS for a given storage device differ between read and write operations, with read IOPS typically much higher than write IOPS. Depending on the access pattern (sequential vs. random), read IOPS can vary further based on the type of storage device (hard disk vs. SSD).
There are different ways to calculate IOPS for the different layers in the above diagram.
Block Storage Devices
IOPS for non-mechanical storage devices such as SSDs, PCIe flash, and NVMe drives are normally published by the manufacturer on their spec sheets, but for hard disk drives this information is usually missing. You can calculate it theoretically using other details available on the hard drive spec sheet: you need the average latency and the average seek time. Once you have this information, use the following formula to calculate IOPS.
IOPS = 1/(Average Latency in seconds + Average Seek Time in seconds)
The screenshot below shows some of the performance data for HGST Ultrastar C10K1200 series hard disk drives.
Using the above formula, you can calculate read IOPS for this hard disk drive as follows:
- Convert average latency from ms to seconds: Average Latency = 3.0 / 1000 = 0.003
- Convert average seek time from ms to seconds: Average Seek Time = 4.6 / 1000 = 0.0046
- Apply the formula: IOPS = 1 / (0.003 + 0.0046) ≈ 132
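The calculation above can be sketched as a small helper function (the function name and the printed rounding are my own choices, not from any spec sheet):

```python
def hdd_iops(avg_latency_ms, avg_seek_ms):
    """Estimate theoretical IOPS for a hard disk drive.

    IOPS = 1 / (average latency in seconds + average seek time in seconds)
    """
    return 1.0 / (avg_latency_ms / 1000.0 + avg_seek_ms / 1000.0)

# HGST Ultrastar C10K1200: 3.0 ms average latency, 4.6 ms average seek time
print(round(hdd_iops(3.0, 4.6)))  # -> 132
```

Keep in mind this is a theoretical ceiling for random I/O; real workloads will land below it.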
For SSDs and other flash- or DRAM-based storage devices, you can find IOPS on the product spec sheet. Below is a screenshot of the SanDisk CloudSpeed Ultra SATA SSD spec sheet available on the SanDisk website.
As you can see in the screenshot, IOPS are listed as 80K/25K: 80K for random read and 25K for random write. Different manufacturers publish this data differently; some publish sequential read/write figures, while others publish random read/write. It makes sense for manufacturers to publish the highest possible number for a given performance criterion, and that is exactly what they do. To make sense of published IOPS, you also need to know the following details:
- What read/write percentage split was used during testing, if both read and write numbers are published as in the SanDisk example above. A 70% read / 30% write split is the most common, but some manufacturers use other splits.
- What I/O request (block) size was used during testing. In our example above, this information appears in fine print in the notes section: note 2 states that testing was performed using a 4KB block size.
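Given separate read and write IOPS figures, one common way to estimate IOPS for a mixed workload is a weighted harmonic mean (this estimation approach is my assumption, not something the spec sheets state):

```python
def blended_iops(read_iops, write_iops, read_fraction=0.70):
    """Estimate mixed-workload IOPS as a weighted harmonic mean.

    Each operation takes on average 1/IOPS seconds, so we weight the
    per-operation times by the read/write mix and invert the result.
    """
    write_fraction = 1.0 - read_fraction
    return 1.0 / (read_fraction / read_iops + write_fraction / write_iops)

# SanDisk CloudSpeed Ultra example: 80K random read, 25K random write,
# assuming the common 70% read / 30% write split
print(round(blended_iops(80_000, 25_000)))  # -> 48193
```

Note how the slower write side drags the blended figure well below the headline 80K number, which is why the test mix matters when comparing spec sheets.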
Hardware RAID Controller or Host Bus Adapter
Hardware RAID controller and host bus adapter (HBA) manufacturers publish IOPS figures in their marketing material as well, although you usually won't find them on the spec sheet, which instead lists throughput (in MB/s) for the device. Below is a screenshot for the LSI SAS 9300-16e 12Gb/s SAS HBA that highlights the IOPS the HBA supports. For RAID controllers, there is also a performance penalty to consider. A write operation in RAID is considered complete only when the mirrored writes (in the case of RAID 1) or parity writes (in the case of RAID 5 and 6) are completed. The extra time required to write the mirror or parity data is called the RAID penalty; it starts at 1 (meaning no penalty) and applies only to write operations. The RAID penalty depends on the RAID level being used, and the table below lists the penalty for each level.
| RAID Level | Write Penalty | Notes |
|---|---|---|
| RAID 0 (Stripe) | 1 | There is no parity to calculate, so there is no associated write penalty. The read penalty is 1 and the write penalty is 1. |
| RAID 1 (Mirror) | 2 | The write operation has to be mirrored on two devices, giving a write penalty of 2; the read penalty is still 1. |
| RAID 10 (Mirror with Stripe) | 2 | With RAID 10, the penalty stays the same as RAID 1. |
| RAID 5 (Single Parity) | 4 | Four operations are performed for each write: read old data block, read old parity block, write new data block, and write new parity block. The read penalty stays 1 but the write penalty jumps to 4. |
| RAID 6 (Double Parity) | 6 | With double parity, each write leads to reading old data, reading old parity 1, reading old parity 2, writing new data, writing new parity 1, and writing new parity 2. This gives a write penalty of 6; the read penalty is still 1. |
To calculate the maximum IOPS for a RAID set, you can use the following formula:
IOPS = (Disk IOPS * Number of Disks) / RAID Penalty
Taking the example of RAID 5 with six HGST Ultrastar C10K1200 disk drives (132 IOPS each), the total RAID IOPS will be:
Write IOPS = (132 * 6) / 4
Write IOPS = 198
Read IOPS = (132 * 6) / 1
Read IOPS = 792
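The formula and penalty table can be combined into a short sketch (the dictionary and function names are mine; the penalty values come from the table above):

```python
# Write penalties per RAID level, from the table above; reads have penalty 1.
RAID_WRITE_PENALTY = {"RAID 0": 1, "RAID 1": 2, "RAID 10": 2,
                      "RAID 5": 4, "RAID 6": 6}

def raid_iops(disk_iops, num_disks, raid_level):
    """Return (read_iops, write_iops) for a RAID set.

    IOPS = (Disk IOPS * Number of Disks) / RAID Penalty,
    where the read penalty is 1 for all levels listed.
    """
    total = disk_iops * num_disks
    return total, total / RAID_WRITE_PENALTY[raid_level]

# RAID 5 across six HGST Ultrastar C10K1200 drives (132 IOPS each)
read_iops, write_iops = raid_iops(132, 6, "RAID 5")
print(read_iops, write_iops)  # -> 792 198.0
```

This matches the worked example: the same six disks deliver four times fewer write IOPS under RAID 5 than read IOPS, purely because of the parity penalty.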
Latency is the next