IOPS Calculator
This material consists of two parts. Part One describes the principles of operation of the IOPS Calculator. Part Two contains explanations and an overview of the main concepts related to the operation of the server disk subsystem.
Part One. Description of the IOPS Calculator
What can the Calculator do?
How to use the Calculator?
How did we build the Calculator?
Main idea behind the Calculator
Two scenarios for using the Calculator
Example of the first scenario
Example of the second scenario
Part Two. Disk subsystem, basic concepts
Disk Subsystem Performance Metrics: IOPS and Throughput
Factors Affecting Disk Subsystem Performance
How Many IOPS Do Typical Applications Need
Types of storage devices
Interfaces of storage devices
Types of controllers
The Impact of Queue Depth on Performance
Types of RAID arrays
Conclusion
Part One. Description of the IOPS Calculator
What can the Calculator do?
The IOPS Calculator allows you to calculate the performance, capacity and cost of a server disk subsystem.
To calculate, you need to set the parameters of the disk subsystem:
- controller model;
- drive model;
- number of drives in a RAID array;
- RAID level;
- percentage of read and write operations.
As a result you will get:
- graph of the disk subsystem performance for random access operations depending on the total queue depth;
- graph of disk subsystem performance for sequential access operations depending on the queue depth of a single-threaded load;
- usable capacity of the RAID array;
- configuration cost (drives plus controller).
The disk subsystem parameters and calculation results are entered into a summary table, where you can compare up to six different disk subsystem configurations by performance, capacity and cost.
How to use the Calculator?
Set disk subsystem parameters:
- type of storage devices (SSD or HDD) and their interface (SATA, SAS or NVMe);
- controller model (select from the list);
- drive model (select from the list);
- number of drives in the array (maximum value depends on the selected controller);
- RAID level (available options are determined by the number of drives and the selected controller);
- percentage of read and write operations.
As a result of the calculation, the left graph will show the dependence of the array performance on the total queue depth for random access operations (4KB block).
By changing the "Number of load threads (T)" and "Queue depth per thread (Q)", you will see the performance value on the graph corresponding to the current values of T and Q.
The right graph will show the dependence of the array performance on the queue depth of a single-threaded load for sequential access operations (1MB block).
By changing the "Queue depth per thread (Q)", you will see the performance value corresponding to the current Q value on the graph.
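The total queue depth seen by the array is simply the product of these two parameters. A minimal sketch (the function name below is our own, not part of the Calculator):

```python
def total_queue_depth(threads: int, queue_depth_per_thread: int) -> int:
    """Total number of requests in flight: T load threads, each keeping Q requests queued."""
    return threads * queue_depth_per_thread

# Example: 4 load threads with a queue depth of 8 each give a total queue depth of 32.
print(total_queue_depth(4, 8))  # 32
```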
You can compare several configurations. Click the "+" sign in the bottom table to save the current configuration. Changing any parameter will create a new configuration.
How did we build the Calculator?
Using the iometer and diskspd utilities, we measured the performance of RAID arrays of levels 0, 1, 10, 5, 6, 50, 60 for different queue depths, using different models of drives and controllers.
Based on the data obtained, we constructed mathematical models of the dependence of performance on queue depth for the drives and controllers that participated in the testing.
Based on the assumption that drive models in the same series should have the same performance vs. queue depth behavior, we used these models for the remaining drives based on their performance specifications.
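To illustrate the general idea of such a model (the curve shape and the constant k below are illustrative assumptions, not the actual fitted model used by the Calculator), performance can be approximated as rising from its queue-depth-1 value toward the saturation value:

```python
def iops_at_queue_depth(iops_qd1: float, iops_max: float, qd: int, k: float = 8.0) -> float:
    """Illustrative saturation curve: performance grows from the QD=1 value toward the
    maximum as the queue depth increases; k controls how quickly saturation is reached."""
    return iops_qd1 + (iops_max - iops_qd1) * (qd - 1) / (qd - 1 + k)

# Hypothetical drive: 15,000 IOPS at QD=1, 150,000 IOPS at saturation.
for qd in (1, 4, 16, 64, 256):
    print(qd, round(iops_at_queue_depth(15_000, 150_000, qd)))
```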
Main idea behind the Calculator
Existing RAID performance formulas calculate only the maximum possible performance of an array, although actual performance can vary widely depending on the number of requests the array is processing simultaneously (the queue depth).
If the queue depth is 1, the array will operate at the speed of one drive or slower.
For the array to operate at maximum performance, a sufficient queue depth is required. Let's call this queue depth the saturation depth.
In real server workloads, the current queue depth is usually somewhere between one and the saturation depth. How can we tell in this case whether the disk subsystem is delivering the expected performance?
If we know what performance value should correspond to any queue depth value in the range from one to the saturation depth for a given disk subsystem configuration, we can answer this question.
This is the main idea and novelty of the Calculator.
Two scenarios for using the Calculator
The first scenario can be used when designing the disk subsystem of a new server.
Using the Calculator, you can create several valid configurations with specific drive and controller models and compare them in terms of maximum performance, usable capacity and cost.
Additionally, note that the standard formulas for maximum RAID performance (such as the coefficient table in Part Two) give underestimated results for write operations on parity arrays built from SSDs. Our Calculator is more accurate here.
The second scenario is to test the performance of the disk subsystem under real load.
Using operating system monitoring tools, determine the current values of queue depth, throughput, and percentage of read and write operations while the server is running.
"Reproduce" the server's disk subsystem configuration in the Calculator by selecting the required drive and controller models, the number of drives, and the RAID level.
Set the percentage of read and write operations and set the values of the "Number of load threads (T)" and "Queue depth per thread (Q)" parameters so that the total queue depth matches the queue depth of the real load.
Compare the performance of the server's disk subsystem under real load with the value given by the Calculator. If the numbers are close, the disk subsystem is delivering the expected performance. If the difference is significant, you should look for the cause of the slowdown.
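A minimal sketch of this comparison (the 10% tolerance is an arbitrary assumption; the measured value comes from your monitoring tools and the expected value from the Calculator):

```python
def check_disk_performance(measured_iops: float, expected_iops: float, tolerance: float = 0.10) -> str:
    """Compare IOPS measured under real load with the Calculator's value for the
    same configuration, queue depth and read/write mix."""
    shortfall = (expected_iops - measured_iops) / expected_iops
    if shortfall <= tolerance:
        return "OK: the disk subsystem delivers the expected performance."
    return f"Check needed: measured IOPS is {shortfall:.0%} below the expected value."

# Example: 92,000 IOPS measured against 100,000 IOPS predicted is within a 10% tolerance.
print(check_disk_performance(measured_iops=92_000, expected_iops=100_000))
```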
Example of the first scenario
Let's assume you are designing a server for 1C with the following requirements for the disk subsystem:
- performance of at least 100,000 IOPS;
- read/write ratio of 70/30;
- usable array capacity of 6 to 8 TB;
- drive endurance of 1.0 DWPD or higher.
Using the Calculator, you can create several suitable configurations:
| Configuration number | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| Drive model | D3-S4520 | D3-S4520 | D3-S4520 | PM1653 | D7-P5520 | D7-P5520 | D7-P5520 |
| Drive interface | SATA | SATA | SATA | SAS | NVMe | NVMe | NVMe |
| Drive capacity | 3.84 TB | 1.92 TB | 960 GB | 1.92 TB | 1.92 TB | 3.84 TB | 7.68 TB |
| Number of drives | 4 | 5 | 8 | 5 | 5 | 4 | 2 |
| Controller | C621 | C621 | C621 | 9361-8i | P. VROC | S. VROC | 9560-8i |
| RAID level | 10 | 5 | 5 | 5 | 5 | 10 | 1 |
| Performance, IOPS | 131,000 | 137,000 | 186,000 | 435,000 | 534,000 | 690,000 | 555,000 |
| Array capacity, TB | 7.68 | 7.68 | 6.72 | 7.68 | 7.68 | 7.68 | 7.68 |
| Cost of drives and controller, RUB | 213,000 | 157,000 | 187,000 | 294,000 | 223,000 | 247,000 | 284,000 |
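The same requirements check can also be written down programmatically. The sketch below simply re-checks a few rows of the table above against the stated requirements; it is not part of the Calculator:

```python
# Each candidate: (label, IOPS, usable capacity in TB, cost in RUB) - values from the table above.
candidates = [
    ("1: 4x D3-S4520 3.84TB, RAID 10", 131_000, 7.68, 213_000),
    ("3: 8x D3-S4520 960GB, RAID 5",   186_000, 6.72, 187_000),
    ("5: 5x D7-P5520 1.92TB, RAID 5",  534_000, 7.68, 223_000),
]

# Requirements from the example: at least 100,000 IOPS and 6-8 TB of usable capacity.
suitable = [c for c in candidates if c[1] >= 100_000 and 6 <= c[2] <= 8]
for label, iops, capacity_tb, cost in sorted(suitable, key=lambda c: c[3]):
    print(f"{label}: {iops:,} IOPS, {capacity_tb} TB, {cost:,} RUB")
```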
Despite the price difference, it is worth considering the options with NVMe drives, which provide roughly five times the performance headroom for the future.
Example of the second scenario
This section is still under development.
Part Two. Disk subsystem, basic concepts
Disk subsystem performance metrics: IOPS and throughput
The server's disk subsystem consists of a controller and drives, which are combined into a RAID array.
Applications running on the server send requests to the disk subsystem to read or write data. The stream of requests coming from one application is called a load thread.
Several applications can access the disk subsystem at the same time, in which case it processes multiple load threads.
The number of requests that are simultaneously being processed by the disk subsystem is called the queue depth. The order in which requests in the queue are processed is determined by the disk subsystem.
When processing requests, the disk subsystem performs data reading or writing operations (or, in other words, input/output operations).
The performance of the disk subsystem (the speed of reading and/or writing data) is assessed by two indicators (metrics):
- Throughput: the amount of data the disk subsystem can read and/or write per unit of time, usually measured in megabytes per second (MB/s).
- IOPS (Input/Output Operations Per Second): the number of read and/or write operations on fixed-size data blocks that the disk subsystem can perform per second.
Throughput and IOPS are related by the following relationship: Throughput (MB/s) = IOPS × Block size (MB).
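For example (a small illustrative helper, not part of the Calculator):

```python
def throughput_mb_s(iops: float, block_size_kb: float) -> float:
    """Throughput (MB/s) = IOPS x block size, with the block size given here in KB."""
    return iops * block_size_kb / 1024

# Example: 100,000 IOPS with a 4 KB block corresponds to roughly 390 MB/s.
print(round(throughput_mb_s(100_000, 4)))  # 391
```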
There are two types of input-output operations: sequential access operations and random access operations.
Sequential access operations read or write data that is located on the drive sequentially, one block after another.
Random access operations read or write data located at arbitrary places on the drive.
To evaluate the performance of the disk subsystem for sequential access operations, the value of its throughput in MB/s is used, since in this case we are primarily interested in the volume of transferred data, and not the number of input-output operations.
To evaluate the performance of the disk subsystem for random access operations, the IOPS (input/output operations per second) indicator is used, since in this case we are primarily interested in the number of data blocks read or written, and not the total volume of this data. IOPS is a critical characteristic for systems that work with a large number of small operations.
Thus, the throughput in MB/s and the value of IOPS are two metrics of the disk subsystem performance. The throughput in MB/s characterizes the speed of sequential access operations, and the value of IOPS characterizes the performance of random access operations.
Factors Affecting Disk Subsystem Performance
The disk subsystem throughput for sequential access operations (MB/s) depends on:
- interface between the server and the disk subsystem controller (may be limiting);
- controller interface (may limit if slower than the drive interface);
- type of storage devices (hard disk drives (HDD) are an order of magnitude slower than solid-state drives (SSD));
- drive interface (SATA < SAS3 < SAS4 < PCIe 3.0 < PCIe 4.0 < PCIe 5.0);
- drive performance (proportional);
- number of drives in a RAID array (proportional, except for arrays with parity);
- RAID array level (insignificant, except for "mirrored" arrays);
- queue depth (insignificant);
- the ratio of read and write operations (significant for SSDs).
The performance of the disk subsystem for random access operations (IOPS) depends on:
- interface between the server and the disk subsystem controller (may be limiting);
- controller performance (may limit);
- controller interface (may limit if slower than the drive interface);
- type of storage devices (hard disk drives (HDD) are two orders of magnitude slower than solid-state drives (SSD));
- drive interface (SATA < SAS3 < SAS4 < PCIe 3.0 < PCIe 4.0 < PCIe 5.0);
- drive performance (proportional);
- number of drives in a RAID array (proportional);
- RAID array level (for write operations only);
- queue depth (very significant);
- the ratio of read and write operations (significant for solid-state drives);
- block size (insignificant for block size <8KB);
- write caching (significant for hard drives).
In the following, we will take a closer look at the impact of the most significant factors on performance.
How Many IOPS Do Typical Applications Need
The following table shows approximate IOPS values when running some typical applications.
| Server role | Recommended IOPS | Block size | Read/write ratio |
| File server | 1,000 – 10,000 | 64 KB – 1 MB | 50/50 |
| Web server | 5,000 – 20,000 | 8 KB – 64 KB | 80/20 |
| Database, medium (OLTP) | 10,000 – 50,000 | 4 KB – 8 KB | 70/30 |
| Database, large (OLTP) | 100,000+ | 4 KB – 8 KB | 70/30 |
| Virtual machine (1 VM) | 1,000 – 5,000 | 4 KB – 64 KB | 60/40 |
| Virtualization server (10–20 VMs) | 20,000 – 100,000+ | 4 KB – 64 KB | 60/40 |
Types of storage devices
There are two types of drives used in disk subsystems: hard disk drives (HDD) and solid-state drives (SSD).
Hard drive performance falls within the range of:
- 100–300 MB/s for sequential access operations;
- 100–350 IOPS for random access operations.
SSD performance falls within the range of:
- 500–14,000 MB/s for sequential access operations;
- 10,000–2,500,000 IOPS for random access operations.
Thus, solid-state drives are an order of magnitude faster than hard drives for sequential access operations and two to four orders of magnitude faster for random access operations.
The maximum capacity of hard drives and solid state drives is approximately the same and approaches 40 terabytes.
Interfaces of storage devices
A drive interface is the method of connecting a drive to the disk subsystem controller. The interface defines how information is transferred between the drive and the controller. The term "drive interface" refers to both the physical connector and the data transfer protocol.
There are three main types of storage interfaces: SATA, SAS and NVMe. SATA and SAS are used with both hard drives and solid-state drives, while NVMe (Non-Volatile Memory Express) is designed specifically for solid-state drives connected via the PCI Express (PCIe) bus.
Over the course of its development, each interface has gone through several standards that differ in throughput.
For NVMe drives, specifications usually state the PCIe bus standard at which the drive operates (for example, PCIe 5.0) rather than the NVMe interface standard (for example, NVMe Gen5). We follow the same convention here.
In the context of our topic, important characteristics of interfaces are their throughput and maximum queue depth of the drive (defined by the interface standard).
Interface bandwidth limits the performance of the drive for random and sequential data access. The following table lists the main characteristics of the interfaces, as well as the maximum possible IOPS and MB/s values for them.
| Storage interface | Interface bandwidth | Maximum drive queue depth | Max IOPS through interface (4 KB block) | Max MB/s through interface (1 MB block) |
| SATA3 | 6 Gb/s | 32 | 150,000 | 600 |
| SAS3, 1 port | 12 Gb/s | 256 | 300,000 | 1,200 |
| SAS4, 1 port | 24 Gb/s | 256 | 600,000 | 2,400 |
| PCIe 3.0 x4 | 4 GB/s | 65,536 | 1,000,000 | 4,000 |
| PCIe 4.0 x4 | 8 GB/s | 65,536 | 2,000,000 | 8,000 |
| PCIe 5.0 x4 | 16 GB/s | 65,536 | 4,000,000 | 16,000 |
The maximum IOPS and MB/s performance of a drive, as specified by the manufacturer in the drive specification, never exceeds the interface bandwidth. However, if the drive interface is of a newer standard than the controller interface, the controller interface bandwidth may limit the drive performance. For example, if you connect an NVMe drive that supports the PCIe 4.0 bus standard to a controller with a PCIe 3.0 interface, the drive performance will be limited to 1 million IOPS and 4 GB/s. The same applies to different generations of the SAS interface.
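This bottleneck rule can be expressed as a simple minimum over the path. The sketch below uses the interface limits from the table above and a made-up drive rating:

```python
# Maximum IOPS allowed by each interface for a 4 KB block (values from the table above).
INTERFACE_MAX_IOPS = {
    "SATA3": 150_000, "SAS3": 300_000, "SAS4": 600_000,
    "PCIe 3.0 x4": 1_000_000, "PCIe 4.0 x4": 2_000_000, "PCIe 5.0 x4": 4_000_000,
}

def effective_drive_iops(drive_spec_iops: int, drive_interface: str, controller_interface: str) -> int:
    """The slowest link in the path (drive, drive interface, controller interface) sets the limit."""
    return min(drive_spec_iops,
               INTERFACE_MAX_IOPS[drive_interface],
               INTERFACE_MAX_IOPS[controller_interface])

# A hypothetical PCIe 4.0 drive rated at 1,500,000 IOPS behind a PCIe 3.0 controller port:
print(effective_drive_iops(1_500_000, "PCIe 4.0 x4", "PCIe 3.0 x4"))  # 1,000,000
```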
Another important point about interface bandwidth concerns SAS SSDs. Their specifications usually state the maximum performance when connected via two ports. However, in servers SAS drives are almost always connected via only one port, so the single-port bandwidth limit must be taken into account when estimating expected performance.
Types of controllers
A disk subsystem controller is a device that provides data exchange between the server's central processor and the disk subsystem drives.
Main components of the controller:
| Host interface | The communication channel between the server processor and the controller. Insufficient host interface bandwidth can limit the performance of the disk subsystem. |
| Controller processor | A controller chip that passes operating system commands to the drives, manages data transfers, and may also implement RAID functionality. Insufficient processor performance can limit random access performance (IOPS). |
| Drive ports | Connectors on the controller to which drives with the corresponding interface type are connected via cables. Controllers usually have 4 to 24 ports. |
Controllers differ in the type of interface of supported drives.
SATA controllers can only work with SATA drives. They come in several implementations:
- Built into the CPU. Intel Xeon 6 processors, as well as AMD EPYC, have a built-in SATA controller. These controllers do not support RAID arrays, so a RAID array can only be created by software means of the operating system (Software RAID).
- Built into the server motherboard chipset. Systems based on Intel Xeon Scalable processors are equipped with SATA controllers of this type. Typically, such a controller has 8 ports and supports RAID levels 0, 1, 10 and 5 (RAID 10 can only consist of 4 drives). The performance of these controllers is limited to about 250,000 IOPS, and the host interface bandwidth is 2 GB/s.
SAS controllers support drives with SAS and SATA interfaces and are usually implemented as an expansion card installed in a PCIe slot. The number of ports ranges from 4 to 24. The host interface is 8 PCIe 3.0 lanes with a total throughput of 8 GB/s. Maximum performance is no more than 1 million IOPS. RAID levels 0, 1, 10, 5, 6, 50 and 60 are supported. These controllers can have up to 2 GB of built-in cache memory.
Tri-Mode controllers support drives with SAS, SATA and NVMe interfaces and are the next step in the evolution of SAS controllers. The host interface has a higher bandwidth: 8 or 16 PCIe 4.0 lanes (16 GB/s or 32 GB/s). The controller processor is also more powerful, handling up to 4 million IOPS. The same RAID levels (0, 1, 10, 5, 6, 50, 60) are supported. The number of ports ranges from 8 to 24, but keep in mind that each NVMe drive connected to such a controller "occupies" 4 ports. This is necessary because the bandwidth of the NVMe interface is at least 4 times greater than that of SAS: 8 GB/s for PCIe 4.0 x4 versus 12 Gb/s for SAS3 or 24 Gb/s for SAS4.
NVMe drives can also be connected directly to the PCIe lanes of the server's CPU, in which case no controller is needed: the processor interacts with the drives directly, and there are no controller bandwidth limitations. RAID arrays can then only be created by the operating system (Software RAID), and performance is limited only by CPU resources and can reach several million IOPS.
An NVMe hardware controller is implemented in Intel Xeon Scalable and Intel Xeon 6 processors. It is built directly into the CPU and uses Intel's proprietary VMD (Volume Management Device) technology. Up to 24 drives are connected directly to the PCIe lanes of the CPU. RAID levels 0, 1, 10 and 5 can be supported, provided an Intel VROC (Virtual RAID on CPU) license key is installed in the server. Since the drives are connected to the processor without an intermediary, there is no controller throughput limitation, but, as our test results show, there is an overall performance limit of about 1 million IOPS.
The following table shows the maximum IOPS and MB/s values for different types of hardware controllers.
| Controller | Host interface | Host interface limit: bandwidth | Host interface limit: IOPS (4 KB block) | Controller processor performance, IOPS |
| SATA controller | DMI 3.0 x2 | 2 GB/s | 500,000 | 250,000 |
| SAS controller | PCIe 3.0 x8 | 8 GB/s | 2,000,000 | 1,000,000 |
| Tri-Mode controller | PCIe 3.0 x8 | 8 GB/s | 2,000,000 | 1,200,000 |
| Tri-Mode controller | PCIe 4.0 x8 | 16 GB/s | 4,000,000 | 1,600,000 |
| Tri-Mode controller | PCIe 4.0 x16 | 32 GB/s | 8,000,000 | 4,200,000 |
| Intel VMD controller | PCIe 3.0/4.0/5.0 x4 per drive | 4/8/16 GB/s per drive | 1/2/4 million per drive | 1,000,000 |
The Impact of Queue Depth on Performance
The number of requests to read and write data that are simultaneously processed by the disk subsystem (queue depth) has a very significant impact on its performance. The greater the array queue depth, the greater the number of requests per drive in the array.
Let's consider the impact of queue depth on the operation of solid-state drives and hard drives.
Solid State Drives
The performance of random access operations (reading and writing) for solid-state drives increases by 10 times or more when the queue depth increases from 1 to the saturation depth, since an increase in the number of requests allows more parallel channels of the drive's internal controller to be used and, as a result, provides simultaneous access to a larger number of flash memory pages.
The performance of sequential access operations (read and write) on solid-state drives depends only weakly on queue depth and typically reaches its maximum under a single-threaded workload with a queue depth of one to a few requests.
Hard drives
The performance of random access operations (reading and writing) for hard drives depends on the speed at which the head can position itself over the required block of data. As the queue depth increases, the head movement path is better optimized, the positioning time decreases, and the performance increases. SATA drives can handle up to 32 simultaneous requests, SAS drives – up to 256. At maximum queue depth, the performance of the drive increases approximately three times compared to single requests.
Hard drives are usually equipped with 128 to 512 MB of cache memory. If the cache is enabled, written data first goes to the cache and is then written to the disk in the background with maximum optimization of the head path, which increases performance several times. In this case the queue depth does not matter, since the data gets into the cache "instantly" and the write operation is immediately considered complete. The hard drive cache is volatile, so it requires power-loss protection.
An alternative to the drive cache is the controller cache. Enabling it for write operations (Write Back mode) gives an even greater performance boost.
The performance of sequential access operations (read and write) for hard disks is independent of queue depth and typically reaches its maximum values under single-threaded workloads with a queue depth of one.
Types of RAID arrays
A RAID array (Redundant Array of Independent Disks) is a combination of several physical drives into one logical array.
Advantages of RAID arrays:
- Increased reliability: information is duplicated across multiple drives or protected with parity blocks, so data survives the failure of one or more drives.
- Increased performance: data can be read from and written to multiple drives in the array simultaneously.
- Increased capacity: the space of multiple drives is combined into one large logical drive.
The most commonly used types (levels) of RAID arrays are: 0, 1, 10, 5, 6, 50, 60.
| RAID 0 | A striped array. Data is divided into blocks called strips; the block size is specified when the array is created. Blocks are written to the drives of the array in turn. A set of consecutive blocks, one per drive in the array, is called a stripe. RAID 0 provides maximum read and write performance but has no redundancy: if one drive fails, all data is lost. |
| RAID 1 | A mirrored array of two drives. Each drive in the array contains the same set of data. RAID 1 is twice as slow as RAID 0 for write operations, since data must be written twice. RAID 1 has redundancy, so the failure of one drive does not result in data loss. |
| RAID 10 | A mirror of two RAID 0 arrays, each containing the same data set. RAID 10 can consist of any even number of drives. It is twice as slow as RAID 0 for write operations. It tolerates the failure of up to half of the drives, provided they all belong to different mirror pairs. |
| RAID 5 | A striped array, like RAID 0, but a parity block is added to each stripe. Parity blocks are distributed evenly across all drives in the array. RAID 5 is 4 times slower than RAID 0 for write operations, since one write requires two reads (the old data block and the old parity block) and two writes (the new data block and the new parity block). This is true for hard drives; for solid-state drives the difference between RAID 5 and RAID 0 is smaller, because their read operations are an order of magnitude faster than their write operations. RAID 5 tolerates the failure of any one drive without data loss. Its advantage over RAID 10 is that only one "extra" drive is needed to provide redundancy. |
| RAID 6 | A striped array, like RAID 0, but two parity blocks are added to each stripe. The parity blocks are distributed evenly across all drives in the array. RAID 6 is 6 times slower than RAID 0 for write operations, since one write requires three reads (the old data block and two old parity blocks) and three writes (the new data block and two new parity blocks); again, this is true for hard drives, and the difference is smaller for solid-state drives. RAID 6 tolerates the failure of any two drives without data loss. Its advantage over RAID 5 is greater reliability. |
| RAID 50 | Similar to RAID 0, but the array members are RAID 5 arrays of the same size (from two to eight of them) instead of individual drives. The data blocks and parity block are written first to the first RAID 5, then to the second, and so on. The RAID 5 arrays within RAID 50 are called spans. RAID 50 has the same performance as RAID 5 but is more reliable, since it tolerates the failure of one drive in each span. RAID 50 arrays are typically used when the number of drives in an array is greater than 32 (the limit for RAID 5). |
| RAID 60 | Similar to RAID 0, but the array members are RAID 6 arrays of the same size (from two to eight of them) instead of individual drives. The data blocks and two parity blocks are written first to the first RAID 6, then to the second, and so on. The RAID 6 arrays within RAID 60 are called spans. RAID 60 has the same performance as RAID 6 but is more reliable, since it tolerates the failure of two drives in each span. RAID 60 arrays are typically used when the number of drives is greater than 32 (the limit for RAID 6). |
The following table provides coefficients for calculating the theoretical maximum performance and capacity of a RAID array of N drives; multiply each coefficient by the corresponding performance or capacity value of a single drive. SPAN is the number of spans in the array. A small sketch applying these coefficients is given after the table.
| Type (level) of the RAID array | RAID 0 | RAID 1 | RAID 10 | RAID 5 | RAID 6 | RAID 50 | RAID 60 |
| Random read (IOPS) | N | 2 | N | N | N | N | N |
| Random write (IOPS) | N | 1 | N / 2 | N / 4 | N / 6 | N / 4 | N / 6 |
| Sequential write (MB/s) | N | 1 | N / 2 | N - 1 | N - 2 | N - SPAN | N - 2 × SPAN |
| Sequential read, HDD (MB/s) | N | 1 | N / 2 | N - 1 | N - 2 | N - SPAN | N - 2 × SPAN |
| Sequential read, SSD (MB/s) | N | 2 | N | N | N | N | N |
| Usable array capacity (TB) | N | 1 | N / 2 | N - 1 | N - 2 | N - SPAN | N - 2 × SPAN |
| Number of tolerated drive failures | 0 | 1 | 1 to N / 2 | 1 | 2 | 1 to SPAN | 2 to 2 × SPAN |
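A minimal sketch that applies these coefficients for random access operations and usable capacity (the drive numbers in the example are illustrative; this is a simplification of the table above, not the Calculator's model):

```python
def raid_theoretical_max(level: str, n: int, drive_read_iops: float,
                         drive_write_iops: float, drive_tb: float, spans: int = 1) -> dict:
    """Theoretical maximum random performance and usable capacity of a RAID array of n drives,
    using the coefficients from the table above (write penalty 1/2/2/4/6/4/6 for RAID 0/1/10/5/6/50/60)."""
    write_penalty = {"0": 1, "1": 2, "10": 2, "5": 4, "6": 6, "50": 4, "60": 6}[level]
    capacity_drives = {"0": n, "1": 1, "10": n / 2, "5": n - 1, "6": n - 2,
                       "50": n - spans, "60": n - 2 * spans}[level]
    return {
        "random_read_iops": (2 if level == "1" else n) * drive_read_iops,
        "random_write_iops": n * drive_write_iops / write_penalty,
        "usable_tb": capacity_drives * drive_tb,
    }

# Example: RAID 5 of 5 SSDs, each rated at 90,000 read / 30,000 write IOPS and 1.92 TB.
print(raid_theoretical_max("5", 5, 90_000, 30_000, 1.92))
```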
Recall that maximum array performance is achievable only with sufficient queue depth (saturation depth). This dependence is especially strong for SSD drives.
Below are explanations for the table.
Random read (performance of read operations during random access to data). For all types of RAID arrays, it is the same and proportional to the number of drives, since data can be read from each drive in the array at the same time.
Random write (performance of write operations during random access to data). Maximum for RAID 0. Half as much for mirrored arrays, since data must be written twice. Four times lower for RAID 5 and RAID 50, since writing one block requires four operations. Six times lower for RAID 6 and RAID 60, since writing one block requires six operations. Important: parity arrays must be initialized before use so that correct parity blocks are formed. Otherwise, each new write requires reading the entire stripe, i.e. all drives in the array are involved instead of two or three.
Sequential write (performance of write operations during sequential access to data). Maximum for RAID 0. Half as much for mirrored arrays, since data must be written twice. For parity arrays it is lower by the fraction taken up by parity blocks (because the data being written is usually larger than the stripe size, there is no need to pre-read the old data and parity blocks; only the new data and parity are written).
Sequential read for an array of hard drives (performance of read operations during sequential access to data). Maximum for RAID 0. Half as much for mirrored arrays, since data is read from only one drive of each mirrored pair. For parity arrays it is lower by the fraction taken up by parity blocks, since the disk heads make an "idle" pass over them.
Sequential read of an array of solid-state drives (performance of read operations during sequential access to data). For all types of RAID arrays, it is the same and proportional to the number of drives, since data is read simultaneously from each drive in the array. Achievable only at a certain queue depth.
Usable capacity of the array. Maximum for RAID 0. Half as much for mirrored arrays. For parity arrays, it is smaller by the total volume of the parity blocks.
Conclusion
We hope you find the Calculator useful.
We would appreciate any advice on how to improve the Calculator.
Send your suggestions to info@team.ru
Thank you!
