Tuesday, February 22, 2011

Sizing SAN disks for performance

As part of the technical team at my work, I often get the following request: "Build me a database system with xGB disk space". Initially I wanted to rebel against such requests and provide the poorest-performing disk I could (such as RAID5 on SATA). Instead, I would often oversize the disk performance to make sure the system met or exceeded end-user expectations.
As I have done this more, I have found a few tools that help me properly size environments. The first is knowing how many I/O operations per second (IOPs) a disk can perform. The table below is a basic setup; I have seen higher and lower, but these are good averages for sizing.

Disk Type    IOPs
15k FC       180
10k FC       120

The second tool in my belt is the ability to look at an existing environment and understand how that translates to the new environment. In Windows, perfmon can be used to capture the counters Disk Reads/sec and Disk Writes/sec. These counters are the number of disk operations per second (IOPs) as seen by the OS. It conveniently separates out the numbers by read and write. What we do here is configure perfmon to save these counters to a CSV file, and let it collect for several hours (or days if possible).
Once we have the data collected, we open it in Excel and use the PERCENTILE function to calculate the 95th percentile. This means that 95% of the sampled intervals had read and write activity at or below the calculated IOPs, which meets most performance requirements without oversizing your solution.
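If you'd rather skip Excel, the same percentile math can be done in a few lines of Python. This is a sketch with a toy data set; the column names and CSV layout of a real perfmon log are assumptions you would adapt (e.g. load the Disk Reads/sec and Disk Writes/sec columns with csv.DictReader).

```python
from statistics import quantiles

def p95(values):
    # method="inclusive" uses linear interpolation, which matches
    # Excel's PERCENTILE / PERCENTILE.INC behavior.
    return quantiles(values, n=100, method="inclusive")[94]

# Toy sample standing in for hours of perfmon Disk Reads/sec data.
reads_per_sec = [640, 700, 820, 830, 760, 900, 410, 780, 805, 830]
print(p95(reads_per_sec))
```

The longer the capture window, the more the 95th percentile reflects real peak-hour load rather than a momentary spike.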
Now that we have our 95th-percentile read and write numbers, it's time to do some math. These numbers are from the OS perspective and don't take into account the RAID penalty. Assuming we are using RAID10, we calculate the needed SAN IOPs as:
SAN IOPs = Disk Reads/sec + (2 * Disk Writes/sec)
This calculation is needed because a write to RAID10 must go to both disks in a mirrored pair (2 back-end I/Os), while a read needs only 1. There are more technical ways to describe it, but that's the gist.
Assuming we are using RAID5 instead, we calculate the needed IOPs as:
SAN IOPs = Disk Reads/sec + (4 * Disk Writes/sec)
Similar to the RAID10 example, except each RAID5 write turns into 4 back-end I/Os: read the old data, read the old parity, write the new data, write the new parity.

So, what does all this mean?
Let's take an example of a database system with the following stats from perfmon.
  • 95th% Disk Reads/sec=830
  • 95th% Disk Writes/sec=173
Assuming we want a RAID10 disk configuration, we calculate the required SAN IOPs:
SAN IOPs = Disk Reads/sec + (2 * Disk Writes/sec)
SAN IOPs = 830 + (2 * 173)
SAN IOPs = 830 + 346
SAN IOPs = 1,176
To determine the necessary number of disks, we simply take the number of SAN IOPs and divide by the IOPs rating of the disk type we are using. So for 15k FC disk we get 1176/180, or 6.5 disks. Round up to the nearest even number (RAID10 consumes disks in mirrored pairs) and we have 8 FC disks to support this database.
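The last step can be sketched the same way. Assumptions here: the per-disk IOPs figures from the table above, and rounding up to an even count only because RAID10 needs mirrored pairs.

```python
import math

def disks_needed(san_iops, iops_per_disk, raid10=True):
    """Disks required to serve san_iops; RAID10 rounds up to a mirrored pair."""
    n = math.ceil(san_iops / iops_per_disk)
    if raid10 and n % 2:
        n += 1
    return n

print(disks_needed(1176, 180))  # → 8 (15k FC disks)
```

Running the same number against 10k FC (120 IOPs per disk) gives 10 disks, so the faster spindles save two slots in this example.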


Anonymous said...

Is the 700 on SATA a correct number?

Sean Fuller said...

I have a Best Practice slide from EMC that uses the following metrics:

140 I/O/sec 15K FC
120 I/O/sec 10K FC
60 I/O/sec SATA

But, I think those are conservative numbers.