Tuesday, February 22, 2011

Sizing SAN disks for performance

As part of the technical team at my work, I often get requests like: "Build me a database system with x GB of disk space." Initially I wanted to rebel against such requests and provide the poorest-performing disk I could (such as RAID5 on SATA). Instead, I usually oversized the disk performance to make sure the system met or exceeded end-user expectations.
As I have done this more, I have found a few tools that help me properly size environments. The first is knowing how many I/O operations a disk can perform. The list below is a basic guideline; I have seen higher and lower, but these are good averages for sizing.

Disk Type    IOPs
15k FC       180
10k FC       120

The second tool in my belt is the ability to look at an existing environment and understand how that translates to the new environment. In Windows, perfmon can be used to capture the counters Disk Reads/sec and Disk Writes/sec. These counters are the number of disk operations per second (IOPs) as seen by the OS. It conveniently separates out the numbers by read and write. What we do here is configure perfmon to save these counters to a CSV file, and let it collect for several hours (or days if possible).
Once we have the data collected, we open it in Excel and use the PERCENTILE function to calculate the 95th percentile. This means that 95% of all read and write activity can be satisfied with the IOPs calculated. This meets most performance requirements without oversizing your solution.
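If you would rather skip Excel, the same 95th-percentile calculation can be done in a short script. This is a minimal sketch, assuming a perfmon CSV export where the read and write counters appear as columns (the exact column header names vary with the machine and disk instance, so they are passed in as parameters here):

```python
import csv
import math

def percentile_95(values):
    """95th percentile with linear interpolation, matching Excel's PERCENTILE."""
    data = sorted(values)
    k = 0.95 * (len(data) - 1)
    lo = math.floor(k)
    hi = math.ceil(k)
    if lo == hi:
        return data[int(k)]
    return data[lo] + (data[hi] - data[lo]) * (k - lo)

def iops_from_perfmon(csv_path, read_col, write_col):
    """Return (95th pct Disk Reads/sec, 95th pct Disk Writes/sec) from a perfmon CSV."""
    reads, writes = [], []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                reads.append(float(row[read_col]))
                writes.append(float(row[write_col]))
            except (ValueError, KeyError):
                continue  # skip blank samples or malformed rows
    return percentile_95(reads), percentile_95(writes)
```

The interpolation in percentile_95 mirrors what Excel's PERCENTILE function does, so the numbers should line up with the spreadsheet method.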
Now that we have our 95th-percentile read and write numbers, it's time to do some math. These numbers are from the OS perspective and don't take into account the RAID penalty. Assuming we are using RAID10, we calculate the needed SAN IOPs as:
SAN IOPs = Disk Reads/sec + (2 * Disk Writes/sec)
This calculation is needed because writing to RAID10 requires the data to be written to 2 disks (requiring 2 transactions), but reads only need 1 transaction. There are more technical ways to describe it, but that's the gist.
Assuming we are using RAID5 instead, we calculate the needed IOPs as:
SAN IOPs = Disk Reads/sec + (4 * Disk Writes/sec)
Similar to the RAID10 example, except RAID5 requires 4 I/O operations per write: read the old data, read the old parity, write the new data, and write the new parity.

So, what does all this mean?
Lets take an example of a database system with the following stats from perfmon.
  • 95th% Disk Reads/sec=830
  • 95th% Disk Writes/sec=170
Assuming we want a RAID10 disk configuration, we calculate the required SAN IOPs:
SAN IOPs = Disk Reads/sec + (2 * Disk Writes/sec)
SAN IOPs = 830 + (2 * 170)
SAN IOPs = 830 + 340
SAN IOPs = 1,170
To determine the necessary number of disks, we simply divide the SAN IOPs by the IOPs rating of the disk type we are using. So for 15k FC disk we get 1170/180, or 6.5 disks. Round up to the nearest even number and we have 8 FC disks to support this database.
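Putting the whole sizing calculation together, here is a minimal sketch using the per-disk IOPs table and the RAID write penalties from this post (the numbers are the averages given above, not vendor specifications):

```python
import math

# Assumed per-disk IOPs averages from the table above
DISK_IOPS = {"15k FC": 180, "10k FC": 120}

# Write penalty: RAID10 writes hit 2 disks; RAID5 needs 4 I/Os per write
WRITE_PENALTY = {"RAID10": 2, "RAID5": 4}

def san_iops(reads_per_sec, writes_per_sec, raid="RAID10"):
    """Back-end SAN IOPs from the OS-level 95th-percentile counters."""
    return reads_per_sec + WRITE_PENALTY[raid] * writes_per_sec

def disks_needed(iops, disk_type="15k FC"):
    """Disk count, rounded up to the nearest even number so disks can pair up."""
    raw = math.ceil(iops / DISK_IOPS[disk_type])
    return raw if raw % 2 == 0 else raw + 1

# Worked example from the post: 830 reads/sec, 170 writes/sec on RAID10
total = san_iops(830, 170, "RAID10")
print(total, disks_needed(total, "15k FC"))
```

Running the example reproduces the math above: 1,170 SAN IOPs, which rounds up to 8 disks of 15k FC.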

SQL query to list all VMs and HW version

I had a need to find all of my VMware VMs of a certain hardware version in order to compare to another list. Since I have production, test, and DR virtual machines, it was also important to know the path in the VMware hierarchy so I could easily identify the VM types.

use VCenter
declare @VMs table (path varchar(500),name varchar(255), ver varchar(50), ID int, Parent int)

insert @VMs
select vci.name, vci.name, vci.version, vci.ID, ve.Parent_ID
from vpx_vm_config_info vci
INNER JOIN vpx_entity ve on vci.ID=ve.ID

declare @counter int
set @counter=0
while @counter < 8
begin
    update @VMs 
    set [path]=(select top 1 name from vpx_entity ve where ve.ID=Parent) + '\'+ [path]
    , Parent=(select top 1 Parent_ID from vpx_entity ve where ve.ID=Parent)
    where Parent is not null
    set @counter = @counter + 1
end

select [path],name,ver from @VMs
order by path 

Thursday, February 17, 2011

Rerun failed Avamar VMDK backup jobs

With our recent use of VMDK backups via Avamar, I have been annoyed at having to deal with at least one failure every night. Resolution would normally require logging into the Avamar console, finding the appropriate machine(s), and relaunching the backup. This assumes that it isn't a production-critical system and can launch a backup during the day. Compounding this is the relatively short retention cycle we have defined (2 weeks); we want to make sure all backups are kept current and viable.
A little research and I found that there is a command-line client for Avamar that runs on Linux. Reading through the MCCLI Programmer Guide, you will find simple command lines that can launch on-demand backups. So now I have a semi-automated method to initiate failed backups; I just need to identify which backups have failed.
The Avamar grid stores activity in a Postgres database. I am not sure if this is supposed to be used by admins, but it is fairly well laid out and navigable using pgAdmin. With a little trial and error, I was able to craft a SQL statement that would report back all VMDK backup failures and allow me to relaunch them.

Below is the script and query I used to automate this. The script runs the query and exports it to a file called rerun.sh. Cron is used to initiate the queryDB script, and a few minutes later, initiate the rerun script.


export PGHOST=<FQDN of grid here>
export PGPORT=5555
export PGDATABASE=mcdb
export PGUSER=viewuser
export PGPASSWORD=viewuser1
psql -tf /<Path to SQL command>/queryCMD -o /<Path to rerun command>/rerun.sh
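The cron scheduling mentioned above might look something like this (the times are placeholders for whatever window fits your environment, and the paths follow the same placeholder convention as the script):

```
# m  h  dom mon dow  command
0  2  *   *   *   /<Path to queryDB script>/queryDB
30 2  *   *   *   /bin/sh /<Path to rerun command>/rerun.sh
```

The gap between the two entries just needs to be long enough for the psql query to finish writing rerun.sh.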

PostgreSQL statement, queryCMD

select distinct '/usr/local/avamar/5.0.3-29/bin/mccli client backup-group-dataset '
    || '--domain=<VC Name>/VirtualMachines '
    || '--group-domain=<VC Name> '
    || '--group-name="' || group_name || '" --name=' || display_name 
from v_activities_2 
where (display_name, recorded_date_time) in
    (select DISTINCT b.display_name, max(b.recorded_date_time)
     from v_activities_2 b
     where b.group_name like 'Tier 5 VM%' 
       AND recorded_date_time > CURRENT_TIMESTAMP - interval '6 day' 
     group by b.display_name)
AND status_code_summary <> 'Activity completed successfully.'; 

Thursday, February 03, 2011

Rescan SCSI bus on Linux

I recently added a hard disk to a Linux system in my VMware environment and was waiting for a reboot window to allow the OS to see the new disk. I then realized that the disks are added as SCSI devices, which should enable hot-swap capabilities.

A little googling and I found the command
for host in /sys/class/scsi_host/host*; do echo "- - -" > "$host/scan"; done
(The often-quoted one-liner echo "- - -" > /sys/class/scsi_host/host*/scan only works when there is a single SCSI host, since the shell can't redirect to a glob that matches more than one file.)
This rescans the bus, and the new disk was immediately recognised. A quick fdisk and mkfs and it was ready to mount.

Tuesday, February 01, 2011

Passed my RHCSA exam! But not the RHCE

I am proud to announce that I just received the results for my RHCSA and I passed with a score of 300. It's my understanding that is 100%! Unfortunately, I only got a 91 for my RHCE.

Before anyone asks, I won't say what's on the test. I worked hard to take them, and I personally think this is one of the best testing methods I have seen. If Microsoft and other vendors would get away from their multiple-choice formats and follow this testing style, their certifications would mean a lot more.

There are two general areas that I feel I was completely unprepared for: iptables and SELinux. I struggled to get these working properly during the exam, which would explain my exceptionally low score. Even though I think I accomplished most of the individual requirements, the security aspects could have limited accessibility. If I get a computer working 100%, but you can't access it, then all the work is for nothing.

At least now I know where I stand, and I know what I feel comfortable with. Time to reschedule my retake!