Wednesday, December 02, 2009

Top 10 VMware Consumers Report

If you have even a small VMware environment, you will eventually worry about resource consumption in your VM clusters. Specifically, I was concerned about one or more systems behaving poorly and stealing all the CPU, memory, disk, and network resources. So the idea of a top 10 report built on the VCenter performance statistics came to mind.

Step 1: Find the Cluster ID

This assumes you have more than one cluster in your environment, so you will need to select from them. A simple SQL query will give you the name and ID of the various clusters.

SELECT ID, Name
FROM vpxv_entity
WHERE Type_ID = 3
ORDER BY Name

Step 2: Find the top 10 consumers

Now that you know which cluster you want, put its ID in the @ClusterID variable at the top of the query. The report below looks at the average CPU utilization for the last 7 days for all systems in your cluster and returns the top 10 consumers. This query performs a lot of work, and if your database isn't running optimally it will take a long time to complete, so be careful when you first run it.

DECLARE @ClusterID int
SET @ClusterID = <cluster ID from Step 1>

SELECT TOP 10 v.VMID
FROM vpxv_VMs v (NOLOCK)
INNER JOIN vpxv_entity e (NOLOCK) ON v.HostID = e.ID
INNER JOIN vpxv_entity_moid m (NOLOCK) ON m.EntityID = v.VMID
INNER JOIN dbo.VPXV_HIST_STAT_WEEKLY sd (NOLOCK) ON sd.ENTITY = m.MOID
WHERE e.Type_ID = 1
AND e.Parent_ID = @ClusterID
AND sd.STAT_NAME = 'usagemhz'
AND sd.STAT_ROLLUP_TYPE = 'average'
AND sd.SAMPLE_TIME > GETDATE() - 7
GROUP BY v.VMID, v.Name
ORDER BY SUM(sd.STAT_VALUE) DESC

Step 3: Chart the top 10 consumers

Now that we know which cluster, and which systems are the top consumers, we can graph them. The query below embeds the query from Step 2 to limit its results, then returns the name of each VM, the sample time, and the sample value for the last 7 days. Using SQL Server Reporting Services (SSRS), you can pipe this into a pretty graph and email it out automatically every week.

SELECT e.Name AS 'VM', sd.STAT_NAME, sd.SAMPLE_TIME, sd.STAT_VALUE
FROM vpxv_entity e (NOLOCK)
INNER JOIN vpxv_entity_moid m (NOLOCK) ON m.EntityID = e.ID
INNER JOIN dbo.VPXV_HIST_STAT_WEEKLY sd (NOLOCK) ON sd.ENTITY = m.MOID
WHERE e.Type_ID = 0
AND e.ID IN
    (SELECT TOP 10 v.VMID
     FROM vpxv_VMs v (NOLOCK)
     INNER JOIN vpxv_entity e2 (NOLOCK) ON v.HostID = e2.ID
     INNER JOIN vpxv_entity_moid m2 (NOLOCK) ON m2.EntityID = v.VMID
     INNER JOIN dbo.VPXV_HIST_STAT_WEEKLY sd2 (NOLOCK) ON sd2.ENTITY = m2.MOID
     WHERE e2.Type_ID = 1
     AND e2.Parent_ID = @ClusterID
     AND sd2.STAT_NAME = 'usagemhz'
     AND sd2.STAT_ROLLUP_TYPE = 'average'
     AND sd2.SAMPLE_TIME > GETDATE() - 7
     GROUP BY v.VMID, v.Name
     ORDER BY SUM(sd2.STAT_VALUE) DESC)
AND sd.STAT_NAME = 'usagemhz'
AND sd.STAT_ROLLUP_TYPE = 'average'
AND sd.SAMPLE_TIME > GETDATE() - 7
ORDER BY sd.SAMPLE_TIME


Once the CPU graph is done, you can do the same for memory, disk, network, and many other data points. The counters available to report on differ based on the statistics level you have configured in VCenter, so if you don't see something you want, check whether you can add it.

Below is a sample graph that came from one of my clusters. As you can see, two VMs are consuming the majority of the CPU resources, and for the most part the consumption appears flat across the entire week. This suggests either a long-running job or something wrong with those VMs.



Wednesday, November 25, 2009

VMware Type IDs

I am looking into the VMware database to try to set up some automated reporting, and the first thing needed is to find the various objects in the database.

Looking at the views, I see VPXV_ENTITY - a good place to start. It appears that every object (or entity) in VMware is listed here. Now to segment them out based on object type, or TYPE_ID.

Below is a list of TYPE_IDs that I have identified in my environment. I'm not sure if these stay the same between versions and installations, but this is the best I have for now.

0 VM
1 host
3 cluster
4 resources
5 VM folder
6
7
8 Datacenter
16 datastore folder
17 Network folder
18 Datastore
19 Network

Wednesday, November 18, 2009

ETrust updates not working on Windows 7

I upgraded my system to Windows 7 but noticed that ETrust was not downloading updates. It turns out the ETrust downloader needs to run in Vista compatibility mode.

To fix this, do the following:
  1. Open file explorer and go to C:\Program Files\CA\SharedComponents\ScanEngine
  2. Right-click the file ITMDIST.EXE and select Properties
  3. On the Compatibility tab, click Change settings for all users
  4. Change Compatibility mode to Windows Vista (Service Pack 1)
  5. Click OK and OK
The updates should now install properly.

Thursday, November 12, 2009

Setting Custom Attributes in VMware programmatically

If you have more than a few VMs in your ESX environment, you have already found a need to properly organize the VMs with folders and hierarchies. However, this structure goes away when you view all the VMs in your datacenter, making your carefully created tree structure useless.

The good news is that VMware has a "Custom Attribute" option for each VM. Displayed next to the Notes field, this allows you to define attributes like Customer, Department, Owner, Production Status, or anything else you can imagine, to tag every system in your environment. The question is how to do this without manually typing in the attribute for each VM - and in comes PowerShell.

Assuming you want to create an attribute to match your folder structure, you can use the Get-Vm -Location PowerShell command to retrieve a list of all VMs in a folder (and subfolders). Pipe this output into the Set-CustomField command and let the computer do the work for you. An example is below:

Get-Vm -Location 'App X' | Set-CustomField -Name 'System Function' -Value 'App X'

You can use other switches with Get-Vm to filter on name, datastores, host servers, and other options.
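If you need to tag many folders at once, the same pattern drops into a loop. A minimal sketch - the folder names here are hypothetical placeholders for your own structure:

foreach ($folder in 'App X', 'App Y', 'App Z') {
    # Tag every VM in the folder (and subfolders) with the folder name
    Get-Vm -Location $folder | Set-CustomField -Name 'System Function' -Value $folder
}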

Wednesday, November 11, 2009

VMware guest level monitoring and alerting

Probably 90% of the monitoring needed in any environment consists of extremely basic measures: CPU utilization, memory utilization, disk throughput, network throughput, etc… Defining thresholds for these and alerting on them provides immeasurable insight into an environment and quickly identifies any problems or bottlenecks. Amazingly, VMware provides many of these basic system monitors out of the box.

Out of the box, ESX contains 2 VM monitors; unfortunately, no alerting or other action plans are defined. The first monitor watches virtual CPU utilization and triggers a warning when it has reached 75% for more than 5 minutes, and a critical alert when it reaches 90% for more than 5 minutes. The second monitor does the same for virtual memory utilization.

Those 2 monitors identify the most common causes of system slowness I have ever seen. When either reaches 80% or more, a huge bottleneck occurs and can cascade into a completely unusable system. Now you can be alerted and preemptively resolve the issue - focusing your time and money on the problems that truly affect your environment. Simply configure an action plan to email you when these events are triggered and you're halfway there.


 

There are plenty of other monitors/triggers for the virtual machines in your ESX environment. Below is a list of the available triggers and their default settings. If you are seeing a potential problem area - such as unreliable or slow disk - then feel free to test those triggers and see if they provide insight into how your environment is working, and how it isn't.

Trigger Type                  Condition    Warning                 Warning Length  Alert         Alert Length
VM CPU Ready Time (ms)        Is above     4000                    for 5 min       8000          for 5 min
VM CPU Usage (%)              Is above     75                      for 5 min       90            for 5 min
VM Disk Aborts                Is above     10                      for 5 min       25            for 5 min
VM Disk Resets                Is above     10                      for 5 min       25            for 5 min
VM Disk Usage (KBps)          Is above     (not set)               for 5 min       (not set)     for 5 min
VM Fault Tolerance Latency    Is equal to  Moderate                n/a             High          n/a
VM Heartbeat                  Is equal to  Intermittent Heartbeat  n/a             No Heartbeat  n/a
VM Memory Usage (%)           Is above     75                      for 5 min       90            for 5 min
VM Network Usage (kbps)       Is above     (not set)               for 5 min       (not set)     for 5 min
VM Snapshot Size (GB)         Is above     (not set)               n/a             (not set)     n/a
VM State                      Is equal to  Powered On              n/a             Powered Off   n/a
VM Total Disk Latency (ms)    Is above     50                      for 5 min       75            for 5 min
VM Total Size on Disk (GB)    Is above     (not set)               n/a             (not set)     n/a

Tuesday, November 10, 2009

VSphere and Windows 7 / Windows 2008 R2

It appears that VSphere (VMware 4.0) does not fully support the latest releases of Windows. Luckily there are some published workarounds.

Running Windows 7 or 2008 R2 as a VM
This is a simple fix - just don't use the VMware SVGA driver in the OS; use the generic Windows driver instead. It may also help to turn hardware acceleration off.
http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1011709&sliceId=1&docTypeID=DT_KB_1_1&dialogID=47064540&stateId=0%200%2047326772

Running the VCenter client on Windows 7 or 2008 R2
This is a little more complicated and requires some tweaking of the system configuration. In my experience, it's still a little shaky and has some stability problems.
http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1014316&sliceId=1&docTypeID=DT_KB_1_1&dialogID=47064540&stateId=0%200%2047326772

/3GB and /PAE management

Managing memory in computers has always been a nightmare. Back in the DOS days there were TSRs and drivers that had to be tweaked, and on x86 Windows there is additional tuning to be aware of.

Below are a couple of links that help describe the various options available.
http://blogs.technet.com/askperf/archive/2007/03/23/memory-management-demystifying-3gb.aspx
http://blogs.technet.com/perfguru/archive/2007/12/05/scenarios-using-3gb-and-pae-switch-and-appropriate-uses-of-these-switches.aspx

Tuesday, October 13, 2009

Oracle quick reference guide

While searching for an easy method to delete the tables in an Oracle DB, I found this link: http://pages.cs.wisc.edu/~dbbook/openAccess/thirdEdition/Oracle/user_guide/oracle_guide.html. It has a great script to delete tables, and a lot of other useful Oracle reference material.

Monday, October 12, 2009

Cannot power on Red Hat in VMware

I have a Red Hat system that I was recovering from backup into our VMware environment, but I couldn't get it to power on properly. I tried everything I could to get it working, but as soon as I turned it on, GRUB would load and display
Booting the kernel
and then the system would power off. I went as far as building out a new Red Hat system and mounting the volumes to ensure that the data was still there and accessible.

About ready to give up and build a new system to copy the data onto, I realized the Red Hat installer has an upgrade option. I inserted the DVD and began the upgrade process, when it prompted me asking if I wanted to change from 64-bit to 32-bit Linux. Crap - that means I need a different DVD.

A few minutes later I had downloaded the new DVD and begun booting from it when the VM unexpectedly powered off. I restarted the VM and noticed it got as far as Booting the kernel, then powered off. Why now? This was the same error I had when booting the OS, but now from the DVD. Was the DVD bad? What else did they have in common?

I then realized the CPU type on the VM was 32-bit, and I was attempting to boot a 64-bit OS. I swapped the configuration around and successfully booted the DVD. Realizing this may have been the entire problem all along, I rebooted directly to the hard drive - and everything worked.

Symptom: Virtualized Red Hat system powers off after displaying Booting the kernel
Resolution: Ensure the processor type is 64-bit

Monday, September 21, 2009

SSRS Format codes

If you use SQL Server Reporting Services (SSRS), then you have probably used the "Format Code" setting when designing a report. But what codes are available, and where can you find a list of them? Below is a list of what I have been able to find; it may not be complete, but it's a start (each entry shows the format code, then sample output):

Number
  g    1234
  N    1,234.00
  D0   1234
  F    1234.00
  e    1.234123e+003

Date
  g    9/21/2009 10:06 AM
  D    Monday, September 21, 2009
  d    9/21/2009

Time
  g    9/21/2009 10:06 AM
  T    10:06:25 AM
  t    10:06 AM

Percentage
  P    123,400.00 %
  P0   123,400 %

Currency
  C    $1,234.12

Wednesday, August 19, 2009

Rescan HBAs on multiple ESX hosts via Powershell

Got a lot of ESX hosts and hate having to rescan HBAs every time you add storage? Try this simple one-liner in PowerShell.

Get-Cluster | get-vmhost | Get-VMHostStorage -RescanAllHBA

This command is courtesy of a post at http://communities.vmware.com/docs/DOC-7400.

Configuring VMware pathing via Powershell

We have a fairly large VMware environment with a large number of LUNs. I did a performance review the other day and found that most of my FC traffic was passing through only 2 of the 4 ports on my Brocade switches.
This causes 2 problems: 1) I could run into a bottleneck by being limited in IOPs and throughput, and 2) a large number of LUNs will have to fail over if the primary path fails.

A quick bit of googling popped up this site: http://blogs.vmware.com/vipowershell/2008/07/managing-storag.html. It shows how to change all the LUNs on a single server to a round-robin policy (listed as experimental, but still stable). With some tweaking you could automatically set the preferred path for all LUNs on all servers.

A little more searching found the following script, which changes all LUNs on all servers to RR.

$Mypolicy = "rr"
Get-Datastore | where {$_.Type -eq "VMFS"} | %{(Get-View $_.ID).Info.Vmfs.Extent[0].DiskName} |%{
$diskname = $_
Get-VMHost | %{Get-View (Get-View $_.ID).configmanager.storageSystem} | %{
$mpathpolicy = New-Object vmware.vim.HostMultipathInfoLogicalUnitPolicy
$mpathpolicy.policy = $Mypolicy
$_.SetMultipathLunPolicy($diskname,$mpathpolicy)
}
}

Monday, August 17, 2009

Windows 7 Admin tools

In prior versions of Windows, if you wanted the admin tools to manage remote systems, you simply installed adminpack.msi from the OS CD. In Windows 7, however, it's a download that you need to run.

Now called the Remote Server Administration Tools (RSAT), they can be downloaded from Microsoft at http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=7d2f6ad7-656b-4313-a005-4e344e43997d

Migrate VM to a different datastore via Powershell

There has been a lot of chatter about performing an SVMotion via PowerShell, but I could never get the examples to work. I decided to update to the latest PowerShell tools and try the simple Move-VM command.

get-vm <systemname> | move-vm -datastore <dsname>
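Since Move-VM accepts pipeline input, the same one-liner scales up to evacuating an entire datastore - a sketch using placeholder names:

get-vm -datastore <old dsname> | move-vm -datastore <new dsname>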

List all the VMs in a VMware Datastore

If you ever have to do backend maintenance on the disks hosting your VMware datastores, then you need to know which systems are on them (in case you have to move or recover them). A quick one-liner in PowerShell will list this for you:
get-vm -datastore <datastore name>

If you have multiple datastores with similar names, you can use the "*" character as a wildcard in the datastore name.
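For example, assuming a naming convention like SAN_LUN01, SAN_LUN02, and so on (hypothetical names), this lists every VM across the whole set:

get-vm -datastore SAN_LUN*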

Friday, August 14, 2009

Installing Windows 7 from USB

OK, now that Windows 7 is out on MSDN, do I really have to waste a DVD to install it? Turns out it can be installed right from a USB drive. I found the post below, which details how to configure a USB stick to install Windows - and it's much faster than normal.

http://dotnetwizard.net/vista-stuff/tips-how-to-bootinstall-windows-7vista-from-usb-flashhard-drive/

Thursday, August 06, 2009

Identifying a disk bottleneck in Linux

One of my DBAs simply asked for a virtual machine with 350GB of disk space. Since he stated this was a "jump box" for accessing the other Oracle DB servers, I didn't think anything of it and simply built out a Linux system with the appropriate space. It wasn't until a few days later, when they started complaining that their DB performance was horrible, that I realized I had given them utility-class SATA disks shared with several other VMs - and everything on that array was being killed.

I then built out a RAID10 with 4 300GB disks and moved the LUN to it. Things were better, and the other VMs started working, but they were still complaining about performance. I found the following article and began doing some testing: http://it.toolbox.com/blogs/database-soup/testing-disk-speed-the-dd-test-31069.

It turns out the disk transfers were being limited by the number of IOPs supported by the disks. Using Navisphere Analyzer, I quickly realized the IOPs were exceeding 500 even though throughput was below 100MB/s. The large number of IOPs is because of the 8k block size in the DB, instead of the larger block size on many Windows systems.

I migrated the LUN to a larger RAID10 array and poof! Performance skyrocketed.

Wednesday, August 05, 2009

Regular Expressions Cheat Sheet

I love regex, but I don't get a chance to use it enough to remember which special character does what. Thankfully, there is a nice cheat sheet available at http://www.addedbytes.com/cheat-sheets/regular-expressions-cheat-sheet/ to make this easier.

Tuesday, July 14, 2009

Converting DVDs to DIVX files under Linux

I recently purchased an Iomega ScreenPlay device that allows me to copy a DVD to a USB drive and then play it on my TV. Using a 500GB USB drive then allows me to access most of my DVD library without having to hunt for disks.

However, I am not running in high definition, I am not connecting via HDMI, and I don't think my TV can handle anything larger than a standard TV signal - so why do I need to copy the full DVD? I have seen lower-quality videos play fine on my home TV, so how could I import my DVDs at a lower size and quality?

I found this link describing how to use a Linux program called DVD::RIP to convert a DVD to DivX format: http://www.my-guides.net/en/content/view/136/26/

Monday, July 13, 2009

Recovering NFS datastores

I recently set up a Linux server running an NFS export that I mounted in my VMware cluster. This was going to be essentially for archive information - offline servers that did not need to be active, but still had to be kept in case they were needed. Everything was working great, and I needed to reboot the server one last time before letting it run on its own.
When I rebooted, the NFS datastores became unavailable - as expected, since the server was offline. However, when the server came back online, the NFS datastores still reported as unavailable. Even after waiting several hours, the datastores did not come back.

A little googling found several people restarting services, running esxcfg-nas -r (which is supposedly unsupported), and even rebooting their VMware hosts. The only option I could get to work was rebooting my physical host, but that only resolved connectivity on that one host - and that could become a nightmare if I had to reboot every single host every time my NFS server had to make changes.

I thought about removing the datastores and then re-adding them - a simple matter of a PowerShell script. However, that would leave any guest systems on those datastores unavailable and require me to re-import all of them. Then I realized I could remove and re-add the datastores on an individual ESX host without breaking any bindings or cluster-wide configurations.

Resolution:
To resolve this, we need to remove and re-add the datastore on the individual ESX server. This needs to be done on each ESX server that is reporting the issue:
  1. Connect via SSH to the ESX host
  2. List the NFS datastores by running esxcfg-nas -l
  3. Remove the offending datastore by running esxcfg-nas -d datastorename
  4. Re-add the datastore by running esxcfg-nas -a -o hostname -s sharename datastorename
  5. Wait a few minutes for the VC to be updated with the datastore status

Why this works:
It appears that the individual servers check datastore connectivity on a routine cycle, then report status to the VC server. By removing and re-adding a datastore at the host level, the VC still sees it as the same datastore (it has the same name and same target). As such, the next time the ESX host polls its datastores, it reports the datastore as active, and the VC doesn't know any different.
This is, of course, all an educated guess, and I encourage anyone with more understanding of the mechanics to update the technical details.

Wednesday, July 01, 2009

Bonding interfaces in Linux

I recently set up a Linux box to act as an NFS gateway and wanted to bond multiple 1GB interfaces together. The bonding would allow for more throughput, as well as redundancy in case a link failed. I asked my network guy to create an LACP group on the Cisco switch, and then followed the bonding how-tos on the internet.

Problem: some hosts couldn't connect. To make it even more odd, the hosts that couldn't connect could connect to the working hosts. Hmmmm... something was wrong with either the Linux firewall, the routing tables, the Cisco LACP configuration, or the Linux LACP configuration.

I ruled out the firewall by turning it off (not suggested). I changed the IP of the server and confirmed that routing was working. I checked the Cisco config against other LACP configs, and it was configured the same as the others. That only left the Linux LACP config - which was configured the way everyone said to configure it.

Finally, I gave up and started looking to see if Linux supported Etherchannel instead of LACP (I am using a Cisco switch) and came across this link: http://zmq503o1.spaces.live.com/blog/cns!2DE8BC7CE0181410!493.entry?wa=wsignin1.0&sa=302266494. The title of the link refers to Etherchannel, but I noticed a new option, "lacp_rate", being used in modprobe.conf.

I added this option to my config and reloaded the bonding module, and everything worked! Supposedly this option specifies how fast the bond attempts to establish the LACP group. Setting it to 1 (fast) forced the link to initiate and begin passing traffic to all sources/destinations.
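For reference, the relevant modprobe.conf lines ended up looking roughly like this (a sketch - mode=4 selects 802.3ad/LACP, and the miimon value is my assumption rather than anything from the original how-to):

alias bond0 bonding
options bond0 mode=4 miimon=100 lacp_rate=1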

Wednesday, June 24, 2009

Configuring VNC to auto-start

http://www.walkernews.net/2008/06/20/configure-vnc-server-to-auto-start-up-in-red-hat-linux/

How to auto start VNC server in Red Hat Linux after system reboot?

My testing environment is a machine running Red Hat Enterprise Linux 4 Update 4 with the bundled VNC server (i.e. vnc-server-4.0-8.1). Also assume that a Linux user account called "walker" needs the VNC server to start up automatically when Linux boots.

  1. The Linux user account that needs the VNC server to start automatically after a system reboot must have a VNC password. To create a new (or reset a forgotten) VNC password, just log in or su (switch user) to that Linux user account and execute this simple command:
    vncpasswd

    Enter a password when prompted; this is used for VNC authentication.
  2. A hidden directory named .vnc is created in the user's home directory by the vncpasswd command (if it does not already exist). Execute the ls -la $HOME/.vnc command to check whether there is a file called xstartup. If this file does not exist, bring up the VNC server with another simple command:
    vncserver :1

    If you get a message similar to "A VNC server is already running as :1", it means another instance of the VNC server is running with the same display number. To resolve this, just try replacing the :1 with :2, :3, etc. Alternatively, you may execute this netstat command with root user privileges:
    [root@walkernews ~]# netstat -tulpan | grep vnc
    tcp    0    0 0.0.0.0:5801    0.0.0.0:*   LISTEN   3402/Xvnc
    tcp    0    0 0.0.0.0:5802    0.0.0.0:*   LISTEN   8447/Xvnc
    tcp    0    0 0.0.0.0:5901    0.0.0.0:*   LISTEN   3402/Xvnc
    tcp    0    0 0.0.0.0:5902    0.0.0.0:*   LISTEN   8447/Xvnc
    tcp    0    0 0.0.0.0:6001    0.0.0.0:*   LISTEN   3402/Xvnc
    tcp    0    0 0.0.0.0:6002    0.0.0.0:*   LISTEN   8447/Xvnc

    The netstat output shows that there are two VNC servers running, with display numbers 1 and 2. So, for the 3rd VNC server to start, the command should be vncserver :3.
     
  3. Edit the $HOME/.vnc/xstartup file with your favourite editor, and un-comment these two lines in order to get the "normal" Linux desktop view:
    unset SESSION_MANAGER
    exec /etc/X11/xinit/xinitrc
  4. Switch user to the root account (i.e. su - root), edit /etc/sysconfig/vncservers with your favourite editor, and append the display number and Linux user account information to VNCSERVERS (an array variable). This configuration file defines who can start up a VNC server with what display number via the VNCSERVERS array (which is read by the Linux startup script /etc/init.d/vncserver). For example,
    VNCSERVERS="1:root 2:tester 3:walker"

    That means three Linux user accounts (root, tester, and walker) will start up VNC servers with display numbers 1, 2, and 3 respectively as Linux boots up.

    Note: Don't add more than one VNCSERVERS array to the /etc/sysconfig/vncservers configuration file. Otherwise, only the last VNCSERVERS array will be used.
     
  5. Make sure the VNC server (the daemon or server process) is set to auto-run at your runlevel when the system boots. For example,
    [root@walkernews ~]# chkconfig --list | grep vnc
    vncserver  0:off  1:off  2:off  3:off  4:off  5:on  6:off

    The --list option of chkconfig shows the VNC server is set to auto-run in Linux runlevel 5 (the default multi-user runlevel with a Linux desktop console). To configure the VNC server to auto-run when Linux boots into runlevel 5, use the --level option with on:
    chkconfig --level 5 vncserver on

Ok, that's all you need. You should have the VNC server automatically running when Red Hat Linux boots up at runlevel 5. Although the guide might look lengthy, the work involved shouldn't take you more than 3 minutes once you get used to Linux!

Creating NFS datastores on a VMware cluster

The one issue I have seen so far with VMware storage is with NFS datastores: they have to be configured on each node in the cluster individually. Since this is done manually, it can be prone to error, so I started using the following PowerShell commands to perform it on all the hosts in a cluster.

foreach ($ESXhost in get-cluster clustername | get-vmhost)
{
New-Datastore -Nfs -VMHost $ESXhost -Name datastorename -Path /export/share -NfsHost servername
}


Simply replace clustername, datastorename, /export/share, and servername with the appropriate values, and run it in the VI Toolkit PowerShell extension.

Tuesday, June 16, 2009

VMware treadmill process

In my VMware environment I have several VMs that have very large VMDKs but are only using a small portion of that space. These are referred to as "thick" disks. I would like to reclaim the unused space and turn them into "thin" disks, which means I need some kind of treadmill process to thin them out.

Various googling shows some examples of using VMKFSTOOLS to clone a disk into a thin-provisioned copy, delete the source, then rename the new disk to replace the original. But this misses 2 key requirements: 1) ensuring the system is offline before starting, and 2) performing the task on all available systems at once. For this, we need a couple of other steps.

Identifying offline systems and VMDKs
Using the VI Toolkit PowerShell extension, I can run the following command on a single line:
get-vm | where {$_.PowerState -eq 'PoweredOff'} | get-harddisk | export-csv d:\temp\foo.csv
This exports all the powered-off systems and their drives into a CSV file that can be opened in Excel and massaged as necessary. Change the path names from the "[Datastore] /path" format to "/vmfs/volumes/datastore/path".
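The path rewrite can be scripted as well. A rough sketch, assuming the CSV's Filename column holds paths in the "[datastore1] server1/server1.vmdk" format:

Import-Csv d:\temp\foo.csv | ForEach-Object {
    # Convert "[datastore1] server1/server1.vmdk" to "/vmfs/volumes/datastore1/server1/server1.vmdk"
    if ($_.Filename -match '^\[(.+)\] (.+)$') {
        "/vmfs/volumes/$($matches[1])/$($matches[2])"
    }
}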

Creating a script to thin out the disks
Now we need a script that will put a specified disk on the treadmill.
#!/bin/bash
# Treadmill: clone a VMDK to a thin copy, delete the original,
# then rename the thin copy back to the original name

E_BADARGS=65

if [ ! -n "$1" ]
then
    echo "Usage: `basename $0` vmdkpath"
    exit $E_BADARGS
fi

echo "Copying file $1"
vmkfstools -i "$1" "$1.thin" -d thin
echo "Deleting file $1"
vmkfstools -U "$1"
echo "Renaming file $1.thin to $1"
vmkfstools -E "$1.thin" "$1"
exit 0

Using vi, save this file to the VMware host (i.e., putty into the host, create the file, and save the contents). Use chmod +x treadmill to mark the file as executable. Now all that remains is to queue up the disks to change.

Execute
Open Notepad and paste in all the paths from the Excel spreadsheet. At the beginning of each line, add a call to the treadmill script so that it looks like the following:
./treadmill /vmfs/volumes/datastore1/server1/server1.vmdk
Once all the lines look like this, simply copy and paste them into your existing putty session. This will go through each VMDK: create a thin copy, delete the original, and rename the new disk to the original name. This could take a long time, depending on the size and number of disks.
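If you'd rather skip the Notepad step, the same CSV can be turned straight into treadmill calls - again a sketch assuming the Filename column format from the earlier step:

Import-Csv d:\temp\foo.csv | ForEach-Object {
    if ($_.Filename -match '^\[(.+)\] (.+)$') {
        # Emit one treadmill call per powered-off VMDK
        "./treadmill /vmfs/volumes/$($matches[1])/$($matches[2])"
    }
} | Set-Content d:\temp\treadmill.txt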

Monday, June 15, 2009

VMware "unable to access a file since it is locked"

Ran into an unusual problem with my VMware farm: when trying to power on a system, I get the error "Unable to access a file since it is locked".

A quick look on the web brings up http://www.techhead.co.uk/vmware-esx-unable-to-access-a-file-since-it-is-locked.
The first two options are somewhat expected (the file is locked by something else). In my case, the third option resolved the issue: I recreated the VMX and everything is good.

Wednesday, May 20, 2009

EFD Specs

There is a large lack of knowledge about Enterprise Flash Drives - what they can do and what they can't. I, for one, am one of those with minimal knowledge and understanding. This is some of the information I found at EMC World.

An EFD is composed of 16 smaller flash chips, managed by an internal processor with onboard cache.

Lifespan - We all know that flash drives have a limited lifespan; is that something I should be concerned with? Supposedly, EFDs can support millions of rewrites without issue, comparable to FC drives.

Performance - A conservative number is 2500 IOPs per disk and 100 MB/s. These are very conservative numbers, and actual performance could be up to 10x this.

RAID configurations - You still want redundant disks, just as with FC, SAS, or SATA, so what configuration do you use? All RAID types are available (0, 1, 5, 10), and due to the high performance characteristics, RAID5 is generally appropriate.

Read vs. Write - Reads from an EFD are considerably faster than writes, but writes are still faster than with traditional disks. Writes can be improved by enabling the SAN write cache, but it is general practice to leave this disabled.

What are you trying to purchase with your disks? This ultimately determines what type of storage is needed:






Type             Storage
Space            SATA / FC
IOPS             EFD
Response time    EFD
Throughput       EFD / FC


Is EFD a silver bullet for your performance issues? Maybe. It will help resolve many hardware wait issues, but it won't resolve all the processing performance issues you may have. You should always validate where the waits are to ensure this will help.

Time to start using EFDs?

Just went through a session at EMC World about using Enterprise Flash Drives (EFDs) in Oracle environments. My initial assumption was that EFDs were fast and expensive, and only made sense for your biggest databases and workloads; now I am starting to think differently. They are still very fast and expensive (I assume - I don't really know the cost), but they may not be only for the biggest and most intensive workloads.

In a database, it is best practice to separate workloads onto different disks. The most basic form of this is to put log files on different disks than your database files, but you can also segment your database into multiple files on multiple disks. This way, an update to one table won't contend for disk resources with an update to a different table on different disks.

So what does this mean for EFD? Well, if you have a specific table or index that is very busy, you may be able to segment that object onto an EFD while leaving the remaining objects on FC drives. This way you can have only a handful of EFD drives (i.e., 3 disks in a RAID5 configuration) and get exceptional performance, a high number of IOPS, and extremely low latency.

Now comes the hard part: how do you identify which object would benefit most from being moved to an EFD? The presenter discussed searching for what are called hot tables, or hot objects, within the database that are waiting on disk resources. Any Oracle DBA worth his salt should be able to identify these using the statspack tool for Oracle. There is a similar tool in MSSQL (I forget the name now) that will give you similar output.

Once I get back to work, I expect to try these tools out to identify how some of our DBs could be improved with EFDs. Once I do that, I should be able to properly assess whether EFDs would really benefit us, performance-wise as well as cost-wise.

New Celerra Simulator

In my testing I have used the Celerra simulator quite a bit. It helps me get familiar with the technology and features before trying anything in production. Now it looks like they have released a new version that has the latest dedupe and iSCSI features.

The simulator can be downloaded from ftp://ftp.documentum.com/vmwarechampion/Virtual_Machine_Library/Celerra/5_6_43_18/. More information on the simulator is available at http://virtualgeek.typepad.com/virtual_geek/2009/04/new-celerra-vsa.html

Wednesday, April 22, 2009

VMware/NFS performance with the Celerra NS-G8

Further testing with the Celerra raised the question of performance: can things be tweaked to improve performance, and how does IP storage compare with FC? The scenario I am looking at is how VMware uses an NFS export as a datastore (something that isn't officially supported, yet a wizard exists to configure it).

Our Celerra will eventually be configured with a 10G Ethernet link into a Xsigo I/O Director - if you're not familiar with Xsigo, check them out, as there is some impressive technology there. We are waiting on parts to show up, so the current perf testing will use 2 1G Ethernet links in an Etherchannel configuration.

The test plan is:
  1. Create VMDKs using FC, Celerra with FC disks, and Celerra with SATA disks
  2. Use IOMETER to test the various "baseline" configuration
  3. Tweak the VMware/Celerra configuration according to EMC best practices and benchmark again
  4. Once the 10G modules arrive, repeat the tests
Storage Configuration:
Our Clariion is a CX4-240 with 1 tray of 1TB SATA disks and 6 trays of 300GB FC disks. The SATA disks are configured into (2) 6+1 RAID5 arrays, with (4) 1.6TB LUNs each. The FC disks are configured into (3) 8+1 RAID5 arrays, with (2) 1.2TB LUNs each. The remaining FC disks are unused or configured for the Celerra root file system. A temporary FC MetaLUN was also created with the same configuration as the existing FC exports for baseline testing.

The SATA LUNs are combined on the Celerra to create one storage pool, and the FC LUNs are combined to create another. These pools are then exported via NFS to VMware over the 2GB Etherchannel links.

Once the baseline performance was captured using the default profiles in IOMETER, a whitepaper from EMC called "VMware ESX Server Optimization with EMC Celerra Performance Study Technical Note" (you may need a Powerlink account to access it) was used to tweak the NFS performance settings. According to their performance tests:
Based on the results of this study, the following recommendations should be considered when using VMware in a Celerra environment:
  • Recommendation #1: Use the uncached option on Celerra file systems provisioned to the ESX Server as NAS datastore. This can improve the overall performance of the virtual machines. This should be considered particularly with random and mixed workloads that have write I/O.
  • Recommendation #2: When using random and mixed workloads without replication, consider disabling prefetch on the Celerra file system. This can improve the overall performance of virtual machines provisioned with Celerra NFS storage.
  • Recommendation #3: Align virtual machines that are provisioned with Celerra iSCSI storage as it can improve the overall performance of the virtual machines.
  • Recommendation #4: Do not align virtual machines that are provisioned with Celerra NFS storage because this may degrade the overall performance of the virtual machines.
  • Recommendation #5: Consider using an iSCSI HBA on the ESX Server with sequential workloads with Celerra iSCSI storage. This can improve the overall performance of virtual machines with such workloads.

This added 2 more scenarios to the testing mix: setting the uncached write option, and disabling the prefetch read option. The uncached write option is configured with the following command:
server_mount ALL -option rw,uncached
Prefetch reads are disabled with the following command:
server_mount ALL -option rw,noprefetch



(Graph: MB/second results)
(Graph: IOPs results)

Conclusion:
While 2GB Ethernet does not match the throughput and performance of 4GB FC, it is close. I expect that if this testing were done with 4GB Ethernet, the Ethernet/FC differences would be minimal, and once our 10G modules are installed the Ethernet will outperform FC.
Additionally, at 2GB Ethernet speeds there is little difference between SATA disks and FC disks. This suggests that the bottleneck is the transport mechanism, and that more differences will surface once the bandwidth is increased.
Lastly, there is little visible difference between the baseline configuration and the EMC best-practice configuration. Some of this may be due to the workload profiles being mostly sequential instead of random, but it does suggest that the out-of-the-box configuration is fairly well optimized.

Tuesday, April 21, 2009

Celerra NFS and VMware testing

We just received an EMC Celerra NS-G8, and it is my job to implement NFS serving VMware. Beyond the standard "get away from VMFS", a few features piqued my interest: thin provisioning and deduplication.

Thin provisioning was a big letdown. If you are using NFS, you get some degree of thin provisioning by default. Additionally, almost any VMware function that touches the disks (Storage VMotion, cloning, deploying from template) will bloat the VMDK to full size. I did find a way to thin out the VMs (I call it a treadmill process), but it's not seamless and requires hours of downtime. I still have this feature enabled, but don't expect great things from it.

Deduplication was a bit of a surprise to me, since I didn't think this feature was available until I got the system. My previous experience with deduplication was with EMC Avamar, which is block-level deduplication that allows for over 90% deduplication rates. Celerra deduplication, however, is file-level, meaning only fully duplicate files are freed up.
I have worked with Exchange and Windows Single-Instance Storage before, so this is a great item for file servers, where the same file may exist dozens or hundreds of times - but no 2 VMDKs are ever going to be alike.

Celerra deduplication, however, also does compression - something that may be very useful if it can compress the zero blocks in a VMDK. To test this, I created a "fat" VMDK and copied it to an NFS datastore, then initiated the dedupe process and compared the size differences.


Step 1: Create the bloated VMDK
The first thing needed is to create a bloated/fat/inflated disk to test against
  1. SSH into the VMware host
  2. CD to /vmfs/volumes/
  3. Create the disk vmkfstools -c 50G foo.vmdk -d eagerzeroedthick

The size of the disk can be confirmed by executing ls -l and by viewing it in the Datastore Browser; make sure both locations list it as a full 50G in size (to ensure that thin provisioning isn't affecting us).


Step 2: Change the dedupe parameters
By default, deduplication is limited to files that meet the following requirements:
  • Haven't been accessed in 30 days
  • Haven't been modified in 60 days
  • Are larger than 24KB
  • Are smaller than 200MB

To test the dedupe process, we need to change these using the server_param command. To see the current settings, SSH into the Celerra and run server_param server_2 -facility dedupe -list. This lists all the deduplication settings; a setting can then be changed by running server_param server_2 -facility dedupe -modify <attribute> -value <value>. In my case, I need to set the access and modified times to 0 and the maximum size to 1000.


Step 3: Initiate the dedup process
Every time a file system is configured for deduplication, the dedupe process is triggered - meaning we can start a dedupe job manually by telling the file system to enable deduplication (if that makes sense). There are 2 ways to do this: via the web console, or via the command line.

To kick off a dedupe job via the web console, browse to the File Systems node and open the properties for the target file system. On the File System Properties page, set Deduplication = Suspended and click Apply. Then set Deduplication = On and click Apply. As soon as dedupe is set to on, a dedupe job will be initiated.

To kick off a dedupe job via the command line, SSH into the Celerra and run fs_dedupe -modify -state on. This will automatically start a deduplication job. To view the status of the job, run fs_dedupe -info.

Step 4: Compare the results
Initiating a dedupe on a file system with only VMDKs ultimately results in zero gain. Even with the disks being completely blank, compression doesn't seem to come into play - making this a big waste of time to test.


Additional testing of dedupe with other files (ISOs, install files, home folders, etc...) shows that dedupe works properly at the file level - just not for VMDKs.

Monday, April 20, 2009

Synchronizing ZenPacks across multiple collectors

I have set up multiple Zenoss collectors in my environment, and one of the issues I ran into was keeping the customizations and ZenPacks in sync. Unfortunately, there is no out-of-the-box way to do this (maybe in Zenoss Enterprise, but not in Zenoss Core).

Initially I used SCP to copy the files from my primary server to the backup servers. This was problematic, since the zenoss user didn't have a password (that I knew, at least), and running SCP as root changed the ACLs.

I then remembered my old friend rsync, and a quick bit of googling suggested it might be the answer. A little more searching and I found a simple command line that copies the ZenPack files, customizations, and ACLs without any concern. A little tweaking and I can throw this right into my crontab to synchronize on an hourly basis:

rsync -avz /usr/local/zenoss/zenoss/ZenPacks/ root@serverB:/usr/local/zenoss/zenoss/ZenPacks
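As an example, a crontab entry along these lines runs the sync at the top of every hour (the schedule fields are arbitrary - adjust to taste):

0 * * * * rsync -avz /usr/local/zenoss/zenoss/ZenPacks/ root@serverB:/usr/local/zenoss/zenoss/ZenPacks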

Wednesday, April 08, 2009

Configuring multiple Zenoss collectors

This post discusses how to configure multiple Zenoss collectors for centralized monitoring and alerting. These instructions were inspired by http://www.zenoss.com/Members/fdeckert/how-to-install-distributed-collectors/. Many of the tasks below reference $ZENHOME, which can be found by logging in as the zenoss user, but the tasks may need to be run as root.

Install Zenoss on ServerB, but do not start it.

Ensure DNS is set up with FQDNs for both servers.

Then work through the following tasks:

Install snmpd (both servers):
  apt-get install snmp snmpd

Configure snmpd (both servers):
  Run snmpconf:
    Select none
    Select to create snmpd.conf
    Select Access Control Setup
    Choose SNMPv1/SNMPv2c read-only access community name
    Enter the read-only community name, enter, enter
    Finished, Finished, Quit
  mv snmpd.conf /etc/snmp/snmpd.conf

Enable remote snmp access (both servers):
  Edit /etc/default/snmpd
  Change the line: SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid 127.0.0.1'
  To: SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid'
  Restart snmpd

Verify snmp is working:
  On ServerA: snmpwalk -v 2c -c public ServerB .1.3
  On ServerB: snmpwalk -v 2c -c public ServerA .1.3

Configure Zenoss services:
  On both servers, create two files in $ZENHOME/etc named DAEMONS_TXT_ONLY and daemons.txt.
  On ServerA (which runs the hub and web interface), enter the following into daemons.txt and save:
    zeoctl
    zopectl
    zenhub
    zenping
    zensyslog
    zenstatus
    zenactions
    zentrap
    zenmodeler
    zenrender
    zenperfsnmp
    zencommand
    zenprocess
    zenwin
    zeneventlog
  On ServerB (collector daemons only), enter the following into daemons.txt and save:
    zenping
    zensyslog
    zenstatus
    zenactions
    zentrap
    zenmodeler
    zenrender
    zenperfsnmp
    zencommand
    zenprocess
    zenwin
    zeneventlog

Configure ServerB to use local monitors, but use the hub on ServerA:
  On ServerB, in $ZENHOME/etc, edit the following files: zenactions.conf, zencommand.conf, zendisc.conf, zeneventlog.conf, zenmodeler.conf, zenperfsnmp.conf, zenping.conf, zenprocess.conf, zenrender.conf, zenstatus.conf, zensyslog.conf, zentrap.conf, zenwin.conf, zenwinmodeler.conf
  Enter the following 2 lines in all of those files:
    monitor ServerB
    hubhost ServerA

Configure ServerB to use the zope engine on ServerA:
  On ServerB, in $ZENHOME/etc, edit zope.conf
  Find the zeoclient section
  Change the line: server localhost:8100
  To: server ServerA:8100

Add the remote collector:
  On ServerA, in the Web Interface, browse to Management | Collectors | Add Monitor
  Enter the name ServerB
  Change Render URL from: /zport/RenderServer
  To: http://ServerA:8090/ServerB

Copy ZenPacks and Plugins:
  On ServerA, run: scp -r $ZENHOME/ZenPacks ServerB:$ZENHOME/ZenPacks
  Make sure any other alterations (symlinks, packages, etc…) are duplicated

Ensure files are owned by zenoss (both servers):
  chown -R zenoss.zenoss $ZENHOME/ZenPacks

Start Zenoss:
  On ServerA: /etc/init.d/zenoss-stack restart
  On ServerB: /etc/init.d/zenoss-stack start

Begin moving devices:
  On ServerA, in the Web Interface, browse to Management | Collectors | localhost
  Select several devices and click Devices | Set Perf Monitor

Monitoring and automatically restarting services in Zenoss

Now we need to monitor a service - say the Print Spooler - and we want to know when it fails. Actually, since we are lazy and overworked, we want to automatically restart the service and only be alerted if it doesn't restart.

Monitor a Windows Service
Enabling monitoring of Windows services is quite intuitive; below are the steps needed to set up monitoring.
  1. In the action pane, select Services -- various classes of services will be listed
  2. Select WinService
  3. Find the service you are interested in monitoring (spooler) by paging through the list, or type in the name in the search box to the right
  4. Click the Spooler service and select the Edit tab
  5. Change Monitor to True, click Save

Enable Automatic Restart
This was inspired by http://blog.zenoss.com/2008/03/21/restarting-windows-services-with-zenoss; more help may be available there if my words don't make sense.

Create a Transform to recognize the event
  1. In the action pane, select Events
  2. Select Status, WinService
  3. Select WinService | More | Transform
  4. Enter the following in the Transform and click Save
# get the service name
msg = getattr(evt, "message", None)
# parse the message, which looks like this:
#   Windows Service 'W32Time' is down
if msg:
    service = msg.split("'")[1]
    # rewrite the message to hold only the service name;
    # we don't lose anything since summary keeps the original text
    evt.message = service

Create an Event Manager Command
  1. In the Action pane, under Management select Event Manager and click the Commands tab
  2. Enter the name Start Windows Service and click Add
  3. Click the command just created and change Enabled to True
  4. For Command, enter the following: winexe -U "${dev/zWinUser}%${dev/zWinPassword}" //${dev/manageIp} "net start ${evt/message}"
  5. For the Where clause, enter Event Class | begins with | /Status/WinService (see image below)
  6. Click Save

To test this, simply stop the Spooler service on a monitored system - it should automatically restart. If you stop and disable the service, you should receive an alert.


Changing the graphs shown under the Perf tab in Zenoss

The default Windows class in Zenoss includes several SNMP-based performance graphs. Since I (and most Windows admins) don't use SNMP, I want to replace these with custom graphs built on WMI. I have already created my performance templates, and now I need to select them as the default performance graphs.

  1. Browse to /Devices/Server/Windows and click the Templates tab
  2. Select Available Performance Templates | Bind Templates
  3. Select the template or templates to include and click OK

Now when you browse to a server and click the Perf tab, the graphs included in the template you chose will appear.

There is actually much more happening here than defining which graphs appear where, but since this was my first question about the graphs, that's how I am stating it. More information about binding templates can be found at http://www.zenoss.com/community/docs/zenoss-guide/2.2.4/ch13s03.html

Configuring email in Zenoss

Where would we be without email? The world would slow to a crawl. So how do we configure Zenoss to send us emails?

Enable Emails
  1. In the web console, select Management | Settings
  2. On the Settings tab, enter the SMTP host and From: address, then click Save

Setting a user for email
  1. Click the Users tab and select the appropriate user
  2. Enter the Email addresses and click Save
  3. On the Alerting Rules tab, click Alerting Rules | Add Alerting Rule
  4. Enter a name for the rule and click OK
  5. Click the rule to open its settings
  6. Change the Delay from 0 seconds to 600 -- this will force the alerts to age for 10 minutes before being sent
  7. Enable the rule and click Save

To test that email is working
  1. Select Management | Settings and Users tab
  2. Next to the users email address, click TEST
  3. Validate the email is received

Windows Performance Monitoring in Zenoss using WMI

Now that we have our ZenPack, we need to make it do something for us. The goal here is to monitor Windows server performance using WMI and alert on overages. To do this, we are going to start by importing a Perl script that accesses WMI.

Importing the Perl script
  1. There is a ZenPack available from Zenoss called Perfmon that includes a script called perfmon.pl. We could simply import this ZenPack and then reference the file, but I am going to import the file itself to keep things simple in the future.
  2. Download and extract the Perfmon ZenPack and copy perfmon.pl to the $ZENHOME/ZenPacks/ZenPacks./ZenPacks/lib
  3. Install the wmi-client tools -- apt-get install wmi-client
  4. Make a symbolic link for winexe -- ln -s /usr/bin/winexe /usr/local/zenoss/zenoss/bin/

Create the performance template - Data Sources
  1. Browse to Classes | Devices | Server | Windows and click the Templates tab
  2. Create a new template by selecting Available Performance Templates | Add Template
  3. Enter the name "Basic Windows Performance" and click OK
  4. Create a new Data Source by clicking Data Sources | Add Data Source
  5. For the name, enter dsCpuPercentage, with a Type of Command
  6. For Command Template, enter: $$ZENHOME/ZenPacks/ZenPacks../ZenPacks///lib/perfmon.pl 1 "${dev/manageIp}" "${dev/zWinUser}" "${dev/zWinPassword}" "\Processor(_Total)\% Processor Time" "cpu_ProcessorTime"
  7. Select DataPoints | Add Data Point
  8. Enter cpu_ProcessorTime and click Add

Create the performance template - Thresholds
  1. Browse to Classes | Devices | Server | Windows and click the Templates tab
  2. Click the Basic Windows Performance template
  3. Select Thresholds | Add Threshold
  4. Enter a name of thCpuPercentage, type MinMaxThreshold, click OK
  5. On the threshold page, select the appropriate Data Points
  6. For Max Value, type 80
  7. Set Event Class to /Perf/CPU
  8. Click Save


Create the performance template - Graph Definitions
  1. Browse to Classes | Devices | Server | Windows and click the Templates tab
  2. Click the Basic Windows Performance template
  3. Select Graph Definitions | Add Graph
  4. For name, enter CPU Utilization
  5. Select Graph Points | Add DataPoint, select the data points to include
  6. Click OK, Save

Include the template in the ZenPack

  1. Browse to Classes | Devices | Server | Windows and click the Templates tab
  2. Click the Basic Windows Performance template
  3. Select Performance Template | Add to ZenPack