Monday, July 30, 2012

Deleting PTR records with Upper Case text

I ran into an issue today where I had a DNS PTR record that I needed to delete, but couldn't. I would delete the record in the GUI without issue, but then I would refresh and the record would reappear.
A little searching turned up http://support.microsoft.com/kb/842127, which describes a known issue in Server 2003 DNS with upper case text. I am running Server 2008, so I assumed the issue might still exist and hadn't been patched.
However, following the instructions in the KB article and using dnscmd failed to delete the record. I didn't want to try deleting the entire zone (option 2), so I was at a standstill.
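For the record, the dnscmd call from the KB article looks roughly like this (the server, zone, and record names below are placeholders, not my real environment):

# run from an elevated prompt on, or against, the DNS server; all names here are placeholders
dnscmd dns01.example.com /RecordDelete 1.168.192.in-addr.arpa 50 PTR somehost.example.com /f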

A little more searching and I came across http://www.techtalkz.com/windows-help/66165-unable-delete-static-ptr-record.html, a post that discusses using ADSIEdit.msc to delete the misbehaving record. I deleted the record from AD, but it never cleared from DNS, and after a few minutes it reappeared in AD.
I suddenly had an idea: if the upper case text was causing the problem, maybe ADSIEdit would let me edit it. I copied the dnsRecord attribute from a known good record and overwrote my problem record. A few moments later DNS was updated and reflected the name change. Right-click and delete - the record was gone.
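If I ever need to script this instead of clicking through ADSIEdit, something like the following should do the same attribute copy. This is only a sketch, assuming the ActiveDirectory module and a zone stored in the DomainDnsZones partition - the zone, record, domain, and DC names are placeholders:

# sketch only: copy the dnsRecord attribute from a known good PTR record over the stuck one
# assumes the ActiveDirectory RSAT module; zone, record, domain, and DC names are placeholders
Import-Module ActiveDirectory
$zoneDN = "DC=1.168.192.in-addr.arpa,CN=MicrosoftDNS,DC=DomainDnsZones,DC=example,DC=com"
# a known good record in the same zone
$good = Get-ADObject -Identity "DC=51,$zoneDN" -Properties dnsRecord -Server dc01.example.com
# overwrite the stuck record's data with the known good value
Set-ADObject -Identity "DC=50,$zoneDN" -Replace @{ dnsRecord = $good.dnsRecord } -Server dc01.example.com
# once DNS reflects the change, the record can be deleted normally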

This wasn't an intuitive solution and I hope I don't have to perform it again, but at least I now have the steps documented if I ever need to repeat them.

Friday, July 20, 2012

Windows 8 Test Labs

I started playing with Windows 8 and Server 8 to learn about the new technologies and capabilities in them. Since the platforms are fairly new, I was concerned there wouldn't be much information available. You can imagine my joy when I came across the following link: http://social.technet.microsoft.com/wiki/contents/articles/7807.windows-server-8-beta-test-lab-guides-en-us.aspx

This page has several test labs with step-by-step instructions on how to set up and configure Server 8 for various tasks. I just completed the Base Configuration lab and created snapshots of all the VMs. Now I am moving on to the DirectAccess labs to learn more about this newish technology and how it can help in my production environment.
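Snapshotting the whole lab after each guide is a one-liner on Hyper-V (assuming that is what the lab is running on - the guides don't mandate a hypervisor):

# assumes the Windows 8 / Server 2012 Hyper-V module on the lab host; snapshot name is arbitrary
Get-VM | Checkpoint-VM -SnapshotName "Base Configuration complete"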

Monday, July 16, 2012

Completely unsupported method of configuring CommVault clients

I came across an issue that required me to tweak a global policy in my CommVault environment. Basically I needed to turn off client-side compression for my Oracle servers, but I had a problem: how do I disable it on only the target systems when the setting is defined in my Global Dedupe Policy?
CommVault has a built-in scripting environment (http://documentation.commvault.com/commvault/release_9_0_0/books_online_1/english_us/features/cli/xml_scripts.htm), but I couldn’t figure out how to report on this setting. And if I can’t validate the setting via script, I am a little wary of making changes via script.

A little bit of searching and I found some settings in the SQL backend. The particular setting “Software Compression” is stored in the table archPipeConfig. This first query shows me the compression setting for my clients.

Show the compression status from all ‘SQL Server’ agents
select ac.name 'Client'
      , ai.name 'Agent'
      , ain.name 'Instance'
      , ap.subclientName 'SubClient'
      , apc.compressWhere
      , case apc.compressWhere
            when 0 then 'Client'
            when 1 then 'Media Agent'
            when 2 then 'Use Storage Policy'
            when 4 then 'Off'
            else 'unknown' end as 'Compress'
from archPipeConfig apc
LEFT JOIN APP_Application ap on apc.appNumber=ap.id
LEFT JOIN APP_Client ac on ap.clientId=ac.id
LEFT JOIN APP_IDAType ai on ap.appTypeId=ai.type
LEFT JOIN APP_InstanceName ain on ap.instance=ain.id
where ai.name='SQL Server'
      --and compressWhere<>0
order by 1, 2, 3, 4
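If you prefer running the report from PowerShell instead of Management Studio, something like this works - a sketch only, assuming Invoke-Sqlcmd from the SQL Server client tools is available; the server\instance name is a placeholder for your CommServe SQL instance:

# sketch: run the compression report against the CommServe database from PowerShell
# assumes Invoke-Sqlcmd is installed; "COMMSERVE01\Commvault" is a placeholder instance name
$query = Get-Content .\compression-report.sql | Out-String   # the select statement above, saved to a file
Invoke-Sqlcmd -ServerInstance "COMMSERVE01\Commvault" -Database "CommServ" -Query $query |
    Format-Table Client, Agent, Instance, SubClient, Compress -AutoSize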


Now the fun part – I am sure this isn’t supported, and I don’t even know if it actually works. The below SQL command will update the compression value on clients based on agent type.
Update compression status on all ‘SQL Server’ agents, set it to ‘Client’
update archPipeConfig set compressWhere=0
where appNumber in (
select ID from APP_Application where appTypeId in
(select TYPE from app_IDaType where name='SQL Server')
)


** UPDATE **
After all that work and effort, I found http://documentation.commvault.com/commvault/release_9_0_0/books_online_1/english_us/features/cli/qscripts/CommServ.QS_CompressionConfig.Readme.html, which uses qoperation.exe to view and change the compression setting.
Using the examples on that page, I was able to quickly review and set the compression settings on the clients I needed.

Friday, July 06, 2012

A powershell image downloader for Reddit

Yes, I know there are a million of these, so this will make one million and one.

This is an image downloader for Reddit. Many of the images on Reddit are directly linked and can be pulled straight out of the subreddit's XML feed. This example configures a web proxy and downloads from the /r/pics subreddit.



# .NET WebClient does the downloading
$webClient = New-Object System.Net.WebClient
# specify your proxy address and port
$proxy = New-Object System.Net.WebProxy "proxy.company.com:8080"
# replace with your own domain, username, and password
$proxy.Credentials = New-Object System.Net.NetworkCredential("Domain\UserName", "Password")
$webClient.Proxy = $proxy
# set a user-agent header so the requests are easy to spot in the proxy logs
$webClient.Headers.Add("user-agent", "Windows Powershell WebClient Header")
# the subreddit's XML feed
$url = "http://reddit.com/r/pics/.xml"
[xml]$myxml = $webClient.DownloadString($url)
# each item's description contains the post HTML, including the [link] anchor that points at the image
# split the HTML on "<", keep the pieces containing a link, then download each image to c:\temp\red
$myxml.rss.channel.item.description.Split("<") |
    ForEach-Object { $_ | Select-String -Pattern link } |
    ForEach-Object {
        $link = $_.Line.Split('"')[1]
        $webClient.DownloadFile($link, "c:\temp\red\" + $link.Split("/")[-1])
    }

Found my first problem with vCOps

I set up an eval of vCenter Ops in a test environment about a week ago and have been letting it run. This morning I came in and found the below alert in my email:
New alert was generated at Fri Jul 06 05:12:21 PDT 2012:
Info: Object's demand is 104.0 percent of its available resource capacity. Disk I/O is the most constrained resource.

Alert Type : Health
Alert Sub-Type : Workload
Alert State : Critical
Resource Kind : Datastore
Resource Name : TKPD_T2_J250003_02
Alert ID : 1806

VCOps Server - 10.89.12.104
Alert details


Always concerned about storage performance, I began digging into the VMware performance stats for the datastore and identified ESX host TKJ1721VK as the likely culprit.

There were only two VMs on the datastore/host combination: WTSQLSRV2 and WTDPW065C. A quick glance at the perf stats for each VM showed WTSQLSRV2 as the cause. Because this is a SQL server, I am assuming there is a scheduled job that kicks off at 5 AM and causes the CPU/memory/disk utilization to spike.
Looking through the other alerts I received this morning, I now see that there were alerts for both the VM WTSQLSRV2 and host TKJ1721VK at approximately the same time.
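The same VM-level comparison is quick in PowerCLI if you'd rather not click through the performance charts. This is just a sketch - it assumes a connected vCenter session (Connect-VIServer already run) and that disk stats exist at the configured statistics level for that window:

# sketch: compare average disk usage for the two VMs around the 5 AM spike
$start = Get-Date "07/06/2012 04:00"
$end   = Get-Date "07/06/2012 06:00"
Get-Stat -Entity (Get-VM WTSQLSRV2, WTDPW065C) -Stat "disk.usage.average" -Start $start -Finish $end |
    Group-Object { $_.Entity.Name } |
    Select-Object Name, @{N="AvgKBps"; E={[math]::Round(($_.Group | Measure-Object Value -Average).Average, 0)}}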

This is a fairly simple alert to resolve - the fix is tuning SQL and/or moving the VM to dedicated storage. It does show that vCOps has some merit, though it took more steps to identify the cause than I would have expected.
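If the fix ends up being the storage move, that piece at least is a one-liner in PowerCLI (the destination datastore name below is a placeholder for whatever dedicated storage gets carved out):

# sketch: Storage vMotion the busy SQL VM to a less contended datastore
Get-VM WTSQLSRV2 | Move-VM -Datastore (Get-Datastore "Dedicated_SQL_Datastore")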

Thursday, July 05, 2012

Importing EMC SAN data to vCenter Operations

I recently began testing vCenter Operations, and once I got it set up I quickly grew bored. Not because it's not a good product, but because the normalizing period takes several days or weeks, so there isn't much of an immediate WOW factor.

While waiting, I remembered hearing that EMC was releasing a Storage Analytics pack for vCOps. The solution isn't released yet, but there are several examples of using PowerShell to gather performance data and import it. A few posts on EMC's community pages (https://community.emc.com/thread/126637 and https://community.emc.com/thread/130486) give sample scripts to gather information from EMC SANs and import it.


I have a few notes to add here: First off, make sure you are using vCOps Enterprise. The Enterprise version allows you to create custom dashboards (https://vcopsAddr/vcops-custom), which is required to use non-VMware data sources.
Second, for an EMC SAN there are two scripts to run - one for gathering perf data, and one for creating the parent/child relationships of the SP/RG/LUN/Disk. The first script gathers and imports performance data continuously. Once it has posted data at least once, stop it and run the second script; when the second script completes, restart the first (see the sketch after these notes).
Third, the relationships associate the discrete SAN components, but not the SAN as a whole. To group all the disks as part of the same SAN, you have to create an application (Environment | Applications Overview | Add new application). Then you create the tiers and associate the resources with them. A slightly more manual process than it should be, but it works.
Fourth, you have to create your own dashboards. There are many options available, and their behaviors and interactions take a bit of experimenting before you understand what you are getting.
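For the second note above, the stop/run/restart sequence looks roughly like this. It is only a sketch - the script file names are placeholders for whatever you saved the EMC community scripts as, and the sleep is just a guess at how long the first posting takes:

# placeholder file names for the two scripts from the EMC community threads
$perfScript = ".\Import-EmcPerfToVcops.ps1"
$relScript  = ".\Set-EmcVcopsRelationships.ps1"
# start continuous performance collection in the background and let it post at least one batch
$perfJob = Start-Job -FilePath $perfScript
Start-Sleep -Seconds 900
# pause collection, build the SP/RG/LUN/Disk parent/child relationships, then resume collection
Stop-Job $perfJob; Remove-Job $perfJob
& $relScript
$perfJob = Start-Job -FilePath $perfScript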


So far I have been running the scripts as-is for a few days and am fairly impressed with how vCOps handles the data. I am still trying to fully understand the product and what I am seeing represented, but so far it looks good.