Thursday, December 30, 2010

Setting runlevel at boot

Changing the runlevel on a live system is fairly simple: as root, type init 3 (or whatever runlevel you want).
However, what if you need to specify a runlevel at boot time? Imagine there is something wrong with your X configuration that is stopping you from being able to log in, how do you force the system to only boot into console mode?
  1. At boot time the system will tell you to hit any key to enter the menu, pressing a key will allow you to see the GRUB boot menu
  2. Find the appropriate boot line and press e to edit the boot options
  3. The second line looks something like kernel /vmlinuz-2.6.32-71.el6......., select it and press e to edit it
    1. At the end of the line, enter the number of the runlevel you wish to boot into and hit enter
  4. Press b to start booting
That's all there is to it: the system will boot into the chosen runlevel for this cycle only. Subsequent reboots will operate the same as before.
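For illustration, the edited kernel line simply gains the runlevel at the end (the root= arguments here are placeholders; yours will differ):

```
kernel /vmlinuz-2.6.32-71.el6.x86_64 ro root=/dev/mapper/vg-root rhgb quiet 3
```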

NOTE: This can be very useful if you forget the root password. Simply boot into runlevel 1 and you are logged in as root without being prompted for a password.
Some people may see this as a security hole, but with physical access to almost any system you can do the same. This process may make it a little easier, but it's not much harder under Windows.

Installing Redhat in various ways

There are multiple methods and options available when installing Redhat. Here are just a few options and how to perform them.
VNC
  1. Insert the installation DVD and boot the system
  2. At the installation prompt, hit [TAB] to modify the boot options
  3. At the end of the boot command, enter vnc and hit enter to start the boot process
  4. The process will begin walking through setting up the network, then prompt you to connect using a vnc client
  5. Use vncviewer (or some other client) and connect via IP to 192.168.10.1:1 (note the :1 at the end)
Network install
  1. Create a repository on a server and share it via http, ftp, or nfs
  2. On the new system boot using the DVD
  3. At the installation prompt, hit [TAB] to modify the boot options
  4. At the end of the boot command, enter askmethod
  5. The installer will prompt you for language settings, then ask for an installation source type and IP information
    1. NFS: 192.168.10.1:/repo
    2. HTTP: http://192.168.10.1/repo
    3. FTP: ftp://192.168.10.1/repo
Kickstart
  1. First create or edit a kickstart script (there is a sample ks-anaconda.cfg in the root users home directory)
  2. Copy the kickstart script to a shared location/floppy/cd
  3. Boot the new system using the rhel6 DVD
  4. At the installation prompt, hit [TAB] to modify the boot options
  5. At the end of the boot command, enter the location of the kickstart script
    1. Floppy: ks=floppy
    2. CD: ks=cdrom:/ks-anaconda.cfg
    3. NFS: ks=nfs:192.168.10.1:/repo/ks-anaconda.cfg
    4. HTTP: ks=http://192.168.10.1/repo/ks-anaconda.cfg
iSCSI
  1.  During the installation process, a screen will prompt What type of devices will your installation involve? 
    1. Select Specialized Storage Devices and click Next
  2. Click the Add Advanced Target and select Add iSCSI target
  3. Select the appropriate network interface and configure the network
  4. Enter the Target IP Address and any additional information as needed, click Add target
  5. The installer will scan for iSCSI drives and return to the device selection list
  6. Select the tab Other SAN Devices and make sure the proper LUN is selected
Encrypted file system
  1. During the installation process, a screen will prompt Which type of installation would you like? 
    1. At the bottom of the screen, check Encrypt system
  2. You will be prompted for a passphrase that will be required to mount the partitions
    1. This must be at least 8 characters
  3. If you also chose to review the partitions, you can select individual partitions and Encrypt them

System Configuration and Management -- Use shell scripting to automate system maintenance tasks

This objective is quite a bit more nebulous than the others; with no clear end goal, it could mean almost anything. With that in mind, here are a few of the very basic scripts that I have found helpful in automating management. A good place to find help is the bash man page (man bash)

  • Doing something to each file in a directory
    • for i in *; do echo "$i"; done (globbing is safer than parsing ls output)
  • Doing something for each line in a file
    •  while read i; do echo $i; done < anaconda-ks.cfg
  • Repeating a task every 10 seconds
    • while true; do echo Hello World; sleep 10; done
  • Create a task that occurs the same time every day
    • crontab -e
    • Enter 1 22 * * * echo Hello World
  • Create a task that occurs once at a specific time/day
    • at 10pm Dec 31 [return]
    • echo Hello World [return]
    • [CTRL]+d (this submits the job; [CTRL]+z would only suspend the shell)
  • Creating an executable script
    • Identify a working set of bash commands and save them to a file
    • Add a #!/bin/bash as the first line (not required, but good form)
    • Execute chmod +x foo.sh to make it executable
http://www.linuxconfig.org/Bash_scripting_Tutorial is a great basic overview of bash scripting. Note that all these commands may work differently in a different shell.
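As a small worked example of the file-loop pattern above (the function name and file format are my own invention), this counts the non-blank, non-comment lines in a config-style file:

```shell
#!/bin/bash
# count_active_lines FILE -> prints the number of lines that are
# neither blank nor comments (lines starting with #)
count_active_lines() {
    local count=0 line
    while IFS= read -r line; do
        case "$line" in ''|\#*) continue ;; esac
        count=$((count + 1))
    done < "$1"
    echo "$count"
}
```

For example, count_active_lines /etc/ntp.conf shows how many directives are actually active in the file.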

System Configuration and Management -- Produce and deliver reports on system utilization (processor, memory, disk, and network)

sar appears to be the tool to save the day here. Installed and running by default, the sysstat package contains tools that capture system performance throughout the day, and automatically summarizes it for you. Generating utilization reports is then a simple matter of knowing the right sar command to execute. If all else fails, simply try man sar
  • Processor
    • Basic processor report: sar or sar -u
    • Basic report every second for the next 10 seconds: sar 1 10
    • Load average: sar -q
    • Per processor statistics: sar -P ALL
    • Power management (not enabled by default): sar -m
  • Memory
    • Kernel paging: sar -B
    • Unused memory: sar -r
    • Swap space: sar -S
  • Disk
    • Disk IO stats (avg): sar -b
    • Disk IO stats: sar -d (-p to use pretty names)
  • Network
    • Network statistics: sar -n DEV
    • Network errors: sar -n EDEV
  • Everything
    • All reports simultaneously: sar -A
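To actually deliver a report, a crontab entry can mail a daily summary; a sketch (the schedule, options, and recipient are examples, and % must be escaped inside crontab):

```
# crontab -e: mail yesterday's CPU, memory, and disk summary at 07:00
0 7 * * * sar -u -r -b -f /var/log/sa/sa$(date -d yesterday +\%d) | mail -s "daily sar report" root
```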
Links with more information:
http://www.linux.com/archive/feed/52570
http://www.softpanorama.org/Admin/Monitoring/sar.shtml

SMTP -- Configure an MTA to forward (relay) email through a smart host

This is a simple one, the mirror of accepting mail locally: by default postfix delivers outbound mail itself, and setting a relayhost routes it all through a smart host instead.

  1. Install the necessary packages
    1. yum install postfix 
  2. Edit the /etc/postfix/main.cf file
    1. Find the relayhost section and add a line relayhost = 192.168.10.1
  3. Restart the service
    1. service postfix restart
You should be able to test this by sending an email to a remote user.

Wednesday, December 29, 2010

NTP -- Synchronize time using other NTP peers

The easiest way to configure NTP is to use the GUI.
  1. On the top bar, right-click the time and select Preferences
  2. Click Time Settings, then Set System Time
  3. Check the box Synchronize date and time over the network
  4. Edit the list of NTP servers and click OK
Alternatively, you can execute system-config-date to go directly to step 3.

To perform the same via command line:
  1. Edit /etc/ntp.conf
    1. Configure 1 or more server lines like below
      1. server 0.rhel.pool.ntp.org
      2. or server 192.168.10.1
  2. Start the service
    1. service ntpd start
When all finished, make sure ntpd is set to start automatically at the next reboot: chkconfig ntpd on.
You can also perform a one-off sync by running ntpdate 192.168.10.1 (this only works if ntpd isn't running)

SMTP -- Configure a mail transfer agent (MTA) to accept inbound email from other systems

This is a simple one, by default postfix will accept only mail originating locally, and all it takes is updating a config file to change that.
  1. Install the necessary packages
    1. yum install postfix 
  2. Edit the /etc/postfix/main.cf file
    1. Find the line inet_interfaces = localhost and change it to inet_interfaces = all
  3. Restart the service
    1. service postfix restart
  4. Open the firewall
    1. iptables -I INPUT -p tcp --dport 25 -j ACCEPT
You should be able to test this by telnetting to port 25 from a remote computer. If you receive a connection, you're good to go.

SSH -- Configure key-based authentication

One of my favorite things to do when managing Linux systems is to configure key-based authentication. This allows me to connect to multiple systems via scripts without having to re-authenticate every time. A decent walkthrough is available at http://linuxproblem.org/art_9.html
Scenario: You are user A on host A, and you want to log onto host B as user B
  1. As user A on host A, execute
    1. ssh-keygen -t rsa
  2. Echo out the contents of ~/.ssh/id_rsa.pub (save to clipboard or copy via ssh to host B)
  3. As user B on host B, make the .ssh directory if it doesn't already exist, with permissions sshd will accept
    1. mkdir ~/.ssh && chmod 700 ~/.ssh
  4. Edit the file ~/.ssh/authorized_keys, enter the contents from id_rsa.pub, and chmod 600 ~/.ssh/authorized_keys
You're now done. Assuming all went well, user A on host A should be able to run ssh b@b and be automatically logged in
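The same steps can be rehearsed locally in a scratch directory standing in for user B's home (all paths here are temporary, not the real ~/.ssh); note that ssh-copy-id automates steps 2-4 when it is available:

```shell
#!/bin/bash
# rehearse the key setup in a throwaway directory
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -N '' -f "$keydir/id_rsa"              # step 1: user A's keypair
mkdir -p "$keydir/.ssh" && chmod 700 "$keydir/.ssh"         # step 3: B's .ssh dir
cat "$keydir/id_rsa.pub" >> "$keydir/.ssh/authorized_keys"  # steps 2+4: install key
chmod 600 "$keydir/.ssh/authorized_keys"                    # sshd rejects lax modes
```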

DNS -- Configure a caching-only name server to forward DNS queries

I am not sure what the difference is between forwarding queries and caching queries - they both pass requests on to another location.

There is a good walkthrough of setting up a caching nameserver at http://www.redhat.com/magazine/025nov06/features/dns/

Install the needed components yum install bind bind-utils bind-libs bind-chroot caching-nameserver

copy /etc/named.conf to /var/named/chroot/etc/
edit /var/named/chroot/etc/named.conf

  • Change listen-on port from 127.0.0.1; to any;
  • Change allow-query from localhost; to any;
  • Add forwarders { 1.2.3.4; 5.6.7.8; }; and forward only; to the options section
Restart dns - service named restart
Edit /etc/resolv.conf to use the local DNS server

NFS -- Provide network shares suitable for group collaboration

Restricting access to NFS shares can be done by restricting firewall access (iptables), or by configuring the /etc/exports file. The /etc/exports file can restrict access to a single machine, a wildcard, or an IP network.
It all starts with installing and starting NFS
  • yum install nfs-utils rpcbind
  • service nfs start
  • service rpcbind start
Restricting to a single machine (can be exported to an IP or hostname)
  • Edit /etc/exports
  • Configure the export command like the following
    • /media 192.168.10.10(rw,no_root_squash)
  • Restart the service - service nfs restart
Restricting to a wildcard -- this allows exporting to a name or IP address with wildcards
  • Edit /etc/exports
  • Configure the export command like the following
    • /media *.example.com(rw,no_root_squash)
    • or /media 192.168.*10(rw,no_root_squash)
  • Restart the service - service nfs restart
 Restricting to an IP network -- this allows exporting to an entire subnet, or group of addresses
  • Edit /etc/exports
  • Configure the export command like the following
    • /media 192.168.10.0/24(rw,no_root_squash)
  • Restart the service - service nfs restart
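The three forms can also live together in one /etc/exports entry; a sketch (hosts and options are examples):

```
# /etc/exports -- one export, several clients, each with its own options
/media  192.168.10.10(rw,no_root_squash)  *.example.com(ro)  192.168.10.0/24(rw)
```

After editing, exportfs -ra applies the changes without a full service restart.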

SMB -- Provide network shares suitable for group collaboration

Configuring SMB shares for groups is very similar to sharing for an individual. The only gotcha here is making sure permissions on the folder are set properly.
Scenario: You have a group named group1, users foo and bar are members of this group. You need to share a directory named /group1 to these users only.

  1. Install samba
    1. yum install samba-client samba-common samba
  2. Configure the /etc/samba/smb.conf file
  3. Find the line workgroup and set the correct workgroup name
  4. At the end of the file, create a new directory block using the same syntax as the others. Note the + in +group1 for valid users; this identifies it as a group instead of a user
    1.  #group1
    2. [group1]
    3. path = /group1
    4. writeable = yes
    5. browseable = yes
    6. valid users = +group1
  5. Save the file and restart the services -- service smb restart, service nmb restart
  6. Ensure the folder being shared is owned by the group
    1. chown root:group1 /group1
  7. Ensure the file permissions allow the group to read/write
    1. chmod -R 775 /group1 (using 2775 additionally sets the setgid bit so new files inherit the group)

SMB -- Provide network shares to specific clients

Setting up samba is always fun: the path between the Windows and Linux worlds is laden with landmines, and every time you think you have a working solution, it flakes out on you. Here are the steps needed to offer an SMB share to specific clients.
  1. Install samba 
    1. yum install samba-client samba-common samba
  2. Configure the /etc/samba/smb.conf file
    1. Find the line workgroup and set the correct workgroup name
    2. At the end of the file, create a new directory block using the same syntax as the others. This example will create a share named "foo" that is only accessible by user "foo"
      1. # foo
      2. [foo]
      3. path = /foo
      4. writeable = yes
      5. browseable = yes
      6. valid users = foo
  3. Save the file and restart the services -- service smb restart, service nmb restart
  4. Make sure the user "foo" exists, and set the samba password - smbpasswd -a foo

NFS -- Provide network shares to specific clients

Restricting access to NFS shares can be done by restricting firewall access (iptables), or by configuring the /etc/exports file. The /etc/exports file can restrict access to a single machine, a wildcard, or an IP network.
It all starts with installing and starting NFS
  • yum install nfs-utils rpcbind
  • service nfs start
  • service rpcbind start
Restricting to a single machine (can be exported to an IP or hostname)
  • Edit /etc/exports
  • Configure the export command like the following
    • /media 192.168.10.10(rw,no_root_squash)
  • Restart the service - service nfs restart
Restricting to a wildcard -- this allows exporting to a name or IP address with wildcards
  • Edit /etc/exports
  • Configure the export command like the following
    • /media *.example.com(rw,no_root_squash)
    • or /media 192.168.*10(rw,no_root_squash)
  • Restart the service - service nfs restart
 Restricting to an IP network -- this allows exporting to an entire subnet, or group of addresses
  • Edit /etc/exports
  • Configure the export command like the following
    • /media 192.168.10.0/24(rw,no_root_squash)
  • Restart the service - service nfs restart

FTP -- Configure anonymous-only download

Anonymous-only download appears to be enabled by default, so I am not sure why this is an objective. Below are the steps needed in case it isn't in the future.
  1. Install packages - yum install vsftpd ftp
  2. Edit /etc/vsftpd/vsftpd.conf
    1. Find the line anonymous_enable and set it to YES
    2. Ensure anon_upload_enable is set to NO
  3. Restart ftp - service vsftpd restart

DNS -- Configure a caching-only name server

There is a good walkthrough of setting up a caching nameserver at http://www.redhat.com/magazine/025nov06/features/dns/

Install the needed components yum install bind bind-utils bind-libs bind-chroot caching-nameserver

copy /etc/named.conf to /var/named/chroot/etc/
edit /var/named/chroot/etc/named.conf
  • Change listen-on port from 127.0.0.1; to any;
  • Change allow-query from localhost; to any;
  • Add forwarders { 1.2.3.4; 5.6.7.8; }; and forward only; to the options section
Restart dns - service named restart
Edit /etc/resolv.conf to use the local DNS server

HTTP/HTTPS -- Deploy a basic CGI application

The default apache configuration allows execution of CGI scripts in the /var/www/cgi-bin/ directory. This is controlled by the ExecCGI option for a specified directory. A good reference for this is at http://www.brennan.id.au/13-Apache_Web_Server.html#cgi

An example of this is below

<Directory "/var/www/cgi-bin">
    Options ExecCGI
    SetHandler cgi-script
</Directory>

Once a script is placed in the target directory and made executable (chmod 755), it will be run when requested. A sample CGI script to test with is below; more can be found online with a simple search
helloworld.cgi
#!/usr/bin/perl
print "Content-Type: text/plain", "\n\n";
print "Hello World in Perl", "\n";

HTTP/HTTPS -- Configure private directories

At first glance, this objective can mean 2 things: allowing users to setup public_html directories, or securing directories with configuration files or .htaccess. A great page detailing the setup of HTTP can be found at http://www.brennan.id.au/13-Apache_Web_Server.html

public_html
edit the /etc/httpd/conf/httpd.conf and find the line UserDir disabled. Comment out this line, and uncomment the line UserDir public_html.
Restart the web server - service httpd restart
NOTE: There may be multiple layers of security blocking access including folder, file and selinux restrictions.
Specifically, ensure the apache user has access to the home and public_html directories, as well as all files under the public_html directory. Additionally, run setsebool -P httpd_enable_homedirs true

Securing directories
edit the /etc/httpd/conf/httpd.conf file and add a block like the following (the directory path is an example)

<Directory /var/www/html/private>
        AuthType Basic
        AuthName "Private area - authorization required"
        AuthUserFile /etc/httpd/conf/authusers
        Require valid-user
</Directory>
Add users to the authusers file - htpasswd -c /etc/httpd/conf/authusers username (-c creates the file; omit it for subsequent users)
Restart the web server - service httpd restart
Access should now be restricted to username

.htaccess
This is traditionally used to restrict access to public_html directories since the average user doesn't have access to edit the httpd.conf file.
In the target folder, touch 2 files: .htaccess and .htauthusers
Edit .htaccess and enter the following (note the AuthUserFile appears to need a fully qualified path)
AuthType Basic
AuthName "Private Area"
AuthUserFile /home/username/public_html/private/.htauthusers
Require valid-user
Execute htpasswd -c .htauthusers username (omit -c after the first user)
Access should now be restricted to user username
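One gotcha: .htaccess files are ignored unless the server permits the override. A httpd.conf fragment to allow it (the path assumes the default UserDir layout):

```
<Directory /home/*/public_html>
    AllowOverride AuthConfig
</Directory>
```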

HTTP/HTTPS -- Configure a virtual host

HTTP virtual hosts allow a single web server to act like multiple web servers, either by publishing to multiple IPs and ports, or by publishing multiple sites and identifying them by name. This feature allows you to publish http://foo.example.com and http://bar.example.com from the same host with a single address, and the server returns the appropriate information based on the site name the customer has typed in.


STEP 1: Create directories to hold the content
cd /var/www
mkdir foo
mkdir bar
echo foo > foo/index.html
echo bar > bar/index.html

STEP 2: Create virtual directories
vi /etc/httpd/conf/httpd.conf
uncomment the line NameVirtualHost *:80
copy the commented VirtualHost example block at the bottom of the file twice, removing the # at the beginning of each line
edit the DocumentRoot and ServerName lines to match your new directories
Save the file and execute service httpd restart
Test
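The finished blocks would look something like this (names match the foo/bar example above):

```
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName foo.example.com
    DocumentRoot /var/www/foo
</VirtualHost>

<VirtualHost *:80>
    ServerName bar.example.com
    DocumentRoot /var/www/bar
</VirtualHost>
```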

Network Services -- Configure the service to start when the system is booted

Services can generally be started in 2 ways - on demand, and at bootup. The on demand startup is easy, service nfs start or /etc/init.d/nfs start.
Starting services at bootup is often controlled through the command chkconfig.
Below are several primary commands for using chkconfig, the NFS daemon will be used for the examples
  • List services and startup configuration
    • chkconfig --list
  • Configure service to start on boot
    • chkconfig nfs on
  • Configure service to start at a specific runlevel
    • chkconfig --level 3 nfs on
  • Configure service not to start on boot
    • chkconfig nfs off
  • Add a service to the list of services
    • chkconfig --add nfs
  • Remove a service from the list of services
    • chkconfig --del nfs

Network Services -- Install the packages needed to provide the service

Installing and configuring packages is a base function of managing a Linux environment. Packages are prebuilt files that are designed to install and run on multiple systems. The alternative is to download the source code for the packages and build them yourself - a step that would stop most people from ever using Linux.
There are 2 basic methods to install packages in redhat: manual installation via RPM, and using a package manager such as YUM.
RPM
RPM is a very versatile and flexible tool that allows for installation, removal, querying and updating of packages on a given system. The only drawback to this tool is that it doesn't handle package dependencies well.
Imagine you are installing program A, and program A requires program B in order to run. Using RPM to install program A will fail and report back that you need to install program B first. The administrator must then track down program B and use RPM to install it first, assuming that B doesn't have additional dependencies of its own.

YUM
YUM is a package management tool that allows for multiple distributed repositories of information that can be searched and installed. The tool performs automatic dependency management and will prompt to install any required packages as needed.
YUM is configured with files in the /etc/yum.repos.d/ folder. Each repository needs a minimum of 3 lines, an example is shown below
[RHEL6Server]
name=RHEL6Server
baseurl=http://ftp.redhat.com/pub/redhat/linux/enterprise/6Server/en/os/SRPMS/
For this exam objective, I am going to assume we are installing packages via YUM, and that the below services will be installed. In case you don't know the name of the package to install, try searching for it like yum search ftp
  • HTTP
    • yum install httpd
  • DNS
    • yum install bind bind-utils 
    • NOTE: bind-utils is included to assist in setup and testing
  • FTP
    • yum install ftp vsftpd 
    • NOTE: ftp is included to assist in setup and testing
  • NFS
    • yum install nfs-utils rpcbind 
    • NOTE: rpcbind is needed to enable NFSv3 connections
  • SMB
    •  yum install samba-client samba-common samba
  • SMTP
    • yum install postfix
    • NOTE: alternative SMTP servers can be configured, but this is the default redhat selection
  • SSH
    • yum install openssh-server
  • NTP
    •  yum install ntp ntpdate

Tuesday, December 28, 2010

System Configuration and Management -- Configure a system to accept logging from a remote system

In prior releases of redhat, remote logging was configured via syslogd. In RHEL6, this is replaced with rsyslog.

The first step is to setup a remote server to receive the logging messages, http://www.rsyslog.com/receiving-messages-from-a-remote-system/ has a great walkthrough on setting this up.
  1. Edit /etc/rsyslog.conf and clear the # before the lines allowing syslog reception
    1. $ModLoad imudp.so
    2. $UDPServerRun 514
    3. $ModLoad imtcp.so
    4. $InputTCPServerRun 514
  2. Restart the rsyslog daemon - service rsyslog restart
  3. Open the firewall to allow syslog connections
    1. iptables -I INPUT -p tcp --dport 514 -j ACCEPT
    2. iptables -I INPUT -p udp --dport 514 -j ACCEPT
    3. iptables-save > /etc/sysconfig/iptables
The next step is to configure the local system to send messages, http://www.rsyslog.com/sending-messages-to-a-remote-syslog-server/ has a great walkthrough on setting this up
  1. Edit /etc/rsyslog.conf and enter the below line (using the appropriate IP or DNS name; @@ sends via TCP, a single @ uses UDP)
    1. *.*   @@192.168.10.1:514
  2. Restart the rsyslog daemon - service rsyslog restart
Test the configuration by running logger -p warn foo. This will log a message in the local /var/log/messages and should log a similar message at the same location on the remote server

Migrating SQL servers

In the process of our SCOM rollout we used a temporary server for our SQL databases. Finally the new hardware came in, but now we have to migrate the databases with minimal downtime. This is compounded by the fact that the new server is 64bit OS and SQL, where the original was 32bit. For posterity, this is what I did to migrate the server to new hardware:

  1. Bring up the new server with 64bit OS, named as server1a (instead of server1)
  2. Setup the drives in the standard partitioning scheme (C: OS, D: SQL install, F: SQL data, G: SQL backup, L: SQL logs, T: SQL Temp)
  3. Install SQL 2005 64bit on D:
  4. Move the temp database to T:
  5. Copy the SCOM databases from the old server to the new
  6. Shutdown the original server1
  7. Rename Windows from server1a to server1
  8. Rename SQL from server1a to server1


Moving the tempDB

Using the information from the article at http://www.databasejournal.com/features/mssql/article.php/3379901 I executed the following SQL script

use master
go
Alter database tempdb modify file (name = tempdev, filename = 't:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\tempdb.mdf')
go
Alter database tempdb modify file (name = templog, filename = 't:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\templog.ldf')
Go


I then stopped SQL and copied the tempdb.mdf and templog.ldf from their default locations to the new location on T.

Copying the databases

  1. Stop SQL on the originating server
  2. Use robocopy to copy the data and log files to the new server (robocopy \\server1\f$ f:\ /mir /r:0 /w:0)
  3. In the new server, attach the databases

Renaming the server

Once the system was up and running with the databases attached, I then ran the following TSQL to rename the SQL instance to the new server
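The original script is not reproduced above; a minimal sketch of the standard rename sequence (an assumption on my part: a default instance, with a SQL service restart afterward) would be:

```sql
-- drop the old name, register the new one, then restart SQL Server
EXEC sp_dropserver 'server1a';
GO
EXEC sp_addserver 'server1', 'local';
GO
```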


Finally I renamed the windows server (powering off the original server first) and rebooted. Voila, everything works!

System Configuration and Management -- Configure a system to log to a remote system

In prior releases of redhat, remote logging was configured via syslogd. In RHEL6, this is replaced with rsyslog.

The first step is to setup a remote server to receive the logging messages, http://www.rsyslog.com/receiving-messages-from-a-remote-system/ has a great walkthrough on setting this up.
  1. Edit /etc/rsyslog.conf and clear the # before the lines allowing syslog reception
    1. $ModLoad imudp.so
    2. $UDPServerRun 514
    3. $ModLoad imtcp.so
    4. $InputTCPServerRun 514
  2. Restart the rsyslog daemon - service rsyslog restart
  3. Open the firewall to allow syslog connections
    1. iptables -I INPUT -p tcp --dport 514 -j ACCEPT
    2. iptables -I INPUT -p udp --dport 514 -j ACCEPT
    3. iptables-save > /etc/sysconfig/iptables
The next step is to configure the local system to send messages, http://www.rsyslog.com/sending-messages-to-a-remote-syslog-server/ has a great walkthrough on setting this up
  1. Edit /etc/rsyslog.conf and enter the below line (using the appropriate IP or DNS name; @@ sends via TCP, a single @ uses UDP)
    1. *.*   @@192.168.10.1:514
  2. Restart the rsyslog daemon - service rsyslog restart
Test the configuration by running logger -p warn foo. This will log a message in the local /var/log/messages and should log a similar message at the same location on the remote server

System Configuration and Management -- Configure a system as an iSCSI initiator that persistently mounts an iSCSI target

Wonderful iSCSI - the cheap man's method of SAN connectivity. A quick update on terminology: the iSCSI initiator is the client that initiates the connection; the iSCSI target is the server providing the storage.

The first thing to do is set up an iSCSI target - this is not a role provided by redhat out of the box, and the exam objective doesn't seem to state that it is expected. The easiest method I have seen of setting up a target is to use openfiler (http://www.openfiler.com/) - it's a quick install and a fairly easy configuration. There is a nice walkthrough at http://www.techhead.co.uk/how-to-configure-openfiler-v23-iscsi-storage-for-use-with-vmware-esx that details how to set up the filer for iSCSI.

Once setup, we now need to configure the iSCSI initiator. There is a great article on doing this at http://www.cyberciti.biz/tips/rhel-centos-fedora-linux-iscsi-howto.html

STEP 1: Install necessary packages
yum install iscsi-initiator-utils
service iscsi start

STEP 2: Configure initiator
Execute iscsiadm -m discoverydb -t sendtargets -p 192.168.10.1 -D to perform the discovery, then log in to the discovered target (iscsiadm -m node --login, or simply restart the iscsi service)
Use fdisk to view all partitions and identify the new disk fdisk -l (should be something like /dev/sdb)
Use fdisk to create a partition, then execute mkfs.ext4 /dev/sdb1 to format it as ext4

STEP 3: Make the disk mount persistent
Execute chkconfig iscsi on
Because the device name can change between reboots, redhat suggests mounting the partition by UUID; execute ls -l /dev/disk/by-uuid to find the UUID of the new disk
Edit /etc/fstab to configure the disk to mount on startup (should already be an example for /boot)
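An example fstab line (the UUID and mount point are placeholders); the _netdev option matters here, since it delays the mount until the network is up:

```
# /etc/fstab
UUID=<uuid-from-ls>  /mnt/iscsi  ext4  defaults,_netdev  0 0
```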

System Configuration and Management -- Configure system to authenticate using Kerberos

This is a new objective to RHEL6, previously the objective was to setup NIS. Apparently they realized that nobody uses NIS any more and updated the requirements.
Personally, I have been using a tool called Likewise Open (http://www.likewise.com/products/likewise_open/) that enables Linux systems to join an AD domain. Since I doubt this is the solution RedHat is looking for, it's back to the books for this one.
STEP 1: Ensure all packages are installed
For this to work properly, you need the kerberos and samba package both installed
yum install krb5-server pam_krb5 samba samba-common samba-winbind samba-client samba-winbind-clients

STEP 2: Configure the system to authenticate
Execute system-config-authentication and choose winbind for the account database
For security model, select ads
Under winbind domain, enter the short-name for the domain (i.e. without the .com)
Under ADS Realm, enter the FQDN of the domain
Under Domain Controllers, enter your preferred domain controller
Select a desired shell template
Click Join Domain and enter the credentials

STEP 3: Confirm
Log out of the system and attempt to log in using domain\user as the username

NOTE: This may be all wrong. I can't find any specific details on what redhat is looking for here (i.e. kerberos authentication via winbind)

System Configuration and Management -- Use /proc/sys and sysctl to modify and set kernel run-time parameters

In its simplest form, this is a fairly straightforward task. If you view the file /etc/sysctl.conf, you will see several attributes with their values; these are the values applied at startup.
This file can be edited directly with a text editor, then the values reloaded by executing sysctl -p

These values can also be changed at runtime by modifying the values under /proc/sys. For instance, in /etc/sysctl.conf there is an attribute named net.ipv4.ip_forward, this attribute can also be viewed or modified as /proc/sys/net/ipv4/ip_forward.
To view the running value, run cat /proc/sys/net/ipv4/ip_forward
To change the running value, run echo 1 > /proc/sys/net/ipv4/ip_forward

The kernel attributes and values available to change can be found by either browsing the /proc/sys folders, or by running sysctl -a
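The dotted sysctl name maps mechanically onto the /proc/sys path; a small sketch of the translation (read-only, so it needs no root):

```shell
#!/bin/bash
# translate a sysctl name into its /proc/sys file and read the live value
name=net.ipv4.ip_forward
path=/proc/sys/${name//.//}      # dots become slashes
cat "$path"                      # prints 0 or 1
```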

System Configuration and Management -- Use iptables to implement packet filtering and configure network address translation (NAT)

Assuming you know basic networking, routing, and firewalling, basic packet filtering in RHEL is fairly easy. While you can get quite complex with solutions, all we are worrying about here is basic filtering.

Packet filtering in RHEL6 is controlled using a program called iptables. You can find the syntax for managing the tables by running iptables -h, or for more detail use man iptables. You can view the currently implemented rules by running iptables -L, or by viewing the file /etc/sysconfig/iptables (these are the rules loaded at startup). Viewing the /etc/sysconfig/iptables file is probably the easiest way to understand the syntax needed to add/modify the rules.

Configuring NAT
NAT'ing is similar to the process of setting up routing, except the firewall rules are different. Here I will be using the information from http://www.revsys.com/writings/quicktips/nat.html as a basis for the below steps.
SCENARIO:  Your server has 2 network cards: eth0 and eth1. The external network (internet) is connected to eth0, and the internal network is connected to eth1. You want all hosts from eth1 to be able to access resources on eth0 via a NAT'ed connection.


STEP 1: Enable IP Forwarding
Edit /etc/sysctl.conf and find the line net.ipv4.ip_forward = 0
Change the value of this line to 1, save and exit the file
Execute sysctl -p to reload the file (or simply reboot)

STEP 2: Enable Masquerading
Execute the following commands to enable Masquerading (NAT'ing)
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT

Execute iptables-save > /etc/sysconfig/iptables to save the rules
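To verify the NAT setup, you can list the nat table with packet counters; the MASQUERADE rule's counters should climb as internal hosts generate traffic (a quick sanity-check sketch):

```shell
# Confirm forwarding is enabled (should print 1)
cat /proc/sys/net/ipv4/ip_forward

# List the POSTROUTING chain of the nat table, numerically, with counters
iptables -t nat -L POSTROUTING -n -v
```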

System Configuration and Management -- Route IP traffic and create static routes

Using a Linux system as a router is nothing new - in fact many routers run some flavor of Linux. To perform this task you need a system with at least 2 interfaces (physical, virtual, or vlans) to route traffic through.

The base of the actions I will be performing are from http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch03_:_Linux_Networking

SCENARIO:
RHEL6 server with 2 interfaces: eth0 and eth1. eth0 is configured with address 192.168.10.1/24, eth1 is configured with address 192.168.20.1/24. Hosts on either segment will use this server as their default gateway in order to access resources on the other segment.

STEP 1: Enable IP Forwarding
Edit /etc/sysctl.conf and find the line net.ipv4.ip_forward = 0
Change the value of this line to 1, save and exit the file
Execute sysctl -p to reload the file (or simply reboot)

STEP 2: Enable firewall to forward packets
Use the iptables command to enable forwarding of specific packets and traffic types.
Example: enable ICMP by executing iptables -I FORWARD -p icmp -s 192.168.0.0/16 -d 192.168.0.0/16 -j ACCEPT
When things are confirmed to be working, save the firewall rules so they reapply during a reboot by executing iptables-save > /etc/sysconfig/iptables
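The objective also calls for static routes, which the two-segment scenario above doesn't need. As a sketch (the 192.168.30.0/24 network and its next-hop address are hypothetical), a static route can be added at runtime with ip route and made persistent with a per-interface route file:

```shell
# Add a static route at runtime (lost on reboot)
ip route add 192.168.30.0/24 via 192.168.20.254 dev eth1

# Verify the routing table
ip route show

# Make it persistent on RHEL6: one route per line in route-<interface>
cat > /etc/sysconfig/network-scripts/route-eth1 <<'EOF'
192.168.30.0/24 via 192.168.20.254
EOF
```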

Preparing for my RHCE exam

I am scheduled to take my RHCE exam later this month. Prior to the release of RHEL6, I felt fairly confident in my ability to pass, but now that the new OS has been released, things have changed. Below is a list of the RHCE exam objectives (according to http://www.redhat.com/certification/rhce/objectives/). My hope is to methodically move through each one and document the steps needed to accomplish it.

Exam Objectives

RHCE exam candidates should consult the RHCSA Exam Objectives document and be capable of RHCSA-level tasks, as some of these skills may be required in order to meet RHCE exam objectives. Red Hat reserves the right to add, modify and remove objectives. Such changes will be made public in advance through revisions to this document.
RHCE exam candidates should be able to accomplish the following without assistance. These have been grouped into several categories.

System Configuration and Management

  • Route IP traffic and create static routes
  • Use iptables to implement packet filtering and configure network address translation (NAT)
  • Use /proc/sys and sysctl to modify and set kernel run-time parameters
  • Configure system to authenticate using Kerberos
  • Build a simple RPM that packages a single file
  • Configure a system as an iSCSI initiator that persistently mounts an iSCSI target
  • Produce and deliver reports on system utilization (processor, memory, disk, and network)
  • Use shell scripting to automate system maintenance tasks
  • Configure a system to log to a remote system
  • Configure a system to accept logging from a remote system

Network Services

Network services are an important subset of the exam objectives. RHCE candidates should be capable of meeting the following objectives for each of the network services listed below:
  • Install the packages needed to provide the service
  • Configure SELinux to support the service
  • Configure the service to start when the system is booted
  • Configure the service for basic operation
  • Configure host-based and user-based security for the service
RHCE candidates should also be capable of meeting the following objectives associated with specific services:

HTTP/HTTPS

  • Configure a virtual host
  • Configure private directories
  • Deploy a basic CGI application
  • Configure group-managed content

DNS

  • Configure a caching-only name server
  • Configure a caching-only name server to forward DNS queries
  • Note: Candidates are not expected to configure master or slave name servers

FTP

  • Configure anonymous-only download

NFS

  • Provide network shares to specific clients
  • Provide network shares suitable for group collaboration

SMB

  • Provide network shares to specific clients
  • Provide network shares suitable for group collaboration

SMTP

  • Configure a mail transfer agent (MTA) to accept inbound email from other systems
  • Configure an MTA to forward (relay) email through a smart host

SSH

  • Configure key-based authentication
  • Configure additional options described in documentation

NTP

  • Synchronize time using other NTP peers

Tuesday, December 21, 2010

Disaster Recovery Planning (DRP) and Business Continuity Plan (BCP) Template

Disaster Recovery Planning (DRP) and Business Continuity Plan (BCP) Template

Business Continuity Planning is the way an organization can prepare for and aid in disaster recovery. It is an arrangement agreed upon in advance by management and key personnel of the steps that will be taken to help the organization recover should any type of disaster occur. These programs prepare for multiple problems. Detailed plans are created that clearly outline the actions that an organization or particular members of an organization will take to help recover/restore any of its critical operations that may have been either completely or partially interrupted during or after (occurring within a specified period of time) a disaster or other extended disruption in accessibility to operational functions. In order to be fully effective at disaster recovery, these plans are recommended to be regularly practiced as well as outlined.
In layman's terms, a Business Continuity Plan or BCP is how an organization guards against future disasters that could endanger its long-term health or the accomplishment of its primary mission. BCPs take into account disasters that can occur on multiple geographic levels (local, regional, and national), such as fires, earthquakes, or pandemic illness. BCPs should be live and evolving strategies that are adjusted for any potential disaster that would require recovery; they should include everything from technological viruses to terrorist attacks. The ultimate goal is to help expedite the recovery of an organization's critical functions and manpower following these types of disasters. This sort of advanced planning can help an organization minimize the amount of loss and downtime it will sustain while simultaneously creating its best and fastest chance to recover after a disaster.

Disaster Recovery (DR) is the process an organization uses to recover access to the software, data, and/or hardware needed to resume normal, critical business functions after either a natural disaster or a disaster caused by humans. While Disaster Recovery plans, or DRPs, often focus on bridging the gap where data, software, or hardware have been damaged or lost, one cannot forget the vital element of manpower that composes much of any organization. A building fire might predominantly affect vital data storage, whereas an epidemic illness is more likely to have an effect on staffing. Both types of disaster need to be considered when creating a DR Plan. Thus, organizations should include in their DRPs contingencies for how they will cope with the sudden and/or unexpected loss of key personnel as well as how to recover their data.
Disaster Recovery Plans are generally part of a larger, more extensive practice known as Business Continuity Planning. DR plans should be well practiced so that the key players are familiar with the specific actions they will need to take should a disaster occur. DR plans must also be adaptable and routinely updated, e.g. if new people, a new branch office, or new hardware or software are added to an organization they should promptly be incorporated into the organization's disaster recovery plan. Companies must consider all these facets of their organization as well as update and practice their plan if they want to maximize their recovery after a disaster.

Business continuity / Disaster Recovery Plans come in various forms, each reflecting the corporation's particular set of circumstances. The following are some of the general steps required to develop and implement a plan.
  • Policy Statement (goal of plan, reasons, and resources)
  • Business Impact Analysis (how does a shutdown impact the business financially and otherwise)
  • Identify Preventive Steps (can disaster be avoided by taking prudent steps)
  • Recovery Strategies (how and what you will need to recover)
  • Plan Development (write plan and implement plan elements)
  • Plan buy-in and testing (very important so that everyone knows the plan and knows what to do)
  • Maintenance (continuous changes to reflect current situation)

Monday, December 20, 2010

Identifying and Eliminating Backup System Bottlenecks

I have recently had the opportunity to work with several companies with regards to their backup and recovery environment. Every company is unique - different hardware, different software, different policies, and different requirements. However, one thing remains constant across all of them -- Bottlenecks during backup.

I found a great article discussing bottlenecks, where they may be, and how to tailor your environment to eliminate them. The article is at http://www.snia.org/education/tutorials/2008/spring/data-management/Farmer-J_Eliminating_%20Backup_System_Bottlenecks.pdf; the abstract is included below.


Identifying and Eliminating Backup System Bottlenecks:  Taking Your Existing Backup System to the Next Level

Jacob Farmer
Identifying and Eliminating Backup System Bottlenecks: Taking Your Existing Tape Backup System to the Next Level -- This tutorial reveals the obvious and not-so-obvious bottlenecks found in enterprise backup systems and offers practical examples for applying the technologies described in the Data Protection tutorials to achieve one's performance objectives.  The goal of this session is to illustrate how one can take an existing backup system to the next level by integrating a combination of modern backup techniques and low-cost disk.  We start with the assumption that the end user has made a sizable investment in his/her enterprise backup system and is looking for a road map for affordable growth in both performance and capacity.  We also assume that tape is here to stay (at least for now) and that the ultimate goal is to get data on tape for off-site removal.  Topics include balancing the use of the LAN and SAN for backup traffic, ILM helping or hindering, achieving maximum performance from tape, disk staging with ordinary disk, de-duplication, block-level differencing, and virtual tape.  The take home message is that you cannot simply buy your way out of backup system headaches, you must design your way out.

Learning Objectives:
  1. To understand the underlying challenges to building a scalable backup system. Once you understand the root causes of poor backup performance, you have the means to set realistic goals and achieve them.
  2. To understand the various ways in which disk can be integrated into an existing backup system to deliver superior performance and reliability.  A little disk can go a long way.
  3. To be able to compare and contrast the major architectural paradigms in enterprise backup and recovery.

Wednesday, December 15, 2010

Cool screenshotting tool

I have long looked for the ideal tool for taking screenshots. While I have used the PrtScn button religiously in the past, I was always dismayed at its inflexibility - it can only capture one image at a time, only copy it to the clipboard, and that's it.
My ideal screenshot tool would have the following requirements:
  • Ability to capture the screen, window, or a specific area
  • Hotkeys to select which type of capture to perform
  • Automatically save the images to a folder
  • Lightweight and easy to use
  • Free
I finally came across a tool called Lightscreen (http://lightscreen.sourceforge.net/) that meets all of my requirements.

Tuesday, November 30, 2010

Using Oracle OVM templates in VMware

I was looking through several of Oracle's pre-built OVM templates - whole working virtual environments of many of Oracle's products. I would like to use these templates as a starting point for some work I am doing, but I need to use them in VMware, so the question is: how do I do it?

I started looking around, and it appears that because OVM is based on Xen, there are some options. Below are the steps I took to open the templates and access the data from a Linux machine.
  1. Download the templates from http://edelivery.oracle.com/EPD/GetUserInfo/get_form?caller=LinuxWelcome (make sure you get all the downloads needed for an environment)
  2. Copy the zip files to a linux environment
    1. ensure the Linux system has qemu-img installed (run qemu-img --help to confirm)
  3. Unzip the files (unzip V21365-01.zip), repeat for each file
    1. NOTE: You can delete the original zip files once completed
  4. Combine and extract the multi-part files (cat E1_ENT_898.3.1_X86_64.tar.bz2.part.* | tar -xjvf -)
    1. NOTE: You can delete the original bz2.part files once completed
  5. Extract the single files (tar -xjvf E1_HTML_898.3.1_WLS_X86_64.tar.bz2)
    1. NOTE: You can delete the original bz2 files once completed
  6. Convert the *.img files to vmdk files (qemu-img convert -f raw E1_ENT.img -O vmdk E1_ENT.vmdk)
    1. Due to white space consolidation, the vmdk file may be significantly smaller than the img file
    2. NOTE: You can delete the original img files once completed
  7. Copy the vmdk files to a VMware datastore and attach them to a Linux VM
    1. Shutdown the VM and add an existing hard drive
    2. The hard drive will attach as an IDE disk; if this doesn't work in your environment, you can perform a V2V using the VMware converter
  8. Mount the new disk's LVM partition
    1. Confirm the disk and partition are there (fdisk -l /dev/hdb)
    2. As root, execute (vgchange -ay)
    3. Mount the disk (mount /dev/VolGroup01/LogVol00 /root/foo)
  9. Review the setup and configuration files for recreation in your environment
    1. You can now browse the virtual disk and copy any configurations you need
Because the virtual hardware between OVM and VMware is different, I don't believe a direct conversion is possible (unless you have a working OVM environment and attempt an online V2V). This process, however, should give you access to all configuration and installation steps/options.
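The unpack-and-convert steps above can be condensed into a single shell session (the filenames match the examples in the list and will differ for other templates):

```shell
# Steps 3-5: unzip, reassemble the multi-part tarball, extract the rest
for z in V*.zip; do unzip "$z"; done
cat E1_ENT_898.3.1_X86_64.tar.bz2.part.* | tar -xjvf -
tar -xjvf E1_HTML_898.3.1_WLS_X86_64.tar.bz2

# Step 6: convert each raw .img to a VMware vmdk
for img in *.img; do
    qemu-img convert -f raw "$img" -O vmdk "${img%.img}.vmdk"
done
```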

NOTE: Your mileage may vary on the effectiveness of these steps. Also, I am not sure of any legal implications or licensing restrictions with regards to these products or downloads, make sure you check your licensing before using these.

Additional information:
Mounting LVM partition: http://www.fedoraforum.org/forum/archive/index.php/t-64964.html
Converting using qemu-img: http://communities.vmware.com/thread/193272, http://wiki.laptop.org/go/VMware/Convert

Monday, November 22, 2010

Removing left-over snapshots

We recently started using Avamar to perform VMDK backups of our VMware environment. This process creates a snapshot of a VM, mounts the snapshot to a backup appliance, backs up the snapshot, and then deletes the snapshot - a fairly decent process.
Unfortunately, sometimes it doesn't delete the snapshot, and if you leave them they can cause issues. The snapshots usually have unique names like 'Avamar-12903912904b5c2b7dc3ea83ca7de0f1c7020f6820cd1f5326' - not very descriptive, but at least we know they are Avamar snaps.

Powershell to list the snapshots
$snaps = Get-VM | Sort Name | Get-Snapshot | Where { $_.Name -like 'Avamar*' } 
$snaps | Select VM,Name,Description,Created

Powershell to remove the snapshots
foreach($snap in $snaps) 
{
    Remove-Snapshot -snapshot $snap -confirm:$false
}

Sunday, October 24, 2010

Does God give grace?

Romans 8:38-39
For I am convinced that neither death nor life, neither angels nor demons, neither the present nor the future, nor any powers, neither height nor depth, nor anything else in all creation, will be able to separate us from the love of God that is in Christ Jesus our Lord.
God promises forgiveness of our sins, and that he won't give up on us. But for how long? Through how much? Can we mess up so much that god won't forgive us?

Think back to how we were before we found God. God loved us enough to die for us even when we didn't care, or love, or even try. How much more now that we love him back!

Romans 5:6
You see, at just the right time, when we were still powerless, Christ died for the ungodly.

Friday, October 22, 2010

Reset the root password to a random value

I recently began setting up some Linux servers at work and became concerned about access to the root user account. I have stressed the need for using sudo, but the fact that people may know the root password was a gaping hole in accountability - if everyone logs in as root, how do I know who made what change?

I decided that a good method of enforcement would be to change the root password on a daily basis to a random string. The thinking is that anyone with sudo access doesn't need the password, the password can be changed through sudo, and in the worst case we can boot to a rescue disk and change it from there.

I did some searching on the net and found the below Perl script at http://www.sunmanagers.org/pipermail/summaries/2005-May/006495.html. I put this script in root's crontab and set it to run every day. Hopefully when I come back into the office, my root account password will have changed.

script:
#!/usr/bin/perl -w

use strict;

my $shadow="/etc/shadow";
my @Salt=("a".."z","A".."Z",0..9,".","/");

my ($uid,$gid)=(stat($shadow))[4,5];
(defined $uid) || die "$0: Failed to stat(\"$shadow\") - $!\n";

my $pass="";
map { $pass=$pass . sprintf("%c",int(rand(93))+33) } (1..8);
my $unx=crypt($pass,$Salt[int(rand($#Salt + 1))] . $Salt[int(rand($#Salt + 1))]);

open(OLD,$shadow) || die "$0: Failed to open $shadow for reading - $!\n";
open(NEW,">$shadow.new") || die "$0: Failed to open $shadow.new for writing - $!\n";
chmod(0600,"$shadow.new") || die "$0: Failed to chmod(0600,\"$shadow.new\") - $!\n";

while(<OLD>) {
 s/^(root):[^:]+:(.*)$/$1:$unx:$2/;
 print NEW $_;
}

if (!close(OLD)) {
 unlink("$shadow.new");
 die "$0: Failed to close file handle on $shadow - $!\n";
}
if (!close(NEW)) {
 unlink("$shadow.new");
 die "$0: Failed to close file handle on $shadow.new - $!\n";
}

if (!chown($uid,$gid,"$shadow.new")) {
 unlink("$shadow.new");
 die "$0: Failed to chown($uid,$gid,\"$shadow.new\") - $!\n";
}

if (!rename("$shadow.new",$shadow)) {
 unlink("$shadow.new");
 die "$0: Failed to rename(\"$shadow.new\",\"$shadow\") - $!\n";
}

exit 0;

Thursday, October 21, 2010

Using RedHat public YUM repository

I never found anything official about RedHat having a public YUM repository, but I stumbled across something that appears to work.

Simply create a new repo file under /etc/yum.repos.d/ and set the baseurl line to
baseurl = http://ftp.redhat.com/pub/redhat/linux/enterprise/5Server/en/os/SRPMS/
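Assuming the standard .repo file format, the complete file might look like this (the repo id, name, and filename are arbitrary choices of mine; gpgcheck is disabled here only because this is an unofficial source):

```shell
cat > /etc/yum.repos.d/redhat-public.repo <<'EOF'
[redhat-public]
name=Red Hat public repository
baseurl=http://ftp.redhat.com/pub/redhat/linux/enterprise/5Server/en/os/SRPMS/
enabled=1
gpgcheck=0
EOF
```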

Tuesday, October 19, 2010

Definition of Cloud Computing

We have heard a lot about cloud computing recently, but what exactly is it? After a lot of sales pitches, and various lingo thrown around, I finally found this article (http://csrc.nist.gov/groups/SNS/cloud-computing/cloud-def-v15.doc) that breaks it down into these 5 key items:
  1. On-demand self-service
  2. Broad network access
  3. Resource pooling
  4. Rapid elasticity
  5. Measured Service
If you have started virtualizing your environment and use web applications at all, you probably have most of these features - but it is important to have all five to properly scale up and out.
An example of the reasoning why you want all of these components is at http://www.oracle.com/dm/offers/fy11/2_powering_new_generation_cloud_implementations.pdf, which walks through the transition from a silo environment to a cloud environment.

Thursday, September 23, 2010

Export and Import VMware affinity rules

More searching for migration of my VCenter and I found http://ict-freak.nl/2009/09/06/powercli-export-and-import-drs-rules-v2/ which allows me to export and import the affinity rules


Export

Get-Cluster -Name "TUK_UT_04" | Get-DrsRule | `
Export-CliXml 'd:\temp\AffinityRules.xml' 

Import

ForEach ($rule in (Import-CliXml 'd:\temp\AffinityRules.xml')){
    New-DrsRule -Cluster (Get-Cluster -Name "TUK_UT_04") `
    -Name $rule.Name -Enabled $rule.Enabled `
    -KeepTogether $rule.KeepTogether `
    -VM (Get-VM -Id $rule.VmIds)} 

Export and Import VMware customization specs

While looking for ways to migrate to our new VCenter server, I came across http://www.van-lieshout.com/2009/07/export-and-import-customization-profiles-using-powershell/ which describes how to export your customization specs (passwords and all) and then reimport them into the new VC.


Export
$path="d:\temp"
#Export Customization Profiles
$view = get-view CustomizationSpecManager
ForEach ($CustomizationProfile in $view.info) {
    $xml = $view.CustomizationSpecItemToXml($view.GetCustomizationSpec($CustomizationProfile.name))
    $xml | Out-File ($path + "\" + ($CustomizationProfile.name) + ".xml")
}

Import
$path="d:\temp"
#Import Customization Profiles
$view = Get-View CustomizationSpecManager
ForEach ($xmlfile in (Get-ChildItem -Path $path | where {$_.extension -eq ".xml"})) {
    $xml = Get-Content ($xmlfile)
    $view.CreateCustomizationSpec($view.XmlToCustomizationSpecItem($xml))
}

Script to migrate to new VCenter

We are looking at building a new VCenter on new x64 hardware with a new DB server and were struggling with how to migrate our environment from one server to the other. Thanks to the script at http://technodrone.blogspot.com/2010/01/vcenter-powercli-migration-script.html, it looks like this may be an easy task.
It appears that some properties (such as affinity rules) aren't copied over, but it's a darn good start.

Monday, September 20, 2010

TSQL script to shrink all log files

I got alerted over the weekend to a SQL server running out of disk space on one of its drives. Upon further inspection, several of the LOG files had grown and were filling the log drive. I went to shrink the files, but realized there must be a better way.

A little googling and I found http://codesnippets.joyent.com/posts/show/665. This script cycles through all DBs and shrinks the LOG files for each one. Simply schedule it with the SQL Server Agent and you're good to go.


declare @ssql nvarchar(4000)
set @ssql= '
        if ''?'' not in (''tempdb'',''master'',''model'',''msdb'') begin
        use [?]
        declare @tsql nvarchar(4000) set @tsql = ''''
        declare @iLogFile int
        declare LogFiles cursor for
        select fileid from sysfiles where  status & 0x40 = 0x40
        open LogFiles
        fetch next from LogFiles into @iLogFile
        while @@fetch_status = 0
        begin
          set @tsql = @tsql + ''DBCC SHRINKFILE(''+cast(@iLogFile as varchar(5))+'', 1) ''
          fetch next from LogFiles into @iLogFile
        end
        set @tsql = @tsql + '' BACKUP LOG [?] WITH TRUNCATE_ONLY '' + @tsql
        --print @tsql
        exec(@tsql)
        close LogFiles
        DEALLOCATE LogFiles
        end'

exec sp_msforeachdb @ssql

Thursday, September 16, 2010

Executing commands in a VM via powershell

I got to thinking today that when you tell a VM to shutdown the OS, it is actually executing a command against the VM itself, not just in VMware. I decided to do a little searching to see what it takes to do something similar with other commands (such as srvinfo, or listing disk usage) and started googling.
I came across this interesting post that describes how to change a VM's IP address using the Invoke-VMScript PowerShell command: http://www.virtu-al.net/2010/02/05/powercli-changing-a-vm-ip-address-with-invoke-vmscript/
A little more searching and I found the following page, which lists several other PowerShell functions to interact with guest VMs - Copy-VMGuestFile and Invoke-VMScript are two particularly interesting ones: http://www.amikkelsen.com/?p=357

Wednesday, September 15, 2010

Powershell to disconnect CD and Floppy drives in ESX

This script will enumerate all VMs and disconnect all CDs and Floppy drives
get-cddrive -VM *  |where {$_.ISOPATH -ne $null}|set-cddrive  -nomedia -Confirm:$False
get-cddrive -VM *  |where {$_.HOSTDevice -ne $null}|set-cddrive  -nomedia -Confirm:$False

get-FloppyDrive -VM *  |where {$_.FloppyImagePath -ne $null}|set-FloppyDrive -nomedia -Confirm:$False
get-FloppyDrive -VM *  |where {$_.HOSTDevice -ne $null}|set-FloppyDrive -nomedia -Confirm:$False

Syntax highlighting in blogger

http://www.cyberack.com/2007/07/adding-syntax-highlighter-to-blogger.html

List of brushes http://alexgorbatchev.com/SyntaxHighlighter/manual/brushes/

Brush name | Brush aliases | File name
ActionScript3 | as3, actionscript3 | shBrushAS3.js
Bash/shell | bash, shell | shBrushBash.js
ColdFusion | cf, coldfusion | shBrushColdFusion.js
C# | c-sharp, csharp | shBrushCSharp.js
C++ | cpp, c | shBrushCpp.js
CSS | css | shBrushCss.js
Delphi | delphi, pas, pascal | shBrushDelphi.js
Diff | diff, patch | shBrushDiff.js
Erlang | erl, erlang | shBrushErlang.js
Groovy | groovy | shBrushGroovy.js
JavaScript | js, jscript, javascript | shBrushJScript.js
Java | java | shBrushJava.js
JavaFX | jfx, javafx | shBrushJavaFX.js
Perl | perl, pl | shBrushPerl.js
PHP | php | shBrushPhp.js
Plain Text | plain, text | shBrushPlain.js
PowerShell | ps, powershell | shBrushPowerShell.js
Python | py, python | shBrushPython.js
Ruby | rails, ror, ruby | shBrushRuby.js
Scala | scala | shBrushScala.js
SQL | sql | shBrushSql.js
Visual Basic | vb, vbnet | shBrushVb.js
XML | xml, xhtml, xslt, html | shBrushXml.js

VBScript to list and change DNS settings on all domain computers

If you ever change your Domain Controllers / DNS servers, you need to ensure all clients begin accessing the new servers before the old ones are retired. For DHCP this is easy - edit the DHCP scopes and wait until all the clients have renewed. For statically assigned IPs, this can become a problem.

Below is a script that searches AD for all domain computers and queries them via WMI for DNS settings on each NIC. If the SetDNSServerSearchOrder line is uncommented, it will reset the DNS configuration of the systems.


'QueryDNS.vbs

Const ADS_SCOPE_SUBTREE = 2
arrNewDNSServerSearchOrder = Array("10.10.52.14","10.10.52.15")

EnumComputers

SUB EnumComputers
    on error resume next
    strDomain = "mydomain.com"

    Set objConnection = CreateObject("ADODB.Connection")
    Set objCommand =   CreateObject("ADODB.Command")
    objConnection.Provider = "ADsDSOObject"
    objConnection.Open "Active Directory Provider"

    Set objCommand.ActiveConnection = objConnection
    objCommand.CommandText = "Select Name, Location, whenChanged from 'LDAP://" & strDomain & "' Where objectCategory='computer'"  
    objCommand.Properties("Page Size") = 1000
    objCommand.Properties("Searchscope") = ADS_SCOPE_SUBTREE 
    Set objRecordSet = objCommand.Execute
    objRecordSet.MoveFirst

    Do Until objRecordSet.EOF
        dLastChanged = objRecordSet.Fields("whenChanged").Value
        serverName = objRecordSet.Fields("Name").Value
        if dLastChanged>now()-60 THEN 
            QueryDNS serverName
        END IF
            objRecordSet.MoveNext
    Loop
END SUB

SUB QueryDNS(strServerName)
    on error resume next
    Set objWMIService =    GetObject("winmgmts:{impersonationLevel=impersonate}!\\" & strServerName & "\root\cimv2")
    'Set colNICConfigs = objWMIService.ExecQuery("SELECT DNSServerSearchOrder, Description FROM Win32_NetworkAdapterConfiguration WHERE IPEnabled = True AND DHCPEnabled = false")
    Set colNICConfigs = objWMIService.ExecQuery("SELECT    DNSServerSearchOrder, Description FROM Win32_NetworkAdapterConfiguration WHERE IPEnabled = True")
    for each objNICConfig in colNICConfigs
        OldDNSConfiguration = Join(objNICConfig.DNSServerSearchOrder, ",")
        if LEN(OldDNSConfiguration)>1 THEN
            wscript.echo strServerName &","& OldDNSConfiguration
            'objNICConfig.SetDNSServerSearchOrder(arrNewDNSServerSearchOrder)    ' Uncomment to reset DNS servers
        end if
    next
END SUB

VBScript to list the OS Version of all computers in domain

In a large environment, inventory information can quickly become out of date. Below is a script that searches AD for all computers and then reports the OS from each system. The OS information comes from the target computer, so firewalls and power state are important.


'OSVersion.vbs

on error resume next

Const ADS_SCOPE_SUBTREE = 2
strDomain = "mydomain.com"

Set objConnection = CreateObject("ADODB.Connection")
Set objCommand =   CreateObject("ADODB.Command")
objConnection.Provider = "ADsDSOObject"
objConnection.Open "Active Directory Provider"

Set objCommand.ActiveConnection = objConnection
objCommand.CommandText = "Select Name, Location from 'LDAP://" & strDomain & "' Where objectCategory='computer'"  
objCommand.Properties("Page Size") = 1000
objCommand.Properties("Searchscope") = ADS_SCOPE_SUBTREE 
Set objRecordSet = objCommand.Execute
objRecordSet.MoveFirst

Do Until objRecordSet.EOF
    QueryServer objRecordSet.Fields("Name").Value
    objRecordSet.MoveNext
Loop


SUB QueryServer(strServer)
on error resume next
    Set objWMI = GetObject("winmgmts:" & "{impersonationLevel=impersonate,(Security)}!\\" & strServer & "\root\cimv2")
    SET objOSs = objWMI.ExecQuery("select * from Win32_OperatingSystem")
    For Each objOS in ObjOSs
        strCaption = objOS.Caption
        strBuildNumber = objOS.BuildNumber
        wscript.echo strServer & "," & strCaption' & " - " & strBuildNumber
        NEXT

END SUB 

VBScript to list file owner and other attributes

Occasionally I need to look at the owner of a file or group of files. The below script helps me enumerate a folder and list the file attributes


'GetOwner.vbs

Set objShell = CreateObject ("Shell.Application")
Set objFolder = objShell.Namespace ("D:\tools\AD")
Set objFSO = CreateObject("Scripting.FileSystemObject")

Dim arrHeaders(13)
For i = 0 to 13
    arrHeaders(i) = objFolder.GetDetailsOf (objFolder.Items, i)
Next
For Each strFileName in objFolder.Items
    For i = 0 to 13
        If i <> 9 then
            Wscript.echo arrHeaders(i)     & ": " & objFolder.GetDetailsOf (strFileName, i)
        End If
    Next
    Wscript.Echo
Next

VBScript to enumerate domain computers

Ever need to list all the computers in your Active Directory? This handy script will query AD and return all computer objects. A little tweaking and you can search for objects based on last modified time to ensure you are listing valid AD objects.



'EnumComputers.vbs

Const ADS_SCOPE_SUBTREE = 2
strDomain = "mydomain.com"

Set objConnection = CreateObject("ADODB.Connection")
Set objCommand =   CreateObject("ADODB.Command")
objConnection.Provider = "ADsDSOObject"
objConnection.Open "Active Directory Provider"

Set objCommand.ActiveConnection = objConnection
objCommand.CommandText = "Select Name, Location from 'LDAP://" & strDomain & "' Where objectCategory='computer'"  
objCommand.Properties("Page Size") = 1000
objCommand.Properties("Searchscope") = ADS_SCOPE_SUBTREE 
Set objRecordSet = objCommand.Execute
objRecordSet.MoveFirst

Do Until objRecordSet.EOF
    wscript.echo objRecordSet.Fields("Name").Value
'    Wscript.Echo "Computer Name: " & objRecordSet.Fields("Name").Value
'    Wscript.Echo "Location: " & objRecordSet.Fields("Location").Value
    objRecordSet.MoveNext
Loop