VMware HA: all change for AAM. Hello FDM in vSphere 5.0

VMware HA has been rewritten from the ground up for vSphere 5.0. Some of the important features to bring to light are as follows.
• Provides a foundation for increased scale and functionality
• Eliminates common issues (such as the reliance on DNS resolution)
• Multiple Communication Paths
• Can leverage storage as well as the management network for communications
• Enhances the ability to detect certain types of failures and provides redundancy
• IPv6 Support
• Enhanced Error Reporting
• One log file per host eases troubleshooting efforts
• Enhanced User Interface
• Enhanced Deployment Mechanism
One of the major changes with VMware HA 5.0 is the rewrite of the underlying code. In 4.x the agent was AAM, which stands for "Automated Availability Manager", and it was responsible for communicating resource information, HA properties and virtual machine states to the other nodes in the cluster. AAM was also responsible for the failure/isolation heartbeats. With vSphere 5.0 the AAM agent is gone; it has been replaced by the FDM agent, or Fault Domain Manager. This agent is important because the Primary/Secondary node concept has also gone, replaced by a Master/Slave concept in which FDM plays a major part. There is now only one Master in the cluster, on which the FDM agent runs in the Master role; on all other nodes the FDM agent runs in the Slave role. One of the Slave nodes can be promoted to Master if the original Master node fails.
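A quick way to see what the FDM agent is doing on a given host is to follow its log from the ESXi shell. This is just a sketch, assuming the default ESXi 5.0 log location for the agent:

# Follow the FDM (vSphere HA) agent log on an ESXi 5.0 host
tail -f /var/log/fdm.log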
The Master monitors the availability of the ESXi 5 hosts and also gathers information on VM availability. The Master agent monitors all Slave nodes, and if a Slave host fails, all of the VMs on that node are restarted on another node.
If the Master node fails then a re-election process takes place, and the host which has access to the largest number of datastores is elected as the new Master. There is a good reason for this, as there is a new feature which allows the hosts to communicate via datastores for heartbeating.
This communication via a secondary channel through datastores is known as Heartbeat Datastores. The secondary channel is not used in normal situations; it only comes into play if the primary (management) network goes down. It allows the Master to remain aware of all Slave nodes and of the VMs running on those hosts, and it also lets the Master determine whether a host has become isolated or whether a network partition has occurred for that host.
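If you want to see this secondary channel in action, you can browse a heartbeat datastore from the ESXi shell. The path below is only a sketch and an assumption: "datastore1" is a placeholder name, and the hidden .vSphere-HA folder is where FDM keeps its files at the root of a heartbeat datastore in a default vSphere 5.0 setup:

# List the FDM files on a heartbeat datastore ("datastore1" is a placeholder)
ls -la /vmfs/volumes/datastore1/.vSphere-HA/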
The Master node also reports state information to vCenter. The Slaves monitor the state of their running VMs and send this information to the Master, and the Slaves know the Master is alive via heartbeats. The Slaves in turn send heartbeats to the Master, and should the Master fail, that is when the re-election process occurs. vCenter will know when a new Master is elected, as the new Master informs vCenter once the election process has finished.

Getting started with KVM

This is the first in my series of KVM tutorials. This guide will walk you through the installation and configuration of KVM.

Getting the system ready for KVM Virtualization

For those of you that have played with Xen, you will have noticed that it is normally necessary to have the correct version of the kernel to run it; with KVM this is not the case. Today's versions of Linux will more than likely be ready to support KVM from within their kernels, and all that is left for you to do is to install the KVM kernel module.
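Before installing anything it is also worth confirming that the CPU provides the hardware virtualization extensions KVM relies on (Intel VT-x or AMD-V). A quick check, where a result greater than zero means the flags are present:

# Count the vmx (Intel) or svm (AMD) flags in the CPU feature list
egrep -c '(vmx|svm)' /proc/cpuinfo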
With a standard installation the modules and tools are not installed by default unless you specifically select them, for example within the RHEL 6 installation process.
To install KVM from the command prompt, execute the following command in a terminal window with root privileges:

yum install kvm virt-manager libvirt

If the installation fails, check that you have not attempted to install KVM on a 32-bit system, or on a 64-bit system running a 32-bit version of RHEL 6, in which case you will see an error similar to the following:

Error: libvirt conflicts with kvm
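A quick way to confirm that both the hardware and the installed RHEL 6 build are 64-bit is to check the machine architecture, which should report x86_64:

# Should print x86_64 on a 64-bit installation
uname -m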

Once the installation of KVM is complete, it is recommended that you close any running applications and reboot the system.
Once the system has restarted, you can verify the installation by making sure that the two KVM kernel modules have been loaded. This can be done by running the following commands:

su -
lsmod | grep kvm

The output of the above command should look similar to the following:

lsmod | grep kvm
kvm_intel              45578  0
kvm                   291875  1 kvm_intel
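If the kvm modules are not listed, they can usually be loaded manually. This is a sketch; use kvm_amd instead of kvm_intel on AMD hardware:

# Load the KVM modules by hand (Intel shown; use kvm_amd on AMD systems)
modprobe kvm
modprobe kvm_intel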

The installation should have configured the libvirtd daemon to run in the background. Using a terminal window with super user privileges, run the following command to check that libvirtd is running:

/sbin/service libvirtd status
libvirtd (pid  xxxx) is running...

If the process is not running, it can be started as follows:

/sbin/service libvirtd start
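On RHEL 6 you can also make sure libvirtd starts automatically at boot:

# Enable libvirtd at boot (SysV init on RHEL 6)
/sbin/chkconfig libvirtd on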

You’re now ready to launch the Virtual Machine Manager (virt-manager) by selecting Applications > System Tools > Virtual Machine Manager. If the QEMU entry is not listed, select the File > Add Connection menu option and select the QEMU/KVM hypervisor before clicking on the Connect button.
If all went OK then you should now be ready to create virtual machines into which guest operating systems may be installed.
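As a quick command-line check that the hypervisor connection works, you can ask libvirt for the list of defined guests (it will simply be empty at this point):

# List all guests (running and shut down) on the local QEMU/KVM hypervisor
virsh -c qemu:///system list --all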
In the next post we shall look at virtual machine creation and general configuration for KVM, such as disks, networks and system management.

Hello world!

Hi and welcome to my blog,
So I have finally done it, I've started my own blog. It's probably about time, after all the comments I was getting from friends and colleagues about doing one for virtualization.
For more about me please see my "about" page. I shall make every effort to cover what interests me most about virtualization, and in the process I hope it will give you interest and information too.
So that's it for my first post, nice and short. Next post I will be focusing a little on vSphere 5.0 and specifically VMware HA. Bye till then…