Openstack Juno - RDO Packstack deployment to an external network & config via Neutron

Openstack is a solution I have been loosely following for the past couple of years, but I never really had the time to give it the focus it deserves. The current project I am working on involves interaction with Openstack, so it is a great opportunity to gain some in-depth, hands-on experience and a better insight into how the various Openstack components click together, and the level of integration required with existing environments.
Having already built a fairly solid VMware and Hyper-V lab environment, I wasn't going to crash and burn what I already have; I needed to shoehorn an Openstack deployment into the lab, utilizing the existing network and storage facilities. This blog post lays out the steps required to take an Openstack deployment from start to an operational build, and covers some of the hurdles I encountered along the way. For background, my existing lab uses a typical 192.168.1.0/24 range of IPv4 addresses and has a router to the outside world at 192.168.1.254. If your lab is the same, then it's simply a matter of running the commands; if not, modify the address ranges to suit yours.
So many flavors to choose from.
Before I go into the steps, I wanted to highlight some of the hurdles I hit while building the Openstack deployment. The first question I asked myself was which distribution to use; initially I reviewed the Openstack docs for the process of building the latest version, Juno. Ubuntu and CentOS seemed to be the most common distributions, and I went for Ubuntu first because of the DevStack deployment process, which a friend of mine suggested I check out. The docs surrounding DevStack (http://docs.openstack.org/developer/devstack/) are good but not entirely straightforward, as it wasn't clear exactly which files needed creating or modifying to build the environment; for example, whether you need to create a configuration file (local.conf or localrc) to get the options you want installed and configured. After a couple of attempts I did get a working environment going, but initially it was a basic Nova networking setup only. Once I found the correct way to configure the local.conf file I got Neutron installed, although configuring it was another matter. After many late nights trying to get a working environment, I eventually gave up on it.
After ditching the Ubuntu build I looked at building with CentOS; having used Red Hat for many years, it felt much more comfortable. I researched the options with CentOS and went for an automated installation using RDO (https://www.rdoproject.org/Main_Page), a community project for Red Hat, Fedora and CentOS deployments, supported by users of the community. One thing I found with both DevStack and RDO is that the information is out there, but it is spread all over the place and not all sites are up to date; some still focus on Havana or Icehouse, and not many cover Juno. Hopefully this guide brings the installation steps together into a single document that will help you.
Building out the Openstack environment following steps 1 to 27
Below are the steps I have put together to build out an Openstack Juno deployment on a single VM or physical system running CentOS 7. It uses Neutron and connects to the existing external lab network of 192.168.1.0/24. The Openstack VM will have an IP of 192.168.1.150, which we will configure as a bridge. We will create a new network for the Openstack instances using a private IP pool of 10.0.0.0/24, plus floating IPs on 192.168.1.0/24 with an allocation range of 192.168.1.201-192.168.1.220, giving me 20 addresses available for instances if needed.
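For reference, the allocation pool created in step 26 (192.168.1.201 to 192.168.1.220) works out to 20 floating IPs when counted inclusively:

```shell
# Inclusive address count of the floating IP pool .201-.220.
echo $((220 - 201 + 1))   # prints 20
```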
I will use vSphere 6, but vSphere 5.x would be fine too. My vSphere servers can run nested virtualization, which is ideal as I can take a snapshot and revert it if anything fails.
1. Create a new VM. For my requirements I created a VM with 16 GB of RAM, which is enough to run a couple of instances alongside Openstack, a 20 GB boot disk, and a second 100 GB disk that I will use for Cinder (block storage). I also attached two virtual network cards, both connected directly to the main network.
2. Install CentOS 7.0 on the VM or physical system; I used CentOS-7.0-1406-x86_64-Minimal.iso for my build. Install the OS, supplying the configuration inputs as the installer asks for them.
3. Some additional housekeeping I do on the image is to rename the enoXXXXXXXX network devices to eth0 and eth1; I'm a bit old school with device naming.
Modify /etc/default/grub and append 'net.ifnames=0 biosdevname=0' to the GRUB_CMDLINE_LINUX= line.

# vi /etc/default/grub
GRUB_CMDLINE_LINUX="rd.lvm.lv=rootvg/usrlv rd.lvm.lv=rootvg/swaplv crashkernel=auto vconsole.keymap=us rd.lvm.lv=rootvg/rootlv vconsole.font=latarcyrheb-sun16 rhgb quiet net.ifnames=0 biosdevname=0"

4. Next, generate a new GRUB configuration:

# grub2-mkconfig -o /boot/grub2/grub.cfg

5. Rename the config file for the first eno device (your device names will differ):

# mv /etc/sysconfig/network-scripts/ifcfg-eno16777736 /etc/sysconfig/network-scripts/ifcfg-eth0

6. Repeat for the second device, eth1:

# mv /etc/sysconfig/network-scripts/ifcfg-eno32111211 /etc/sysconfig/network-scripts/ifcfg-eth1

7. Reboot so the changes take effect.

# reboot

The RDO Install process
8.     Bring the Centos OS up to date

# yum update -y

9. Relax SELinux a little; this is a lab environment, so we can loosen the security:

# vi /etc/selinux/config
Change SELINUX=enforcing to SELINUX=permissive.
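The same change can be scripted rather than made in vi; this is a sketch assuming the stock CentOS 7 layout of /etc/selinux/config:

```shell
# Make SELinux permissive on future boots...
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
# ...and relax it for the current session without a reboot.
setenforce 0
```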

10.  Install the epel repository

# yum install epel-release -y

11. Modify the EPEL repo file and make sure the main, debuginfo and source sections are enabled.

# vi /etc/yum.repos.d/epel.repo
[epel]
enabled=1

[epel-debuginfo]
enabled=1

[epel-source]
enabled=1
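If you would rather script the repo edit, here is a sed sketch; it assumes the stock epel.repo section names and enabled=0 defaults:

```shell
# Flip enabled=0 to enabled=1 inside the debuginfo and source sections only.
sed -i '/^\[epel-debuginfo\]$/,/^\[/ s/^enabled=0/enabled=1/' /etc/yum.repos.d/epel.repo
sed -i '/^\[epel-source\]$/,/^\[/ s/^enabled=0/enabled=1/' /etc/yum.repos.d/epel.repo
```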

12.   Install net tools

# yum install net-tools -y

13.   Install the RDO release

# yum install -y http://rdo.fedorapeople.org/rdo-release.rpm

14.   Install openstack packstack

# yum install -y openstack-packstack

15.   Install openvswitch

# yum install openvswitch -y

16.   Final update

# yum update -y

Cinder volume preparation
17.   Install lvm2

# yum install lvm2 -y

18. Build out using packstack puppet process

# packstack --allinone --provision-all-in-one-ovs-bridge=n

19. Remove the 20 GB loopback-file volume group that the Packstack install created, and recreate cinder-volumes on the 100 GB virtual disk. The whole disk is handed to LVM, so there is no need to partition it with fdisk first:

# vgremove cinder-volumes
# pvcreate /dev/sdb
# vgcreate cinder-volumes /dev/sdb

UPDATE
I have found a simpler method than reworking the eth1 and br-ex config files: simply add eth1 as the NIC attached to the OVS switch. Just remember, if the server is rebooted, to check that eth1 is still connected as a port on br-ex.
20. Add eth1 to the openvswitch br-ex ports

# ovs-vsctl add-port br-ex eth1
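Since the port can drop off after a reboot, one option is a small idempotent guard, run for example from /etc/rc.local; this is my own sketch, not something Packstack sets up for you:

```shell
# Re-attach eth1 to br-ex only if it is not already listed as a port on the bridge.
if ! ovs-vsctl list-ports br-ex | grep -qx eth1; then
    ovs-vsctl add-port br-ex eth1
fi
```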

Change the network configuration in /etc/sysconfig/network-scripts/ifcfg-br-ex and /etc/sysconfig/network-scripts/ifcfg-eth1:

# vi /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.1.150
NETMASK=255.255.255.0
GATEWAY=192.168.1.254
DNS1=192.168.1.1
DNS2=192.168.1.254
ONBOOT=yes
# vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
HWADDR=52:54:00:92:05:AE # replace with your own MAC address
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes

21. Additional network configuration for the bridge. These two lines are settings rather than shell commands; on an RDO install they typically live in the Neutron OVS plugin configuration (/etc/neutron/plugin.ini), under its [ovs] section:

network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-ex

22.   Restart the network services so that the config takes effect

# service network restart

Configure new network and router to connect onto external network
23.  Remove old network configuration settings

# . keystonerc_admin
# neutron router-gateway-clear router1
# neutron subnet-delete public_subnet
# neutron subnet-delete private_subnet
# neutron net-delete private
# neutron net-delete public

24.  Open ports for icmp pings and connection via ssh

# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 
# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

25.  Create new private network on 10.0.0.0/24 subnet

# neutron net-create private 
# neutron subnet-create private 10.0.0.0/24 --name private --dns-nameserver 8.8.8.8

26.  Create new public network on 192.168.1.0/24 subnet

# neutron net-create homelan --router:external=True 
# neutron subnet-create homelan 192.168.1.0/24 --name homelan --enable_dhcp False --allocation_pool start=192.168.1.201,end=192.168.1.220 --gateway 192.168.1.254

27.  Create new virtual router to connect private and public networks

# HOMELAN_NETWORK_ID=`neutron net-list | grep homelan | awk '{ print $2 }'` 
# PRIVATE_SUBNET_ID=`neutron subnet-list | grep private | awk '{ print $2}'` 
# ADMIN_TENANT_ID=`keystone tenant-list | grep admin | awk '{ print $2}'` 
# neutron router-create --tenant-id $ADMIN_TENANT_ID router
# neutron router-gateway-set router $HOMELAN_NETWORK_ID
# neutron router-interface-add router $PRIVATE_SUBNET_ID
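To sanity-check the result, the router's details should now show a gateway on the homelan network (again assuming keystonerc_admin is sourced):

```shell
# external_gateway_info should reference the homelan network ID after router-gateway-set.
neutron router-show router | grep external_gateway_info
```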

That’s the install and configuration process complete. I will continue this series of blogs with deployment of instances and floating IP allocation.
Hope this has helped you deploy Openstack. Feel free to leave me a comment.


Goodbye vSphere AppHA, you were just not up to the job, enter Symantec ApplicationHA to the rescue

Well, I thought this day would come eventually, but I am surprised to see it so soon. It's official, folks: vSphere AppHA is no more as of vSphere 6.0; the official announcement is here. With the effort required to provide continual support for old and new applications, and for their updates, it looks like this was not something VMware wanted to focus on. Don't think you're covered with backups, replication, vSphere HA or vSphere FT, though; none of those will get your application back up and running automatically should it fail.
Don't worry though,
Symantec ApplicationHA comes to the rescue…..

As one of the first third-party vendors providing support for application availability within virtual machines, Symantec has always been at the forefront of providing resilience for applications running within VMware vSphere. ApplicationHA is one solution that has been doing this, and for the past four years it has been going from strength to strength, adding functionality, automation and, importantly, resilience for mission-critical applications that lets our customers sleep at night. If you're unfamiliar with Symantec ApplicationHA, take a look at this comparison I made a while back; it's very detailed but will give you an insight into ApplicationHA's true potential. It's inexpensive and doesn't need vSphere Enterprise Plus to work. It's stable, mature technology built on Veritas Cluster Server heritage. The development effort required to keep on top of platform and application updates is a challenge, but it's worth it; after all, it's the applications that drive your business, and providing resilience for them should be top of mind.
More info on Symantec ApplicationHA can be found here, and there's also a free trial that you can test drive for 60 days if you like.


vCenter Server 5 Update 1a released – plus patches for ESX that finally fix an autostart bug !!

One of the major pains of running my home lab is the autostart bug in ESX 5.0 Update 1. I power down my lab when not in use and power it up when I need to run demos or try out new software builds, so only certain scenarios are affected; that said, it is a pain to have to log in to the environment and power up my core infrastructure VMs manually.
To download the patch and read more information for fixes with this issue see
http://blogs.vmware.com/vsphere/2012/07/vsphere-hypervisor-auto-start-bug-fixed.html
vCenter Server 5 Update 1a Released
VMware has released a patch for vCenter Server 5.0 Update 1 which includes fixes for an HA bug and a memory hog issue with vCenter Server Web Services, along with some additional functionality as listed below.
It's also good to see an update for the vCenter Server Appliance (vCSA), which hasn't been updated since 5.0; I can now test and check that ApplicationHA works with this vCSA as it did with vCSA v5.0.
For more information and download visit http://bit.ly/P7UbvD
Release Notes :- http://bit.ly/OC440M
What's new
vCenter Server Appliance 5.0 Update 1a is the first major update since vCenter Server Appliance 5.0 was released
VMware vCenter Server 5.0 Update 1a is a patch release and offers the following improvements:

  • vCenter Server 5.0 Update 1a introduces support for the following vCenter Databases
    • Oracle 11g Enterprise Edition, Standard Edition, Standard ONE Edition Release 2 [11.2.0.3] – 64 bit
    • Oracle 11g Enterprise Edition, Standard Edition, Standard ONE Edition  Release 2 [11.2.0.3] – 32 bit
  • vCenter Server Appliance Database Support: The DB2 express embedded database provided with the vCenter Server Appliance has been replaced with VMware vPostgres database. This decreases the appliance footprint and reduces the time to deploy vCenter Server further.
  • Resolved Issues: In addition, this release delivers a number of bug fixes that have been documented in the Resolved Issues section in the release notes.

How to Configure Hyper-V v3 & Windows Server 2012 "8" 8250 nested inside ESXi 5.0

Here is an update to the previous blog (http://goo.gl/II3gf) on nested VMs inside ESX 5.0. I wanted to show how to install the Windows Server 2012 "8" 8250 beta build and, more importantly, how to enable the Hyper-V role inside a nested VM.
For the majority of the installation of this build the steps remain the same as with Windows 2008 R2 but with a couple of additions.
First make sure you are either running ESX 5.0 Update 1 or at least have patch ESXi500-201112001 (http://goo.gl/oWZXV) installed against ESX 5.0.
1. You need to enable hardware virtualization by modifying the /etc/vmware/config file. Enable SSH via Tech Support Mode and use PuTTY to connect to the ESXi 5 server.
2. Once connected with PuTTY:
# echo 'vhv.allow = "TRUE"' >> /etc/vmware/config
3. Next, create your virtual machine hardware; I used hardware version 8 to make configuration easier.
4. Before booting the VM and installing Hyper-V, you need to add three lines to the virtual machine's .vmx config file.
You can add these via the vSphere Client (settings of the virtual machine > Configuration Parameters), or from the command line. For the latter, go back to your SSH session and change into the directory where the Hyper-V VM is stored. In my example the VM's config file is called Hyper-V.vmx. Type the following commands:
# echo 'monitor.virtual_exec = "hardware"' >> Hyper-V.vmx
# echo 'hypervisor.cpuid.v0 = "FALSE"' >> Hyper-V.vmx
# echo 'mce.enable = "FALSE"' >> Hyper-V.vmx
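A quick sanity check that all three overrides made it into the file (using the example's Hyper-V.vmx name):

```shell
# Expect three lines back, one per setting just appended to the .vmx file.
grep -E '^(monitor\.virtual_exec|hypervisor\.cpuid\.v0|mce\.enable)' Hyper-V.vmx
```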
5. Next there are a couple of changes to make to the CPU configuration.
In the VM settings > Options > CPU/MMU Virtualization, make sure you select the option to pass through the Intel EPT feature.

6. Next move to the Options area > CPUID Mask click on Advanced

Add the following CPU mask Level ECX: —- —- —- —- —- —- –H- —-

7. Finally, you are now ready to install the Windows Server 2012 "8" beta and enable the Hyper-V role.
Additional notes: watch out for blank screens once VMware Tools are installed; if this happens, enable 3D support for the video card in the VM settings. See VMware KB http://kb.vmware.com/kb/2006859
Also, when configuring your VM, use the E1000 network adapter type and not VMXNET3, as that driver does not work.
Once the Windows server is installed, just enable the Hyper-V role and you're all set to start exploring the world of Hyper-V v3.


Are you running vCenter Server in a virtual machine, what are your high availability options?

Whilst presenting our virtualization solutions to various customers and at many conferences, one question that often gets raised is the availability of a virtualized vCenter Server, especially when I talk about the Symantec ApplicationHA solution.
There are various options available if your vCenter Server sits on physical hardware, such as VMware's vCenter Server Heartbeat or another vendor's solution; Symantec too has a dedicated vCenter agent which runs on Veritas Cluster Server and can protect the various components of vCenter Server, as well as the Oracle or SQL databases which may be part of the configuration. But when the vCenter Server is virtualized, these solutions may be overkill, may impose limitations on vMotion due to shared storage requirements, or may be a little expensive for those on a tight budget who still need an availability solution.
If you think about it, those solutions provide hardware and software protection, while a virtual machine is probably one of the more stable platforms one could run an application on: all virtual machines with VMware Tools deployed typically run on the same small selection of network, disk and display drivers, due to the nature of virtual machine hardware and portability requirements. Think about the last time a Windows virtual machine blue-screened, or a Linux virtual machine panicked or core-dumped due to a driver issue. More often, downtime is caused by application faults, which according to Gartner can be a leading cause of up to 40% of unplanned downtime.
Wouldn't it be easier to have a solution that monitors all the key components of a virtualized vCenter Server, along with the SQL or Oracle backend configuration databases, and resolves any issues that occur within them? Symantec ApplicationHA has this ability, and it is not just tailored for vCenter Server: ApplicationHA can also control a vast array of enterprise applications, such as SQL Server, Oracle, Exchange and SAP to name but a few, all of which can be viewed from the Symantec ApplicationHA dashboard directly within the VMware vSphere Client.

Fig 1. Dashboard Management view of applications configured with Symantec ApplicationHA
To control vCenter Server, Symantec ApplicationHA primarily monitors the services that are installed with VMware vCenter Server. However, if configured on the same machine as vCenter Server, ApplicationHA also monitors the SQL Server or Oracle database. ApplicationHA automatically discovers and monitors these resources.
During the vCenter Server installation, you can choose to install the embedded version of SQL Server (SQL Server Express) to host vCenter Server’s information.
If you install it, then ApplicationHA monitors it. You can configure application monitoring for vCenter Server on a virtual machine using the Symantec ApplicationHA Configuration Wizard; protection of the vCenter environment can be set up in a matter of minutes, and management is integrated directly into the vSphere Client, or the web interface if needed.

Fig 2. Symantec ApplicationHA Configuration Wizard for vCenter Server.

Fig 3. vSphere Client view of the configured vCenter Server application
So if protection of your vCenter Server is important to you and budget is limited, take a look at Symantec ApplicationHA: test drive a 60-day evaluation and see for yourself just how flexible this solution is for the availability of applications running in virtual machines, not only vCenter Server but other applications too.
Get more info and download a trial version at http://www.symantec.com/application-ha
