OpenStack – Configuring for LVM Storage Backend

The volume service is able to make use of a volume group attached directly to the server on which the service runs. This volume group must be created exclusively for use by the block storage service and the configuration updated to point to the name of the volume group.
The following steps must be performed while logged into the system hosting the volume service as the root user:
  1. Use the pvcreate command to create a physical volume.
    # pvcreate DEVICE
      Physical volume "DEVICE" successfully created
    Replace DEVICE with the path to a valid, unused device. For example:

    # pvcreate /dev/sdX
  2. Use the vgcreate command to create a volume group.
    # vgcreate cinder-volumes DEVICE
      Volume group "cinder-volumes" successfully created
    Replace DEVICE with the path to the device used when creating the physical volume. Optionally replace cinder-volumes with an alternative name for the new volume group.
  3. Set the volume_group configuration key to the name of the newly created volume group.
    # openstack-config --set /etc/cinder/cinder.conf \
    DEFAULT volume_group cinder-volumes
    The name provided must match the name of the volume group created in the previous step.
  4. Ensure that the correct volume driver for accessing LVM storage is in use by setting the volume_driver configuration key to cinder.volume.drivers.lvm.LVMISCSIDriver.
    # openstack-config --set /etc/cinder/cinder.conf \
    DEFAULT volume_driver cinder.volume.drivers.lvm.LVMISCSIDriver
The volume service has been configured to use LVM storage.
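To confirm the configuration took effect, restart the volume service and check the volume group. A quick sanity check (assuming the openstack-cinder-volume service name used on RHEL-based installs):

```shell
# Restart the volume service so it picks up the new settings
service openstack-cinder-volume restart

# Confirm the volume group exists and has free space available to Cinder
vgs cinder-volumes
```

If vgs reports the group with non-zero VFree, Cinder can start carving LVM volumes out of it.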

Using multiple external networks in OpenStack Neutron

I haven’t found a lot of documentation about it, but basically, here’s how to do it. Let’s assume the following:

  • you start from a single external network, which is connected to ‘br-ex’
  • you want to attach the new external network to ‘eth1’.

On the network node (where neutron-l3-agent, neutron-dhcp-agent, etc. run):

  • Create a second OVS bridge, which will provide connectivity to the new external network:
ovs-vsctl add-br br-eth1
ovs-vsctl add-port br-eth1 eth1
ip link set eth1 up
  • (Optionally) If you want to plug a virtual interface into this bridge and add a local IP on the node to this network for testing:
ovs-vsctl add-port br-eth1 vi1 -- set Interface vi1 type=internal
ip addr add 192.168.1.253/24 dev vi1 # you may adjust your network CIDR, or set your system configuration to setup this at boot.
  • Edit your /etc/neutron/l3_agent.ini, and set/change:
gateway_external_network_id =
external_network_bridge =

This change tells the l3 agent that it must rely on the physnet<->bridge mappings in /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini; it will automatically patch those bridges and router interfaces around. For example, in tunneling mode, it will patch br-int to the external bridges and set the external ‘q’ router interfaces on br-int.

  • Edit your /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini to map ‘logical physical nets’ to ‘external bridges’
bridge_mappings = physnet1:br-ex,physnet2:br-eth1
  • Restart your neutron-l3-agent and your neutron-openvswitch-agent
service neutron-l3-agent restart
service neutron-openvswitch-agent restart
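After restarting, it is worth checking that both agents came back up and that the new bridge is wired. A quick check (requires admin credentials sourced):

```shell
# List Neutron agents; the L3 and Open vSwitch agents should report ":-)"
neutron agent-list

# Confirm the new bridge exists and eth1 is plugged into it
ovs-vsctl show | grep -A 2 br-eth1
```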

At this point, you can create two external networks. (Please note: if you don’t make the l3_agent.ini changes, the l3 agent will start complaining and will refuse to work.)

neutron net-create ext_net --provider:network_type flat --provider:physical_network physnet1 --router:external=True
neutron net-create ext_net2 --provider:network_type flat --provider:physical_network physnet2 --router:external=True

And for example create a couple of internal subnets and routers:

# for the first external net
neutron subnet-create ext_net --gateway 172.16.0.1 172.16.0.0/24 -- --enable_dhcp=False # no explicit allocation pool, so all the free IPs in the subnet are available
neutron router-create router1
neutron router-gateway-set router1 ext_net
neutron net-create privnet
neutron subnet-create privnet --gateway 192.168.123.1 192.168.123.0/24 --name privnet_subnet
neutron router-interface-add router1 privnet_subnet
# for the second external net
neutron subnet-create ext_net2 --allocation-pool start=192.168.1.200,end=192.168.1.222 --gateway=192.168.1.1 --enable_dhcp=False 192.168.1.0/24
neutron router-create router2
neutron router-gateway-set router2 ext_net2
neutron net-create privnet2
neutron subnet-create privnet2 --gateway 192.168.125.1 192.168.125.0/24 --name privnet2_subnet
neutron router-interface-add router2 privnet2_subnet
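With both routers in place, instances on each private network can be reached through floating IPs drawn from their respective external networks. A sketch (the FLOATINGIP_ID and PORT_ID placeholders are hypothetical; get the real values from the command output and from neutron port-list):

```shell
# Allocate a floating IP from each external network
neutron floatingip-create ext_net
neutron floatingip-create ext_net2

# Associate a floating IP with an instance's port
neutron floatingip-associate FLOATINGIP_ID PORT_ID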

Setting Up a Flat Network with OpenStack Neutron

This setup will allow the VMs to use an existing network. In this example, eth2 is connected to this pre-existing network (192.168.1.0/24) that we want to use for the OpenStack VMs.
All the configuration is done on the network node.
1. Set up the Open vSwitch bridge:

# ovs-vsctl add-br br-eth2
# ovs-vsctl add-port br-eth2 eth2

2. Set up /etc/network/interfaces (node’s IP is 192.168.1.7):

auto eth2
iface eth2 inet manual
up ifconfig $IFACE 0.0.0.0 up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down
auto br-eth2
iface br-eth2 inet static
address 192.168.1.7
netmask 255.255.255.0

3. Tell Open vSwitch to use the bridge connected to eth2 (br-eth2) and map physnet1 to it /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:

[ovs]
network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-eth2

4. Tell Nova not to use the Neutron metadata proxy in /etc/nova/nova.conf (otherwise you get an HTTP 400 when querying the metadata service):

service_neutron_metadata_proxy=false

Note that the metadata service is still provided by Neutron as usual, via the neutron-metadata-agent service.
5. Create the network telling to use the physnet1 mapped above to br-eth2:

# neutron net-create flat-provider-network --shared  --provider:network_type flat --provider:physical_network physnet1

6. Create the subnet:

# neutron subnet-create --name flat-provider-subnet --gateway 192.168.1.5 --dns-nameserver 192.168.1.254 --allocation-pool start=192.168.1.100,end=192.168.1.150 flat-provider-network 192.168.1.0/24

That’s it. Now VMs will get an IP from the specified range and will be directly connected to our network via Open vSwitch.
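To try it out, you can boot an instance directly on the flat network. A sketch (the flavor, image, and instance names are hypothetical; substitute ones that exist in your cloud):

```shell
# Look up the ID of the flat provider network
NET_ID=$(neutron net-list | awk '/flat-provider-network/ {print $2}')

# Boot a test instance attached to it
nova boot --flavor m1.small --image cirros --nic net-id=$NET_ID test-vm
```

The instance should receive an address from the allocation pool and be reachable from the pre-existing 192.168.1.0/24 network.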


Fixing OpenStack Kilo Horizon Re-login issue


There is an issue in OpenStack Kilo where re-login fails because of a bad session cookie. Here is how to fix it:

# vi /etc/openstack-dashboard/local_settings
AUTH_USER_MODEL = 'openstack_auth.User'
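For the change to take effect, restart the web server hosting Horizon. The service name depends on the distribution (httpd on RHEL/CentOS, apache2 on Debian/Ubuntu):

```shell
# RHEL/CentOS
service httpd restart

# On Debian/Ubuntu it would be:
# service apache2 restart
```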