OpenStack nova network multihost with external gateway. Solving a metadata issue in Kilo. The power of ebtables.

In a recent project we found that the metadata service running on the compute node was not reachable by the VMs running on it.

The project uses Ubuntu 14.04.3 and vanilla OpenStack (Kilo). The implementation uses nova network (with VLANs) in multihost mode with an external gateway for each VLAN. The latter can be achieved by:

  • configuring nova in /etc/nova/nova.conf with the option:

dnsmasq_config_file=/etc/dnsmasq-nova.conf

  • and by supplying a file /etc/dnsmasq-nova.conf that gives the external gateway configuration for the different networks (demo-net and demo2-net in the following example):

dhcp-option=tag:demo-net,option:router,10.44.1.1
dhcp-option=tag:demo2-net,option:router,10.44.2.1
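
As a quick sanity check (assuming the standard Ubuntu service name), you can verify that dnsmasq was actually relaunched with the custom configuration after restarting nova-network:

# Restart nova-network so it relaunches dnsmasq with the new options
service nova-network restart
# The dnsmasq processes should now reference /etc/dnsmasq-nova.conf
ps aux | grep dnsmasq | grep dnsmasq-nova.conf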

With this configuration the VMs were not able to reach the metadata service while being provisioned. In other words, 169.254.169.254:80 was not being DNATed to hypervisor_IP:8775 for the VM.

The underlying reason is that the VM's gateway (the external gateway) is not the hypervisor's VLAN IP, so the hypervisor only bridges these packets at L2 and never routes them at L3. As a consequence, the PREROUTING chain of the nat table in iptables is never traversed by these packets.

To solve this issue we force these packets into the IP routing path with an ebtables rule like:

ebtables -t nat -I PREROUTING -p ipv4 --ip-dst 169.254.169.254 --ip-protocol 6 --ip-dport 80 -j redirect --redirect-target ACCEPT
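
To check that the rule is in place and that the metadata traffic now reaches the iptables nat table, where nova-network DNATs 169.254.169.254:80 to the metadata service on port 8775, something along these lines should do:

# The redirect rule should show up in the ebtables nat PREROUTING chain
ebtables -t nat -L PREROUTING
# ...and the metadata DNAT rule installed by nova-network should now be hit
iptables -t nat -nL -v | grep 169.254.169.254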

Enjoy ebtables! You can find more examples and documentation here.

Exposing Infiniband (or any PCI device) to the VMs on an OpenStack Cloud

For a recent project we (Wuming and I) had to provide an OpenStack cloud in which the raw Infiniband protocol would be available to the VMs running on the cloud.

The installation was done on Ubuntu 14.04.3 with vanilla OpenStack. Exposing Infiniband requires quite a few steps and a fair amount of reading/googling, so I will document it here in case somebody needs to do the same.

To expose the native hardware interfaces to the VM:

  • The BIOS of the computer has to support it (you may need to activate Intel VT-d (or AMD I/O Virtualization Technology), as with virtualization extensions, it may be off by default). Explore the BIOS of your servers to activate it if necessary;
  • The Infiniband cards themselves have to support it. Look for SR-IOV in an “lspci -v” output as below:

$ sudo lspci -v |grep -A40 Mellanox
04:00.0 Network controller: Mellanox Technologies MT27520 Family [ConnectX-3 Pro]
Subsystem: Hewlett-Packard Company Device 22f5
Flags: bus master, fast devsel, latency 0, IRQ 16
Memory at 96000000 (64-bit, non-prefetchable) [size=1M]
Memory at 94000000 (64-bit, prefetchable) [size=32M]
Capabilities: [40] Power Management version 3
Capabilities: [48] Vital Product Data
Capabilities: [9c] MSI-X: Enable+ Count=128 Masked-
Capabilities: [60] Express Endpoint, MSI 00
Capabilities: [100] Alternative Routing-ID Interpretation (ARI)
Capabilities: [148] Device Serial Number 24-be-05-ff-ff-b6-e3-40
Capabilities: [108] Single Root I/O Virtualization (SR-IOV)
Capabilities: [154] Advanced Error Reporting
Capabilities: [18c] #19
Kernel driver in use: mlx4_core


  • The Linux kernel needs to be booted with the option intel_iommu=on. You can do this by editing the file /etc/default/grub so that it contains GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on" and then running update-grub;
  • The Infiniband cards need to be configured to expose the VFs (virtual functions). Edit the file /etc/modprobe.d/mlx4_core.conf so that it contains a line like options mlx4_core num_vfs=16 (or as high as your card supports; one VM takes one VF, so this can be the limiting factor for how many VMs can be deployed per compute node). You can find more documentation for the Mellanox cards in the Mellanox Linux User Manual Mellanox_OFED_Linux_User_Manual_v3.10 or here, since you may need to enable this option (in our case it was enabled already);
  • Nova has to be configured to allow PCI passthrough. Follow the documentation here on “How to prepare the environment”:
    • Configure nova (find the vendor_id and product_id of the virtualized interfaces for your case by running lspci -vn)
    • Create a flavor that automatically adds the interface to the VM (a minimal sketch of this configuration follows after this list)
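
As a reference, here is a minimal sketch of the nova side of the configuration. The vendor_id/product_id (15b3:1004, a ConnectX-3 VF) and the alias and flavor names are only illustrative; take the real IDs from your own lspci -vn output:

# /etc/nova/nova.conf on the compute nodes: which PCI devices may be passed through
pci_passthrough_whitelist = {"vendor_id": "15b3", "product_id": "1004"}

# /etc/nova/nova.conf (controller and computes): give those devices an alias,
# and make sure the scheduler enables the PciPassthroughFilter
pci_alias = {"vendor_id": "15b3", "product_id": "1004", "name": "ib"}

# Flavor that requests one VF per VM through the alias
nova flavor-create m1.ib auto 4096 40 2
nova flavor-key m1.ib set "pci_passthrough:alias"="ib:1"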

And now… Just launch a VM and log in! You should see the Infiniband card!
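
Assuming a guest image that ships the usual PCI utilities, the quickest check from inside the VM is to look for the passed-through VF on its PCI bus:

# Inside the VM: the Mellanox virtual function should be visible
lspci | grep -i mellanox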

A very nice overview talk on OpenStack HA

This talk, “Deep Dive Into a Highly Available OpenStack Architecture“, given at the recent OpenStack Summit in Vancouver, provides an unusually nice balance between overview and details!

It will give you a sound understanding of the architecture and services of OpenStack, especially the latter, and of how to make the OpenStack services highly available.

It does not cover Heat or Ceilometer, but once you understand the other services it is not complicated to apply the same architectural concepts to them.

I highly recommend watching it!

How to fix an installation issue with the OpenStack command line tools in OS X 10.9.5 and 10.9.4

If you follow the OpenStack command line client tools installation guide you will run into an issue. For example, to install the keystone tools (mind root access; add sudo when needed):

# easy_install pip
# pip install python-keystoneclient

This will result in the following error when trying to execute the tools (with nova you would get the same error!):

$ keystone --list
Traceback (most recent call last):
  File "/usr/local/bin/keystone", line 7, in <module>
    from keystoneclient.shell import main
  File "/Library/Python/2.7/site-packages/keystoneclient/__init__.py", line 37, in <module>
    __version__ = pbr.version.VersionInfo('python-keystoneclient').version_string()
  File "/Library/Python/2.7/site-packages/pbr/version.py", line 78, in version_string
    for part in self.release_string().split('.'):
  File "/Library/Python/2.7/site-packages/pbr/version.py", line 70, in release_string
    self.release = self._get_version_from_pkg_resources()
  File "/Library/Python/2.7/site-packages/pbr/version.py", line 62, in _get_version_from_pkg_resources
    return packaging.get_version(self.package)
  File "/Library/Python/2.7/site-packages/pbr/packaging.py", line 870, in get_version
    raise Exception("Versioning for this project requires either an sdist"
Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. Are you sure that git is installed?

The solution: also install the distribute package:

pip install --upgrade distribute
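
After that, invoking the client again should no longer hit the pbr versioning exception; a quick sanity check:

# Should now print the usual help text instead of the traceback
keystone --help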


Cloudlynx. A Highly Compliant and Secure Swiss Cloud

As the website of this new company, Cloudlynx, states, they are a highly compliant Swiss cloud, and they are going for a fully encrypted cloud experience.

They will not have the problem of data being hosted in the US and thus being subject to the Patriot Act. For many Europeans this should be attractive.

The privacy rules that characterise Switzerland should also be very appealing.

The CMS online Cloud (redundant and grizzly reborn) in CHEP2013

I presented the details of our new cloud, “The CMS openstack, opportunistic, overlay, online-cluster Cloud (CMSooooCloud)”, at the 20th International Conference on Computing in High Energy and Nuclear Physics (CHEP2013). There you can find details on our easily scalable and redundant infrastructure implementation.

The CMS online Cloud (redundant and grizzly reborn)

In August we put into production our new implementation of the CMS online Cloud. I will present the results of the efforts by Olivier Chaze, Wojciech and myself at CHEP2013.

The new CMS online Cloud now features a complete, easily scalable and redundant infrastructure. We implemented:

  • RabbitMQ in a cluster with replicated (mirrored) queues, as described by the RabbitMQ people here;
  • MySQL Cluster, after minimally modifying the definition of the tables needed by the different OpenStack services;
  • Grizzly OpenStack services in active/active mode, using the corosync/pacemaker cluster stack and round-robin DNS aliases as an easy load balancer where needed (keystone, dashboard, APIs…). I want to thank Florian Haas, who gave me a guideline for developing the cluster agent scripts for the different services. A minimal sketch of this setup follows below.
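
As a minimal sketch of the first and third points (the policy name, queue pattern, resource names and monitor interval are only illustrative; the real setup uses dedicated agent scripts and constraints):

# Mirror all queues across the RabbitMQ cluster nodes
rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'

# Run an OpenStack service (keystone here) active/active as a pacemaker clone,
# using the distribution init script (crm shell syntax)
crm configure primitive p_keystone lsb:keystone op monitor interval=30s
crm configure clone cl_keystone p_keystone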

This made our 1300-hypervisor (~13000 cores) cloud resilient to failures! The controlling infrastructure briefly described above scales nicely. I'll post the link to the presentation after it is given.

The CMS online Cloud

Another milestone has been achieved. We were able to set up the two bonded 10Gbit links through the “data networks” to serve the cloud with minimal changes to the existing infrastructure.

We configured the networks at the level of the routers/switches/hypervisors, and we used Open vSwitch to redirect the flows towards specific computers (rewriting the MAC address of the gateway for those flows) so that this traffic goes through our high-bandwidth network. This means that, without any modification on the side of the VMs, they use our high-bandwidth network to talk to the “data providing” computers.
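
Conceptually the redirection boils down to an OpenFlow rule of this kind (bridge name, subnet, MAC address and output port are placeholders; the real rules also handle the return traffic):

# Traffic from the VMs towards the data-providing subnet: rewrite the gateway
# MAC to the one reachable over the 10Gbit bond and send it out of that port
ovs-ofctl add-flow br-data "priority=100,ip,nw_dst=10.176.0.0/16,actions=mod_dl_dst:aa:bb:cc:dd:ee:ff,output:2"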

You can find some more information in the talk given by Wojciech: “CMS Experience and plans“.