<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>J.A. Coarasa&#039;s Digital Home &#187; Technology</title>
	<atom:link href="http://coarasa.ddnsfree.com/?cat=2&#038;feed=rss2" rel="self" type="application/rss+xml" />
	<link>http://coarasa.ddnsfree.com</link>
	<description>Thinkplify, mentor, enable...</description>
	<lastBuildDate>Thu, 10 Nov 2016 11:30:57 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.8.1</generator>
	<item>
		<title>How to Activate nested virtualization on a VM in ESXi</title>
		<link>http://coarasa.ddnsfree.com/?p=229&#038;utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=how-to-activate-nested-virtualization-on-a-vm-in-esxi</link>
		<comments>http://coarasa.ddnsfree.com/?p=229#comments</comments>
		<pubDate>Thu, 10 Nov 2016 11:14:52 +0000</pubDate>
		<dc:creator><![CDATA[toni]]></dc:creator>
				<category><![CDATA[Computing]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Virtualization]]></category>

		<guid isPermaLink="false">http://coarasa.ddnsfree.com/?p=229</guid>
		<description><![CDATA[I am consolidating here a couple of instructions (here and here, and here for the ESXi shell activation part) I found to activate nested virtualization on a VM in an ESXi server without access to the server vmware part. You need to: &#8230; <a href="http://coarasa.ddnsfree.com/?p=229">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>I am consolidating here a couple of sets of instructions (<a title="How to Enable Nested ESXi &amp; Other Hypervisors in vSphere 5.1 | virtuallyGhetto" href="http://www.virtuallyghetto.com/2012/08/how-to-enable-nested-esxi-other.html" target="_blank">here</a> and <a title="KVM Nested in Vmware ESXi 5.5 - enable guest hypervisor vmx/svm flags without vsphere web client" href="https://forum.ivorde.com/kvm-nested-in-vmware-esxi-5-5-enable-guest-hypervisor-vmx-svm-flags-without-vsphere-web-client-t19773.html" target="_blank">here</a>, and <a title="Using ESXi Shell in ESXi 5.x and 6.0 (2004746) | VMware KB" href="https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;cmd=displayKC&amp;externalId=2004746" target="_blank">here</a> for the ESXi shell activation part) that I found for activating nested virtualization on a VM in an ESXi server without access to the server&#8217;s VMware management side. You need to:</p>
<ol>
<li>Check that the ESXi host supports it (BIOS, hypervisor&#8230;). Look for nestedHVSupported in the output of https://[your-esxi-host-ip-address]/mob/?moid=ha-host&amp;doPath=capability after authenticating.</li>
<li>Enable the ESXi shell (you need this so that you can edit the VM definition later):
<ol>
<li>Use the vSphere Client to enable local and remote access to the ESXi Shell:</li>
<li>Log in to a vCenter Server system using the vSphere Client.</li>
<li>Select the host in the Inventory panel.</li>
<li>Click the Configuration tab and click Security Profile.</li>
<li>In the Services section, click Properties.</li>
<li>Select ESXi Shell from the list.</li>
<li>Click Options and select Start and stop manually. Note: When you select Start and stop manually, the service does not start when you reboot the host. If you want the service to start when you reboot the host, select Start and stop with host.</li>
<li>Click Start to enable the service.</li>
<li>Click OK.</li>
</ol>
</li>
<li>Log in to the shell (switch to the console with Alt-F1 and enter your credentials).</li>
<li>Add vhv.enable = &#8220;TRUE&#8221; at the end of the .vmx file corresponding to the VM on which you want to activate nested virtualization&#8230;</li>
<li>Identify the hypervisor VM ID and reload its configuration with the vim-cmd ESXi command:<br />
Code:</p>
<blockquote><p>~ # vim-cmd vmsvc/getallvms | grep -i ubun<br />
44   VM6-Ubuntu-KVM   [datastore1] VM6-Ubuntu-KVM/VM6-Ubuntu-KVM.vmx   ubuntu64Guest   vmx-08</p></blockquote>
<blockquote><p>~ # vim-cmd vmsvc/reload 44</p></blockquote>
</li>
<li>Check within the VM as root that it sees the &#8220;hardware virtualization&#8221; with:<br />
<blockquote><p>egrep -c &#8216;(vmx|svm)&#8217; /proc/cpuinfo</p></blockquote>
</li>
</ol>
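<p>The whole procedure condenses into a few lines in the ESXi shell. A sketch only: the .vmx path and the VM ID 44 come from the example output above, so substitute your own.</p>

```shell
# Run in the ESXi shell (Alt-F1) once it has been enabled via the vSphere Client.
# The .vmx path and the VM ID 44 come from the getallvms example above.

# Append the nested-virtualization flag to the VM's configuration:
echo 'vhv.enable = "TRUE"' >> /vmfs/volumes/datastore1/VM6-Ubuntu-KVM/VM6-Ubuntu-KVM.vmx

# Look up the VM ID and make ESXi re-read the edited .vmx:
vim-cmd vmsvc/getallvms | grep -i ubun   # first column is the VM ID
vim-cmd vmsvc/reload 44

# Finally, inside the VM (as root), a non-zero count confirms that the
# virtualization extensions are visible:
egrep -c '(vmx|svm)' /proc/cpuinfo
```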
]]></content:encoded>
			<wfw:commentRss>http://coarasa.ddnsfree.com/?feed=rss2&#038;p=229</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>OpenStack nova network multihost with external gateway. Solving a metadata issue in Kilo. The power of ebtables.</title>
		<link>http://coarasa.ddnsfree.com/?p=220&#038;utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=openstack-nova-network-multihost-with-external-gateway-solving-a-metadata-issue-in-kilo</link>
		<comments>http://coarasa.ddnsfree.com/?p=220#comments</comments>
		<pubDate>Thu, 15 Oct 2015 19:03:32 +0000</pubDate>
		<dc:creator><![CDATA[toni]]></dc:creator>
				<category><![CDATA[Cloud Computing]]></category>
		<category><![CDATA[Computing]]></category>
		<category><![CDATA[OpenStack]]></category>
		<category><![CDATA[Technology]]></category>

		<guid isPermaLink="false">http://coarasa.ddnsfree.com/?p=220</guid>
		<description><![CDATA[In a recent project we found the issue that the metadata service running on the compute node was not reachable by the VMs running on it. In the project we use Ubuntu 14.04.3 and vanilla OpenStack (kilo). The implementation uses nova &#8230; <a href="http://coarasa.ddnsfree.com/?p=220">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>In a recent project we ran into an issue where the metadata service running on the compute node was not reachable from the VMs running on it.</p>
<p>In the project we use <a title="Ubuntu 14.04.3 release notes" href="https://wiki.ubuntu.com/TrustyTahr/ReleaseNotes" target="_blank">Ubuntu 14.04.3</a> and vanilla <a title="OpenStack Installation Guide for Ubuntu 14.04" href="http://docs.openstack.org/kilo/install-guide/install/apt/content/index.html" target="_blank">OpenStack (kilo)</a>. The implementation uses nova network (with VLANs) in multihost mode with an external gateway for each VLAN. The latter can be achieved by:</p>
<ul>
<li>configuring nova in /etc/nova/nova.conf with the option:</li>
</ul>
<blockquote><p>dnsmasq_config_file=/etc/dnsmasq-nova.conf</p></blockquote>
<ul>
<li>and by supplying a file /etc/dnsmasq-nova.conf in which you give the external gateway configuration for the different networks (demo-net and demo2-net in the following example):</li>
</ul>
<blockquote><p>dhcp-option=tag:demo-net,option:router,10.44.1.1<br />
dhcp-option=tag:demo2-net,option:router,10.44.2.1</p></blockquote>
<p>In our configuration the VMs are not able to reach the metadata service when being provisioned. In other words, 169.254.169.254:80 is not properly DNATed to hypervisor_IP:8775 for the VM.</p>
<p>The underlying reason is that, since the VM has a different gateway (the external gateway) than the hypervisor&#8217;s VLAN IP, there is no IP routing involved (the path is purely L2, not L3), so the PREROUTING chain in the nat table is not traversed by these packets.</p>
<p>To solve this issue we force these packets to traverse the IP routing with an ebtables rule like:</p>
<blockquote><p>ebtables -t nat -I PREROUTING -p ipv4 --ip-dst 169.254.169.254 --ip-protocol 6 --ip-dport 80 -j redirect --redirect-target ACCEPT</p></blockquote>
<p>Enjoy ebtables! Find more examples and documentation <a title="ebtables" href="http://ebtables.netfilter.org" target="_blank">here</a>.</p>
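<p>On each compute node the fix and a quick verification look like this (a sketch; the counter listing is optional):</p>

```shell
# Punt metadata packets up into the IP stack so that nova-network's DNAT
# (169.254.169.254:80 -> hypervisor_IP:8775) can apply to them:
ebtables -t nat -I PREROUTING -p ipv4 --ip-dst 169.254.169.254 \
  --ip-protocol 6 --ip-dport 80 -j redirect --redirect-target ACCEPT

# List the chain with packet counters to confirm the rule is matching:
ebtables -t nat -L PREROUTING --Lc
```

<p>Note that ebtables rules do not persist across reboots, so the rule has to be reinstalled at boot, for example from an init script.</p>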
]]></content:encoded>
			<wfw:commentRss>http://coarasa.ddnsfree.com/?feed=rss2&#038;p=220</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Exposing Infiniband (or any PCI device) to the VMs on an OpenStack Cloud</title>
		<link>http://coarasa.ddnsfree.com/?p=210&#038;utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=exposing-infiniband-to-the-vms-on-an-openstack-cloud</link>
		<comments>http://coarasa.ddnsfree.com/?p=210#comments</comments>
		<pubDate>Fri, 09 Oct 2015 16:28:36 +0000</pubDate>
		<dc:creator><![CDATA[toni]]></dc:creator>
				<category><![CDATA[Cloud Computing]]></category>
		<category><![CDATA[Computing]]></category>
		<category><![CDATA[OpenStack]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://coarasa.ddnsfree.com/?p=210</guid>
		<description><![CDATA[For a recent project we (Wuming and I) had to provide an OpenStack cloud in which the raw infiniband protocol would be available to the VMs running on the cloud. The installation was done on Ubuntu 14.04.3 with vanilla OpenStack. &#8230; <a href="http://coarasa.ddnsfree.com/?p=210">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>For a recent project we (Wuming and I) had to provide an OpenStack cloud in which the raw infiniband protocol would be available to the VMs running on the cloud.</p>
<p>The installation was done on <a title="Ubuntu 14.04.3 release notes" href="https://wiki.ubuntu.com/TrustyTahr/ReleaseNotes" target="_blank">Ubuntu 14.04.3</a> with vanilla <a title="OpenStack Installation Guide for Ubuntu 14.04" href="http://docs.openstack.org/kilo/install-guide/install/apt/content/index.html" target="_blank">OpenStack</a>. Exposing infiniband requires quite a few steps and quite a bit of reading/googling, so I will document it here in case somebody needs to do the same.</p>
<p>To expose the native hardware interfaces to the VM:</p>
<ul>
<li>The BIOS of the computer has to support it (you may need to activate Intel VT-d, or AMD I/O Virtualization Technology; as with the virtualization extensions, it may be off by default). Explore the BIOS of your servers and activate it if necessary;</li>
<li>The Infiniband cards themselves have to support it. Look for SR-IOV in an &#8220;lspci -v&#8221; output as below:</li>
</ul>
<blockquote><p>$ sudo lspci -v |grep -A40 Mellanox<br />
04:00.0 Network controller: Mellanox Technologies MT27520 Family [ConnectX-3 Pro]<br />
Subsystem: Hewlett-Packard Company Device 22f5<br />
Flags: bus master, fast devsel, latency 0, IRQ 16<br />
Memory at 96000000 (64-bit, non-prefetchable) [size=1M]<br />
Memory at 94000000 (64-bit, prefetchable) [size=32M]<br />
Capabilities: [40] Power Management version 3<br />
Capabilities: [48] Vital Product Data<br />
Capabilities: [9c] MSI-X: Enable+ Count=128 Masked-<br />
Capabilities: [60] Express Endpoint, MSI 00<br />
Capabilities: [100] Alternative Routing-ID Interpretation (ARI)<br />
Capabilities: [148] Device Serial Number 24-be-05-ff-ff-b6-e3-40<br />
<strong>Capabilities: [108] Single Root I/O Virtualization (SR-IOV)</strong><br />
Capabilities: [154] Advanced Error Reporting<br />
Capabilities: [18c] #19<br />
Kernel driver in use: mlx4_core</p>
</blockquote>
<ul>
<li>The Linux kernel needs to be booted with the option intel_iommu=on. You can do this by editing the file /etc/default/grub so that it contains the option GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on" and then running update-grub;</li>
<li>The Infiniband cards need to be configured to expose the VFs. Edit the file /etc/modprobe.d/mlx4_core.conf to contain options like: options mlx4_core num_vfs=16 (or as high as your card supports; one VM takes one VF, so this can be the limiting factor for how many VMs can be deployed per compute node). You can find more documentation for the Mellanox cards in the Mellanox Linux User Manual <a title="Mellanox_OFED_Linux_User_Manual_v3.10" href="http://www.mellanox.com/related-docs/prod_software/Mellanox_OFED_Linux_User_Manual_v3.10.pdf" target="_blank">Mellanox_OFED_Linux_User_Manual_v3.10</a> or <a title="This post shows how to enable SR-IOV on Mellanox ConnectX-3 adapters." href="https://community.mellanox.com/docs/DOC-1317" target="_blank">here</a>, since you may need to enable this option (in our case it was enabled already);</li>
<li>Nova has to be configured to allow pci passthrough. Follow the documentation <a title="Pci passthrough" href="https://wiki.openstack.org/wiki/Pci_passthrough" target="_blank">here</a> on &#8220;How to prepare the environment&#8221;
<ul>
<li>Configure nova (find the virtual functions&#8217; vendor_id and product_id for your case by running lspci -vn)</li>
<li>Create a flavor that automatically adds the interface to the VM</li>
</ul>
</li>
</ul>
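<p>The host-side steps above condense into the following sketch. Assumptions from our setup: Intel VT-d, the mlx4 driver and Ubuntu&#8217;s grub layout; the nova side still has to be configured per the linked documentation.</p>

```shell
# Host preparation, condensed (run as root; reboot afterwards).

# 1. Boot the kernel with the IOMMU enabled:
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT=.*/GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"/' \
  /etc/default/grub
update-grub

# 2. Ask mlx4_core for 16 virtual functions:
echo 'options mlx4_core num_vfs=16' > /etc/modprobe.d/mlx4_core.conf

# 3. After rebooting, read off the numeric vendor/product IDs needed for
#    the nova pci passthrough configuration:
lspci -vn | grep -i 15b3   # 15b3 is the Mellanox PCI vendor ID
```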
<p>And now&#8230; Just launch a VM and log in! You should see the Infiniband card!</p>
]]></content:encoded>
			<wfw:commentRss>http://coarasa.ddnsfree.com/?feed=rss2&#038;p=210</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Get a deep overview on OpenStack</title>
		<link>http://coarasa.ddnsfree.com/?p=199&#038;utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=get-a-deep-overview-on-openstack</link>
		<comments>http://coarasa.ddnsfree.com/?p=199#comments</comments>
		<pubDate>Thu, 10 Sep 2015 07:27:56 +0000</pubDate>
		<dc:creator><![CDATA[toni]]></dc:creator>
				<category><![CDATA[Cloud Computing]]></category>
		<category><![CDATA[Computing]]></category>
		<category><![CDATA[OpenStack]]></category>
		<category><![CDATA[Technology]]></category>

		<guid isPermaLink="false">http://coarasa.ddnsfree.com/?p=199</guid>
		<description><![CDATA[In this presentation you find an overview of the functional architecture of OpenStack and a deep and detailed explanation on how OpenStack processes work. Enjoy!]]></description>
				<content:encoded><![CDATA[<p>In <a title="OpenStack Architecture" href="http://www.slideshare.net/mirantis/openstack-architecture-43160012?next_slideshow=1" target="_blank">this presentation</a> you will find an overview of the functional architecture of OpenStack and a deep, detailed explanation of how OpenStack processes work. Enjoy!</p>
]]></content:encoded>
			<wfw:commentRss>http://coarasa.ddnsfree.com/?feed=rss2&#038;p=199</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>If privacy is important for you, open a protonmail account</title>
		<link>http://coarasa.ddnsfree.com/?p=195&#038;utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=if-privacy-is-important-for-you-open-a-protonmail-account</link>
		<comments>http://coarasa.ddnsfree.com/?p=195#comments</comments>
		<pubDate>Tue, 16 Jun 2015 11:06:26 +0000</pubDate>
		<dc:creator><![CDATA[toni]]></dc:creator>
				<category><![CDATA[Computing]]></category>
		<category><![CDATA[Technology]]></category>

		<guid isPermaLink="false">http://coarasa.ddnsfree.com/?p=195</guid>
		<description><![CDATA[If privacy is important to you, you will want to have a protonmail account. They provide very good privacy in hosted email, if of course you believe their claims, which I have no reason not to. To celebrate the &#8230; <a href="http://coarasa.ddnsfree.com/?p=195">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>If privacy is important to you, you will want to have a <a title="Protonmail" href="https://protonmail.ch" target="_blank">protonmail</a> account. They provide very good privacy in hosted email, if of course you believe their claims, which I have no reason not to.</p>
<p>To celebrate its first anniversary, protonmail has created a special link that allows instant account creation:<br />
<a title="Protonmail instant account creation" href="https://protonmail.ch/privacyforall" target="_blank">https://protonmail.ch/privacyforall</a></p>
<p>This will only last for a limited amount of time.</p>
<p>Happy private e-mailing.</p>
]]></content:encoded>
			<wfw:commentRss>http://coarasa.ddnsfree.com/?feed=rss2&#038;p=195</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>A couple of other interesting talks from the OpenStack Summit</title>
		<link>http://coarasa.ddnsfree.com/?p=193&#038;utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=a-couple-of-other-interesting-talks-from-the-openstack-summit</link>
		<comments>http://coarasa.ddnsfree.com/?p=193#comments</comments>
		<pubDate>Fri, 05 Jun 2015 09:00:13 +0000</pubDate>
		<dc:creator><![CDATA[toni]]></dc:creator>
				<category><![CDATA[Cloud Computing]]></category>
		<category><![CDATA[Computing]]></category>
		<category><![CDATA[OpenStack]]></category>
		<category><![CDATA[Technology]]></category>

		<guid isPermaLink="false">http://coarasa.ddnsfree.com/?p=193</guid>
		<description><![CDATA[A very interesting talk on Ceph and its scalability issues, and another hands-on one about Docker containers.]]></description>
				<content:encoded><![CDATA[<p>A very interesting <a title="Ceph at CERN: A Year in the Life of a Petabyte-Scale Block Storage Service" href="https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/ceph-at-cern-a-year-in-the-life-of-a-petabyte-scale-block-storage-service" target="_blank">talk</a> on Ceph and its scalability issues, and another hands-on <a title="Using Docker with OpenStack - Hands On!" href="https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/using-docker-with-openstack-hands-on" target="_blank">one</a> about Docker containers.</p>
]]></content:encoded>
			<wfw:commentRss>http://coarasa.ddnsfree.com/?feed=rss2&#038;p=193</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>A very nice overview talk on OpenStack HA</title>
		<link>http://coarasa.ddnsfree.com/?p=191&#038;utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=a-very-nice-overview-talk-on-openstack-ha</link>
		<comments>http://coarasa.ddnsfree.com/?p=191#comments</comments>
		<pubDate>Fri, 29 May 2015 14:33:48 +0000</pubDate>
		<dc:creator><![CDATA[toni]]></dc:creator>
				<category><![CDATA[Cloud Computing]]></category>
		<category><![CDATA[Computing]]></category>
		<category><![CDATA[OpenStack]]></category>
		<category><![CDATA[Technology]]></category>

		<guid isPermaLink="false">http://coarasa.ddnsfree.com/?p=191</guid>
		<description><![CDATA[This talk: &#8220;Deep Dive Into a Highly Available OpenStack Architecture&#8221;, given at the recent OpenStack Summit in Vancouver, provides an unusually nice balance between overview and details! It will provide you with a sound understanding of the architecture and services &#8230; <a href="http://coarasa.ddnsfree.com/?p=191">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>This talk: &#8220;<a title="Deep Dive Into a Highly Available OpenStack Architecture" href="https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/deep-dive-into-a-highly-available-openstack-architecture" target="_blank">Deep Dive Into a Highly Available OpenStack Architecture</a>&#8221;, given at the recent <a title="OpenStack Summit in Vancouver" href="https://www.openstack.org/summit/vancouver-2015/" target="_blank">OpenStack Summit in Vancouver</a>, provides an unusually nice balance between overview and details!</p>
<p>It will provide you with a sound understanding of the architecture and services of OpenStack, especially the latter, and of how to make the OpenStack services highly available.</p>
<p>It does not cover Heat or Ceilometer, but once you understand the other services it is not complicated to apply the same architectural concepts to them.</p>
<p>I highly recommend watching it!</p>
]]></content:encoded>
			<wfw:commentRss>http://coarasa.ddnsfree.com/?feed=rss2&#038;p=191</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>How to fix an installation issue with the Openstack command line tools in OS X 10.9.5 and 10.9.4</title>
		<link>http://coarasa.ddnsfree.com/?p=183&#038;utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=how-to-fix-an-installation-issue-with-the-openstack-command-line-tools-in-os-x-10-9-5-and-10-9-4</link>
		<comments>http://coarasa.ddnsfree.com/?p=183#comments</comments>
		<pubDate>Fri, 10 Oct 2014 10:41:35 +0000</pubDate>
		<dc:creator><![CDATA[toni]]></dc:creator>
				<category><![CDATA[Cloud Computing]]></category>
		<category><![CDATA[Computing]]></category>
		<category><![CDATA[OpenStack]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://coarasa.ddnsfree.com/?p=183</guid>
		<description><![CDATA[If you follow the Openstack  command line client tools installation guide you will have an issue. For example to install the keystone tools (Mind root access, add sudo when needed): #easy_install pip #pip install python-keystoneclient This will result in the error &#8230; <a href="http://coarasa.ddnsfree.com/?p=183">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>If you follow the Openstack <a title="Install the OpenStack command-line clients" href="http://docs.openstack.org/user-guide/content/install_clients.html" target="_blank">command-line client tools installation guide</a> you will run into an issue. For example, to install the keystone tools (mind root access; add sudo when needed):</p>
<blockquote><p>#easy_install pip</p>
<p>#pip install python-keystoneclient</p></blockquote>
<p>This will result in the following error when trying to execute the tools (with nova you would get the same error!):</p>
<blockquote><p>$ keystone --list</p>
<p>Traceback (most recent call last):</p>
<p>File &#8220;/usr/local/bin/keystone&#8221;, line 7, in &lt;module&gt;</p>
<p>from keystoneclient.shell import main</p>
<p>File &#8220;/Library/Python/2.7/site-packages/keystoneclient/__init__.py&#8221;, line 37, in &lt;module&gt;</p>
<p>__version__ = pbr.version.VersionInfo(&#8216;python-keystoneclient&#8217;).version_string()</p>
<p>File &#8220;/Library/Python/2.7/site-packages/pbr/version.py&#8221;, line 78, in version_string</p>
<p>for part in self.release_string().split(&#8216;.&#8217;):</p>
<p>File &#8220;/Library/Python/2.7/site-packages/pbr/version.py&#8221;, line 70, in release_string</p>
<p>self.release = self._get_version_from_pkg_resources()</p>
<p>File &#8220;/Library/Python/2.7/site-packages/pbr/version.py&#8221;, line 62, in _get_version_from_pkg_resources</p>
<p>return packaging.get_version(self.package)</p>
<p>File &#8220;/Library/Python/2.7/site-packages/pbr/packaging.py&#8221;, line 870, in get_version</p>
<p>raise Exception(&#8220;Versioning for this project requires either an sdist&#8221;</p>
<p>Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. Are you sure that git is installed?</p></blockquote>
<p>The solution: also install/upgrade the distribute package:</p>
<blockquote><p>pip install --upgrade distribute</p></blockquote>
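<p>Put together, the guide&#8217;s commands plus the fix look like this (a sketch; prefix with sudo where needed, and the final keystone call is just a smoke test):</p>

```shell
# Commands from the installation guide, followed by the fix.
easy_install pip
pip install python-keystoneclient   # installs fine, but the CLI then dies in pbr
pip install --upgrade distribute    # the fix: refresh the distribute package
keystone --help                     # smoke test: should print usage, not a traceback
```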
]]></content:encoded>
			<wfw:commentRss>http://coarasa.ddnsfree.com/?feed=rss2&#038;p=183</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Cloudlynx. A Highly Compliant and Secure Swiss Cloud</title>
		<link>http://coarasa.ddnsfree.com/?p=180&#038;utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=cloudlynx-a-highly-compliant-and-secure-swiss-cloud</link>
		<comments>http://coarasa.ddnsfree.com/?p=180#comments</comments>
		<pubDate>Fri, 15 Aug 2014 17:10:44 +0000</pubDate>
		<dc:creator><![CDATA[toni]]></dc:creator>
				<category><![CDATA[Cloud Computing]]></category>
		<category><![CDATA[Computing]]></category>
		<category><![CDATA[OpenStack]]></category>
		<category><![CDATA[Technology]]></category>

		<guid isPermaLink="false">http://coarasa.ddnsfree.com/?p=180</guid>
		<description><![CDATA[As the website for this new company, Cloudlynx, states, they are a Highly Compliant Swiss Cloud aiming at a fully encrypted cloud experience. They will not have the problem of data being hosted in the US and thus &#8230; <a href="http://coarasa.ddnsfree.com/?p=180">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>As the website for this new company, <a title="Cloudlynx" href="http://www.cloudlynx.ch" target="_blank">Cloudlynx</a>, states, they are a Highly Compliant Swiss Cloud, aiming at a fully encrypted cloud experience.</p>
<p>They will not have the problem of data being hosted in the US and thus being subject to the Patriot Act. For many Europeans this should be attractive.</p>
<p>Also the privacy rules that characterise Switzerland should be very appealing.</p>
]]></content:encoded>
			<wfw:commentRss>http://coarasa.ddnsfree.com/?feed=rss2&#038;p=180</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Using Squid to pre-cache golden images in a mass scale</title>
		<link>http://coarasa.ddnsfree.com/?p=162&#038;utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=using-squid-to-pre-cache-golden-images-in-a-mass-scale-2</link>
		<comments>http://coarasa.ddnsfree.com/?p=162#comments</comments>
		<pubDate>Mon, 13 Jan 2014 20:22:09 +0000</pubDate>
		<dc:creator><![CDATA[Anastasios Andronidis]]></dc:creator>
				<category><![CDATA[OpenStack]]></category>

		<guid isPermaLink="false">http://coarasa.ddnsfree.com/?p=162</guid>
		<description><![CDATA[Abstract Pre-caching images on nova-computes is a tricky issue that can be a big headache for an Openstack administrator. Here at CERN we have put together a pre-caching system that has enabled us to send a 1.7 GB image to &#8230; <a href="http://coarasa.ddnsfree.com/?p=162">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p dir="ltr"><strong>Abstract</strong></p>
<p dir="ltr">Pre-caching images on nova-computes is a tricky issue that can be a big headache for an Openstack administrator. Here at CERN we have put together a pre-caching system that has enabled us to send a 1.7 GB image to 1300 servers in under 9 minutes. Keeping ease of maintenance and scalability at any size in mind, we wish to present this idea.</p>
<p><strong>Description</strong></p>
<p dir="ltr">Our major concern is the pre-caching of images on the nova-compute nodes. Glance is the central service that provides the infrastructure with VM images. One of the problems that Glance encounters is the scaling of image deployment in large infrastructures. This issue has been addressed by the community with the pre-caching middleware, which stores Glance images on the OpenStack API servers. Here we present an alternative way that is easy to maintain and deploy, is scalable and, most importantly, is based on existing and proven technologies. Before we present our idea, we would first like to give a brief description of our needs and use case.</p>
<p dir="ltr">During the Long Shutdown, the CMS experiment at CERN decided to &#8220;lend&#8221; its cluster to the GRID community for their experiments. The HLT (High Level Trigger) cluster that we want to provide to the GRID site is composed of 1300 hypervisors (with more to come in the future) with a 1 Gbps link each, 1 Glance server with a 1 Gbps link to the network, and 4 servers that host all openstack services in failover with a 2 Gbps link each.</p>
<p dir="ltr">We have some very specific needs, and thus our utilization of OpenStack may not be the standard case of a cloud provider or any other organization that uses cloud. For instance:</p>
<ol>
<li>
<p dir="ltr">each server should run a single virtual machine that consumes all hypervisor’s resources</p>
</li>
<li>
<p dir="ltr">we have a very small variety of images (1-3)</p>
</li>
<li>
<p dir="ltr">we need to start and deploy VMs extremely fast</p>
</li>
</ol>
<p dir="ltr">Despite these specifications, we believe that this presentation may help further develop the caching system in OpenStack.</p>
<p dir="ltr"><strong>Building blocks</strong></p>
<p dir="ltr">This pre-caching system is external to OpenStack. This means that zero configuration and patching of the source code is needed, which implies that it can be backported to any version of OpenStack. It consists of 2 python scripts, one installed on Glance&#8217;s side and the other on each nova-compute node, an Apache httpd server and multiple Squid servers deployed at key points of the infrastructure. In the following image you can see the tree topology of the pre-caching system that we use in our case. Any kind of topology that fits the needs of a certain infrastructure can be used.<img alt="" src="https://lh4.googleusercontent.com/n3KTvTE5kPkQ8vulxsP6Oj5LeJrpzzKp5B80NS52EzuV8HO-WoTZj2resJyzarhrpL24ZeznxcwhQ2ajns6-XctIzbULnk44Ux4-ds---7TpFiV-mheq2eadNw" width="624px;" height="468px;" /></p>
<p>&nbsp;</p>
<p dir="ltr">The Glance server hosts the httpd server where we place the image to be pre-cached. The first python script provides a CLI through which one can declare the image-id that they want to pre-cache. The script finds the image, compresses it with gzip and places it in the Apache httpd folder.</p>
<p dir="ltr">The second step is to ask the nova-computes to fetch the image. At this point one can use the second script (which runs on the nova-compute side), which also provides a CLI that takes the image-id to cache as input. The script connects to the httpd server through the Squid proxy, retrieves the image and unzips it into the local folders of nova-compute.</p>
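<p>The two scripts essentially automate the pipeline below (a simplified sketch, not the actual scripts; host names, ports and paths are illustrative placeholders):</p>

```shell
IMAGE_ID="$1"   # the glance image-id passed to both CLIs

# Glance side: compress the image and publish it in the httpd docroot.
gzip -6 -c "/var/lib/glance/images/${IMAGE_ID}" \
  > "/var/www/html/precache/${IMAGE_ID}.gz"

# nova-compute side: fetch through the local Squid tier and unpack the
# image into nova-compute's local image cache.
http_proxy="http://squid.example.org:3128" \
  wget -q -O - "http://glance.example.org/precache/${IMAGE_ID}.gz" \
  | gunzip > "/var/lib/nova/instances/_base/${IMAGE_ID}"
```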
<p dir="ltr"><strong>Benchmarks</strong></p>
<p dir="ltr">We put this idea to the test and the outcome was a great speedup, as expected. Below we present some numbers on image sizes, network traffic and time.</p>
<p dir="ltr"><span style="text-decoration: underline"><em>Image size:</em></span></p>
<div dir="ltr">
<table>
<col width="90" />
<col width="120" />
<col width="136" />
<col width="139" />
<col width="139" />
<tbody>
<tr>
<td></td>
<td>
<p dir="ltr">Uncompressed</p>
</td>
<td>
<p dir="ltr">gzip -6</p>
</td>
<td>
<p dir="ltr">gzip -9</p>
</td>
<td>
<p dir="ltr">bzip2</p>
</td>
</tr>
<tr>
<td>
<p dir="ltr">Size</p>
</td>
<td>
<p dir="ltr">1341784064B</p>
<p dir="ltr">1.3GB</p>
</td>
<td>
<p dir="ltr">411246527B</p>
<p dir="ltr">393M</p>
</td>
<td>
<p dir="ltr">408981472B</p>
<p dir="ltr">391M</p>
</td>
<td>
<p dir="ltr">357287908B</p>
<p dir="ltr">341M</p>
</td>
</tr>
<tr>
<td>
<p dir="ltr">Zip Time</p>
</td>
<td>
<p dir="ltr">&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;</p>
</td>
<td>
<p dir="ltr">1m37s</p>
</td>
<td>
<p dir="ltr">6m58s</p>
</td>
<td>
<p dir="ltr">3m57s</p>
</td>
</tr>
<tr>
<td>
<p dir="ltr">Unzip Time</p>
</td>
<td>
<p dir="ltr">&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;</p>
</td>
<td>
<p dir="ltr">0m18s</p>
</td>
<td>
<p dir="ltr">0m18s</p>
</td>
<td>
<p dir="ltr">1m5s</p>
</td>
</tr>
</tbody>
</table>
</div>
<p>&nbsp;</p>
<p dir="ltr"><em><span style="text-decoration: underline">Network traffic (used gzip -6):</span></em></p>
<div dir="ltr">
<table>
<col width="*" />
<col width="201" />
<col width="215" />
<tbody>
<tr>
<td></td>
<td>
<p dir="ltr">No Squid</p>
</td>
<td>
<p dir="ltr">With Squid</p>
</td>
</tr>
<tr>
<td>
<p dir="ltr">With compression</p>
</td>
<td>
<p dir="ltr">393*8/1024*1300/60</p>
<p dir="ltr">= 66.5 Minutes (1.1 Hours)</p>
</td>
<td>
<p dir="ltr">393*8/(1024*8)*1300/60</p>
<p dir="ltr">= 8.3 Minutes</p>
</td>
</tr>
<tr>
<td>
<p dir="ltr">Without compression</p>
</td>
<td>
<p dir="ltr">1300*8/1024*1300/60</p>
<p dir="ltr">= 220.05 Minutes(3.7 Hours)</p>
</td>
<td>
<p dir="ltr">1300*8/(1024*8)*1300/60</p>
<p dir="ltr">= 27.5 Minutes</p>
</td>
</tr>
</tbody>
</table>
</div>
<h6><em>image size in MB * 8 to convert to Mbits, / 1024 (Gigabit connection) to get the seconds to transfer one image, * # of nodes, / 60 to get the minutes (or / 3600 for the hours) to complete</em></h6>
<h6><em>Glance server: 1 Gbps / Squid server: 2 Gbps / Nova node: 1 Gbps / # Nodes: 1300</em></h6>
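<p dir="ltr">The arithmetic in the table can be checked with a few lines of code. The sketch below reproduces the table values; the node count and link speed come from the notes above, and the extra factor of 8 in the Squid column is taken directly from the table.</p>

```python
# Estimated time to push one image to every node, assuming transfers are
# serialized over the bottleneck link (figures from the tables above).
def transfer_minutes(image_mb, link_mbps, nodes=1300):
    seconds_per_image = image_mb * 8 / link_mbps  # MB -> Mbit, / Mbit/s
    return seconds_per_image * nodes / 60

# No Squid: a single 1 Gbps (1024 Mbit/s) Glance server
print(round(transfer_minutes(393, 1024), 2))       # compressed   -> 66.52
print(round(transfer_minutes(1300, 1024), 2))      # uncompressed -> 220.05

# With Squid: the table divides by a further factor of 8
print(round(transfer_minutes(393, 1024 * 8), 1))   # compressed   -> 8.3
print(round(transfer_minutes(1300, 1024 * 8), 1))  # uncompressed -> 27.5
```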
<p dir="ltr"><strong>Implementation details</strong></p>
<p dir="ltr">In this section we describe in more technical detail the main reasons that drove us to develop this system.</p>
<p dir="ltr"><em>1. Why not OpenStack Glance caching</em></p>
<p dir="ltr">OpenStack Glance caching does not fit our needs because we require the lowest possible start time for each VM. That is only achievable if the images are already cached on the nova-compute nodes, and Glance caching has no automatic way of pushing images to them.</p>
<p dir="ltr">Furthermore, the Glance pre-cache middleware does not support compression. In our case (see the size differences in the benchmark section above), compressing the images reduced the network traffic considerably.</p>
<p dir="ltr"><em>2. Why Squid?</em></p>
<p dir="ltr">The HLT cluster already uses Squid, so installing a set of additional services on top of the existing ones would waste resources and add administrative overhead.</p>
<p dir="ltr">A second argument is that Squid offers a rich feature set that gives administrators great flexibility over storage limits and policies, and over the network configuration and topology. Many of these features are not available through the Glance cache.</p>
<p dir="ltr">There is a small catch in using Squid as a medium for serving the images to the nova-compute nodes: when the images are prepared on the Glance side and copied into the httpd directories, there is no way for them to be pushed to the Squid servers. A workaround is to send a signal to a custom script on each Squid server asking it to fetch the images, but we felt this would complicate the pre-caching system considerably.</p>
<p dir="ltr">Instead, we decided to insert a random delay into the nova-compute requests, so that the first connection that arrives at the Squid server acts as a &#8220;warm-up&#8221; request for the Squid cache.</p>
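<p dir="ltr">A minimal sketch of that random-delay trick could look like the following; the proxy address and jitter window are illustrative, not the actual CERN configuration.</p>

```python
# Each nova-compute node sleeps a random amount before fetching, so the
# earliest request to reach the Squid server "warms up" its cache and the
# remaining requests are served from it.
import random
import time
import urllib.request

def warmup_delay(max_jitter_s=30.0):
    """Random per-node delay so the nodes do not all hit Squid at once."""
    return random.uniform(0.0, max_jitter_s)

def fetch_image_via_squid(image_url, proxy="http://squid.example.org:3128"):
    time.sleep(warmup_delay())  # the first arrival primes the Squid cache
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": proxy}))
    with opener.open(image_url) as resp:
        return resp.read()
```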
<p dir="ltr"><em>3. Why Gzip?</em></p>
<p dir="ltr">The major advantage of gzip is its fast decompression. Unlike bzip2, gzip decompresses very quickly, even though it compresses the same image about 8% less efficiently.</p>
<p dir="ltr">A second advantage of gzip is that it streams well (bzip2 can be streamed too, but as noted above it is slow). When a nova-compute node downloads the image through the Squid network, we decompress it at the same time to speed up the process.</p>
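<p dir="ltr">The streaming decompression can be sketched as follows; this is an illustration of the idea, not the code we run, and the helper name is ours.</p>

```python
# Inflate a gzipped image as it arrives, so no intermediate .gz file is
# ever written to disk.
import zlib

def decompress_stream(src, out, chunk_size=1 << 20):
    """Read gzip data from file-like `src`, write the raw bytes to `out`."""
    decomp = zlib.decompressobj(wbits=zlib.MAX_WBITS | 16)  # gzip framing
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        out.write(decomp.decompress(chunk))
    out.write(decomp.flush())
```

<p dir="ltr">In practice <code>src</code> would be the HTTP response object for the image request going through the Squid proxy, and <code>out</code> the destination image file.</p>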
<p dir="ltr"><em>4. Monitoring and cleaning?</em></p>
<p dir="ltr">We created a simple reporting system to help us track down which images are cached where.</p>
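<p dir="ltr">The reporting system itself is not described here, but the core of such a report is an inverted index from image to nodes; the toy sketch below illustrates the idea with hypothetical names.</p>

```python
# Build an image -> set-of-nodes report from per-node cache listings.
from collections import defaultdict

def build_cache_report(per_node_listings):
    """per_node_listings: {node_name: iterable of cached image ids}."""
    report = defaultdict(set)
    for node, images in per_node_listings.items():
        for image_id in images:
            report[image_id].add(node)
    return dict(report)

listings = {"compute-01": ["img-a", "img-b"], "compute-02": ["img-b"]}
print(sorted(build_cache_report(listings)["img-b"]))
# -> ['compute-01', 'compute-02']
```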
<p dir="ltr"><strong>Conclusion and future work</strong></p>
<p dir="ltr">To conclude this article, we sum up the differences between the current state of Glance caching and our system, and outline a plan for the future.</p>
<p dir="ltr">This is a small table comparing the two frameworks:</p>
<div dir="ltr">
<table>
<col width="195" />
<col width="224" />
<col width="205" />
<tbody>
<tr>
<td>
<p dir="ltr">Glance caching</p>
</td>
<td>
<p dir="ltr">CERN pre-caching System</p>
</td>
<td>
<p dir="ltr">Future Glance pre-caching</p>
</td>
</tr>
<tr>
<td>
<p dir="ltr">Images are distributed to API servers</p>
</td>
<td>
<p dir="ltr">Images are pushed to the nova-compute nodes</p>
</td>
<td>
<p dir="ltr">Images are pushed to the nova-compute nodes</p>
</td>
</tr>
<tr>
<td>
<p dir="ltr">No compression</p>
</td>
<td>
<p dir="ltr">Any kind of compression</p>
</td>
<td>
<p dir="ltr">Any kind of compression</p>
</td>
</tr>
<tr>
<td>
<p dir="ltr">Bound to a specific OpenStack version</p>
</td>
<td>
<p dir="ltr">Can work with any OpenStack version</p>
</td>
<td>
<p dir="ltr">Merged into the latest version, but can also be used as an external system</p>
</td>
</tr>
<tr>
<td>
<p dir="ltr">API on Glance side</p>
</td>
<td>
<p dir="ltr">External API</p>
</td>
<td>
<p dir="ltr">API on Glance and nova-compute sides</p>
</td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
<p>Anastasios Andronidis, Olivier Chaze</p>
</div>
]]></content:encoded>
			<wfw:commentRss>http://coarasa.ddnsfree.com/?feed=rss2&#038;p=162</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
	</channel>
</rss>
