
Setting up Eucalyptus 1.6.2 with ORCA 2.x

This page explains how to set up and test Euca 1.6.2 for ORCA. Most of the steps are similar to https://geni-orca.renci.org/trac/wiki/Eucalyptus-1.5.2-Setup. However, since the servers we have do not support hardware virtualization, we assume Xen is used.

Debian

XEN installation on Debian Lenny

According to http://open.eucalyptus.com/wiki/EucalyptusInstall_v1.6, the Euca 1.6.2 package is only available for Debian Squeeze. Unfortunately, Debian Squeeze does not support Xen well, so we chose to use Lenny and install Euca from source.

Xen installation on Lenny is fairly simple (a quick post-reboot check is shown after the list):

  1. apt-get install xen-utils
  2. apt-get install xen-tools
  3. apt-get install xen-linux-system-2.6.26-2-xen-amd64
  4. reboot into Xen
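
After rebooting, a quick post-reboot check (an extra step, not part of the original list) is to confirm the machine is actually running the Xen hypervisor:

    $ uname -r      # should show a -xen kernel, e.g. 2.6.26-2-xen-amd64
    $ xm list       # should list Domain-0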

Installing Eucalyptus 1.6.2 from source

  1. use dpkg --get-selections to make sure libc and pthreads development files are installed
  2. apt-get install gcc make apache2-threaded-dev ant openjdk-6-jdk libvirt-dev libcurl4-gnutls-dev dhcp3-server vblade apache2 unzip curl vlan bridge-utils libvirt-bin sudo vtun
  3. wget http://eucalyptussoftware.com/downloads/releases/eucalyptus-1.6.2-src.tar.gz
    wget http://eucalyptussoftware.com/downloads/releases/eucalyptus-1.6.2-src-deps.tar.gz
    
  4. follow the steps in http://open.eucalyptus.com/wiki/installing-eucalyptus-source-16 (a rough outline is sketched after this list)
  5. install Axis2
  6. install Axis2/C
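
A rough outline of the source build (the directory layout and configure flags below are assumptions; treat the linked instructions as authoritative, in particular for where Axis2 and Axis2/C are installed):

    $ export EUCALYPTUS=/opt/eucalyptus
    $ tar zxf eucalyptus-1.6.2-src.tar.gz
    $ tar zxf eucalyptus-1.6.2-src-deps.tar.gz
    $ cd eucalyptus-1.6.2
    $ ./configure --prefix=$EUCALYPTUS --with-axis2c=/opt/axis2c
    $ make && make install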

CentOS 5.4

Installing Eucalyptus 1.6.2

  1. export VERSION=1.6.2
  2. yum install -y ntp
  3. yum install -y java-1.6.0-openjdk ant ant-nodeps dhcp bridge-utils httpd
  4. yum install -y xen
    sed --in-place 's/#(xend-http-server no)/(xend-http-server yes)/' /etc/xen/xend-config.sxp 
    sed --in-place 's/#(xend-address localhost)/(xend-address localhost)/' /etc/xen/xend-config.sxp
    /etc/init.d/xend restart
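    # optional sanity check (not in the original notes): verify xend's HTTP
    # interface is listening after the restart (default xend HTTP port is 8000)
    netstat -tlnp | grep 8000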
    

Setting up stock Eucalyptus 1.5.2

Hardware setup

You will need several hosts with dual interfaces - one interface on the 'management' network and one on the 'data plane' that will be stitched into ORCA slices. At RENCI this setup is implemented by putting each host's eth1 on the BEN management network (192.168.xx.xx address space) and connecting eth0 into BEN with no assigned IP address.

Each host should support hardware virtualization and be able to run KVM or Xen. The cluster will consist of a single head node and multiple compute nodes. The head node requires substantial disk space to store all VM filesystem images (if you plan to support many options).

The dataplane interfaces of the cluster should be plugged into an ORCA-controllable switch (a Cisco 6509 in our case) to allow the mapping of Euca-created VLANs to other VLAN segments.

Software prerequisites

  1. Ubuntu Jaunty (9.04) basic server install
  2. kvm and libvirt (including libvirt-bin). Ubuntu favors KVM over Xen. KVM requires hardware virtualization support in your CPU!
  3. ntp (the Euca instructions suggest open-ntp, but there is no reason not to use the stock Ubuntu 9.04 ntpd)
  4. vconfig tools (to enable creating tagged interfaces)
  5. brctl tools (to enable creating bridges)

Testing software prerequisites

The notes here are either for the head node [HN], the compute nodes [CN], or for all [ALL]

  1. [ALL] Install and test ntp. Run ntpdc and verify the output is sane (substitute your own NTP server):
    $ apt-get install ntp
    $ echo server clock3.unc.edu >> /etc/ntp.conf
    $ /etc/init.d/ntp restart
    $ ntpdc
    

You can substitute your own NTP server by adding the appropriate 'server' line to /etc/ntp.conf and restarting ntpd.

  2. [ALL] Test vconfig and brctl:
    $ vconfig add eth0 10
    $ ifconfig eth0.10
    $ vconfig rem eth0.10
    $ brctl show 
    
  3. [CN] Make sure kvm is OK. If you receive a message about a problem with a kernel module, either your CPU does not support hardware virtualization or it is disabled in the BIOS. In the latter case, change the BIOS setting and try again:
    $ /etc/init.d/kvm restart
    
  4. [CN] Make sure libvirtd is running:
    $ /etc/init.d/libvirt-bin restart
    $ virsh list
    
  5. Make sure the dataplane interface (although unconfigured) is UP:
    $ ifconfig eth0
    
  6. [CN] Identify or create a default bridge for kvm/xen to use. Xen by default creates a bridge (xenbr0). KVM requires that a bridge be set up manually. On Ubuntu this means adding
    auto br0
    iface br0 inet manual
    	bridge_ports eth0
    	bridge_stp off
    	bridge_maxwait 0
    

to /etc/network/interfaces. NOTE: in this setup eth0 is the dataplane interface facing into BEN. It remains unconfigured. The management interface is eth1 and is not shown here - it has a static configuration. Restart networking, verify that bridge br0 exists and eth0 is part of it, and verify that br0 and eth0 are UP:

$ brctl show
$ ifconfig br0
$ ifconfig eth0

Refer to the discussion here about the significance of having the correct bridge setup.

  7. [HN] Install a DHCP server. It does not have to be configured or running; Euca will start it when needed.
    $ apt-get install dhcp3-server
    

Installing Eucalyptus

Follow the instructions here. Be sure to select the right packages for your architecture. The BEN cluster uses amd64 packages for eucalyptus and euca2ools.

Configuring Eucalyptus

We will configure Euca to run in MANAGED network mode to enable dynamic VLAN creation. This section only identifies entries in the /etc/eucalyptus.conf that differ from the default or need to be verified:

[HN]

VNET_INTERFACE="eth0"
VNET_BRIDGE="br0"
VNET_DHCPDAEMON="/usr/sbin/dhcpd3"
VNET_DHCPUSER="dhcpd"
VNET_MODE="MANAGED"
VNET_SUBNET="172.16.0.0"
VNET_NETMASK="255.255.0.0"
VNET_DNS="192.168.201.254"
VNET_ADDRSPERNET="32"
#VNET_PUBLICIPS="your-free-public-ip-1 your-free-public-ip-2 ..."
#VNET_MODE="SYSTEM"

[CN]

VNET_INTERFACE="eth0"
VNET_BRIDGE="br0"
VNET_MODE="MANAGED"
#VNET_MODE="SYSTEM"

Running Eucalyptus

  1. [HN]
    $ /etc/init.d/eucalyptus-cloud restart
    $ /etc/init.d/eucalyptus-cc restart
    
  2. [CN]
    $ /etc/init.d/eucalyptus-nc restart
    
  3. Log in to the head node web interface and perform the initial configuration. It will be running at https://hostname:8443/. Follow the instructions here.
  4. Eucalyptus attempts to guess your Walrus URL (incorrectly in our case). The correct URL for it should be http://head-node.name.or.ip:8773/services/Walrus
  5. [HN] Setup passwordless login for users 'root' and 'eucalyptus' from head node to compute nodes.
    # ssh-keygen
    # cd /root/.ssh
    # cp id_rsa.pub authorized_keys
    # scp authorized_keys root@compute-node.name.or.ip:/root/.ssh/
    # su - eucalyptus
    $ ssh-keygen
    $ cp id_rsa.pub authorized_keys
    $ scp authorized_keys root@compute-node.name.or.ip:/var/lib/eucalyptus/.ssh/
    $ exit
    
  6. [HN] Create a cluster
    # euca_conf -addcluster <cluster name> head-node.name.or.ip
    
  7. [HN] Add compute nodes to it
    # euca_conf -addnode compute-node.name.or.ip
    

How Eucalyptus allocates VLAN tags

Eucalyptus uses VLANs as an isolation mechanism between security groups. When a security group is created (euca-add-group) and instances (VMs) are then created within it, Euca allocates a VLAN for the group, creates bridges on the individual hosts (head and worker/client nodes) and attaches the VMs to those bridges instead of the default bridge. VLAN allocation in Eucalyptus is simple. The VNET_NETMASK parameter in the configuration file defines the width of the mask used for address assignment and, consequently, the total number of VMs that can be created (call this MAXHOSTS = 2^(32-MASK_WIDTH) - 2). The VNET_ADDRSPERNET parameter dictates the maximum number of VMs per security group/VLAN. Therefore the total number of VLANs the system will use is MAXHOSTS/VNET_ADDRSPERNET, and the maximum VLAN tag Eucalyptus will use appears to be MAXHOSTS/VNET_ADDRSPERNET - 1. For example, for a 24-bit mask MAXHOSTS=254, and with VNET_ADDRSPERNET=16 the maximum VLAN tag Eucalyptus will use is 15. The default security group uses VLAN tag 10.
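
As a quick sanity check of this formula against the [HN] configuration shown earlier (VNET_NETMASK=255.255.0.0, i.e. a 16-bit mask, and VNET_ADDRSPERNET=32), a shell one-liner gives the approximate number of security groups/VLANs available:

    $ echo $(( (2**(32-16) - 2) / 32 ))
    2047

so in that setup Eucalyptus would use VLAN tags up to roughly 2047.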

Testing Eucalyptus

  1. [HN as root] Download and install euca2ools 1.0
  2. Register a user through the head node portal and acquire credentials
  3. [HN as regular user] Login, install the credentials and try
    euca-describe-availability-zones verbose
    
  4. [HN as root] If client machines aren't showing up, try restarting cloud controller and portal
    # /etc/init.d/eucalyptus-cc restart
    # /etc/init.d/eucalyptus-cloud restart
    
  5. [HN as regular user] Install a stock kernel, filesystem image, and ramdisk into Walrus using these instructions
    $ tar -zxf euca-ubuntu-9.04-x86_64.tar.gz
    $ cd euca-ubuntu-9.04-x86_64/kvm-kernel
    $ euca-bundle-image -i vmlinuz-2.6.28-11-generic --kernel true
    $ euca-upload-bundle -b kernels -m /tmp/vmlinuz-2.6.28-11-generic.manifest.xml
    $ euca-register kernels/vmlinuz-2.6.28-11-generic.manifest.xml
    $ euca-bundle-image -i initrd.img-2.6.28-11-generic --ramdisk true
    $ euca-upload-bundle -b ramdisks -m /tmp/initrd.img-2.6.28-11-generic.manifest.xml
    $ euca-register ramdisks/initrd.img-2.6.28-11-generic.manifest.xml
    $ cd ..
    $ euca-bundle-image -i ubuntu.9-04.x86-64.img
    $ euca-upload-bundle -b images -m /tmp/ubuntu.9-04.x86-64.img.manifest.xml
    $ euca-register images/ubuntu.9-04.x86-64.img.manifest.xml
    $ euca-describe-images
    
  6. Generate ssh credentials for logging into the VMs
    euca-add-keypair mykey >mykey.private
    
  7. Attempt to create some vms
    euca-run-instances --addressing private -k mykey -n <number of instances to start> <emi-id> 
    

Troubleshooting

  1. Check if the bridges are created (eucabrXX) in the compute nodes:
    $ brctl show
    
  2. Check that the VMs are created on the compute nodes (you may have to hunt for them, since you don't know which specific compute node a VM will be created on):
    $ virsh list
    
  3. Check the logs
  • [HN] /var/log/eucalyptus/cc.log
  • [CN] /var/log/eucalyptus/nc.log

Modifying networking setup to work with ORCA

In order to use Eucalyptus with ORCA, each physical host must have two interfaces: one connected to the dataplane switch (a Cisco 6509 in RENCI's case) and one that leads either to a management network or to the public internet, to allow connections with ORCA actors. The ORCA site authority for Euca will be deployed on the Eucalyptus master node, and it must have

  1. connectivity to other ORCA actors
  2. connectivity to Euca slivers so it can install guests

[Image: RENCI-Euca.png - RENCI Eucalyptus/ORCA setup diagram (attachment not available)]

This is achieved by creating a bridge with a known name on each node. This example uses 'sliverbr', although the exact name is not important: ORCA does not reference it, and it is hard-wired into Eucalyptus through a patch. The following procedure must be performed on each node (master and client). It presumes that eth1 on the physical host is the interface that leads into the management network or to the public internet. eth1 must not have a configured IP address; it can be an 802.1q VLAN interface.

  1. Create a bridge and add eth1 into it, then configure the bridge to be the default interface
    $ brctl addbr sliverbr
    $ brctl addif sliverbr eth1
    $ ifconfig sliverbr <public or management IP address> netmask <netmask> 
    $ route add default gw <default gw via the bridge interface>
    

In Ubuntu this can be accomplished by replacing eth1 configuration in /etc/network/interfaces file with the following:

auto sliverbr
iface sliverbr inet static
	bridge_ports eth1
	bridge_stp off
	bridge_maxwait 0
	address <ip address>
	netmask <netmask>
	gateway <gateway>

and rebooting.

Install Eucalyptus on master node from source

Now that everything is working, it is time to re-install the Eucalyptus master from source. Download the source code for 1.5.2 and follow the build instructions. It is advisable to build and install it under $EUCALYPTUS=/opt/eucalyptus to keep it out of the way of a packaged install. Pay attention to the dependencies required to build it. Once built, install it, restart it, and test access to the portal and VM creation again. You can reuse the configuration file from the stock install by moving it to $EUCALYPTUS/etc/eucalyptus/eucalyptus.conf.
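
For example, assuming the source tree was configured and installed with $EUCALYPTUS=/opt/eucalyptus and the packaged config lived in /etc/eucalyptus (both paths are assumptions for illustration), reusing the old configuration and restarting from the new prefix looks roughly like this:

    $ export EUCALYPTUS=/opt/eucalyptus
    $ cp /etc/eucalyptus/eucalyptus.conf $EUCALYPTUS/etc/eucalyptus/eucalyptus.conf
    $ $EUCALYPTUS/etc/init.d/eucalyptus-cloud restart
    $ $EUCALYPTUS/etc/init.d/eucalyptus-cc restart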

Note that this procedure invalidates any previous configuration you had, so you have to establish new user credentials and upload new images from which VMs are created.

On Ubuntu 9.04 we had an issue where the stock DHCP server would not start properly after installing the Eucalyptus master from source. It manifested itself as VMs being unreachable (while in the 'running' state). Log inspection (cc.log on the master) revealed that dhcpd would not start when required. Our solution was to build a DHCP server from source and install it in a different location from the stock dhcpd; eucalyptus.conf then had to be modified to reflect the new location of dhcpd.
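
For example, if the replacement dhcpd ended up under /opt/dhcp (a hypothetical install location), the corresponding eucalyptus.conf entry would change along these lines:

    VNET_DHCPDAEMON="/opt/dhcp/sbin/dhcpd"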

Installing ORCA-related patches on master node

There are two patches - one for the VM creation template (to allow creation of VMs with more than one interface), the other to enable specifying the VLAN tag to be used for a particular security group.

  1. Install the updated VM creation template on client nodes by replacing files gen_kvm_libvirt_xml and gen_libvirt_xml in Eucalyptus. In Ubuntu/Debian they can be found under $EUCALYPTUS/usr/share/eucalyptus. The two files are attached to this page.
  2. Install the patch (vlan.patch attached to this page) for Eucalyptus security group VLAN forcing on master node. Note that the user doing make and make install must have $JAVA_HOME, $EUCALYPTUS and $EUCALYPTUS_SRC defined and ant and java executables must be on the $PATH.
    $ cd eucalyptus-1.5.2/clc
    $ patch -p2 < vlan.patch
    $ make; make install
    
    Restart the cloud controller and the portal, try the following as a regular user:
    $ euca-add-group -d testvlan vlan22
    $ euca-run-instances -g vlan22 <usual parameters from above>
    

If this works, you should see that a 'eucabr22' bridge has been created on every host and an 802.1q tagged interface (typically eth0.22) was created and is part of that bridge. If VLAN id 22 is enabled on the switch between all hosts, then you should be able to reach the new VM on the IP address indicated by Eucalyptus, and it will be on the private VLAN 22.
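
For example, on any of the hosts (a quick check, not part of the original write-up):

    $ brctl show          # expect a eucabr22 bridge with eth0.22 in it
    $ ifconfig eth0.22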

Configuring ORCA to control the Eucalyptus cluster

The ORCA site authority must run from a container on the Euca master node (otherwise the site authority has no access to the newly created VMs). Stand up an ORCA container with at least the Euca site authority in it. Here is the relevant sample piece of actor_configs/config.xml:

                <actor>
                        <type>site</type>
                        <name>duke-vm-site</name>
                        <guid>9b12d036-23e7-11df-b3a3-000c29b1c193</guid>
                        <pools>
                                <pool>
                                        <type>duke.vm</type>
                                        <label>Eucalyptus Virtual Machine (DUKE)</label>
                                        <description>A virtual machine</description>
                                        <units>10</units>
                                        <start>2010-01-30T00:00:00</start>
                                        <end>2011-01-30T00:00:00</end>
                                        <handler path="ec2/handler.xml" />
                                        <attributes>
                                                <attribute>
                                                        <key>resource.memory</key>
                                                        <label>Memory</label>
                                                        <value>128</value>
                                                        <unit>MB</unit>
                                                        <type>integer</type>
                                                </attribute>
                                                <attribute>
                                                        <key>resource.cpu</key>
                                                        <label>CPU</label>
                                                        <value>1/2 of 2GHz Intel Xeon</value>
                                                        <type>String</type>
                                                </attribute>
                                        </attributes>
                                        <properties>
                                                <property name="ip.list" value="192.168.206.3/24" />
                                                <property name="ip.subnet" value="255.255.255.0" />
                                                <property name="ip.gateway" value="192.168.206.1" />
                                                <property name="data.subnet" value="255.255.0.0" />
                                        </properties>
                                </pool>
                        </pools>
                        <controls>
                                <control type="duke.vm" class="orca.policy.core.SimpleVMControl" />
                        </controls>
                </actor>

Note that this presumes an install where $ORCA_HOME contains the configuration files and they are not packaged in the webapp.

Once the container is up and running, you need to acquire credentials for ORCA from Eucalyptus. Log in to the Eucalyptus portal, create a user for ORCA, and export its credentials, which come in a zip file.

First, test the credentials by unzipping them into $HOME/.euca, sourcing the .euca/XXX/eucarc file, and making sure you can communicate with Eucalyptus using the euca- tools. Create a keypair that ORCA will use (euca-add-keypair).
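
For example (the zip file name and keypair name below are placeholders, and the exact eucarc path depends on how the credentials zip unpacks):

    $ mkdir -p $HOME/.euca
    $ cd $HOME/.euca && unzip /path/to/euca-credentials.zip
    $ source $HOME/.euca/XXX/eucarc
    $ euca-describe-availability-zones
    $ euca-add-keypair orca-key > orca-key.private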

Now place the contents of the zip file under $ORCA_HOME/ec2 on the head node. Note that the zip file has a directory structure inside it, which needs to be ignored: simply copy the files from the lowest level of the zip hierarchy into $ORCA_HOME/ec2. Copy the generated ssh key (from euca-add-keypair) into the same directory. Modify the $ORCA_HOME/ec2/eucarc file as follows:

#EUCA_KEY_DIR=$(dirname $(readlink -f ${BASH_SOURCE}))
export AMI_NAME=emi-6E7412EE
export EC2_SSH_KEY=orca-key-renci
export EC2_INSTANCE_TYPE=m1.small

(comment out the first line, add $AMI_NAME - the image to be used, $EC2_SSH_KEY and $EC2_INSTANCE_TYPE for ORCA to use). Note that AMI_NAME must have a default kernel and initrd image associated with it in Eucalyptus - they are currently not specified explicitly.

NOTE: For Bella 2.0 ORCA Euca authority logs into the VM, turns off DHCP and installs BEN DNS server into /etc/resolv.conf. This may need to be modified in handlers/ec2/resources/scripts/prepare-net.sh

Running Eucalyptus/EC2 handler tests

Undoing a packaged install

When things don't seem to work, fear not, there is a way to start from scratch (note this is ONLY for DEB packaged installs, not installs from source):

  1. Stop the euca daemons:

[HN]

$ /etc/init.d/eucalyptus-cc stop
$ /etc/init.d/eucalyptus-cloud stop

[CN]

$ /etc/init.d/eucalyptus-nc stop
  2. Remove the eucalyptus packages (including config directories, if possible)

[HN]

$ dpkg --purge eucalyptus-cloud
$ dpkg --purge eucalyptus-cc
$ dpkg --purge eucalyptus-gl
$ dpkg --purge eucalyptus-common
$ dpkg --purge eucalyptus-javadeps

[CN]

$ dpkg --purge eucalyptus-nc
$ dpkg --purge eucalyptus-gl
$ dpkg --purge eucalyptus-common
  3. Remove the user eucalyptus from the system
    $ userdel -r eucalyptus
    $ groupdel eucalyptus
    
  4. Remove remnants of config and log directories
    $ rm -rf /etc/eucalyptus
    $ rm -rf /var/log/eucalyptus
    
  5. Sometimes you may need to fix the dpkg state
    $ vi /var/lib/dpkg/statoverride
    

and remove the line that mentions 'eucalyptus'
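
Equivalently, a one-liner (a suggestion, not in the original notes; keep a backup of the file first):

    $ cp /var/lib/dpkg/statoverride /var/lib/dpkg/statoverride.bak
    $ sed -i '/eucalyptus/d' /var/lib/dpkg/statoverride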

  6. Start over

References

  • KVM Networking
  • Euca install on Jaunty
  • euca-group-add bug
  • Getting started with Eucalyptus