Version 110 (modified by ibaldin, 8 years ago)

--

Setting up Euca 2.0 with NEuca patches

Overview

This document covers installation of Eucalyptus 2.0.[0-2] from source in MANAGED-NOVLAN mode of operation. You must have several physical hosts at your disposal, one designated as a head node and the rest as worker nodes. Worker nodes will have to run either KVM or Xen to be able to create virtual machines. In the described setup the head node does not run Eucalyptus VM instances and therefore does not have to run Xen or KVM. Xen/KVM configuration is not covered in this document as it is heavily distribution and hardware dependent.

Eucalyptus does not have strong dependencies on the distribution. The selected distribution should have a kernel version >= 2.6.22 to properly support tgt package for iSCSI configuration and should ideally offer 'out-of-the-box' use of Xen or KVM (depending on your hardware).

Installation contains several basic steps:

  1. Building and installing Euca master and worker nodes from NEuca patched sources
  2. Configuring master and worker nodes
  3. Registering worker nodes with master
  4. Testing installation

Basic assumptions

  1. There is a single master/head node and a number of worker nodes all with identical basic OS installations. Master node will run eucalyptus cloud controller and cluster controller. Worker nodes will run eucalyptus node controller.
  2. Worker nodes have a working hypervisor (Xen or KVM) with libvirt installed on top of it.
  3. Hypervisor is not installed/disabled on the master node.
  4. All nodes are part of a single broadcast domain which contains no DHCP servers.
  5. All code is built on the master node.
  6. Eucalyptus users will operate from the head node (initially). This is where eucalyptus user tools are installed and this is the host from which VMs are initially accessible.
  7. All worker nodes have secondary network interfaces (not used for primary connectivity or by Eucalyptus) that NEuca can use, e.g. for VLAN attachment.

Canonical NEuca network configuration

NEuca canonical deployment diagram

Installation prerequisites

  • Make sure ssh keys for the root user are shared across the hosts (i.e. that root on the master node can perform a passwordless login into any worker node):
    root@master$ ssh-keygen -t rsa
    root@master$ cd ~/.ssh/
    root@master$ cat id_rsa.pub >> authorized_keys
    root@master$ scp authorized_keys root@worker1:~/.ssh/
    root@master$ scp authorized_keys root@worker2:~/.ssh/
    
  • Download source code packages for Eucalyptus to master node:
    • Eucalyptus 2.0.[0-2] source, offline version
    • Eucalyptus 2.0.[0-2] source dependencies
    • Euca2ools 1.3.1 source
    • Euca2ools 1.3.1 source dependencies
  • Install the software build prerequisites from Step 1 of the installation instructions on the master node. It is preferable to install Sun's Java 1.6.x or above rather than OpenJDK, since ORCA relies on Java 1.6 and does not work with OpenJDK.
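With more than a couple of workers, the key distribution above can be scripted. A minimal sketch; the hostnames in WORKERS are examples, adjust them to your cluster:

```shell
# Distribute the master's authorized_keys to every worker and verify
# that passwordless login works. Hostnames below are examples.
WORKERS="worker1 worker2 worker3"

for w in $WORKERS; do
    scp ~/.ssh/authorized_keys "root@$w:~/.ssh/" || echo "$w: copy failed"
    ssh -o BatchMode=yes "root@$w" true && echo "$w: passwordless login ok"
done
```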

Installing master and worker nodes

The next step is to build Eucalyptus:

  • Define EUCALYPTUS to be /opt/eucalyptus-2.0:
    root@master$ export EUCALYPTUS=/opt/eucalyptus-2.0
    
  • Start from Step 2 and follow installation instructions through step 3.
  • As additional step 3d, build an additional dependency, the iniparser3.0b library (attached to this page):
    root@master$ export INIPARSER_HOME=$EUCALYPTUS/packages/iniparser3.0b
    root@master$ cd /root
    root@master$ tar -zxf iniparser3.0b-neuca0.1.tar.gz
    root@master$ cd iniparser3.0b; make; make install
    root@master$ export LD_LIBRARY_PATH=$INIPARSER_HOME/lib:$LD_LIBRARY_PATH
    
  • Apply NEuca patch attached to this page prior to building Eucalyptus (prior to running ./configure) in step 4:
    root@master$ cd $EUCALYPTUS_SRC
    root@master$ patch -p0 < ../neuca-0.1.patch
    
  • As step 5, copy this tree (/opt/eucalyptus-2.0) to all worker nodes. After this step you should have a built Eucalyptus 2.0 tree on the master node (/opt/eucalyptus-2.0) and each of the worker nodes (/opt/eucalyptus-2.0).
    • If you don't have rsync installed, a combination of tar and ssh works to copy the Eucalyptus tree just as well:
      root@master$ tar -cf - $EUCALYPTUS | ssh root@worker1 tar -xf - -C /
      
  • On master node modify the configuration of the dynamic linker to help locate iniparser library. Create file /etc/ld.so.conf.d/iniparser.conf:
    root@master$ echo /opt/eucalyptus-2.0/packages/iniparser3.0b/lib > /etc/ld.so.conf.d/iniparser.conf
    root@master$ ldconfig
    root@master$ ldconfig -p | grep iniparser
    
  • The last command above should return output similar to this:
    root@master$ ldconfig -p | grep iniparser
    	libiniparser.so.0 (libc6,x86-64) => /opt/eucalyptus-2.0/packages/iniparser3.0b/lib/libiniparser.so.0
    	libiniparser.so (libc6,x86-64) => /opt/eucalyptus-2.0/packages/iniparser3.0b/lib/libiniparser.so
    
  • Copy /etc/ld.so.conf.d/iniparser.conf to every worker node and run the ldconfig command on each of them:
    root@master$ scp /etc/ld.so.conf.d/iniparser.conf root@worker1:/etc/ld.so.conf.d/iniparser.conf
    root@master$ ssh root@worker1 ldconfig
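The tree copy and linker-cache steps can be combined into a single loop over all workers. A sketch; the hostnames in WORKERS are examples:

```shell
# Push the built tree and the linker configuration to every worker,
# then refresh and check the linker cache on each one.
EUCALYPTUS=${EUCALYPTUS:-/opt/eucalyptus-2.0}
WORKERS="worker1 worker2"

for w in $WORKERS; do
    tar -cf - "$EUCALYPTUS" | ssh "root@$w" tar -xf - -C / || echo "$w: tree copy failed"
    scp /etc/ld.so.conf.d/iniparser.conf "root@$w:/etc/ld.so.conf.d/" || echo "$w: scp failed"
    ssh "root@$w" 'ldconfig && ldconfig -p | grep iniparser' || echo "$w: ldconfig check failed"
done
```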
    

Configuration

This setup assumes that a single NIC in each physical host is dedicated to Eucalyptus functions. Other NICs can be used for extra data-plane connectivity. The MANAGED-NOVLAN mode places no requirements on the network switches. To properly set up the network configuration it is important to know two things:

  1. The name of the physical interface on master node that has access to worker nodes (and in this case also is public; it is named eth0)
  2. The name of the bridge on worker nodes that KVM or Xen uses. In this example it is also eth0, because Xen is used; with KVM it is typically br0. Check your setup!
  • Perform steps 6a and 6b of the installation instructions (add user 'eucalyptus' to all nodes; configure the hypervisor if not yet configured)
  • On worker nodes initialize the configuration file (note that instances directory does not have to be /usr/local/eucalyptus; you can configure it to be anything you want and this step will create and initialize it):
    root@worker$ /opt/eucalyptus-2.0/usr/sbin/euca_conf -d /opt/eucalyptus-2.0 --hypervisor kvm --instances /opt/eucalyptus-2.0-instances --user eucalyptus --setup
    
  • Initialize config file on master node (reset $EUCALYPTUS to /opt/eucalyptus-2.0):
    root@master$ $EUCALYPTUS/usr/sbin/euca_conf -d $EUCALYPTUS --setup
    root@master$ $EUCALYPTUS/usr/sbin/euca_conf -d $EUCALYPTUS --enable cloud --enable walrus --enable sc
    
  • Make sure the master node has dhcpd installed (typically /usr/sbin/dhcpd or /usr/sbin/dhcpd3). It must not be configured or running: the worker nodes cannot deal with multiple DHCP servers on the same subnet in this Eucalyptus mode, so the only DHCP server on that subnet must be the one used by Eucalyptus. It is important to know whether dhcpd needs to run as user root or dhcpd. Apparmor, if running, may need to be configured to allow dhcpd to read/write files in custom locations (e.g. $EUCALYPTUS/var/run/eucalyptus/net).
  • Read networking configuration notes. Examples here presume master node has a single interface (eth0) that is both public (allows anyone to reach the host) and is used to communicate to the rest of the cluster, while worker nodes use bridge named 'eth0'. Your setup will be different.
  • Modify the configuration of the master node ($EUCALYPTUS/etc/eucalyptus/eucalyptus.conf; only the relevant stanzas are shown; make sure no other VNET_MODE statements are uncommented in the file). Note that in the shown configuration the 10.100.x.x range is used internally by Eucalyptus, while 192.168.203.x range is 'public' (even though it comes from a reserved RFC1918 address space):
    VNET_PUBINTERFACE="eth0"
    VNET_PRIVINTERFACE="eth0"
    VNET_DHCPDAEMON="/usr/sbin/dhcpd3"
    
    VNET_MODE="MANAGED-NOVLAN"
    VNET_SUBNET="10.100.0.0"
    VNET_NETMASK="255.255.0.0"
    VNET_DNS="192.168.201.254"
    VNET_ADDRSPERNET="32"
    VNET_PUBLICIPS="192.168.203.31 192.168.203.32 192.168.203.33 192.168.203.34 192.168.203.35 192.168.203.36 192.168.203.37 192.168.203.38 192.168.203.39 192.168.203.40"
    
  • Modify the configuration of worker nodes ($EUCALYPTUS/etc/eucalyptus/eucalyptus.conf). VNET_BRIDGE must be the name of the bridge on worker nodes that is used by Xen or KVM. In this example it is eth0:
    VNET_MODE="MANAGED-NOVLAN"
    VNET_BRIDGE="eth0"
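Before starting any daemons it is worth sanity-checking the values above. A minimal sketch, using the dhcpd paths and the interface/bridge names from this example configuration (substitute your own):

```shell
# Master: the DHCP daemon named in VNET_DHCPDAEMON must actually exist.
[ -x /usr/sbin/dhcpd3 ] || [ -x /usr/sbin/dhcpd ] || echo "WARNING: no dhcpd binary found"

# Master: the interface named in VNET_PUBINTERFACE/VNET_PRIVINTERFACE must be up.
ip link show eth0 >/dev/null 2>&1 || echo "WARNING: interface eth0 not found"

# Workers: the bridge named in VNET_BRIDGE must exist.
brctl show 2>/dev/null | grep -q eth0 || echo "WARNING: bridge eth0 not found (check VNET_BRIDGE)"
```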
    

Installing user tools

You must have a recent (1.3.x) version of the user tools; older versions do not support the command-line options NEuca requires. As a starting point you can install the Eucalyptus user tools on the head/master node, following these instructions.
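Once the tools and credentials are in place you can sanity-check them from a regular account. A sketch; the eucarc path is an example (use wherever you unpacked the credentials downloaded from the portal), and euca-version is the version-reporting command shipped with euca2ools:

```shell
user@master$ source ~/.euca/eucarc
user@master$ euca-version
user@master$ euca-describe-availability-zones
```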

Running and testing Eucalyptus

  • Start Eucalyptus on master node for the first time and observe the log files for any errors (note that the cloud controller may take more than a minute to complete its initialization):
    root@master$ $EUCALYPTUS/etc/init.d/eucalyptus-cloud start
    root@master$ $EUCALYPTUS/etc/init.d/eucalyptus-cc start
    
  • Test that you can access the cloud portal at https://master.node:8443/
  • Start Eucalyptus on worker nodes
    root@worker1$ $EUCALYPTUS/etc/init.d/eucalyptus-nc start
    
  • On the master node, register the components and the cluster
  • Log in through the portal, create a user, and download user credentials. Install the credentials into a regular (non-root) account on the master node. The host must have euca2ools-1.3.1 installed from source or binary.
  • Try it out
    user@somehost$ euca-describe-availability-zones verbose
    
  • Download one of the stock Eucalyptus images (in your cluster Eucalyptus web portal, click on 'Extras')
  • Upload the selected image into Eucalyptus. This example uses euca-ubuntu-9.04-x86_64, but you may pick any other available image. Be sure to select the right kernel/ramdisk to upload - Xen kernels will not work on KVM installations and vice-versa. This example puts images into a bucket named 'images', and kernels and ramdisks into a bucket named 'kernels'. This is a matter of preference only.
    $ tar -zxf euca-ubuntu-9.04-x86_64.tar.gz
    $ cd euca-ubuntu-9.04-x86_64/kvm-kernel
    $ euca-bundle-image -i vmlinuz-2.6.28-11-generic --kernel true
    $ euca-upload-bundle -b kernels -m /tmp/vmlinuz-2.6.28-11-generic.manifest.xml
    $ euca-register kernels/vmlinuz-2.6.28-11-generic.manifest.xml
    $ euca-bundle-image -i initrd.img-2.6.28-11-generic --ramdisk true
    $ euca-upload-bundle -b kernels -m /tmp/initrd.img-2.6.28-11-generic.manifest.xml
    $ euca-register kernels/initrd.img-2.6.28-11-generic.manifest.xml
    $ cd ..
    $ euca-bundle-image -i ubuntu.9-04.x86-64.img
    $ euca-upload-bundle -b images -m /tmp/ubuntu.9-04.x86-64.img.manifest.xml
    $ euca-register images/ubuntu.9-04.x86-64.img.manifest.xml
    $ euca-describe-images
    
  • Generate an SSH key pair using euca-add-keypair, then try to create an instance using euca-run-instances (be sure to use --addressing private option at first)
  • Once basic VM instantiation is verified, you can try providing --user-data-file option to euca-run-instances command with a NEuca INI file, as described in the overview section.
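The bundle/upload/register sequence above repeats the same three commands for each file, so it lends itself to a small helper. A sketch; the publish function and the bucket layout are this example's invention, and it assumes euca2ools are on the PATH with credentials sourced:

```shell
# Hypothetical helper: bundle an image file, upload it to a bucket,
# and register the resulting manifest in one step.
publish() {
    bucket=$1; file=$2; shift 2
    euca-bundle-image -i "$file" "$@" &&
    euca-upload-bundle -b "$bucket" -m "/tmp/$(basename "$file").manifest.xml" &&
    euca-register "$bucket/$(basename "$file").manifest.xml"
}

# Only run when the tools are actually installed:
if command -v euca-bundle-image >/dev/null 2>&1; then
    publish kernels vmlinuz-2.6.28-11-generic --kernel true
    publish kernels initrd.img-2.6.28-11-generic --ramdisk true
    publish images ubuntu.9-04.x86-64.img
fi
```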

Pitfalls

Where to look

Logfiles are typically located in the installation subtree ($EUCALYPTUS/var/log/eucalyptus). Look for error logs. Check this page.

Euca cloud controller

Cloud controller won't run

This problem manifests itself with eucalyptus-cloud dying. Inspect the log files ($EUCALYPTUS/var/log/eucalyptus/cloud-error.log). If it complains about being unable to run tgt, install tgt:

  • Install a binary package if possible. On Debian/Ubuntu, do:
    root@master$ aptitude install tgt
    
    On RHEL/CentOS, do:
    root@master$ yum install -y scsi-target-utils
    
  • If no binary package exists, download the tgt source and build/install it. This software manages iSCSI volumes for EBS:
    root@master$ make ISCSI=1; make install
    

Make sure tgtd is started at boot time!
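On most distributions this is a one-time service configuration; a sketch (service and package names vary by distribution):

```shell
# RHEL/CentOS (scsi-target-utils package):
root@master$ chkconfig tgtd on
root@master$ service tgtd start

# Debian/Ubuntu (tgt package):
root@master$ update-rc.d tgt defaults
root@master$ /etc/init.d/tgt start
```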

It may also complain about running vblade or other executables. The solution is similar, binary packages usually exist.

Attaching volumes doesn't work

If you are experiencing failures with performing a euca-attach-volume, you have a few things to check.

  • Is tgt installed? If not, follow the installation procedure in "Cloud controller won't run"
  • Are the open-iscsi tools installed on the workers? On Debian/Ubuntu, do:
    root@worker$ aptitude install open-iscsi
    
    On RHEL/CentOS, do:
    root@worker$ yum install -y iscsi-initiator-utils
    
  • Are the Crypt::OpenSSL::Random and Crypt::OpenSSL::RSA perl modules present? On Debian/Ubuntu, do:
    root@master$ aptitude install libcrypt-openssl-random-perl libcrypt-openssl-rsa-perl
    
    On RHEL/CentOS, do:
    root@master$ rpm -Uvh http://download.fedora.redhat.com/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
    root@master$ yum install -y perl-Crypt-OpenSSL-RSA perl-Crypt-OpenSSL-Random
    

Using the correct JDK

  • If your Java 1.6 installation is non-standard, update $EUCALYPTUS/etc/init.d/eucalyptus-cloud to define JAVA_HOME in its opening part and update PATH to include $JAVA_HOME/bin
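For example, near the top of the init script (the JDK path below is only an example; point it at your actual Sun JDK install):

```shell
# Point Eucalyptus at a specific Sun/Oracle JDK 1.6 installation.
export JAVA_HOME=/opt/jdk1.6.0
export PATH=$JAVA_HOME/bin:$PATH
```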

Cluster controller

Resetting cluster controller state

If you've made significant changes to eucalyptus.conf or simply want to flush the state of the cluster controller, you can do so by running:

root@master$ $EUCALYPTUS/etc/init.d/eucalyptus-cc stop
root@master$ $EUCALYPTUS/etc/init.d/eucalyptus-cc cleanstart

DHCP does not work

The most common problem seems to be that VMs start up properly but Eucalyptus either fails to update the dhcpd configuration file or fails to (re)start dhcpd. These problems are usually related to file and/or user permissions. Consult the cluster controller logfile (on the master node, $EUCALYPTUS/var/log/eucalyptus/cc.log; look for the string "DHCP"). Culprits may be dhcpd running as the wrong user (set through eucalyptus.conf) or Apparmor interfering and preventing dhcpd from creating/modifying files.

See step 6e of installation instructions.
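Two quick checks help here; a sketch, assuming the dhcpd3 path from the example configuration (the exact name of the generated config file under $EUCALYPTUS/var/run/eucalyptus/net may differ on your install):

```shell
# Pull out DHCP-related lines from the cluster controller log:
root@master$ grep DHCP $EUCALYPTUS/var/log/eucalyptus/cc.log | tail -20

# Run the DHCP daemon by hand in the foreground to surface
# permission or Apparmor errors:
root@master$ /usr/sbin/dhcpd3 -d -cf $EUCALYPTUS/var/run/eucalyptus/net/euca-dhcp.conf
```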

KVM/Xen on worker nodes

CentOS 5.5/KVM

  • Symlink /usr/libexec/qemu-kvm to /usr/bin/kvm (ln -s /usr/libexec/qemu-kvm /usr/bin/kvm)
  • Create a group called 'libvirt' and add user 'eucalyptus' to that group. Modify /etc/libvirt/libvirtd.conf and restart libvirtd
    # This is restricted to 'root' by default.
    unix_sock_group = "libvirt"
    unix_sock_rw_perms = "0770"
    auth_unix_rw = "none"
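The group setup itself can be done with standard commands; a sketch (the libvirtd service name may differ on your system):

```shell
root@worker$ groupadd libvirt
root@worker$ usermod -a -G libvirt eucalyptus
root@worker$ service libvirtd restart
```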
    
  • SCSI disks are not supported by the KVM that comes with CentOS 5.5, so you need to specify the 'ide' bus in the KVM template ($EUCALYPTUS/usr/share/eucalyptus/gen_kvm_libvirt_xml):
            <disk type='file'>
                <source file='BASEPATH/disk'/>
                <target dev='sda' bus='ide' />
            </disk>
    
    
  • It is best to ignore virbr0 created by qemu/libvirt at boot by default and create a separate bridge br0 for Eucalyptus to use (modify $EUCALYPTUS/etc/eucalyptus/eucalyptus.conf VNET_BRIDGE="br0"):
    • /etc/sysconfig/network-scripts/ifcfg-br0:
      # Euca default bridge
      DEVICE=br0
      TYPE=Bridge
      ONBOOT=yes
      BOOTPROTO=static
      IPADDR=192.168.201.43
      NETMASK=255.255.255.0
      GATEWAY=192.168.201.1
      
    • /etc/sysconfig/network-scripts/ifcfg-eth0:
      DEVICE=eth0
      TYPE=Ethernet
      BRIDGE=br0
      ONBOOT=yes
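After writing the two files, activate the new configuration and verify that the bridge came up with eth0 enslaved and the static address assigned; a sketch:

```shell
root@worker$ service network restart
root@worker$ brctl show
root@worker$ ip addr show br0
```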
      

Debian/Ubuntu XEN

  • /etc/network/interfaces (eth1 is a bridge created by XEN):
    auto eth1
    #allow-hotplug eth1
    iface eth1 inet static
    address 192.168.203.21
    netmask 255.255.255.0
    gateway 192.168.203.1
    
  • /etc/xend/xend-config.sxp
    (network-script 'network-bridge netdev=eth1')
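After editing both files, restart xend and confirm that the bridge exists (with this network-script setting, Xen names the bridge after the interface, eth1); a sketch:

```shell
root@worker$ /etc/init.d/xend restart
root@worker$ brctl show
```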
    

Attachments