Using NEuca with Orca

Overview

This page contains notes on how NEuca is used with Orca. A canonical deployment puts the container hosting the Euca site authority on the Euca head node so the site handler can ping instances as they come up (this is not possible from elsewhere). Camano 3.0+ has an option to turn off ping testing, which relaxes this constraint.

NEuca Site Authority

To use NEuca, you need a NEuca-enabled Eucalyptus cluster and an Orca site authority actor configured to interact with NEuca. Authority configuration files differ between releases. Here is the Bella 2.2 authority configuration for a standalone NEuca site with an EX3200 switch as a backplane. For Camano 3.0+, see this example.

The rest of this section explains the various parts of this file and the additional configuration files needed.

Eucalyptus Keys

The path to ec2.keys in the config file is important: it must point to the directory that contains the Eucalyptus keys and eucarc. In the example config file, the EC2 keys are stored in /opt/orca/ec2. To get the keys for your Eucalyptus account, log in to your Eucalyptus portal with the Euca username, say, 'orca'. Go to the Credentials tab and click 'Download Credentials' under 'Credentials ZIP-file'. Unzip the contents of this zip file (which will have a name like euca2-<euca username>-x509.zip) into the ec2.keys directory.
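For example, a minimal sketch of unpacking the credentials, assuming the archive for the 'orca' user was downloaded to your home directory and the ec2.keys directory is /opt/orca/ec2 as in the example config:

$ mkdir -p /opt/orca/ec2
$ unzip ~/euca2-orca-x509.zip -d /opt/orca/ec2
$ ls /opt/orca/ec2/eucarc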

Associate a private key with your Euca account, which can be used to log in to any instances brought up. This key can be generated by issuing the following command after sourcing eucarc.

$ euca-add-keypair keyName

Store the output of this command in a file named keyName and put it in the ec2.keys directory.
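For example, a sketch assuming the ec2.keys directory is /opt/orca/ec2 and the key is called keyName:

$ cd /opt/orca/ec2
$ . ./eucarc
$ euca-add-keypair keyName > keyName
$ chmod 600 keyName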

Bella 2.2 instructions

Edit eucarc in that directory as follows. Comment out the first line by inserting '#' at the beginning of the line, then add the following lines to eucarc:

  • AMI_NAME must correspond to the emi-id of a NEuca-enabled image. Replace XXXXXX with the correct id.
  • EC2_SSH_KEY must point to the keyName file.
  • EC2_USE_PUBLIC_ADDRESSING should be set to true (boolean, not string) if you want Eucalyptus to use a pool of available public IP addresses, false otherwise. If not set, it defaults to false.
  • EC2_PING_RETRIES defines how many times to try to ping the instance after it is in the 'running' state before tearing it down and exiting with an error. If unset, it defaults to 60 (with a 1-second sleep interval at each step).

export AMI_NAME=emi-XXXXXX
export EC2_SSH_KEY=keyName
export EC2_USE_PUBLIC_ADDRESSING=true
export EC2_PING_RETRIES=60
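A quick sanity check (a sketch, assuming eucarc lives in /opt/orca/ec2) is to source the edited file and confirm the new variables are set:

$ . /opt/orca/ec2/eucarc
$ echo $AMI_NAME $EC2_SSH_KEY
emi-XXXXXX keyName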

Camano 3.0+ instructions

Edit eucarc as follows: comment out the first line by inserting '#' at the beginning of the line. All shell variables from Bella 2.2 in the section above (AMI_NAME, EC2_SSH_KEY, etc.) have moved to the ec2.site.properties file; see below.

NEuca Handler

The NEuca handler supports creating VMs using the NEuca extension. The current version of the handler supports the following:

  • multiple network interfaces (Bella)
  • ip addresses per network interface (Bella)
  • an instance configuration script (Bella)
  • installation of user SSH public key (Bella)
  • configuration of Shorewall DNAT port-forwarding proxy (Camano)
  • configuration of ImageProxy (Camano)

Support for these features is controlled by one or more properties, described below. Static site-dependent properties are passed to the handler via the ec2.site.properties file (see below), while dynamic properties are passed by ORCA at run time.

Network Interface Configuration (dynamically configured from site NDL and request NDL)

NEuca allows reservation requests to configure any network interface other than eth0, which is reserved for internal use. Each network interface must at least be associated with a VLAN tag, and can optionally specify an IP address and a subnet mask (in slash notation, e.g. 1.2.3.4/24).

To specify configuration for eth1, the following properties must be passed to the site authority:

unit.eth1.vlan.tag=20 
unit.eth1.mode=vlan 
unit.eth1.hosteth=eth0  // eth0 is assumed to be the interface on the physical host to which the VM's eth1 should attach; this is deployment-dependent

If the VM should have an additional interface, it can be specified by passing:

unit.eth2.vlan.tag=21 
unit.eth2.mode=vlan 
unit.eth2.hosteth=eth0 

Instead of specifying an interface-specific VLAN tag, the handler also supports specifying a VLAN tag using the unit.vlan.tag property. In this case the machine can have only one network interface (eth1):

unit.vlan.tag=20
unit.vlan.hosteth=eth0

When both unit.vlan.tag and unit.eth1.vlan.tag are specified, the latter takes precedence.

Instead of attaching VM interfaces to VLANs on physical hosts, it is also possible to attach them to plain host interfaces. In this case only the mode and host interface need to be specified:

unit.eth1.mode=phys
unit.eth1.hosteth=eth0 

The handler also accepts the IP address of a VM interface via the following property (one per interface):

unit.eth1.ip=1.2.3.4/24
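Putting it together, a complete set of properties for a single VLAN-attached interface with a static IP address might look like the following (the tag, host interface, and address are illustrative):

unit.eth1.mode=vlan
unit.eth1.vlan.tag=20
unit.eth1.hosteth=eth0
unit.eth1.ip=1.2.3.4/24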

Instance Configuration Script (can be dynamic or static)

To pass an instance configuration script, place the contents (not the location) of the script in the unit.instance.config property:

unit.instance.config=echo "hello world"

Guest SSH Public Key

By default all NEuca VMs are created to authorize SSH connections using the site authority's private key. To enable the guest to connect as root using its own key, pass the guest's SSH public key in config.ssh.key:

config.ssh.key=GUEST_PUBLIC_SSH_KEY
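The value is the public key material itself (e.g. the contents of the guest's id_rsa.pub); the key below is a made-up placeholder:

config.ssh.key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB... guest@example.org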

Static properties/ec2.site.properties

See the sample ec2.site.properties for Bella 2.2 and the sample ec2.site.properties for Camano 3.0+ for more details.

Eucanet handler

Usually a NEuca cluster has an ORCA-controlled switch that ethX (ethX != eth0) of all worker nodes is connected to. In order to perform things like site embedding (embedding a topology within a single NEuca site), you must configure another resource pool for VLANs (e.g. unc-net-site or renci-net-site) and use the appropriate handler that will configure VLANs on the switch as needed. Usually this pool is operated by the same authority actor as NEuca. The handler definition should look something like this (click for the larger example):

<!-- For Bella 2.2 you have to specify a site-specific handler (e.g. renci.euca.net.xml).
     For Camano 3.0+ a unified handler can be used. -->
<handler path="providers/euca-sites/unified.euca.net.xml">
    <properties>
        <!-- name of the site that helps locate the appropriate property definitions (Camano 3.0+) -->
        <property name="euca.site" value="renci" />
        <!-- credentials for logging into the switch (Bella 2.2) -->
        <property name="eucanet.credentials" value="/opt/orca/config/eucanet.cred.properties" />
    </properties>
</handler>

Properties

The file named in the eucanet.credentials property above must exist and must define at least the following properties. NOTE: replace 'sitename' in the property names below with the value of the euca.site property from above (in this example the names would be renci.euca.router and renci.euca.router.type):

router.user=Username of the user authorized on the switch
router.password=Password of the user authorized on the switch
sitename.euca.router=FQDN or IP of the router used for Euca dataplane
sitename.euca.router.type=One of 'ex3200', 'Cisco6509' or 'Cisco3400' without the quotes

'sitename'.euca.router should be the IP address or name of the router interconnecting Eucalyptus nodes in the dataplane, and 'sitename'.euca.router.type should be its type.
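For instance, a sketch of /opt/orca/config/eucanet.cred.properties for the 'renci' site used in the handler definition above (the username, password, and router address are placeholders):

router.user=admin
router.password=CHANGEME
renci.euca.router=euca-switch.example.org
renci.euca.router.type=ex3200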

FlowVisor handler

If your site uses an OpenFlow switch instead of a traditional VLAN-based switch, you must install FlowVisor, pair it up with the switch, and then configure the OpenFlow handler in ORCA instead of the eucanet handler:

<handler path="providers/flowvisor/handler.xml">
    <properties>
        <property name="flowvisor.properties" value="/opt/orca/config/flowvisor.properties" />
    </properties>
</handler>

In this mode ORCA will automatically define VLANs using a combination of FlowVisor flowspaces and an OpenFlow controller that is started by ORCA on demand for each VLAN.

In addition, you must install the NOX or Floodlight OpenFlow controller on the same host as the ORCA AM and enable ORCA to start the controller by properly defining the configuration properties described below.

Properties

The file named in the flowvisor.properties property above must exist and must define at least the following properties:

# this assumes flowvisor is running on the same host as the AM - doesn't have to be that way
flowvisor.url=https://localhost:8080/xmlrpc
flowvisor.user=fvadmin
flowvisor.passwd=somepassword

# define the port range for floodlight or NOX to use when they start up
fvctrl.first.port=50000
fvctrl.last.port=54999
fvctrl.host=hostname on which controller will run
# type is 'floodlight' or 'nox'. we recommend floodlight
fvctrl.type=floodlight

# if nox is used, define where it is
nox.core.exec=/opt/nox/bin/nox_core
# if floodlight is used, define where it is
floodlight.jar=/opt/floodlight/floodlight.jar

Accessing VMs Created by NEuca

Each VM created by NEuca is represented as a Unit object in the ConcreteSet of the VM reservation. The unit.manage.ip property can be used to access the management IP of the VM. If the guest specified an SSH public key, it can use the corresponding private key to connect to the machine at the address in unit.manage.ip.
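For example, a connection sketch assuming the guest supplied its public key via config.ssh.key and unit.manage.ip came back as 10.10.0.5 (a made-up address):

$ ssh -i ~/.ssh/guest_key root@10.10.0.5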