Using NEuca with Orca

Overview

This page contains notes on how NEuca is used with Orca. A canonical deployment puts the container with the Euca site authority on the Euca head node, so that the site handler can ping instances as they come up (this is not possible from elsewhere). Camano 3.0+ has an option to turn off ping testing, which relaxes this constraint.

NEuca Site Authority

To use NEuca, you need a NEuca-enabled Eucalyptus cluster and an Orca site authority actor configured to interact with NEuca.

To define an actor to interact with NEuca, you can use the following template:

    <actor>
            <type>site</type>
            <name>YOUR_ACTOR_NAME</name>
            <guid>GENERATE_YOUR_ACTOR_GUID</guid>
            <pools>
                <pool>
                    <type>YOUR_RESOURCE_TYPE_NAME</type>
                    <label>Eucalyptus Virtual Machine</label>
                    <description>A virtual machine</description>
                    <units>10</units>
                    <start>2010-01-30T00:00:00</start>
                    <end>2011-01-30T00:00:00</end>
                    <handler path="ec2/handler.xml">
                        <properties>
                            <!-- 
                            By default the handler assumes that the keys are under $ORCA_HOME/ec2. 
                            If you want the handler to use keys from a different location, specify it here.
                            Note: it must be an absolute path.
                            -->
                            <!-- <property name="ec2.keys" value="path_to_keys_dir" /> -->
                            <!-- You can pass additional site-specific properties, like 
                            unit.vlan.hosteth (global host interface on which to create vlans) 
                            unit.ethX.hosteth (guest-ethX-specific host interface on which to create vlans)
                            unit.ethX.mode (guest-ethX-specific mode for attaching that interface to the host interface; phys or vlan) -->
                            <property name="ec2.site.properties" value="absolute_path_to_properties_file_typically_$ORCA_HOME/config/ec2.site.properties" />
                        </properties>
                    </handler>
                </pool>
            </pools>
            <controls>
                <control type="YOUR_RESOURCE_TYPE_NAME" class="orca.policy.core.SimpleVMControl" />
            </controls>
    </actor>

Please replace the ALL-CAPS placeholders with values appropriate for your setup (one way to generate a GUID is shown below). You probably also want to change the units and the start/end parameters for the resource type.
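
If you do not already have a GUID for the actor, a standard UUID generator produces a suitable value; for example (a sketch, assuming the uuidgen utility is installed):

$ uuidgen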

Example NEuca Site Authority

The following is an example config file containing a NEuca site authority actor ('unc-vm-site') corresponding to a NEuca-enabled Eucalyptus installation at UNC. The config file also contains an OPTIONAL network site authority actor ('unc-net-site') that controls a network switch attached to the Eucalyptus installation. Both actors are NDL-enabled, which means that the substrate NDL description is specified via the substrate.file property, e.g. <property name="substrate.file" value="/opt/orca/ndl/uncvmsite.rdf" />. The actors talk to a remote broker actor ('ndl-broker') and delegate resources to it.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<configuration xmlns="http://issg.cs.duke.edu/sharp/boot">
        <actors>
                <actor>
                        <type>site</type>
                        <name>unc-vm-site</name>
                        <guid>a01ca9fd-3bcf-4e4c-b4bf-4ce1b9820785</guid>
                        <description>UNC Euca site authority</description>
                        <pools>
                                <pool factory="orca.boot.inventory.NdlResourcePoolFactory">
                                        <type>unc.vm</type>
                                        <label>Eucalyptus Virtual Machine (UNC)</label>
                                        <description>A virtual machine</description>
                                        <units>10</units>
                                        <start>2010-01-30T00:00:00</start>
                                        <end>2011-01-30T00:00:00</end>
                                        <handler path="ec2/handler.xml">
                                                <properties>
                                                        <property name="ec2.keys" value="/opt/orca/ec2" />
                                                        <property name="ec2.site.properties" value="/opt/orca/config/ec2.site.properties" />
                                                </properties>
                                        </handler>

                                        <attributes>
                                                <attribute>
                                                        <key>resource.domain</key>
                                                        <type>String</type>
                                                        <value>uncvmsite</value>
                                                </attribute>
                                                <attribute>
                                                        <key>resource.memory</key>
                                                        <label>Memory</label>
                                                        <value>128</value>
                                                        <unit>MB</unit>
                                                        <type>integer</type>
                                                </attribute>
                                                <attribute>
                                                        <key>resource.cpu</key>
                                                        <label>CPU</label>
                                                        <value>1/2 of 2GHz Intel Xeon</value>
                                                        <type>String</type>
                                                </attribute>
                                        </attributes>
                                        <properties>
                                                <!-- site NDL description -->
                                                <property name="substrate.file" value="/opt/orca/ndl/uncvmsite.rdf" />
                                        </properties>
                                </pool>
                        </pools>
                        <controls>
                                <control type="unc.vm" class="orca.policy.core.SimpleVMControl" />
                        </controls>
                </actor>

                <actor>
                        <type>site</type>
                        <name>unc-net-site</name>
                        <guid>22d0adec-22c1-488a-9356-a908346c1ded</guid>
                        <description>UNC NET authority</description>
                        <pools>
                                <pool factory="orca.boot.inventory.NdlResourcePoolFactory">
                                        <type>unc.vlan</type>
                                        <label>UNC NET VLAN</label>
                                        <description>A VLAN over UNC NET</description>
                                        <units>5</units>
                                        <start>2010-01-30T00:00:00</start>
                                        <end>2011-01-30T00:00:00</end>
                                        <handler path="controllers/ben/gec9/unc.euca.net.xml" />
                                        <attributes>
                                                <attribute>
                                                        <key>resource.domain</key>
                                                        <type>String</type>
                                                        <value>uncnet</value>
                                                </attribute>
                                                <attribute>
                                                        <key>resource.class.invfortype</key>
                                                        <type>Class</type>
                                                        <value>orca.controllers.ben.broker.NDLVlanInventory</value>
                                                </attribute>
                                        </attributes>
                                        <properties>
                                                <property name="vlan.tag.start" value="16" />
                                                <property name="vlan.tag.end" value="20" />
                                                <!-- site ndl file -->
                                                <property name="substrate.file" value="/opt/orca/ndl/uncNet.rdf" />
                                        </properties>
                                </pool>
                        </pools>
                        <controls>
                                <control type="unc.vlan" class="orca.policy.core.VlanControl" />
                        </controls>
                </actor>

        </actors>
        <topology>
                <edges>
                        <edge>
                                <from name="ndl-broker" guid="25bc9111-9b41-46ab-a96b-3c87f574cfde" type="broker" >
                                        <location protocol="soapaxis2" url="http://geni-ben.renci.org:11080/orca/services/ndl-broker" />
<certificate>
MIICbTCCAdagAwIBAgIETDtgYzANBgkqhkiG9w0BAQUFADB7MQswCQYDVQQGEwJVUzELMAkGA1UE
CBMCTkMxDzANBgNVBAcTBkR1cmhhbTENMAsGA1UEChMEb3JjYTEQMA4GA1UECxMHc2hpcmFrbzEt
MCsGA1UEAxMkMjViYzkxMTEtOWI0MS00NmFiLWE5NmItM2M4N2Y1NzRjZmRlMB4XDTEwMDcxMjE4
MzUxNVoXDTIwMDcwOTE4MzUxNVowezELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAk5DMQ8wDQYDVQQH
EwZEdXJoYW0xDTALBgNVBAoTBG9yY2ExEDAOBgNVBAsTB3NoaXJha28xLTArBgNVBAMTJDI1YmM5
MTExLTliNDEtNDZhYi1hOTZiLTNjODdmNTc0Y2ZkZTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkC
gYEAqcyS60d5t9c3eEud529hYmD/0BrIHGkEevwAtqBb7FFD1X98SB1G8y7gzxplt0xr2Hm72Et+
01qB7YgT6XQHWfJQQW7RUZEnrDbGsS0v6bffY291eeDVd0ZCH1ogzPDlyMqdhSGKsstqZd0CYc2E
zRFNngOIytBu1m59Jr6/FqsCAwEAATANBgkqhkiG9w0BAQUFAAOBgQCpFKta+1JitcfPbti3x3Tj
WqqINj2f/MzwTVZbxV1eW6gLrwc3FRTX8RgAfqn2sl9Igxfzb+GbQbhY2j5iyBsEV90eKjQQitgv
KUA1IpJqVMYiGSohX2jL+uXEK7bujv9eRyNG82Rp+ouWCrDKo7kOVLh/iSD1s8Mrk03/wd3qfw==
</certificate>
                                </from>
                                <to name="unc-net-site" type="site" />
                                <rset>
                                        <type>unc.vlan</type>
                                        <units>10</units>
                                </rset>
                        </edge>
                        <edge>
                                <from name="ndl-broker" guid="25bc9111-9b41-46ab-a96b-3c87f574cfde" type="broker">
                                        <location protocol="soapaxis2" url="http://geni-ben.renci.org:11080/orca/services/ndl-broker" />
<certificate>
MIICbTCCAdagAwIBAgIETDtgYzANBgkqhkiG9w0BAQUFADB7MQswCQYDVQQGEwJVUzELMAkGA1UE
CBMCTkMxDzANBgNVBAcTBkR1cmhhbTENMAsGA1UEChMEb3JjYTEQMA4GA1UECxMHc2hpcmFrbzEt
MCsGA1UEAxMkMjViYzkxMTEtOWI0MS00NmFiLWE5NmItM2M4N2Y1NzRjZmRlMB4XDTEwMDcxMjE4
MzUxNVoXDTIwMDcwOTE4MzUxNVowezELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAk5DMQ8wDQYDVQQH
EwZEdXJoYW0xDTALBgNVBAoTBG9yY2ExEDAOBgNVBAsTB3NoaXJha28xLTArBgNVBAMTJDI1YmM5
MTExLTliNDEtNDZhYi1hOTZiLTNjODdmNTc0Y2ZkZTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkC
gYEAqcyS60d5t9c3eEud529hYmD/0BrIHGkEevwAtqBb7FFD1X98SB1G8y7gzxplt0xr2Hm72Et+
01qB7YgT6XQHWfJQQW7RUZEnrDbGsS0v6bffY291eeDVd0ZCH1ogzPDlyMqdhSGKsstqZd0CYc2E
zRFNngOIytBu1m59Jr6/FqsCAwEAATANBgkqhkiG9w0BAQUFAAOBgQCpFKta+1JitcfPbti3x3Tj
WqqINj2f/MzwTVZbxV1eW6gLrwc3FRTX8RgAfqn2sl9Igxfzb+GbQbhY2j5iyBsEV90eKjQQitgv
KUA1IpJqVMYiGSohX2jL+uXEK7bujv9eRyNG82Rp+ouWCrDKo7kOVLh/iSD1s8Mrk03/wd3qfw==
</certificate>

                                </from>
                                <to name="unc-vm-site" type="site" />
                                <rset>
                                        <type>unc.vm</type>
                                        <units>10</units>
                                </rset>
                        </edge>
                </edges>
        </topology>
</configuration>

Eucalyptus Keys

The path given in ec2.keys in the config file is important: it must point to the directory that contains the Eucalyptus keys and eucarc. In the example config file, the ec2 keys are stored in /opt/orca/ec2. To get the keys for your Eucalyptus account, log in to your Eucalyptus portal with your euca username, say 'orca'. Go to the Credentials tab and click 'Download Credentials' under 'Credentials ZIP-file'. Unzip the contents of this zip file (which will have a name like euca2-<euca username>-x509.zip) into the ec2.keys directory.
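
For example, for the layout used on this page (a sketch; the actual zip file name depends on your euca username):

$ cd /opt/orca/ec2
$ unzip euca2-orca-x509.zip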

Next, associate a keypair with your euca account; its private key can be used to log in to any instances that are brought up. The keypair can be generated by issuing the following command after sourcing eucarc.

$ euca-add-keypair keyName

Store the output of this command in a file named keyName and put it in the ec2.keys directory.
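
Putting these steps together (a sketch, using the /opt/orca/ec2 location from the example above):

$ cd /opt/orca/ec2
$ source eucarc
# save the generated private key next to the other credentials
$ euca-add-keypair keyName > keyName
$ chmod 0600 keyName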

Bella 2.2 instructions

Edit eucarc in that directory as follows. Comment out the first line by inserting '#' at the beginning of the line, then add the following lines to eucarc. AMI_NAME must correspond to the emi-id of a NEuca-enabled image; replace XXXXXX with the correct id. EC2_SSH_KEY must point to the keyName file. EC2_USE_PUBLIC_ADDRESSING should be set to true (boolean, not string) if you want Eucalyptus to use a pool of available public IP addresses, and false otherwise; if not set, it defaults to false. EC2_PING_RETRIES defines how many times to ping the instance after it reaches the 'running' state before tearing it down and exiting with an error; if unset, it defaults to 60 (with a 1-second sleep between attempts).

export AMI_NAME=emi-XXXXXX
export EC2_SSH_KEY=keyName
export EC2_USE_PUBLIC_ADDRESSING=true
export EC2_PING_RETRIES=60

Camano 3.0+ instructions

Edit eucarc as follows: comment out the first line by inserting '#' at the beginning of the line. All shell variables from Bella 2.2 in the section above (AMI_NAME, EC2_SSH_KEY, etc.) have moved to the ec2.site.properties file. See below.

Notes on deploying NEuca Authority

If you are deploying the NEuca authority on the same machine where the Eucalyptus head node resides, Tomcat has to use a different port number, because the default port used by our Tomcat (port 8080) is already in use by Eucalyptus. To change the default port from 8080 to another unused port number, say 11080, make the following changes (all changes are with respect to the webapp/ directory):

  • In package/webapp/conf/server.xml, change "<Connector port="8080" maxHttpHeaderSize..." to "<Connector port="11080" maxHttpHeaderSize...".
  • In ant/build.properties, change "target.port=8080" to "target.port=11080".
  • Change any relevant soapaxis2 url in actor_configs/config.xml for OTHER actors talking to the NEuca actors.
  • Edit one more Tomcat file: if your Tomcat installation is at /opt/orca/tomcat, edit /opt/orca/tomcat/conf/server.xml, changing "<Connector port="8080" maxHttpHeaderSize..." to "<Connector port="11080" maxHttpHeaderSize...".

NOTE: Starting with Bella 2.2, ORCA uses port 11080 in Tomcat by default.
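
If you do need to make these changes by hand, they amount to something like the following (a sketch; run from the ORCA source tree described above, and assuming each server.xml has a single Connector on port 8080):

$ sed -i 's/port="8080"/port="11080"/' package/webapp/conf/server.xml
$ sed -i 's/target.port=8080/target.port=11080/' ant/build.properties
$ sed -i 's/port="8080"/port="11080"/' /opt/orca/tomcat/conf/server.xml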

NEuca Handler

The NEuca handler supports creating VMs using the NEuca extension. The current version of the handler supports the following:

  • multiple network interfaces (Bella)
  • IP addresses per network interface (Bella)
  • an instance configuration script (Bella)
  • installation of a user SSH public key (Bella)
  • configuration of a Shorewall DNAT port-forwarding proxy (Camano)

Support for these features is controlled by one or more properties, which are described below. Static, site-dependent properties can be passed to the handler via the ec2.site.properties file (see below), while dynamic properties are passed by ORCA at run time.

Network Interface Configuration (dynamically configured from site NDL and request NDL)

NEuca allows reservation requests to control any network interface other than eth0, which is reserved for internal use. Each network interface must at least be associated with a VLAN tag, and can optionally specify an IP address and subnet mask (in / notation, e.g., 1.2.3.4/24).

To specify configuration for eth1, the following properties must be passed to the site authority:

unit.eth1.vlan.tag=20 
unit.eth1.mode=vlan 
unit.eth1.hosteth=eth0  // eth0 is assumed to be the interface on the physical host to which the VM attaches its eth1; this is deployment-dependent

If the VM should contain one more interface, then it can be specified by passing:

unit.eth2.vlan.tag=21 
unit.eth2.mode=vlan 
unit.eth2.hosteth=eth0 

Instead of specifying an interface-specific VLAN tag, the handler also supports specifying a VLAN tag using the unit.vlan.tag property. In this case the machine can have only one network interface (eth1):

unit.vlan.tag=20
unit.vlan.hosteth=eth0

When both unit.vlan.tag and unit.eth1.vlan.tag are specified, the latter takes precedence.

Instead of attaching VM interfaces to VLANs on physical hosts, it is also possible to attach them to plain host interfaces. In this case only one option is available:

unit.eth1.mode=phys
unit.eth1.hosteth=eth0 

The handler also accepts the IP address of a VM interface, specified via properties of the form:

unit.eth1.ip=1.2.3.4/24

Instance Configuration Script (can be dynamic or static)

To pass an instance configuration script, supply the contents (not the location) of the script in the unit.instance.config property:

unit.instance.config=echo "hello world"

Guest SSH Public Key

By default all NEuca VMs are created to authorize SSH connections using the site authority's private key. To enable the guest to connect as root using its own key, pass the guest's SSH public key in config.ssh.key:

config.ssh.key=GUEST_PUBLIC_SSH_KEY

Proxy configuration (statically configured)

The handler supports configuring a proxy for the created instance, for situations where instances are created within a private address space separated from the public Internet. Currently the SHOREWALL-DNAT proxy type is supported. The following properties are used by the handler (typically specified in ec2.site.properties; see below):

  • Whether proxy should be used at all (true|false)
    ec2.use.proxy=true
    
  • The type of proxy (currently supported types: 'SHOREWALL-DNAT')
    proxy.type=SHOREWALL-DNAT
    
  • IP address (or hostname) of the proxy host
    proxy.proxy.ip=geni-test.renci.ben
    
  • Username on the proxy authorized to make configuration changes
    proxy.user=orca
    
  • Filename containing private SSH key of the authorized user (absolute path)
    proxy.ssh.key=/opt/orca/config/orca-proxy-ssh-key
    
  • Path to shorewall scripts on proxy
    proxy.script.path=/opt/shorewall-scripts
    

Static properties/ec2.site.properties

See the sample ec2.site.properties for Bella 2.2 and the sample ec2.site.properties for Camano 3.0+ for more details.
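
As a minimal illustration, a site properties file might combine the site-specific and proxy properties mentioned on this page (a sketch only; values are examples, and the sample files above are the authoritative reference):

# host interface on which to create VLANs for guest interfaces
unit.vlan.hosteth=eth0
# per-guest-interface overrides (mode is 'vlan' or 'phys')
unit.eth1.hosteth=eth0
unit.eth1.mode=vlan
# Shorewall DNAT proxy settings (see the proxy section above)
ec2.use.proxy=true
proxy.type=SHOREWALL-DNAT
proxy.proxy.ip=geni-test.renci.ben
proxy.user=orca
proxy.ssh.key=/opt/orca/config/orca-proxy-ssh-key
proxy.script.path=/opt/shorewall-scripts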

Eucanet handler

Usually a NEuca cluster has an ORCA-controlled switch to which ethX (ethX != eth0) of all worker nodes is connected. In order to perform things like site embedding (embedding a topology within a single NEuca site), you must configure another authority actor for VLANs (e.g. unc-net-site or renci-net-site) and use the appropriate handler, which configures VLANs on the switch as needed. The handler definition should look something like this (also shown above in the larger example):

                                        <!-- For Bella 2.2 you have to specify a site-specific handler (e.g. renci.euca.net.xml) 
                                        For Camano 3.0+  a unified handler can be used -->
                                        <handler path="providers/euca-sites/unified.euca.net.xml">
                                                <properties>
                                                        <!-- name of the site that helps locate appropriate property definitions (Camano 3.0+) -->
                                                        <property name="euca.site" value="renci" />
                                                        <!-- credentials for logging into the switch  (Bella 2.2) -->
                                                        <property name="eucanet.credentials" value="/opt/orca/config/eucanet.cred.properties" />
                                                </properties>
                                        </handler>

Accessing VMs Created by NEuca

Each VM created by NEuca is represented as a Unit object in the ConcreteSet of the VM reservation. The unit.manage.ip property can be used to access the management IP of the VM. If the guest specified an SSH key, it can use that key to connect to the machine at the address given in unit.manage.ip.
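
For example, from the guest side (a sketch, assuming the public half of ~/.ssh/id_rsa was passed in config.ssh.key and unit.manage.ip came back as 10.0.0.5; both values are illustrative):

$ ssh -i ~/.ssh/id_rsa root@10.0.0.5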