
Using NEuca in Orca

Overview

This page contains notes on how to use NEuca with Orca.

NEuca Site Authority

To use NEuca, you need a NEuca-enabled Eucalyptus cluster and an Orca site authority actor configured to interact with NEuca.

To define an actor to interact with NEuca, you can use the following template:

    <actor>
        <type>site</type>
        <name>YOUR_ACTOR_NAME</name>
        <guid>GENERATE_YOUR_ACTOR_GUID</guid>
        <pools>
            <pool>
                <type>YOUR_RESOURCE_TYPE_NAME</type>
                <label>Eucalyptus Virtual Machine</label>
                <description>A virtual machine</description>
                <units>10</units>
                <start>2010-01-30T00:00:00</start>
                <end>2011-01-30T00:00:00</end>
                <handler path="ec2/handler.xml">
                    <properties>
                        <!--
                        By default the handler assumes that the keys are under $ORCA_HOME/ec2.
                        If you want the handler to use keys from a different location, specify it here.
                        Note: it must be an absolute path.
                        -->
                        <!-- <property name="ec2.keys" value="path_to_keys_dir" /> -->
                    </properties>
                </handler>
            </pool>
        </pools>
        <controls>
            <control type="YOUR_RESOURCE_TYPE_NAME" class="orca.policy.core.SimpleVMControl" />
        </controls>
    </actor>

Please replace the capitalized placeholders with values appropriate for your setup. You will probably also want to change the units and the start/end parameters for the resource type.
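
To generate a fresh GUID for the actor, any standard UUID generator works; for example, assuming the uuidgen utility is available on your system:

$ uuidgen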

Example NEuca Site Authority

The following is an example config file containing a NEuca site authority actor ('unc-vm-site') corresponding to a NEuca-enabled Eucalyptus installation at UNC. The config file also contains an OPTIONAL network site authority actor ('unc-net-site') that controls a network switch attached to the Eucalyptus installation. Both actors are NDL-enabled, which means the substrate NDL description is specified via the substrate.file property, e.g. <property name="substrate.file" value="/opt/orca/ndl/uncvmsite.rdf" />. Both actors talk to a remote broker actor ('ndl-broker') and delegate resources to it.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<configuration xmlns="http://issg.cs.duke.edu/sharp/boot">
        <actors>
                <actor>
                        <type>site</type>
                        <name>unc-vm-site</name>
                        <guid>a01ca9fd-3bcf-4e4c-b4bf-4ce1b9820785</guid>
                        <description>UNC Euca site authority</description>
                        <pools>
                                <pool factory="orca.boot.inventory.NdlResourcePoolFactory">
                                        <type>unc.vm</type>
                                        <label>Eucalyptus Virtual Machine (UNC)</label>
                                        <description>A virtual machine</description>
                                        <units>10</units>
                                        <start>2010-01-30T00:00:00</start>
                                        <end>2011-01-30T00:00:00</end>
                                        <handler path="ec2/handler.xml">
                                                <properties>
                                                        <property name="ec2.keys" value="/opt/orca/ec2" />
                                                </properties>
                                        </handler>

                                        <attributes>
                                                <attribute>
                                                        <key>resource.domain</key>
                                                        <type>String</type>
                                                        <value>uncvmsite</value>
                                                </attribute>
                                                <attribute>
                                                        <key>resource.memory</key>
                                                        <label>Memory</label>
                                                        <value>128</value>
                                                        <unit>MB</unit>
                                                        <type>integer</type>
                                                </attribute>
                                                <attribute>
                                                        <key>resource.cpu</key>
                                                        <label>CPU</label>
                                                        <value>1/2 of 2GHz Intel Xeon</value>
                                                        <type>String</type>
                                                </attribute>
                                        </attributes>
                                        <properties>
                                                <!-- site NDL description -->
                                                <property name="substrate.file" value="/opt/orca/ndl/uncvmsite.rdf" />
                                        </properties>
                                </pool>
                        </pools>
                        <controls>
                                <control type="unc.vm" class="orca.policy.core.SimpleVMControl" />
                        </controls>
                </actor>

                <actor>
                        <type>site</type>
                        <name>unc-net-site</name>
                        <guid>22d0adec-22c1-488a-9356-a908346c1ded</guid>
                        <description>UNC NET authority</description>
                        <pools>
                                <pool factory="orca.boot.inventory.NdlResourcePoolFactory">
                                        <type>unc.vlan</type>
                                        <label>UNC NET VLAN</label>
                                        <description>A VLAN over UNC NET</description>
                                        <units>5</units>
                                        <start>2010-01-30T00:00:00</start>
                                        <end>2011-01-30T00:00:00</end>
                                        <handler path="controllers/ben/gec9/unc.euca.net.xml" />
                                        <attributes>
                                                <attribute>
                                                        <key>resource.domain</key>
                                                        <type>String</type>
                                                        <value>uncnet</value>
                                                </attribute>
                                                <attribute>
                                                        <key>resource.class.invfortype</key>
                                                        <type>Class</type>
                                                        <value>orca.controllers.ben.broker.NDLVlanInventory</value>
                                                </attribute>
                                        </attributes>
                                        <properties>
                                                <property name="vlan.tag.start" value="16" />
                                                <property name="vlan.tag.end" value="20" />
                                                <!-- site ndl file -->
                                                <property name="substrate.file" value="/opt/orca/ndl/uncNet.rdf" />
                                        </properties>
                                </pool>
                        </pools>
                        <controls>
                                <control type="unc.vlan" class="orca.policy.core.VlanControl" />
                        </controls>
                </actor>

        </actors>
        <topology>
                <edges>
                        <edge>
                                <from name="ndl-broker" guid="25bc9111-9b41-46ab-a96b-3c87f574cfde" type="broker" >
                                        <location protocol="soapaxis2" url="http://geni-ben.renci.org:11080/orca/services/ndl-broker" />
<certificate>
MIICbTCCAdagAwIBAgIETDtgYzANBgkqhkiG9w0BAQUFADB7MQswCQYDVQQGEwJVUzELMAkGA1UE
CBMCTkMxDzANBgNVBAcTBkR1cmhhbTENMAsGA1UEChMEb3JjYTEQMA4GA1UECxMHc2hpcmFrbzEt
MCsGA1UEAxMkMjViYzkxMTEtOWI0MS00NmFiLWE5NmItM2M4N2Y1NzRjZmRlMB4XDTEwMDcxMjE4
MzUxNVoXDTIwMDcwOTE4MzUxNVowezELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAk5DMQ8wDQYDVQQH
EwZEdXJoYW0xDTALBgNVBAoTBG9yY2ExEDAOBgNVBAsTB3NoaXJha28xLTArBgNVBAMTJDI1YmM5
MTExLTliNDEtNDZhYi1hOTZiLTNjODdmNTc0Y2ZkZTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkC
gYEAqcyS60d5t9c3eEud529hYmD/0BrIHGkEevwAtqBb7FFD1X98SB1G8y7gzxplt0xr2Hm72Et+
01qB7YgT6XQHWfJQQW7RUZEnrDbGsS0v6bffY291eeDVd0ZCH1ogzPDlyMqdhSGKsstqZd0CYc2E
zRFNngOIytBu1m59Jr6/FqsCAwEAATANBgkqhkiG9w0BAQUFAAOBgQCpFKta+1JitcfPbti3x3Tj
WqqINj2f/MzwTVZbxV1eW6gLrwc3FRTX8RgAfqn2sl9Igxfzb+GbQbhY2j5iyBsEV90eKjQQitgv
KUA1IpJqVMYiGSohX2jL+uXEK7bujv9eRyNG82Rp+ouWCrDKo7kOVLh/iSD1s8Mrk03/wd3qfw==
</certificate>
                                </from>
                                <to name="unc-net-site" type="site" />
                                <rset>
                                        <type>unc.vlan</type>
                                        <units>10</units>
                                </rset>
                        </edge>
                        <edge>
                                <from name="ndl-broker" guid="25bc9111-9b41-46ab-a96b-3c87f574cfde" type="broker">
                                        <location protocol="soapaxis2" url="http://geni-ben.renci.org:11080/orca/services/ndl-broker" />
<certificate>
MIICbTCCAdagAwIBAgIETDtgYzANBgkqhkiG9w0BAQUFADB7MQswCQYDVQQGEwJVUzELMAkGA1UE
CBMCTkMxDzANBgNVBAcTBkR1cmhhbTENMAsGA1UEChMEb3JjYTEQMA4GA1UECxMHc2hpcmFrbzEt
MCsGA1UEAxMkMjViYzkxMTEtOWI0MS00NmFiLWE5NmItM2M4N2Y1NzRjZmRlMB4XDTEwMDcxMjE4
MzUxNVoXDTIwMDcwOTE4MzUxNVowezELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAk5DMQ8wDQYDVQQH
EwZEdXJoYW0xDTALBgNVBAoTBG9yY2ExEDAOBgNVBAsTB3NoaXJha28xLTArBgNVBAMTJDI1YmM5
MTExLTliNDEtNDZhYi1hOTZiLTNjODdmNTc0Y2ZkZTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkC
gYEAqcyS60d5t9c3eEud529hYmD/0BrIHGkEevwAtqBb7FFD1X98SB1G8y7gzxplt0xr2Hm72Et+
01qB7YgT6XQHWfJQQW7RUZEnrDbGsS0v6bffY291eeDVd0ZCH1ogzPDlyMqdhSGKsstqZd0CYc2E
zRFNngOIytBu1m59Jr6/FqsCAwEAATANBgkqhkiG9w0BAQUFAAOBgQCpFKta+1JitcfPbti3x3Tj
WqqINj2f/MzwTVZbxV1eW6gLrwc3FRTX8RgAfqn2sl9Igxfzb+GbQbhY2j5iyBsEV90eKjQQitgv
KUA1IpJqVMYiGSohX2jL+uXEK7bujv9eRyNG82Rp+ouWCrDKo7kOVLh/iSD1s8Mrk03/wd3qfw==
</certificate>

                                </from>
                                <to name="unc-vm-site" type="site" />
                                <rset>
                                        <type>unc.vm</type>
                                        <units>10</units>
                                </rset>
                        </edge>
                </edges>
        </topology>
</configuration>

Eucalyptus Keys

The ec2.keys path in the config file is important: it must point to the directory containing the Eucalyptus keys and eucarc. In the example config file, the ec2 keys are stored in /opt/orca/ec2. To get the keys for your Eucalyptus account, log in to your Eucalyptus portal with your euca username, say, 'orca'. Go to the Credentials tab and click 'Download Credentials' under 'Credentials ZIP-file'. Unzip the contents of this zip file (which will have a name like euca2-<euca username>-x509.zip) into the ec2.keys directory.
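
For example, a minimal sketch assuming the zip was downloaded to your home directory for user 'orca' and the keys directory is /opt/orca/ec2 as above:

$ cd /opt/orca/ec2
$ unzip ~/euca2-orca-x509.zip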

Associate a private key with your euca account; it can be used to log in to any instances brought up. This key can be generated by issuing the following command after sourcing eucarc:

$ euca-add-keypair keyName

Store the output of this command in a file named keyName and put it in the ec2.keys directory.
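
Putting these steps together, a minimal sketch assuming the keys directory is /opt/orca/ec2 as in the example:

$ cd /opt/orca/ec2
$ source eucarc
$ euca-add-keypair keyName > keyName
$ chmod 0600 keyName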

Edit eucarc in that directory as follows. Comment out the first line by inserting '#' at the beginning of the line, then add the lines below to eucarc. AMI_NAME must correspond to the emi id of a NEuca-enabled image; replace XXXXXX with the correct id. EC2_SSH_KEY must point to the keyName file. EC2_USE_PUBLIC_ADDRESSING should be set to true (boolean, not string) if you want Eucalyptus to use a pool of available public IP addresses, and false otherwise; if not set, it defaults to false. EC2_PING_RETRIES defines how many times to ping the instance after it reaches the 'running' state before tearing it down and exiting with an error; if unset, it defaults to 60 (with a 1-second sleep between attempts).

export AMI_NAME=emi-XXXXXX
export EC2_SSH_KEY=keyName
export EC2_USE_PUBLIC_ADDRESSING=true
export EC2_PING_RETRIES=60
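
If you need to look up the emi id of your NEuca-enabled image, euca2ools can list the registered images once eucarc has been sourced; for example:

$ source eucarc
$ euca-describe-images | grep emi-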

Notes on deploying NEuca Authority

If you are deploying the NEuca authority on the same machine as the Eucalyptus head node, tomcat has to use a different port number, because the default port used by our tomcat (port 8080) is already in use by Eucalyptus. To change the default port from 8080 to another unused port number, say 11080, make the following changes (a scripted version is sketched after the list). All changes are with respect to the webapp/ directory.

  • In package/webapp/conf/server.xml, change "<Connector port="8080" maxHttpHeaderSize..." to "<Connector port="11080" maxHttpHeaderSize...".
  • In ant/build.properties, change "target.port=8080" to "target.port=11080".
  • Change any relevant soapaxis2 url in actor_configs/config.xml for OTHER actors talking to the NEuca actors.
  • Edit one more tomcat file: if your tomcat installation is at /opt/orca/tomcat, edit /opt/orca/tomcat/conf/server.xml, changing "<Connector port="8080" maxHttpHeaderSize..." to "<Connector port="11080" maxHttpHeaderSize...".
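
The same edits can be scripted; a minimal sketch assuming GNU sed and the paths above:

$ sed -i 's/<Connector port="8080"/<Connector port="11080"/' package/webapp/conf/server.xml
$ sed -i 's/target.port=8080/target.port=11080/' ant/build.properties
$ sed -i 's/<Connector port="8080"/<Connector port="11080"/' /opt/orca/tomcat/conf/server.xml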

Testing NEuca from tools/cmdline

You can use a test config file, tools/cmdline/tests/euca.xml, to test the NEuca handler without deploying the webapp. The steps are the following (a combined command sequence is sketched after the list).

  • Make a new directory tools/cmdline/ec2.
  • Copy the contents of your ec2.keys dir (see the section on Eucalyptus keys) into tools/cmdline/ec2.
  • Edit tools/cmdline/tests/euca.xml, uncommenting the line with 'ec2.keys' and providing the ABSOLUTE path to tools/cmdline/ec2. For example, that line might look like: <property name="ec2.keys" value="/home/orca/orca-trunk/trunk/tools/cmdline/ec2" />
  • Set "emulation=false" in tools/cmdline/config/container.properties.
  • Edit ant/tests.xml: in the section that starts with '<target name="test.euca">', change the property "<property name="leaseLength" value="30" />" to "<property name="leaseLength" value="300" />".
  • In tools/cmdline, run 'ant get.packages', then run 'ant test.euca'.

This should fire up one instance on your euca installation. If things go fine, you should be able to log in to the instance at the IP given by unit.manage.ip in the output, using your key <keyName>: ssh -i <keyName> root@<unit.manage.ip>
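
A minimal sketch of the command-line portion of these steps, assuming your ec2.keys directory is /opt/orca/ec2 (the file edits above still have to be made by hand):

$ cd tools/cmdline
$ mkdir ec2
$ cp /opt/orca/ec2/* ec2/
$ ant get.packages
$ ant test.euca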

NOTE: the handler scripts (start.sh/stop.sh, which invoke the ec2 scripts) produce a log file, /tmp/ec2.handler.log, on failure that can contain useful information.

NEuca Handler

The NEuca handler supports creating VMs using the NEuca extension. The current version of the handler supports the following:

  • multiple network interfaces
  • IP addresses per network interface
  • an instance configuration script
  • a guest SSH public key

The support for these features is controlled by one or more properties, which are described below.

Network Interface Configuration

NEuca allows reservation requests to control any network interface other than eth0, which is reserved for internal use. Each network interface must at least be associated with a VLAN tag, and can optionally specify an IP address and subnet mask (in CIDR / notation, e.g., 1.2.3.4/24).

NOTE: the support for specifying an IP address in the ORCA handler is still incomplete.

To specify configuration for eth1, the following properties must be passed to the site authority:

unit.eth1.vlan.tag=20
unit.eth1.mode=vlan
unit.eth1.hosteth=eth0

Here eth0 is assumed to be the interface on the physical host to which the VM attaches its eth1; the correct value is deployment-dependent.

If the VM should have one more interface, it can be specified by passing:

unit.eth2.vlan.tag=21
unit.eth2.mode=vlan
unit.eth2.hosteth=eth0

Instead of specifying an interface-specific VLAN tag, the handler also supports specifying a VLAN tag using the unit.vlan.tag property. In this case the machine can have only one network interface (eth1):

unit.vlan.tag=20
unit.vlan.hosteth=eth0

When both unit.vlan.tag and unit.eth1.vlan.tag are specified, the latter takes precedence.

Instead of attaching VM interfaces to VLANs on physical hosts, it is also possible to attach them to plain host interfaces. In this case only one option is available:

unit.eth1.mode=phys
unit.eth1.hosteth=eth0

The handler also accepts an IP address for a VM interface via the following property:

unit.eth1.ip=1.2.3.4/24

Instance Configuration Script

To pass an instance configuration script, pass the contents (not the location) of the script in the unit.instance.config property:

unit.instance.config=echo "hello world"
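
Since the property value is a single line, one way to pass a slightly longer script is to chain commands; a hypothetical sketch (how the handler treats embedded newlines is not covered here):

unit.instance.config=mkdir -p /opt/demo; echo "hello world" > /opt/demo/hello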

Guest SSH Public Key

By default all NEuca VMs are created to authorize SSH connections using the site authority's private key. To enable the guest to connect as root using its own key, pass the guest's SSH public key in config.ssh.key:

config.ssh.key=GUEST_PUBLIC_SSH_KEY
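
If the guest does not already have a keypair, one can be generated with ssh-keygen; a minimal sketch (guest_key is a hypothetical file name):

$ ssh-keygen -t rsa -f guest_key
$ cat guest_key.pub    # this value goes into config.ssh.key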

Accessing VMs Created by NEuca

Each VM created by NEuca is represented as a Unit object in the ConcreteSet of the VM reservation. The unit.manage.ip property can be used to access the management IP of the VM. If the guest specified an SSH key, it can use that key to connect to the machine at the unit.manage.ip address.
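
For example, continuing the sketch above (guest_key and the address are placeholders):

$ ssh -i guest_key root@<unit.manage.ip>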