Deploying an Authority (Aggregate Manager/AM)

Overview

An ORCA Authority actor runs on behalf of an infrastructure provider (e.g., a cloud site) and exports its resources through ORCA. This section covers deployment steps for a typical ORCA site authority (AM) running an OpenStack or Eucalyptus cloud site. The setup presumes that the container with ORCA is deployed on the head node of the OpenStack or Eucalyptus cluster (although this is not a firm requirement).

The configuration of an ORCA site authority for OpenStack/Eucalyptus consists of the following steps:

  • Set up an OpenStack or Eucalyptus cluster
  • Optionally set up xCAT (if you want to do baremetal node provisioning)
  • Set up Image Proxy
  • Optionally set up the ssh DNAT Proxy. This component is needed only if the cloud site has no publicly routable IP addresses to assign to VMs.
  • Create an ORCA configuration directory ($ORCA_HOME) and populate it with configuration files describing the cloud site and selected policies.
  • Download and customize Tomcat servlet engine (container)
  • Deploy ORCA into Tomcat container

Deploying an OpenStack/xCAT/Eucalyptus/NEuca authority actor

Deployment Overview

Here you can look at ExoGENI OpenStack/xCAT hardware configurations. Alternatively, a less expensive OpenStack/Eucalyptus cluster hardware configuration is also available.

  • The ORCA container is typically set up on the OpenStack/xCAT/Eucalyptus head node. It can be set up on any host with a route to the head node.
  • Image Proxy is typically set up on the head node. It can be set up on any host with a route to the head node. Eucalyptus user tools (euca2ools) must be installed on this host.
  • ssh DNAT proxy must be set up on a host with a publicly routable IP address and a route to the head node

For example, in the simplest case, when the head node has a publicly routable IP address and a pool of public IP addresses to give out to VMs, the ssh DNAT Proxy is not needed, and the ORCA container and Image Proxy can be installed on the head node.

For creating slice dataplane (Layer 2 network connecting nodes within a slice) within a cluster or between clusters, ORCA can use one of the following approaches:

  • The cluster has a network switch that ORCA is empowered to create VLANs on (currently we support Juniper EX3200, Cisco 65xx and 34xx families). It is relatively easy to write a new set of driver tasks for a new switch for ORCA.
  • The cluster is given a pool of VLANs by the network administrator

Each OpenStack/Eucalyptus cluster is usually managed by two ORCA actors. One (named xxx-vm-site) creates intra-site topologies and manages both the cluster and the switch (ExoGENI uses a more sophisticated configuration of actors). The other (named xxx-net-site) connects slivers from this site to other sites at Layer 2, if the cluster has access to a dynamic circuit network (NLR, ION, ESnet) either via direct peering or via a pool of static VLANs.

Setup xCAT

Follow standard instructions.

Setup OpenStack/Eucalyptus

We have modified Eucalyptus and OpenStack to be more friendly to network experimenters (we call these modifications 'NEuca'). For OpenStack, follow these instructions. For Eucalyptus, follow instructions for setting up Eucalyptus with NEuca patches at Eucalyptus 2.x setup with NEuca.

There are instructions on how to use NEuca with ORCA, which we will refer to throughout this document, so it is useful to read through them.

Additional components

We have developed several components that make a cloud site more ORCA- and GENI-friendly. These components are:

  • Image Proxy - downloads and caches shared VM images from the network using HTTP/FTP/bittorrent, and registers them for use within the cloud site. The Image Proxy allows ORCA users to reference images from virtual appliance servers or distribute custom VM images for use at multiple cloud sites.
  • DNAT Proxy - permits public SSH access to VMs/slivers even when they do not have public IP addresses (e.g., for a cluster that is hosted behind a firewall).

Image Proxy is a mandatory component, while the DNAT Proxy is optional. Both components are set up separately and configured for use by ORCA through the $ORCA_HOME/ec2/ec2.site.properties file. The following sections describe how to set up these components. Be sure to match the right versions of the components to your ORCA release.

Image Proxy

When a user requests a VM from a cloud infrastructure service, the request must specify a filesystem image, kernel (optional) and ramdisk (optional) for the VM. The Image Proxy allows ORCA users to name these objects with URLs and deploy images at multiple cloud sites easily, without manual registration. Swarming protocols such as BitTorrent can be used to distribute images to many sites efficiently. Follow the instructions at https://code.renci.org/gf/project/networkedclouds/wiki/?pagename=ImageProxy to set up and run the Image Proxy. To configure ORCA to use the Image Proxy, follow the instructions in ImageProxy with ORCA.

Image Proxy is typically deployed into a separate Axis2 container on the head node. If not, it can be deployed on a separate host that (quick checks for both requirements are sketched after this list):

  • Has a routable path to the head node
  • Has Eucalyptus user tools installed
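
A minimal sanity-check sketch for such a host, assuming a Linux shell and using <head-node> as a placeholder for your head node's address:

$ which euca-describe-images    # confirms euca2ools are installed and on the PATH
$ ping -c 3 <head-node>         # confirms a route to the head node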

See also: Image Proxy with Eucalyptus/NEuca.

ssh DNAT Proxy Tunneling and Using Shorewall

When you need management access to VM instances created in a private address space separated from the public Internet, ssh proxy tunneling can be used. We support Shorewall-DNAT proxy for this purpose. Install and run Shorewall on a machine (the NAT host) that is accessible via the public Internet by following instructions at Shorewall setup. To use Shorewall with ORCA, follow instructions for Shorewall configuration for ORCA.

The DNAT Proxy must be installed on a host that has a publicly routable IP address and has a route to the head node. DNAT Proxy is only needed if the head node has no publicly routable IP address or has no public IP addresses to give out to the VMs.

See also: DNAT Proxy.

Note that the DNAT proxy is intended only for management access to VMs in a slice. Dataplane communications are separate and do not require proxying.
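
For illustration, access is granted by DNAT rules on the NAT host that map a public port to a VM's private SSH port. The addresses and port below are made-up placeholders; the actual rules are managed as described in the Shorewall configuration for ORCA instructions. An entry in /etc/shorewall/rules might look like:

#ACTION   SOURCE   DEST                 PROTO   DEST PORT
DNAT      net      loc:10.103.0.5:22    tcp     1022

With this rule, ssh -p 1022 <nat-host> reaches port 22 on the VM at private address 10.103.0.5.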

ORCA Configuration

As described at the top of this section, you should select a host to run a container with ORCA authority actor(s). All configuration actions are then limited to this host.

Create a MySQL database for ORCA

Follow these instructions.

Prepare $ORCA_HOME directory

Follow instructions on how to set up the ORCA configuration directory structure, generate one GUID for the new container and two GUIDs and certificates for the new actors (make a note of the actor GUIDs).

Be sure to enable the remote actor registry in the container.properties file.
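
For illustration only, enabling the registry amounts to setting a couple of properties; the names below are hypothetical placeholders, so use the names that appear in your container.properties template:

# hypothetical property names -- consult your container.properties template
registry.url=https://<registry-host>:<port>/registry
registry.enabled=true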

After that, create additional directories for storing the site properties and credentials (the ec2.site.properties and ec2.cred.properties files) and the Euca site resource description files (in NDL-OWL):

$ mkdir $ORCA_HOME/ndl
$ mkdir $ORCA_HOME/ec2

OpenStack/Eucalyptus credentials

Use 'nova-manage' to create a user 'orca' in OpenStack, or create a user 'orca' or similar in your Eucalyptus cluster portal. In Eucalyptus, go to the portal and download the user's credentials zip file. Unzip the contents of the Euca credentials zip file into $ORCA_HOME/ec2:

$ cd $ORCA_HOME/ec2
$ unzip ~/euca2-orca-x509.zip 
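
To verify that the credentials work (assuming euca2ools are installed on this host), a quick query against the cloud front end can be run; for OpenStack, source 'novarc' instead:

$ source $ORCA_HOME/ec2/eucarc
$ euca-describe-availability-zones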

For Eucalyptus, comment out the first line in $ORCA_HOME/ec2/eucarc (ORCA uses native EC2 tools to talk to Eucalyptus, rather than the Eucalyptus user tools; the first line confuses the EC2 tools):

#EUCA_KEY_DIR=$(dirname $(readlink -f ${BASH_SOURCE}))

Note that in OpenStack, the file is named 'novarc' instead of 'eucarc'.
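
For example, the line can be commented out with a one-line sed edit (illustrative; substitute novarc for eucarc on OpenStack):

$ sed -i 's/^EUCA_KEY_DIR=/#EUCA_KEY_DIR=/' $ORCA_HOME/ec2/eucarc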

Generate a key pair for ORCA for the OpenStack/Eucalyptus 'orca' user created above. The name of this key pair is used later to populate the "ec2.ssh.key" property in the ec2.site.properties file below.

$ source $ORCA_HOME/ec2/eucarc
$ euca-add-keypair orca > $ORCA_HOME/ec2/orca
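
The saved file contains the private key, so restrict its permissions; the new key pair can also be listed to confirm registration:

$ chmod 600 $ORCA_HOME/ec2/orca
$ euca-describe-keypairs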

Generate and store resource representations for the OpenStack/Eucalyptus Site

Generate the NDL resource description of the cluster site and store it in $ORCA_HOME/ndl. An example of a site NDL resource description can be found here. Consult RENCI staff on how to generate this. Let ORCA_SRC be the root of the downloaded ORCA source. The actor config.xml file will reference these files later:

$ cp $ORCA_SRC/network/src/main/resources/orca/network/rencivmsite.rdf $ORCA_HOME/ndl/.
$ cp $ORCA_SRC/network/src/main/resources/orca/network/renciNet.rdf $ORCA_HOME/ndl/.

$ORCA_HOME/config/ec2.site.properties

Modify orca/trunk/handlers/ec2/ec2.site.sample.properties for your installation. For the ssh DNAT Proxy section, see this document. For the Image Proxy section, see "Handler Integration" in this document. Name this file 'ec2.site.properties' and place it in $ORCA_HOME/config:

$ cp $HOME/ec2.site.sample.properties $ORCA_HOME/config/ec2.site.properties 
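
For illustration, the customized file includes entries along the following lines. Only ec2.ssh.key is confirmed earlier in this document; the other names are hypothetical placeholders, so rely on the sample file for the authoritative property names:

# name of the key pair created for the 'orca' user above
ec2.ssh.key=orca
# hypothetical placeholders -- use the property names from the sample file
ec2.endpoint=http://<head-node>:8773/services/Eucalyptus
ec2.keys.dir=$ORCA_HOME/ec2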

$ORCA_HOME/config/xcat.site.properties

Modify orca/trunk/handlers/xcat/xcat.site.sample.properties for your installation.
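
By analogy with the ec2 properties file above, name the result 'xcat.site.properties' and place it in $ORCA_HOME/config:

$ cp $HOME/xcat.site.sample.properties $ORCA_HOME/config/xcat.site.properties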

Interfacing ORCA with a backplane switch

In order to interface ORCA to a backplane switch, a handler must be configured and a configuration file must be created that tells ORCA the type of switch, its IP address and the credentials to use when configuring it (a sketch of such a file follows this list):

  • For Cisco and Juniper switches, the eucanet handler must be configured with the eucanet.cred.properties file as described in the section "Eucanet handler". The eucanet.cred.properties file must be placed in $ORCA_HOME/config.
  • For OpenFlow-enabled switches, FlowVisor must be installed and paired with the switch and in this case flowvisor handler must be configured and a flowvisor.properties file must be created and placed in $ORCA_HOME/config. Please consult the FlowVisor handler section for more information.
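
A sketch of eucanet.cred.properties under assumed, hypothetical property names (the authoritative names are given in the "Eucanet handler" section):

# hypothetical property names -- see the "Eucanet handler" section for the real ones
switch.type=Cisco6509
switch.ip=<switch management IP>
switch.admin.user=<admin user>
switch.admin.password=<admin password>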

$ORCA_HOME/config/config.xml

An example of a configuration file for a container with site authority actors managing an OpenStack/Eucalyptus/NEuca cluster and a network switch can be found here. Please modify this file to tailor it to your installation. At a minimum, update the GUIDs (use the GUIDs generated above for each actor) and the names and descriptions of the two actors. Verify that

  • substrate.file
  • ec2.site.properties
  • ec2.keys
  • eucanet.credentials

have the correct absolute paths in your environment. Name this file 'config.xml' and place it in $ORCA_HOME/config:

$ cp $HOME/euca-m.renci.ben-config.xml $ORCA_HOME/config/config.xml 

Note that the example file defines two authority actors for a Euca site - one for VMs and internal VLANs (renci-vm-site) and one for external VLANs (renci-net-site). If your site's dataplane switch does not have access to NLR, ION or another programmable network service, you do not need the *-net-site actor. A *-vm-site can create embedded topologies of VMs within a single Euca cluster without creating connections to the outside world.
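
After editing, a quick well-formedness check (assuming the xmllint utility is installed) catches XML typos before the container is started:

$ xmllint --noout $ORCA_HOME/config/config.xml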

Deploy ORCA

ORCA 3.x

The ORCA 3.x series runs inside a Tomcat container. A container must be configured on a particular port with particular security settings, and the ORCA webapp must be deployed into it. Multiple containers (running on different ports) are possible on the same host. Download the binary ORCA webapp war file.
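
A minimal deployment sketch, assuming Tomcat is installed under $TOMCAT_HOME and that the downloaded war file is named orca.war (the actual file name depends on the release):

$ cp orca.war $TOMCAT_HOME/webapps/
$ $TOMCAT_HOME/bin/startup.sh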

ORCA 4.x

The ORCA 4.x series uses an embedded Jetty container and does not require Tomcat. It can be built and started like a regular application.
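
A sketch of the build step, assuming a Maven source tree (the src/main/resources layout referenced above suggests Maven); the exact command to start the embedded-Jetty application depends on the release:

$ cd $ORCA_SRC
$ mvn clean package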

Some troubleshooting tips
