Introduction to ORCA

Overview

ORCA is a software framework and open-source platform for managing a programmatically controllable shared substrate, which may contain any combination of servers, storage, networks, or other components. This class of systems is often called cloud computing or utility computing.

The ORCA software is deployed as a control framework for a prototype GENI facility. We see GENI as an ambitious futuristic vision of cloud networks as a platform for research in network science and engineering.

An ORCA deployment is a dynamic collection of interacting control servers (actors) that work together to provision and configure resources for each guest according to the policies of the participants. The actors represent various stakeholders in the shared infrastructure: substrate providers, resource consumers (e.g., GENI experimenters), and brokering intermediaries that coordinate and federate substrate providers and offer their resources to a set of consumers.

ORCA is based on the foundational abstraction of resource leasing. A lease is a contract involving a resource consumer, a resource provider, and one or more brokering intermediaries. Each actor may manage large numbers of independent leases involving different participants.
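
As a concrete illustration of the abstraction (not ORCA's actual classes, whose names and fields differ), a lease can be pictured as a small record binding the parties to a set of slivers for a bounded term:

    import java.time.Instant;
    import java.util.List;
    import java.util.UUID;

    // Illustrative sketch only; ORCA's real lease classes are different.
    // A lease binds a consumer (SM), a provider (AM), and the brokering
    // intermediary involved in the contract to a set of slivers for a term.
    public class LeaseSketch {
        UUID leaseGuid;          // chosen by the requesting SM
        String consumer;         // slice/service manager that holds the lease
        String provider;         // authority/AM that grants the slivers
        String broker;           // brokering intermediary party to the contract
        List<UUID> sliverGuids;  // slivers covered by this lease
        Instant start;           // beginning of the lease term
        Instant end;             // expiration, after which resources are reclaimed
    }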

This document is meant to present a brief overview of ORCA architecture, principles and operation. For more information please refer to the Further Reading section below.

Terminology

  • Substrate - a collection of resources under specific administrative ownership. Examples of substrate include servers (virtualizable or not), network links, data sets, and measurement resources.
  • Sliver - the smallest unit of a resource that is independently programmable and/or independently controllable in some resource-specific fashion. Each sliver is granted by a single substrate provider. An example of a sliver is a virtual machine created within a specific provider's cloud.
  • Slice - a grouping of multiple slivers, possibly from multiple substrate providers. Every sliver is allocated to exactly one slice.
  • Guest - a distributed software environment running within a collection of slivers configured to order, possibly from different substrate providers. Some guests will be long-running services that require different amounts of resources at different stages of execution. Guests may range from virtual desktops to complex experiments to dynamic instantiations of distributed applications and network services.
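
The relationships among these terms can be sketched in a few lines of illustrative Java (the type names below are invented for exposition and are not ORCA types):

    import java.util.List;
    import java.util.UUID;

    // Invented types for exposition only: a slice groups slivers that may come
    // from multiple substrate providers, and each sliver belongs to exactly one slice.
    class SliverSketch {
        UUID guid;        // assigned by the granting substrate provider (AM)
        String provider;  // the single provider that granted this sliver
    }

    class SliceSketch {
        UUID guid;                   // assigned by the SM that created the slice
        List<SliverSketch> slivers;  // slivers possibly drawn from different providers
    }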

Principles of operation

There are three basic actor roles in the architecture, representing the providers, consumers, and intermediaries respectively. There can be many instances of each actor type, e.g., representing different substrate providers or resource consumers.

  • Authority or Aggregate Manager (AM). An authority actor controls access to some subset of the substrate components. It corresponds directly to the aggregate manager (AM) in GENI. Typically, an authority controls some set of infrastructure resources in a particular site, autonomous system, transit domain, administrative domain, or component aggregate comprising a set of servers, storage units, network elements, or other components under common ownership and control. Terminology note: the term site or site authority (e.g., a cluster site or hosting center) is often used to refer to a substrate authority/AM, as a result of our roots in virtual cloud computing. For network substrates we more often use the term domain authority where appropriate.
  • Slice/Service Manager (SM) or Slice Controller. This actor is responsible for creating, configuring, and adapting one or more slices. It runs on behalf of the slice owners to build each slice to meet the needs of a guest that inhabits the slice. Terminology note: this actor was originally called a Service Manager in SHARP (and in the Shirako code) because the guest was presumed to be a service. As GENI has developed, we have adopted the term Slice Manager (SM) because the actor’s role is to control the slice, rather than the guest itself, and because in GENI the guest is an experiment rather than a service. A Slice Controller is a plugin module to an SM actor, with a control policy for slices managed by that SM.
  • Broker. A broker mediates resource discovery and arbitration by controlling the scheduling of resources at one or more substrate providers over time. It may be viewed as a service that runs within a GENI clearinghouse. A key principle in ORCA is that the broker can have specific allocation power delegated to it by one or more substrate authorities, i.e., the substrate providers “promise” to abide by allocation decisions made by the broker with respect to their delegated substrate. This power enables the broker to arbitrate resources and coordinate allocation across multiple substrate providers, as a basis for federation and scheduling of complex slices across multiple substrate aggregates. Brokers exercise this power by issuing tickets that are redeemable for leases.

ORCA architecture

Authorities delegate resources to brokers (one authority can delegate resources to one or more brokers). Brokers hold the promised resources until SMs request them. Brokers issue tickets for resources from different authorities to an SM, which redeems those tickets at the AMs (authorities). The AMs instantiate the resource slivers and pass control of them to the SM. Pluggable control and access policies help mediate these transactions. Query interfaces allow actors to query the state of other actors.
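
This delegate/ticket/redeem/lease sequence can be summarized in a short hypothetical Java sketch; all of the type and method names below are invented for illustration and do not correspond to ORCA's actual actor interfaces:

    // Hypothetical sketch of the delegate -> ticket -> redeem -> lease sequence.
    // Names are invented for illustration; they are not ORCA's real APIs.
    class Ticket {}
    class Lease {}

    interface Broker {
        // An authority delegates (promises) part of its substrate to the broker.
        void acceptDelegation(String authority, String resourceType, int units);
        // An SM requests a ticket against the broker's delegated resources.
        Ticket requestTicket(String slice, String resourceType, int units);
    }

    interface Authority {
        // The SM redeems the ticket at the issuing authority, which instantiates
        // the slivers and returns a lease giving the SM control of them.
        Lease redeem(Ticket ticket);
    }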

Slices are used to group resources together. A slice in a Service Manager represents resources owned by a particular user (e.g., for the purpose of an experiment). If a Service Manager acquires resources from multiple brokers, the brokers are not aware of each other. A slice in an authority represents the resources given by that authority to a particular service manager slice. Note that, architecturally, in ORCA only the Service Manager knows the exact composition of the experimenter's slice; all other actors may have only a partial view of it.

Naming

Actors in ORCA are referred to by names and GUIDs. Each actor must be identified by a unique name and GUID.

Slices are referred to by names and GUIDs, which are normally generated automatically by the actors:

  • The GUID for a lease is selected by the SM that requested it. The lease properties are a union of the resource type properties, derived by the broker from the containing resource pool, and configuration properties specified by the requesting SM.
  • The GUID for a sliver is assigned by the granting AM and is returned in any lease covering the sliver.
  • The GUID for a slice is assigned by the SM that wishes to create a slice for the purpose of grouping its leases. Creating a slice is not a privileged operation. The creating SM may also attach properties to the slice.
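
For example, an SM creating a slice for its leases might do little more than generate a GUID, pick a name, and attach a few properties. The snippet below is purely illustrative and does not use ORCA's actual slice API; the property names are hypothetical:

    import java.util.Properties;
    import java.util.UUID;

    // Illustrative only: slice creation amounts to choosing a GUID and a name and
    // optionally attaching properties; it is not a privileged operation.
    public class SliceNamingSketch {
        public static void main(String[] args) {
            UUID sliceGuid = UUID.randomUUID();   // GUID assigned by the creating SM
            String sliceName = "my-experiment";   // human-readable slice name
            Properties props = new Properties();
            props.setProperty("owner", "experimenter@example.org");  // hypothetical property
            props.setProperty("purpose", "throughput measurement");  // hypothetical property
            System.out.println(sliceName + " " + sliceGuid + " " + props);
        }
    }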

Plugins

The core of ORCA is neutral to the types of resources, their control policies and ways of controlling them (e.g. instantiating slivers). These details are implemented via a number of plugins of different types. Plugins are registered with the actor and are upcalled on various events. There are four plugin interfaces of primary interest to integrators and operators:

  • Controllers. Each actor invokes a policy controller module in response to periodic clock ticks. Clocked controllers can monitor lease status or external conditions and take autonomous action to respond to changes. Shirako provides APIs for policy controllers to iterate collections of leases, and monitor and generate events on leases. Any calendar-based state is encapsulated in the controllers. Controllers may also create threads to receive instrumentation streams and/or commands from an external source.
  • ResourceControl. At an authority/AM, the mapping (also called binding or embedding) of slivers onto components is controlled by an assignment or resource control policy. The policy is supplied as a plugin module implementing the IResourceControl interface. ResourceControl is indexed and selectable by resource type, so requests for slivers of different types may have different policies, even within the same AM.
  • Resource handlers. The authority/AM actor upcalls a handler interface to set up and tear down each sliver. Resource handlers perform any substrate-specific configuration actions needed to implement slivers. The handler interface includes a probe method to poll the current status of a sliver and a modify method to adjust a sliver's attributes (a minimal handler sketch appears after this list).
  • Guest handlers. The SM leasing engine upcalls a handler interface on each sliver to join it to a slice and to clean up before it leaves the slice. Guest handlers are intended for guest-specific actions such as installing layered software packages or user keys within a sliver, launching experiment tasks, and registering roles and relationships for different slivers in the slice (contextualization). Of course, some slivers might not be programmable or user-customizable after setup: such slivers do not need a guest handler.
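
To make the handler idea concrete, the following sketch shows what a substrate handler for virtual-machine slivers might look like. The interface name, method signatures, and behavior are invented for illustration and do not match ORCA's actual handler plugin API:

    import java.util.Map;

    // Hypothetical handler interface and implementation; names and signatures are
    // invented for illustration and do not match ORCA's real plugin interfaces.
    interface SliverHandler {
        void setup(String sliverGuid);     // substrate-specific configuration
        void teardown(String sliverGuid);  // release the sliver's resources
        String probe(String sliverGuid);   // poll the sliver's current status
        void modify(String sliverGuid, Map<String, String> attrs); // adjust attributes
    }

    class VmHandlerSketch implements SliverHandler {
        public void setup(String sliverGuid) {
            // e.g., create the VM, install user keys, attach requested VLANs
            System.out.println("setup sliver " + sliverGuid);
        }
        public void teardown(String sliverGuid) {
            System.out.println("teardown sliver " + sliverGuid);
        }
        public String probe(String sliverGuid) {
            return "active";  // a real handler would query the substrate
        }
        public void modify(String sliverGuid, Map<String, String> attrs) {
            System.out.println("modify sliver " + sliverGuid + " -> " + attrs);
        }
    }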

All of these plugins can be specified in the actor configuration file immediately prior to deployment.

Implementation

ORCA is implemented as a webapp intended to run inside a Tomcat Java servlet engine; the webapp is packaged as a WAR file. Internally the webapp implements a container in which one or more ORCA actors run. Actors can communicate with other actors across multiple containers. Actors digitally sign their communications using self-signed certificates (although using certificates issued by a commercial CA is also possible). SSL is not used. We believe that state-changing commands or actions must be signed so that actions are non-repudiable and actors can be made accountable for them; SSL alone is not sufficient for this purpose. Since we are concerned with message integrity and authenticity, rather than privacy, SSL is not necessary.
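
To illustrate the general principle of signed, non-repudiable messages (this is not ORCA's actual wire protocol or message format), a state-changing message could be signed and verified with standard Java APIs roughly as follows:

    import java.nio.charset.StandardCharsets;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.Signature;

    // Illustrates the idea of signing state-changing messages so they are
    // non-repudiable; this is not ORCA's actual message format or protocol.
    public class SigningSketch {
        public static void main(String[] args) throws Exception {
            // In ORCA the key pair would back the actor's self-signed certificate.
            KeyPair actorKeys = KeyPairGenerator.getInstance("RSA").generateKeyPair();

            byte[] message = "redeem ticket 1234".getBytes(StandardCharsets.UTF_8);

            // The sender signs the message with its private key.
            Signature signer = Signature.getInstance("SHA256withRSA");
            signer.initSign(actorKeys.getPrivate());
            signer.update(message);
            byte[] signature = signer.sign();

            // The receiver verifies with the sender's public key (from its certificate).
            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(actorKeys.getPublic());
            verifier.update(message);
            System.out.println("signature valid: " + verifier.verify(signature));
        }
    }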

ORCA currently uses a slightly modified version of Tomcat 5.5 available here. Starting with Camano 3.1 it is possible to run ORCA inside Jetty.

Most of the ORCA code is written in Java, although substrate handlers (parts of code responsible for creating and destroying slivers of different kinds) are implemented as a combination of Ant scripts, Java tasks and bash scripts. ORCA user tools that speak to its GENI and ProtoGENI AM API-compliant controller are written in Python.

The code is organized as a series of Maven projects under a common tree and can be compiled and built using only a few steps, assuming software prerequisites are met. At this time a pre-packaged distribution is also available. A new user can either download the binary WAR file or download and compile the source release of the code (we strongly recommend using the latest available stable release), create configuration files, and deploy the (downloaded or built) WAR file into the Tomcat servlet engine.

Currently ORCA supports a number of different substrate types: Eucalyptus private cloud clusters (with NEuca extensions), NLR Sherpa dynamic VLAN service, BEN multi-layered optical network and a number of testbeds.

[Figure: ORCA delegation]

Configuration

Configuring ORCA requires creating several configuration files:

  • a container.properties file, which defines the basic properties of the entire container. A sample file is available here.
  • an XML file that describes the actors that will run inside this container and their connections with each other and with actors in other containers (the actor topology).
  • for authority actors, a file describing the substrate available to them, expressed in NDL-OWL, an expressive language based on Semantic Web technologies. A number of sample files are available, and we can help create new files for new sites.

The container.properties file is stored outside the webapp and is read at webapp startup (either when the webapp is deployed into a running Tomcat engine, or when the Tomcat engine is restarted) and therefore can be modified at any time without affecting other parts of ORCA.

A default actor configuration XML file is included with the binary distribution of the webapp and with the source code. This file starts a single-container emulation of three basic ORCA actors: an authority, a broker, and a service manager. Emulation refers to the fact that ORCA does not operate on real substrate and only goes through the motions of managing reservations. An alternative configuration XML file can be created and placed outside of ORCA, in which case the default file is ignored. Also, if you are rebuilding ORCA from source, you can package your own version of the actor configuration XML file into the WAR file.

A relatively new feature of ORCA is an ORCA actor registry - a separate component that is run as a service for all ORCA users. If desired, actors in any container can register with this registry and thus automatically become visible to all other publicly available actors. This feature allows for a much simpler configuration process, especially the actor topology part. In many cases using the ORCA Actor Registry makes the actor topology definition in the actor configuration XML file unnecessary.

Operation

A deployed ORCA container can be accessed through a browser-friendly portal. In the current implementation, this web portal provides a common GUI for all of the actors in a single container. A container owner can manage all the actors in her container through the web portal, including creating resource delegations from authorities, claiming resources from brokers, and creating resource reservations from service managers. A standard ORCA distribution includes an XMLRPC controller compliant with the GENI AM and ProtoGENI AM APIs that can be accessed via XMLRPC tools. This controller can be started when a container that has a service manager actor is deployed. Other controllers exist, and new ones can be created with any desired interface (GUI or command-line).
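
As an example of how such a tool might invoke the controller, the sketch below calls the GENI AM API GetVersion method over XML-RPC using the Apache XML-RPC client library (the project's own user tools are written in Python); the endpoint URL and port are hypothetical and authentication details are omitted:

    import java.net.URL;
    import org.apache.xmlrpc.client.XmlRpcClient;
    import org.apache.xmlrpc.client.XmlRpcClientConfigImpl;

    // Sketch of calling the GENI AM API GetVersion method on an ORCA controller
    // via XML-RPC. The endpoint URL is hypothetical and authentication is omitted.
    public class GetVersionSketch {
        public static void main(String[] args) throws Exception {
            XmlRpcClientConfigImpl config = new XmlRpcClientConfigImpl();
            config.setServerURL(new URL("https://orca.example.org:11443/orca/xmlrpc"));

            XmlRpcClient client = new XmlRpcClient();
            client.setConfig(config);

            // GetVersion takes no arguments and returns a structure describing the AM.
            Object version = client.execute("GetVersion", new Object[0]);
            System.out.println(version);
        }
    }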

ORCA includes advanced recovery features that allow individual containers to shut down and come back up while the actors within retain their state. Recovery can be turned on or off when a container is restarted, depending on the need.

Further reading
