Version 26 (modified by ibaldin, 8 years ago)


Introduction to ORCA


Orca is a software framework and open-source platform to manage a programmatically controllable shared substrate, which may include servers, storage, networks, or other components. This class of systems is often called cloud computing or utility computing.

The Orca software is deployed as a control framework for a prototype GENI facility. We see GENI as an ambitious futuristic vision of cloud networks as a platform for research in network science and engineering.

An Orca deployment is a dynamic collection of interacting control servers (actors) that work together to provision and configure resources for each guest according to the policies of the participants. The actors represent various stakeholders in the shared infrastructure: substrate providers, resource consumers (e.g., GENI experimenters), and brokering intermediaries that coordinate and federate substrate providers and offer their resources to a set of consumers.

Orca is based on the foundational abstraction of resource leasing. A lease is a contract involving a resource consumer, a resource provider, and one or more brokering intermediaries. Each actor may manage large numbers of independent leases involving different participants.

This document presents a brief overview of ORCA architecture, principles, and operation. For more information, please refer to the Further Reading section below.


  • Substrate - a collection of resources under specific administrative ownership. Examples of substrate include servers (virtualizable or not), network links, data sets, and measurement resources.
  • Sliver - the smallest unit of some resource that is independently programmable and/or independently controllable in some resource-specific fashion. Each sliver is granted from a single substrate provider. An example of a sliver is a virtual machine created inside the cloud of a specific provider.
  • Slice - a grouping of multiple slivers, possibly from multiple substrate providers.
  • Guest - a distributed software environment running within a collection of slivers configured to order, possibly from different substrate providers. Some guests will be long-running services that require different amounts of resources at different stages of execution. The guests may range from virtual desktops to complex experiments to dynamic instantiations of distributed applications and network services.
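The relationships among these terms can be sketched in code. This is a minimal illustration of the data model described above, not ORCA's actual API; all class and field names here are hypothetical.

```java
import java.util.*;

// Hypothetical sketch: a sliver is granted by exactly one provider;
// a slice groups slivers that may come from many providers.
class Sliver {
    final String guid;        // assigned by the granting provider
    final String providerId;  // each sliver comes from a single substrate provider
    Sliver(String guid, String providerId) {
        this.guid = guid;
        this.providerId = providerId;
    }
}

class Slice {
    final String name;
    final List<Sliver> slivers = new ArrayList<>();
    Slice(String name) { this.name = name; }
    void add(Sliver s) { slivers.add(s); }
    // a slice may span multiple substrate providers
    Set<String> providers() {
        Set<String> p = new HashSet<>();
        for (Sliver s : slivers) p.add(s.providerId);
        return p;
    }
}
```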

Principles of operation

There are three basic actor roles in the architecture, representing the providers, consumers, and intermediaries respectively. There can be many instances of each actor type, e.g., representing different substrate providers or resource consumers.

  • Authority or Aggregate Manager (AM). An authority actor controls access to some subset of the substrate components. It corresponds directly to the aggregate manager (AM) in GENI. Typically, an authority controls some set of infrastructure resources in a particular site, autonomous system, transit domain, administrative domain, or component aggregate comprising a set of servers, storage units, network elements, or other components under common ownership and control. Terminology note: the term site or site authority (e.g., a cluster site or hosting center) is often used to refer to a substrate authority/AM, as a result of our roots in virtual cloud computing. For network substrates we use the term domain authority where appropriate.
  • Slice/Service Manager (SM) or Slice Controller. This actor is responsible for creating, configuring, and adapting one or more slices. It runs on behalf of the slice owners to build each slice to meet the needs of a guest that inhabits the slice. Terminology note. This actor was originally called a service manager in SHARP (and in the Shirako code) because the guest was presumed to be a service. As GENI has developed, we have adopted the term slice controller because the actor’s role is to control the slice, rather than the guest itself, and because in GENI the guest is an experiment rather than a service. Slice Manager (also SM) is also OK since the “controller” is properly speaking a plugin module to the actor itself.
  • Broker. A broker mediates resource discovery and arbitration by controlling the scheduling of resources at one or more substrate providers over time. It may be viewed as a service that runs within a GENI clearinghouse. A key principle in Orca is that the broker can have specific allocation power delegated to it by one or more substrate authorities, i.e., the substrate providers “promise” to abide by allocation decisions made by the broker with respect to their delegated substrate. This power enables the broker to arbitrate resources and coordinate allocation across multiple substrate providers, as a basis for federation and scheduling of complex slices across multiple substrate aggregates. Brokers exercise this power by issuing tickets that are redeemable for leases.

Orca Architecture

Authorities delegate resources to brokers (one authority can delegate resources to several brokers). Brokers hold the promised resources until Service Managers request them. Brokers issue tickets for resources from different authorities to a Service Manager, and the Service Manager redeems those tickets at the Authorities. Authorities instantiate the resource slivers and pass control of them to the Service Manager. Pluggable control and access policies help mediate these transactions. Query interfaces allow actors to asynchronously determine the state of other actors.
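The delegate → ticket → redeem flow above can be sketched as follows. This is a simplified illustration under assumed names (Broker, Authority, string-encoded tickets); it is not ORCA's actual ticket format or API.

```java
import java.util.*;

// Hypothetical sketch of the delegate -> ticket -> redeem flow.
// The broker may only issue tickets against capacity the authority
// has delegated ("promised") to it.
class Broker {
    private final Map<String, Integer> delegated = new HashMap<>(); // authority -> units promised

    void delegate(String authority, int units) {
        delegated.merge(authority, units, Integer::sum);
    }

    // Issue a ticket if enough delegated capacity remains at that authority.
    Optional<String> issueTicket(String authority, int units) {
        int avail = delegated.getOrDefault(authority, 0);
        if (avail < units) return Optional.empty();
        delegated.put(authority, avail - units);
        return Optional.of("ticket:" + authority + ":" + units);
    }
}

class Authority {
    // The authority honors the broker's ticket by instantiating slivers
    // and returning a lease (represented here as a string).
    String redeem(String ticket) {
        return ticket.replace("ticket:", "lease:");
    }
}
```

Note that the broker decrements its delegated capacity when it issues a ticket, which is what lets it arbitrate across multiple Service Managers without contacting the authority on each request.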

Slices are used to group resources together. A slice in a Service Manager represents resources owned by a particular user (e.g. for the purpose of an experiment). A slice in a Broker represents the portion of slivers given to a particular service manager slice from this Broker. If a Service Manager acquires resources from multiple brokers, the brokers are not aware of each other. A slice in an Authority represents the resources given by this authority to a particular service manager slice. Notice that architecturally in ORCA only the Service Manager knows the exact composition of the experimenter's slice. All other actors may have only a partial view of the experimenter's slice.


Actors in ORCA are referred to by names and GUIDs. Each actor must be identified by a unique name and a unique GUID.

Leases, slivers, and slices are referred to by names and GUIDs. These are normally generated automatically by the various actors:

  • The GUID for a lease is selected by the SM that requested it. The lease properties are a union of the resource type properties, derived by the broker from the containing resource pool, and configuration properties specified by the requesting SM.
  • The GUID for a sliver is assigned by the granting AM and is returned in any lease covering the sliver.
  • The GUID for a slice is assigned by the SM that wishes to create a slice for the purpose of grouping its leases. Creating a slice is not a privileged operation. The creating SM may also attach properties to the slice.
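To make the lease-property union concrete, here is a minimal sketch. The class and method names are hypothetical, not ORCA's API; only the merge semantics (resource type properties from the broker, overlaid with configuration properties from the requesting SM) follow the description above.

```java
import java.util.*;
import java.util.UUID;

// Hypothetical sketch of lease-property construction.
class LeaseSketch {
    // The SM selects the lease GUID when it requests the lease.
    static String newLeaseGuid() {
        return UUID.randomUUID().toString();
    }

    // Lease properties are a union of the resource type properties
    // (derived by the broker from the containing resource pool) and
    // the configuration properties specified by the requesting SM.
    static Map<String, String> leaseProperties(Map<String, String> typeProps,
                                               Map<String, String> configProps) {
        Map<String, String> all = new HashMap<>(typeProps);
        all.putAll(configProps);
        return all;
    }
}
```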


The core of ORCA is neutral to the types of resources, their control policies and ways of controlling them (e.g. instantiating slivers). These details are implemented via a number of plugins of different types. Plugins are registered with the actor and are upcalled on various events. There are four plugin interfaces of primary interest to integrators and operators:

  • Controllers. Each actor invokes a policy controller module in response to periodic clock ticks. Clocked controllers can monitor lease status or external conditions and take autonomous action to respond to changes. Shirako provides APIs for policy controllers to iterate collections of leases, and monitor and generate events on leases. Any calendar-based state is encapsulated in the controllers. Controllers may also create threads to receive instrumentation streams and/or commands from an external source.
  • ResourceControl. At an authority/AM, the mapping (also called binding or embedding) of slivers onto components is controlled by an assignment or resource control policy. The policy is implemented in a plugin module implementing the IResourceControl interface. ResourceControl is indexed and selectable by resource type, so requests for slivers of different types may have different policies, even within the same AM.
  • Resource handlers. The authority/AM actor upcalls a handler interface to setup and teardown each sliver. Resource handlers perform any substrate-specific configuration actions needed to implement slivers. The handler interface includes a probe method to poll the current status of a sliver, and modify to adjust attributes of a sliver.
  • Guest handlers. The SM leasing engine upcalls a handler interface on each sliver to join it to a slice and to clean up before it leaves the slice. Guest handlers are intended for guest-specific actions such as installing layered software packages or user keys within a sliver, launching experiment tasks, and registering roles and relationships for different slivers in the slice (contextualization). Of course, some slivers might not be programmable or user-customizable after setup: such slivers do not need a guest handler.
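The shapes of the four plugin interfaces above can be summarized roughly as follows. Except for the IResourceControl name and the probe/modify/join/leave operations the text mentions, the method signatures here are illustrative guesses, not ORCA's actual interface definitions.

```java
// Hypothetical sketch of the four plugin interfaces an integrator implements.

// Clocked policy module, invoked on periodic clock ticks at any actor.
interface Controller {
    void tick(long cycle);
}

// At the authority/AM: maps sliver requests onto substrate components;
// indexed and selectable by resource type.
interface IResourceControl {
    String assign(String request);
}

// At the authority/AM: substrate-specific setup/teardown of each sliver,
// plus probe (poll current status) and modify (adjust sliver attributes).
interface ResourceHandler {
    void setup(String sliver);
    void teardown(String sliver);
    String probe(String sliver);
    void modify(String sliver, String attrs);
}

// At the SM: guest-specific actions when a sliver joins or leaves a slice.
interface GuestHandler {
    void join(String sliver);
    void leave(String sliver);
}
```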

All these plugins are specified in the actor configuration file immediately prior to deployment.


ORCA is implemented as a webapp intended to run inside a Tomcat Java servlet engine. A webapp implements a container in which one or more ORCA actors run. Actors can communicate with other actors across multiple containers. Actors digitally sign their communications using self-signed certificates (using certificates issued by a commercial CA is also possible). SSL is not used. We believe that state-changing commands or actions must be signed so that actions are non-repudiable and actors can be made accountable for their actions. SSL alone is not sufficient for this purpose. Given that we are concerned with message integrity and authenticity, and not privacy, SSL is not necessary either.
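A minimal sketch of signing and verifying a state-changing message with a keypair (such as the key behind an actor's self-signed certificate) looks like the following. This uses only the standard java.security API and is not ORCA's actual wire format; the class name and message encoding are assumptions for illustration.

```java
import java.security.*;

// Hypothetical sketch: sign a message so the action is non-repudiable
// and the sender can be held accountable, as described above.
class SignedMessage {
    static KeyPair newKeyPair() {
        try {
            KeyPairGenerator g = KeyPairGenerator.getInstance("RSA");
            g.initialize(2048);
            return g.generateKeyPair();
        } catch (GeneralSecurityException e) {
            throw new RuntimeException(e);
        }
    }

    // Sign the raw message bytes with the actor's private key.
    static byte[] sign(PrivateKey key, byte[] msg) {
        try {
            Signature s = Signature.getInstance("SHA256withRSA");
            s.initSign(key);
            s.update(msg);
            return s.sign();
        } catch (GeneralSecurityException e) {
            throw new RuntimeException(e);
        }
    }

    // Verify a signature against the sender's public key (from its certificate).
    static boolean verify(PublicKey key, byte[] msg, byte[] sig) {
        try {
            Signature s = Signature.getInstance("SHA256withRSA");
            s.initVerify(key);
            s.update(msg);
            return s.verify(sig);
        } catch (GeneralSecurityException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Because the signature covers the message itself, a tampered message fails verification even though the channel is unencrypted, which is the integrity-and-authenticity property the design relies on.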

ORCA currently uses a slightly modified version of Tomcat 5.5, available here. Off-the-shelf Tomcat or other servlet engines (like Jetty) will not work.

Further reading