Version 24 (modified by ibaldin, 8 years ago)


Introduction to ORCA

Overview

Orca is a software framework and open-source platform to manage a programmatically controllable shared substrate, which may include servers, storage, networks, or other components. This class of systems is often called cloud computing or utility computing.

The Orca software is deployed as a control framework for a prototype GENI facility. We see GENI as an ambitious, futuristic vision of cloud networks as a platform for research in network science and engineering.

An Orca deployment is a dynamic collection of interacting control servers (actors) that work together to provision and configure resources for each guest according to the policies of the participants. The actors represent various stakeholders in the shared infrastructure: substrate providers, resource consumers (e.g., GENI experimenters), and brokering intermediaries that coordinate and federate substrate providers and offer their resources to a set of consumers.

Orca is based on the foundational abstraction of resource leasing. A lease is a contract involving a resource consumer, a resource provider, and one or more brokering intermediaries. Each actor may manage large numbers of independent leases involving different participants.
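The lease contract described above can be sketched as a small data structure. This is an illustrative sketch only: the `Lease` record, its field names, and the `isActiveAt` helper are hypothetical, not actual ORCA classes.

```java
import java.time.Instant;
import java.util.UUID;

public class LeaseSketch {
    // Hypothetical model of a lease: it binds a consumer, a provider, and
    // a broker to a quantity of slivers for a bounded term.
    record Lease(UUID leaseGuid,
                 String consumer,   // e.g. a Service Manager
                 String provider,   // the granting Authority (AM)
                 String broker,     // the intermediary that issued the ticket
                 int units,         // number of slivers covered
                 Instant start,
                 Instant end) {
        // A lease is active from its start time (inclusive) to its end time (exclusive).
        boolean isActiveAt(Instant t) {
            return !t.isBefore(start) && t.isBefore(end);
        }
    }

    public static void main(String[] args) {
        Instant now = Instant.now();
        Lease l = new Lease(UUID.randomUUID(), "sm-1", "am-example",
                            "broker-example", 4, now, now.plusSeconds(3600));
        System.out.println(l.isActiveAt(now.plusSeconds(10)));   // true
        System.out.println(l.isActiveAt(now.plusSeconds(7200))); // false
    }
}
```

Because each actor may hold many such leases with different participants, real lease state also carries policy and configuration properties; this sketch shows only the contractual core.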

This document presents a brief overview of ORCA architecture, principles, and operation. For more information, please refer to the Further Reading section below.

Terminology

  • Substrate - a collection of resources under specific administrative ownership. Examples of substrate include servers (virtualizable or not), network links, data sets, and measurement resources.
  • Sliver - the smallest unit of some resource that is independently programmable and/or independently controllable in some resource-specific fashion. Each sliver is granted from a single substrate provider. An example of a sliver is a virtual machine created inside the cloud of a specific provider.
  • Slice - a grouping of multiple slivers from multiple substrate providers.
  • Guest - a distributed software environment running within a collection of slivers configured to order, possibly from different substrate providers. Some guests are long-running services that require different amounts of resources at different stages of execution. Guests may range from virtual desktops to complex experiments to dynamic instantiations of distributed applications and network services.
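The sliver/slice relationship in the terminology above can be sketched as a minimal data model. The class and field names here are hypothetical illustrations, not ORCA's actual types: the key point is that each sliver comes from a single provider, while a slice may span several.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.UUID;

public class SliceModel {
    // A sliver is granted from exactly one substrate provider.
    record Sliver(UUID guid, String provider, String type) {}

    // A slice groups slivers that may come from different providers.
    static class Slice {
        final UUID guid = UUID.randomUUID();
        final List<Sliver> slivers = new ArrayList<>();

        // Distinct substrate providers contributing to this slice.
        Set<String> providers() {
            Set<String> p = new HashSet<>();
            for (Sliver s : slivers) p.add(s.provider());
            return p;
        }
    }

    public static void main(String[] args) {
        Slice slice = new Slice();
        slice.slivers.add(new Sliver(UUID.randomUUID(), "provider-a", "vm"));
        slice.slivers.add(new Sliver(UUID.randomUUID(), "provider-b", "vlan"));
        System.out.println(slice.providers().size()); // 2
    }
}
```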

Principles of operation

There are three basic actor roles in the architecture, representing the providers, consumers, and intermediaries respectively. There can be many instances of each actor type, e.g., representing different substrate providers or resource consumers.

  • Authority or Aggregate Manager (AM). An authority actor controls access to some subset of the substrate components. It corresponds directly to the aggregate manager (AM) in GENI. Typically, an authority controls some set of infrastructure resources in a particular site, autonomous system, transit domain, administrative domain, or component aggregate comprising a set of servers, storage units, network elements, or other components under common ownership and control. Terminology note: the term site or site authority (e.g., a cluster site or hosting center) is often used to refer to a substrate authority/AM, as a result of our roots in virtual cloud computing. For network substrates we are using the term domain authority more often when it is appropriate.
  • Slice/Service Manager (SM) or Slice Controller. This actor is responsible for creating, configuring, and adapting one or more slices. It runs on behalf of the slice owners to build each slice to meet the needs of a guest that inhabits the slice. Terminology note: this actor was originally called a service manager in SHARP (and in the Shirako code) because the guest was presumed to be a service. As GENI has developed, we have adopted the term slice controller because the actor’s role is to control the slice rather than the guest itself, and because in GENI the guest is an experiment rather than a service. Slice Manager (also SM) is also acceptable, since the “controller” is properly speaking a plugin module to the actor itself.
  • Broker. A broker mediates resource discovery and arbitration by controlling the scheduling of resources at one or more substrate providers over time. It may be viewed as a service that runs within a GENI clearinghouse. A key principle in Orca is that the broker can have specific allocation power delegated to it by one or more substrate authorities, i.e., the substrate providers “promise” to abide by allocation decisions made by the broker with respect to their delegated substrate. This power enables the broker to arbitrate resources and coordinate allocation across multiple substrate providers, as a basis for federation and scheduling of complex slices across multiple substrate aggregates. Brokers exercise this power by issuing tickets that are redeemable for leases.

Orca Architecture

Authorities delegate resources to brokers (one authority can delegate resources to several brokers). Brokers hold the promised resources until Service Managers request them. A Broker issues tickets for resources from different authorities to a Service Manager, which redeems those tickets at the corresponding Authorities. Authorities instantiate the resource slivers and pass control of them to the Service Manager. Pluggable control and access policies help mediate these transactions. Query interfaces allow actors to asynchronously determine the state of other actors.
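The delegate/ticket/redeem flow above can be sketched in code. All names and method signatures here are hypothetical, simplified for illustration; the real ORCA protocol involves signed tickets, policies, and asynchronous messaging between actors.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class TicketFlow {
    // A ticket promises a number of slivers from a named authority.
    record Ticket(String authority, int units) {}

    static class Broker {
        // Units of substrate each authority has delegated to this broker.
        final Map<String, Integer> delegated = new HashMap<>();

        // Step 1: an authority delegates allocation power to the broker.
        void delegate(String authority, int units) {
            delegated.merge(authority, units, Integer::sum);
        }

        // Step 2: the broker issues a ticket against the delegated pool,
        // debiting it so the same substrate cannot be promised twice.
        Optional<Ticket> issueTicket(String authority, int units) {
            int avail = delegated.getOrDefault(authority, 0);
            if (avail < units) return Optional.empty();
            delegated.put(authority, avail - units);
            return Optional.of(new Ticket(authority, units));
        }
    }

    static class Authority {
        final String name;
        Authority(String name) { this.name = name; }

        // Step 3: the Service Manager redeems the ticket at the authority,
        // which instantiates the slivers and hands over control.
        String redeem(Ticket t) {
            if (!t.authority().equals(name))
                throw new IllegalArgumentException("ticket not for this authority");
            return t.units() + " slivers instantiated at " + name;
        }
    }

    public static void main(String[] args) {
        Authority am = new Authority("am-example");
        Broker broker = new Broker();
        broker.delegate("am-example", 10);
        Ticket t = broker.issueTicket("am-example", 4).orElseThrow();
        System.out.println(am.redeem(t)); // 4 slivers instantiated at am-example
    }
}
```

Debiting the delegated pool at ticket-issue time is what lets an authority safely "promise" to honor the broker's allocation decisions: the broker never promises more than it was delegated.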

Slices are used to group resources together. A slice in a Service Manager represents resources owned by a particular user (e.g. for the purpose of an experiment). A slice in a Broker represents the portion of slivers granted by this Broker to a particular Service Manager slice. If a Service Manager acquires resources from multiple brokers, the brokers are not aware of each other. A slice in an Authority represents the resources given by this Authority to a particular Service Manager slice. Notice that architecturally in ORCA only the Service Manager knows the exact composition of the slice; all other actors may have only a partial view of it.

Naming

Actors in ORCA are referred to by names and GUIDs. Each actor must be identified by a unique name and GUID.

Slices, leases, and slivers are likewise referred to by names and GUIDs, which are normally generated automatically by the various actors:

  • The GUID for a lease is selected by the SM that requested it. The lease properties are a union of the resource type properties, derived by the broker from the containing resource pool, and configuration properties specified by the requesting SM.
  • The GUID for a sliver is assigned by the granting AM and is returned in any lease covering the sliver.
  • The GUID for a slice is assigned by the SM that wishes to create a slice for the purpose of grouping its leases. Creating a slice is not a privileged operation. The creating SM may also attach properties to the slice.
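The division of naming responsibility above (SM names leases and slices, AM names slivers) can be sketched as follows. The record and method names are hypothetical, chosen only to make the assignment rules concrete.

```java
import java.util.Map;
import java.util.UUID;

public class NamingSketch {
    // Slice: created and named by the SM; may carry SM-attached properties.
    record Slice(UUID guid, Map<String, String> properties) {}

    // Lease request: the SM picks the lease GUID and groups it under its slice.
    record LeaseRequest(UUID leaseGuid, UUID sliceGuid) {}

    // Granted lease: the AM assigns the sliver GUID and returns it in the lease.
    record GrantedLease(UUID leaseGuid, UUID sliverGuid) {}

    // SM side: selects the GUID for the lease it is requesting.
    static LeaseRequest smRequest(Slice slice) {
        return new LeaseRequest(UUID.randomUUID(), slice.guid());
    }

    // AM side: assigns the sliver GUID; the SM-chosen lease GUID is preserved.
    static GrantedLease amGrant(LeaseRequest req) {
        return new GrantedLease(req.leaseGuid(), UUID.randomUUID());
    }

    public static void main(String[] args) {
        Slice slice = new Slice(UUID.randomUUID(), Map.of("purpose", "experiment"));
        LeaseRequest req = smRequest(slice);
        GrantedLease lease = amGrant(req);
        System.out.println(lease.leaseGuid().equals(req.leaseGuid())); // true
    }
}
```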

Further reading
