This page is intended to reflect our current understanding of how the various components (physical hardware, VMs, actors running in different containers, etc.) are networked. It is not intended to say anything about how the network is to be configured for a particular experiment/slice (referred to as the data plane below).

Definitions and Terminology

Plane
Loosely, a set of endpoints with mutual connectivity.
Management plane - a plane with layer 3 (IP) connectivity
A site authority (aggregate manager) uses the management plane to interact with its substrate components, and possibly with slivers hosted on its substrate components.

Eg 1: The management address space for the BEN domain, which includes the dom-0 machines in each Eucalyptus cluster.

Eg 2: The space of private management IP addresses assigned by Eucalyptus to slivers created at the site.

In our world, multiple aggregates may share the same management plane, but ignore that for now.

Data plane
Every slice has its own data plane by which slivers within the slice interact with each other. Constructing the data plane is the central goal of this project. All else is bookkeeping. Note that the data plane is not necessarily layer 3; it may be layer 2 or even layer 1.

Eg: Virtual machines within a slice on a shared subnet/VLAN spanning multiple Eucalyptus sites.

Control plane - a plane with layer 3 (IP) connectivity
The GENI control plane consists of actors that interact with each other through public IP space. They include authorities (aggregate managers), clearinghouses and the controller for each slice (guest controller or service manager in Orca).

By these definitions, each plane is independently managed. However, some entities may need to be on more than one plane. The mapping from components/entities to the planes they are on is discussed next, with notes on our particular implementation.

Mapping

Physical components
These belong to the management plane.

In our implementation, this is the BEN-wide management plane, pre-divided by site (RENCI is 192.168.201.x, Duke is 192.168.202.x, etc.). Physical machines in our case have two network interface cards: one is assigned a BEN management plane address and the other is not allocated an address.
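As a concrete illustration, here is a minimal sketch of the per-node setup, assuming Linux hosts and the iproute2 tools; the interface names and the address below are placeholders, not our actual assignments.

    # Sketch: give one NIC on a physical node its BEN management plane
    # address and leave the second NIC up but unaddressed.  Interface
    # names and the address are placeholders.
    import subprocess

    def assign_ben_mgmt_address(iface="eth0", addr="192.168.201.10/24"):
        subprocess.run(["ip", "link", "set", iface, "up"], check=True)
        subprocess.run(["ip", "addr", "add", addr, "dev", iface], check=True)

    def bring_up_unaddressed(iface="eth1"):
        # The second NIC carries no IP address; it only gets attached to
        # data plane VLANs later, as needed.
        subprocess.run(["ip", "link", "set", iface, "up"], check=True)

    if __name__ == "__main__":
        assign_ben_mgmt_address()
        bring_up_unaddressed()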

Virtual machines
These belong to the data plane of their slice.

In our implementation, they also belong to the management plane of the Eucalyptus cluster that created them. Note that this is a different management plane from the BEN-wide management plane; I propose to call it the site management plane (as opposed to the BEN management plane). These VMs will have two interfaces: one connected to a site-wide VDE VLAN (the site management plane), and the other connected to the slice's data plane by the setup handler. All the site management planes can use the same address range, nominally 192.168.300.x, because they will never need to talk to each other or to a Eucalyptus instance at a different site.
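For concreteness, here is a rough sketch of what attaching a VM's second interface to the slice data plane on the dom-0 host might look like. The vif and bridge names are hypothetical and this is not the actual setup handler code; it just illustrates the kind of plumbing involved.

    # Sketch: on the dom-0 host, attach the VM's second (data plane)
    # backend interface to a per-slice bridge.  The vif and bridge
    # names are hypothetical.
    import subprocess

    def attach_to_data_plane(vm_if="vif1.1", bridge="slice-dp-br"):
        # Create the per-slice bridge (ignore failure if it already
        # exists), then enslave the VM's data plane interface to it.
        subprocess.run(["brctl", "addbr", bridge], check=False)
        subprocess.run(["brctl", "addif", bridge, vm_if], check=True)
        subprocess.run(["ip", "link", "set", bridge, "up"], check=True)
        subprocess.run(["ip", "link", "set", vm_if, "up"], check=True)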

Aggregate Manager / Site Authority
These belong to the management plane(s) of the site they are managing as well as the GENI control plane.

In our implementation, the BEN Aggregate Manager (which configures the DTNs, fiber switches, routers, etc.) is on the GENI control plane and the BEN management plane. This can be done trivially by IP aliasing when the manager first comes up.
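A minimal sketch of the aliasing step, assuming a Linux host with iproute2; the interface name and management address are placeholders, and the host's public control plane address is assumed to be configured already.

    # Sketch: add a BEN management plane alias address on the BEN
    # aggregate manager host at startup.  Placeholders throughout.
    import subprocess

    def add_management_alias(iface="eth0", alias="192.168.201.2/24"):
        # The "label" gives the alias the traditional ifconfig-style name.
        subprocess.run(
            ["ip", "addr", "add", alias, "dev", iface, "label", iface + ":0"],
            check=True,
        )

    if __name__ == "__main__":
        add_management_alias()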

The aggregate manager responsible for running the Eucalyptus handlers and talking to the Eucalyptus cluster controller is (at least conceptually) distinct from the BEN aggregate manager. The Site Aggregate Manager needs to be on the GENI control plane, the BEN management plane, and the site management plane. We can add it to the site management plane simply by installing a route when it first comes up.
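Again as a sketch only: the prefix and gateway below are placeholders, since the real values depend on how the site management plane is addressed and which host routes into it.

    # Sketch: when the Site Aggregate Manager comes up, install a route
    # to the site management plane via the Eucalyptus cluster head node.
    # Both the prefix and the gateway address are placeholders.
    import subprocess

    SITE_MGMT_PREFIX = "192.168.250.0/24"
    CLUSTER_GATEWAY = "192.168.201.5"

    def install_site_mgmt_route():
        subprocess.run(
            ["ip", "route", "add", SITE_MGMT_PREFIX, "via", CLUSTER_GATEWAY],
            check=True,
        )

    if __name__ == "__main__":
        install_site_mgmt_route()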

Note: We may choose to ignore the distinction between the BEN management plane and the site management plane by giving each Eucalyptus cluster a known, distinct IP address range to choose from. This requires more coordination at site creation time, but makes debugging easier. I personally prefer this approach. I am still unclear on how the VDE VLAN that Eucalyptus creates affects this.
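One way to picture the coordination cost is a simple allocation scheme along these lines; the base prefix and site indices are hypothetical, not an agreed-upon plan.

    # Sketch: give each Eucalyptus cluster its own /24 for its site
    # management plane, derived from a site index.  Hypothetical scheme.
    def site_mgmt_range(site_index, base=210):
        # Site 0 -> 192.168.210.0/24, site 1 -> 192.168.211.0/24, ...
        third_octet = base + site_index
        assert third_octet <= 254, "ran out of /24s under this scheme"
        return "192.168.%d.0/24" % third_octet

    if __name__ == "__main__":
        for index, site in enumerate(["renci", "duke"]):
            print(site, site_mgmt_range(index))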

Experiment manager / Guest controller
This belongs to the GENI control plane as well as the data plane of every slice it creates or is responsible for.

In our experiments so far, the guest controller ran within the same subnet as the slice data plane, so we were able to ignore this entirely. Plugging the guest controller into the data plane needs further discussion; it will depend on how the data plane is created (VLAN with strict access control, best-effort subnet, public IPs). However, I suggest that this is basically the same problem as connecting the data planes of two different slices together. In other words, we need a gateway.
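If we do go the gateway route, the box in question would look roughly like the following: a sketch under the assumption of a Linux gateway with one interface per plane and NAT toward the data plane; the interface names are hypothetical.

    # Sketch: a gateway between the guest controller's network and a
    # slice data plane.  Enable forwarding and masquerade traffic going
    # into the data plane.  Interface names are hypothetical.
    import subprocess

    CONTROL_SIDE = "eth0"     # faces the control plane / guest controller
    DATA_PLANE_SIDE = "eth1"  # faces the slice data plane

    def enable_gateway():
        subprocess.run(["sysctl", "-w", "net.ipv4.ip_forward=1"], check=True)
        subprocess.run(
            ["iptables", "-t", "nat", "-A", "POSTROUTING",
             "-o", DATA_PLANE_SIDE, "-j", "MASQUERADE"],
            check=True,
        )

    if __name__ == "__main__":
        enable_gateway()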

Clearing house
This clearly belongs to the GENI control plane. It gets a public IP address.

Creating the data plane

Creating the data plane for a slice is of course the goal, but the details of how we do that are the subject of a separate document, which is not yet on the wiki.