Substrate models

1. Substrate description model

This is the detailed resource and topology model used to describe a physical substrate, including its network topology and edge resources (compute and storage). ORCA AMs use this model to drive their internal resource allocation policies and resource configuration (setup/teardown handler invocations). The model also specifies which elements of the structure are exposed externally (e.g., in resource advertisements or delegations) and which are kept hidden.

a. Domain Service Description (domain.owl)

The class hierarchy is defined in the diagram below. A substrate (domain) is defined as a collection of PoPs. Each PoP has a geographical location and a collection of network devices and/or edge resources (e.g., a cloud provider site). The Domain class also has a NetworkService property, which may carry a number of ServiceElement entries. These ServiceElements are made visible in external advertisements (see below).

  • AccessMethod: e.g., ORCAActor or the GENI AM API.
  • Topology: the topology abstraction level exposed externally. Currently, only the node abstraction is defined.
  • ResourceType: inferred from the set of available resource labels defined for the domain, e.g., 32 VMs, 100 VLANs.
  • AggregateManager: e.g., the URL of the domain's aggregate manager.
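To make this concrete, the following is a minimal Turtle sketch of a domain advertisement along the lines described above. Only the concepts (Domain, PoP, NetworkService, ServiceElement and the four properties listed) come from the description here; the namespace URIs, prefix names, and exact property spellings are assumptions made for illustration and may differ from the real domain.owl vocabulary.

    # Hypothetical prefixes; the actual domain.owl namespace URIs may differ.
    @prefix dom:  <http://example.org/domain.owl#> .
    @prefix ex:   <http://example.org/substrate#> .

    # A substrate domain advertised as a collection of PoPs plus a NetworkService.
    ex:example-domain  a dom:Domain ;
        dom:hasPoP            ex:example-pop ;
        dom:hasNetworkService ex:example-service .

    # A PoP with a geographic location and the devices/edge resources it contains.
    ex:example-pop  a dom:PoP ;
        dom:locatedAt  ex:example-location ;
        dom:hasDevice  ex:example-cloud-site .

    # The NetworkService carries the ServiceElements that are exposed in
    # external advertisements.
    ex:example-service  a dom:NetworkService ;
        dom:hasServiceElement ex:example-service-element .

    ex:example-service-element  a dom:ServiceElement ;
        dom:accessMethod     "GENI AM API" ;
        dom:topology         "node" ;               # only the node abstraction is defined
        dom:resourceType     "32 VMs, 100 VLANs" ;  # inferred from the available resource label set
        dom:aggregateManager <https://example.org/orca/am> .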

b. Compute Resource Description:

The top-level class hierarchy is shown in the attached image ndl-compute.png. Three subclass hierarchies are defined, and the GENI-related subclasses are defined in geni.owl:

  • Features:
    • VMM (XEN, KVM, VServer or other virtualization technology, including None)
    • OS (Linux, Windows)
    • ServerSize: Quantifies the size of a server, defined by its CPU, memory, and storage capacity.
      • MediumServerSize
      • LargeServerSize
      • SmallServerSize
    • Vendor
  • ComputeElement
    • ClassifiedServer: defined by its ServerSize and modeled on popular cloud provisioning technologies. Each subclass is distinguished by the number and type of CPU cores, the amount of RAM and storage, and the virtualization technologies it supports.
      • LargeServer
      • MediumServer
      • SmallServer
      • UnitServer
      • EC2M1Large
      • EC2C1Medium
      • EC2M1Small
      • PlanetLabNode
      • ProtoGeniNode

For example, EC2M1Small is defined as having an EC2CPUCore (an EC2 core equivalent to a 1.7 GHz AMD processor), a CPU core count of 1, 128 MB of memory, 2 GB of storage, and support for both Xen and KVM virtualization; see geni.owl for details. This allows a server or server-cloud instance to be described as hosting a VM instance via the property VM-in-Server, which is a sub-property of AdaptationProperty. Note that these server types are both subclasses of the class ClassifiedServer and instance members of the class ClassifiedServerInstance.
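As an illustration, a ClassifiedServer subclass like EC2M1Small might be written down roughly as in the sketch below. The prefix names and property spellings (hasCPU, cpuCoreCount, memorySize, storageSize, supportsVMM) are hypothetical, and the real geni.owl definition may use OWL restrictions rather than the plain property assertions shown here; only the characteristics themselves are taken from the description above.

    @prefix owl:     <http://www.w3.org/2002/07/owl#> .
    @prefix rdfs:    <http://www.w3.org/2000/01/rdf-schema#> .
    # Hypothetical prefixes; the real compute/geni ontology namespaces may differ.
    @prefix compute: <http://example.org/compute.owl#> .
    @prefix geni:    <http://example.org/geni.owl#> .

    # EC2M1Small as a ClassifiedServer subclass with the characteristics listed above.
    geni:EC2M1Small  a owl:Class ;
        rdfs:subClassOf       compute:ClassifiedServer ;
        compute:hasCPU        geni:EC2CPUCore ;   # EC2 core, roughly a 1.7 GHz AMD equivalent
        compute:cpuCoreCount  1 ;
        compute:memorySize    "128MB" ;
        compute:storageSize   "2GB" ;
        compute:supportsVMM   compute:Xen , compute:KVM .

    # The adaptation property that lets a server (cloud) instance host a VM instance.
    compute:VM-in-Server  a owl:ObjectProperty ;
        rdfs:subPropertyOf  compute:AdaptationProperty .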

  • ClassifiedServerInstance: has the above ClassifiedServer classes as its instance members. This enables a specific server cloud to declare the types of VM nodes it can support.

  • ComputeComponentElement:
    • VMImage - Image that is or can be instantiated on the component.
    • CPU: {EC2CPUCore} - CPU type equivalent; e.g., an Amazon EC2 CPU core is equivalent to a 1.7 GHz AMD processor.
    • DiskImage
    • Memory
  • NetworkElement
    • Server
      • ServerCloud
        • PlanetLabCluster
        • ProtoGeniCluster
        • EucalyptusCluster

A specific server cluster is defined by virtualize property constraints on ClassifiedServer classes. For example, PlanetLabCluster virtualize PlanetLabNode indicates that a cluster belonging to the class PlanetLabCluster can produce nodes of type PlanetLabNode.
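A minimal Turtle sketch of this virtualize pattern follows, again with hypothetical prefixes and with the constraint written as a plain property assertion rather than the OWL restriction the real ontology may use; the EucalyptusCluster line is an assumed, analogous example.

    @prefix owl:     <http://www.w3.org/2002/07/owl#> .
    @prefix rdfs:    <http://www.w3.org/2000/01/rdf-schema#> .
    # Hypothetical prefixes; the real compute/geni ontology namespaces may differ.
    @prefix compute: <http://example.org/compute.owl#> .
    @prefix geni:    <http://example.org/geni.owl#> .

    # Cluster classes under NetworkElement > Server > ServerCloud.
    geni:PlanetLabCluster   a owl:Class ;  rdfs:subClassOf compute:ServerCloud .
    geni:EucalyptusCluster  a owl:Class ;  rdfs:subClassOf compute:ServerCloud .

    # ClassifiedServer subclasses double as instance members of ClassifiedServerInstance,
    # so a cluster can point at them as the node types it is able to produce.
    geni:PlanetLabNode  a compute:ClassifiedServerInstance .
    geni:EC2M1Small     a compute:ClassifiedServerInstance .

    # The virtualize constraint: PlanetLabCluster produces PlanetLabNode types;
    # the Eucalyptus line is an assumed, analogous example.
    geni:PlanetLabCluster   compute:virtualize  geni:PlanetLabNode .
    geni:EucalyptusCluster  compute:virtualize  geni:EC2M1Small .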

c. Network topology and resource description: an extension of the original NDL

Examples

Simple connection over Ethernet between two servers
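The full example is in the attached files; as a rough sketch of what such a description looks like, two servers joined by an Ethernet link could be written as below. The prefix and the property names (hasInterface, connectedTo) follow the general NDL style but are assumptions here.

    # Hypothetical prefixes and property names, loosely following the NDL style.
    @prefix topo: <http://example.org/topology.owl#> .
    @prefix ex:   <http://example.org/substrate#> .

    # Two servers, each with a single Ethernet interface.
    ex:server1  a topo:Server ;  topo:hasInterface ex:server1-eth0 .
    ex:server2  a topo:Server ;  topo:hasInterface ex:server2-eth0 .

    ex:server1-eth0  a topo:EthernetInterface .
    ex:server2-eth0  a topo:EthernetInterface .

    # A point-to-point Ethernet link connecting the two interfaces.
    ex:link-1-2  a topo:EthernetLink ;
        topo:connectedTo ex:server1-eth0 , ex:server2-eth0 .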

Multi-layered substrate description showing adaptations between fiber, lambda/DWDM and Ethernet layers

Each BEN PoP has a Polatis fiber switch, an Infinera DTN and a Cisco 6509 connected to each other:
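The attached description carries the full detail; the much-simplified sketch below only shows the three devices in one PoP, their intra-PoP connectivity, and the idea that the fiber-to-lambda and lambda-to-Ethernet adaptations are modeled as sub-properties of AdaptationProperty (in the same spirit as VM-in-Server). All prefixes, class names, and property names here are assumptions.

    # Hypothetical prefixes and names; the real BEN substrate description is in the attached files.
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix dom:  <http://example.org/domain.owl#> .
    @prefix topo: <http://example.org/topology.owl#> .
    @prefix ex:   <http://example.org/ben#> .

    # One BEN PoP with its three devices.
    ex:ben-pop  a dom:PoP ;
        dom:hasDevice ex:polatis , ex:infinera-dtn , ex:cisco6509 .

    ex:polatis       a topo:FiberSwitch .
    ex:infinera-dtn  a topo:DWDMDevice .
    ex:cisco6509     a topo:EthernetSwitch .

    # Intra-PoP connectivity between the devices (interfaces omitted for brevity).
    ex:polatis       topo:connectedTo ex:infinera-dtn .
    ex:infinera-dtn  topo:connectedTo ex:cisco6509 .

    # Layer adaptations, modeled as sub-properties of AdaptationProperty:
    # a lambda is carried over fiber, and Ethernet is carried over a lambda.
    topo:lambda-in-fiber     rdfs:subPropertyOf topo:AdaptationProperty .
    topo:ethernet-in-lambda  rdfs:subPropertyOf topo:AdaptationProperty .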
