Version 62 (modified by chase, 8 years ago)


NDL-OWL Models in ORCA

We describe the models used in the life cycle of resource reservations, along with the resource allocation policy and the stitching workflow implementation. The implementation details refer to the Camano generation of the ORCA code.

We need a set of unified semantic schemas for representing the data models needed in the life cycle of a resource reservation. We developed NDL-OWL, an extension of the Network Description Language (NDL) using OWL. We use a number of tools to create and manipulate NDL-OWL ontologies. We deliberately stay away from the procedural programming model in favor of a more flexible semantic query-based programming approach to implement the policies for resource allocation, path computation, and topology embedding applications.

There are at least five types of models to be defined, which are circulated among the ORCA actors (authority/AM, broker, and slice manager).

1. Substrate description model

This is the substrate-specific detailed resource and topology model used by the owner of the substrate to describe its physical resources, including edge (compute and storage) resources and network topology. It also describes the domain services exposed to the broker(s).

a. Domain Service Description (domain.owl)

The class hierarchy is defined in the diagram below. A substrate (domain) is defined as a collection of PoPs. Each PoP, with a geographical location, is a collection of network devices and/or data centers. The class Domain also has a property NetworkService, which can have a number of ServiceElement instances. This information is passed into the advertisement RDF.

  • AccessMethod: e.g. ORCAActor, or GENI AM API.
  • Topology: Topology abstraction level exposed to outside. Right now, only node abstraction is defined.
  • ResourceType: Inferred from the list of defined available resource label sets (e.g., 32 VMs, 100 VLANs) that will be delegated to the broker.
  • AggregateManager: e.g. the URL of its aggregate manager.
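
The domain service description above might be rendered in Turtle roughly as follows. This is an illustrative sketch only: the namespace URIs and property names (ndl:hasService, ndl:hasAccessMethod, etc.) are assumptions modeled on the class names in this page, not the actual terms defined in domain.owl.

```turtle
@prefix ndl: <http://example.org/ndl-owl/domain.owl#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .

# Hypothetical advertisement fragment: a domain exposing a
# node-level topology abstraction and a delegated VM pool.
<urn:example:domain/renci> rdf:type ndl:Domain ;
    ndl:hasService [
        rdf:type ndl:NetworkService ;
        ndl:hasAccessMethod     ndl:ORCAActor ;
        ndl:hasTopology         ndl:NodeAbstraction ;
        ndl:hasResourceType     "32 VM" ;          # delegated label set
        ndl:hasAggregateManager <https://example.org/orca/am>
    ] .
```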

b. Compute Resource Description:

The top-level class hierarchy is shown in the attached image ndl-compute.png. Three subclass hierarchies are defined; the GENI-related subclasses are defined in geni.owl:

  • Features:
    • VMM (XEN, KVM, VServer or other virtualization technology, including None)
    • OS (Linux, Windows)
    • ServerSize: Defined by the size of CPU, Memory, Storage. Quantifies the size of a server.
      • MediumServerSize
      • LargeServerSize
      • SmallServerSize
    • Vendor
  • ComputeElement
    • ClassifiedServer: defined by its ServerSize and by the popular cloud provisioning technologies it uses. Each subclass is distinguished by the number and type of CPU cores, the amount of RAM or storage, and its support for different virtualization technologies.
      • LargeServer
      • MediumServer
      • SmallServer
      • UnitServer
      • EC2M1Large
      • EC2C1Medium
      • EC2M1Small
      • PlanetLabNode
      • ProtoGeniNode

For example, EC2M1Small is defined as having an EC2CPUCore (an EC2 core equivalent to a 1.7 GHz AMD), a CPU core count of 1, 128 MB of memory, 2 GB of storage, and support for both Xen and KVM virtualization. See geni.owl for details. This allows a server/server-cloud instance to describe the VM instances it can host via the property VM-in-Server, which is a sub-property of AdaptationProperty.
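
The EC2M1Small description could be encoded along these lines. Again a hedged sketch: the prefix URIs and property names (compute:cpuCoreCount, compute:supportsVMM, etc.) are illustrative stand-ins for the actual terms in geni.owl.

```turtle
@prefix geni:    <http://example.org/ndl-owl/geni.owl#> .
@prefix compute: <http://example.org/ndl-owl/compute.owl#> .
@prefix rdfs:    <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:     <http://www.w3.org/2001/XMLSchema#> .

# Hypothetical rendering of the EC2M1Small class described above.
geni:EC2M1Small rdfs:subClassOf compute:ClassifiedServer ;
    compute:cpu          geni:EC2CPUCore ;      # ~1.7 GHz AMD equivalent
    compute:cpuCoreCount "1"^^xsd:integer ;
    compute:memorySize   "128"^^xsd:integer ;   # MB
    compute:storageSize  "2"^^xsd:integer ;     # GB
    compute:supportsVMM  compute:Xen , compute:KVM .
```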

  • ClassifiedServerInstance: has the above ClassifiedServer classes as its instance members. This enables a specific server cloud to declare the type of VM node it can support.

  • ComputeComponentElement:
    • VMImage - Image that is or can be instantiated on the component.
    • CPU: {EC2CPUCore} - CPU type equivalent. E.g., the Amazon EC2 CPU core equivalent is a 1.7 GHz AMD.
    • DiskImage
    • Memory
  • NetworkElement
    • Server
      • ServerCloud
        • PlanetLabCluster
        • ProtoGeniCluster
        • EucalyptusCluster

A specific server cluster is defined by the virtualize property constraints on the ClassifiedServer. For example, PlanetLabCluster virtualize PlanetLabNode indicates that a cluster belonging to the class PlanetLabCluster can produce PlanetLabNode types.
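
The virtualize constraint can be pictured as a simple statement on a cluster individual. The URIs below are hypothetical; only the class and property names come from this page.

```turtle
@prefix compute: <http://example.org/ndl-owl/compute.owl#> .
@prefix rdf:     <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .

# A PlanetLab cluster declares, via the virtualize property,
# which classified server type it can produce.
<urn:example:planetlab/cluster1> rdf:type compute:PlanetLabCluster ;
    compute:virtualize compute:PlanetLabNode .
```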

c. Network topology and resource description: extension of the original NDL

2. Substrate delegation model

This is the abstract model used by the substrate manager (AM) to delegate its available services and resources to external brokers, e.g., a GENI clearinghouse. This model should allow multiple abstraction levels, as different AMs may want to expose different levels of resource and topology description of their substrates. The model is obtained online when a substrate stands up; ideally this can be automated. The model contains two types of information:

  • Domain network service.
    • AccessMethod: e.g. ORCAActor, or GENI AM API.
    • Topology: Topology abstraction level exposed to outside. Right now, only node abstraction is defined.
    • ResourceType: Inferred from the list of defined available resource label sets (e.g., 32 VMs, 100 VLANs) that will be delegated to the broker.
    • AggregateManager: e.g. the URL of the AM.
  • Domain topology abstraction: Currently, the whole domain is abstracted to a node, a network device with the following information:
    • Switching matrix: switching capability (Ethernet, IP, etc.) and label swapping capability (i.e., VLAN translation for Ethernet switching)
    • Border interfaces: connectivity to neighboring domains, bandwidth, and available label set (e.g., VLANs)
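
A node-level abstraction of a domain might look like the fragment below. This is a sketch under assumed names: the namespace, the property names (ndl:hasSwitchMatrix, ndl:availableLabelSet, etc.), and the neighbor URI are all illustrative.

```turtle
@prefix ndl: <http://example.org/ndl-owl/topology.owl#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

# The whole domain collapsed to one network device with an
# Ethernet switching matrix and one border interface.
<urn:example:domain/ben> rdf:type ndl:NetworkDevice ;
    ndl:hasSwitchMatrix [
        ndl:switchingCapability ndl:Ethernet ;
        ndl:swappingCapability  ndl:VLAN          # VLAN translation supported
    ] ;
    ndl:hasInterface [
        ndl:connectedTo       <urn:example:domain/neighbor#if0> ;
        ndl:bandwidth         "1000000000"^^xsd:long ;  # bps
        ndl:availableLabelSet "2-4094"                  # VLAN range
    ] .
```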

3. Slice request model

This defines a top-level Reservation object that describes a particular lease reservation (term, etc.) and the specifics of the user's request. Often this is represented in the form of a virtual topology with specific resources at the edges. It might be generated by unspecified experiment control tools; in our implementation, it is generated by a controller module in the slice manager (SM) after interpreting the user's request in an ad hoc format.

In our current implementation for the multi-domain setting, this SM controller performs inter-domain path or topology computation and automatically breaks the user request into domain-specific sub-requests.

  • The topology request is defined as a collection of bound or unbound connections. The end node of a connection can specify the amount of the requested edge resource type (e.g., the number of VMs or other slivers).
  • In redeeming a ticket at a specific site, the SM controller dynamically creates a sub-request to ask for a sliver.
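
A minimal request of this shape could be serialized as below. The request vocabulary here (req:Reservation, req:hasConnection, req:numUnits, etc.) is an assumption for illustration, not the actual request schema.

```turtle
@prefix req: <http://example.org/ndl-owl/request.owl#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

# An unbound point-to-point connection whose endpoints each ask
# for VM slivers; the SM controller binds the endpoints to
# domains and splits this into domain-specific sub-requests.
<urn:example:request/slice1> rdf:type req:Reservation ;
    req:hasConnection [
        req:hasEndpoint [ req:resourceType "VM" ; req:numUnits "4"^^xsd:integer ] ,
                        [ req:resourceType "VM" ; req:numUnits "2"^^xsd:integer ] ;
        req:bandwidth "100000000"^^xsd:long   # bps
    ] .
```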

4. Slice reservation model (Not implemented yet)

This is used by the clearinghouse (brokers) to return a resource reservation description to the SM controller, so that the controller can use it to talk to the related substrate managers to redeem tickets. This model should be able to describe the interdependency relationships among the slivers so that the controller can stitch them into a slice.

5. Slice manifest model (Not implemented yet)

This is used to describe the access method, state, and other post-configuration information of the reserved slivers.