Version 68 (modified by yxin, 8 years ago)


NDL-OWL Models in ORCA

This page describes the semantic web models used in the life cycle of resource reservations (leases), and in ORCA implementations for resource allocation policy and stitching workflow. The implementation details refer to the Camano release of ORCA code.

We use a set of unified semantic schemas (ontologies) to represent the data models that describe resources in a reservation across the stages of its life cycle. To this end we developed NDL-OWL, an extension of the Network Description Language (NDL) expressed in OWL, and we use a number of tools to create and manipulate NDL-OWL ontologies. NDL-OWL gives us a flexible, semantic query-based programming approach to implementing the policies for resource allocation, path computation, and topology embedding. It enables these functions to be coded generically against declarative specifications, rather than baking in assumptions about the resources.
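To give a flavor of the representation, a substrate element might be described in Turtle roughly as follows. This is a sketch only: the namespace and property names are illustrative, not the exact NDL-OWL schema.

```turtle
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix ndl: <http://example.org/ndl-owl/topology.owl#> .   # illustrative namespace

# A network element with one interface and an available VLAN label range.
ndl:renci.vlan.1  rdf:type          ndl:NetworkElement ;
                  ndl:hasInterface  ndl:renci.vlan.1.intf0 ;
                  ndl:availableLabelSet "2-100" .           # e.g. VLAN tags 2-100
```

Policies can then be written as queries over such statements instead of hard-coded assumptions about the substrate.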

We define at least five types of models. ORCA actors (authority/aggregate manager (AM), broker, and slice manager (SM)) pass these representations among themselves as they interact to stand up reservations at multiple substrate aggregates and stitch them into an end-to-end slice.

1. Substrate description model

This is the detailed resource and topology model that is used to describe a physical substrate including network topology and edge resources (compute and storage). ORCA AMs use this model to drive their internal resource allocation policies and resource configuration (setup/teardown handler invocations). The model also specifies which elements of the structure are exposed externally (e.g., in resource advertisements or delegations), and which are kept hidden.

a. Domain Service Description (domain.owl)

The class hierarchy is defined in the diagram below. A substrate (domain) is defined as a collection of PoPs. Each PoP has a geographical location and a collection of network devices and/or edge resources (e.g., a cloud provider site). The class Domain also has a NetworkService property, which may hold a number of ServiceElement instances. These ServiceElements are made visible in external advertisements (see below).

  • AccessMethod: e.g. ORCAActor, or GENI AM API.
  • Topology: the topology abstraction level exposed externally. Currently, only the node abstraction is defined.
  • ResourceType: inferred from the list of available resource label sets, e.g. 32 VMs or 100 VLANs.
  • AggregateManager: e.g. the URL of its aggregate manager.
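The four service properties above might appear in a domain description along these lines. This is a sketch: the URIs, prefixes, and property names are illustrative stand-ins for the actual domain.owl terms.

```turtle
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix dom: <http://example.org/ndl-owl/domain.owl#> .   # illustrative namespace

dom:renci.vmsite  rdf:type        dom:Domain ;
                  dom:hasService  dom:renci.vmsite.service .

dom:renci.vmsite.service
                  rdf:type             dom:NetworkService ;
                  dom:hasAccessMethod  dom:ORCAActor ;        # or a GENI AM API
                  dom:hasTopology      dom:NodeAbstraction ;  # only node abstraction so far
                  dom:hasResourceType  dom:VM ;               # e.g. 32 VMs available
                  dom:aggregateManager <https://example.org/orca/am> .
```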

b. Compute Resource Description:

The top-level class hierarchy is shown in the attached image ndl-compute.png. Three subclass hierarchies are defined; GENI-related subclasses are defined in geni.owl:

  • Features:
    • VMM (XEN, KVM, VServer or other virtualization technology, including None)
    • OS (Linux, Windows)
    • ServerSize: quantifies the size of a server by its CPU, memory, and storage capacity.
      • MediumServerSize
      • LargeServerSize
      • SmallServerSize
    • Vendor
  • ComputeElement
    • ClassifiedServer: defined by its ServerSize and by popular cloud provisioning technologies. Each subclass is distinguished by the number and type of CPU cores, the amount of RAM and storage, and support for different virtualization technologies.
      • LargeServer
      • MediumServer
      • SmallServer
      • UnitServer
      • EC2M1Large
      • EC2C1Medium
      • EC2M1Small
      • PlanetLabNode
      • ProtoGeniNode

For example, EC2M1Small is defined as having one EC2CPUCore (an EC2 core, equivalent to a 1.7 GHz AMD core), 128 MB of memory, 2 GB of storage, and support for both Xen and KVM virtualization; see geni.owl for details. A server or server cloud instance can be described as hosting a VM instance via the property VM-in-Server, which is a sub-property of AdaptationProperty. Note that these server types are both subclasses of the class ClassifiedServer and instance members of the class ClassifiedServerInstance.
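The EC2M1Small definition described above could be rendered in Turtle roughly as follows. The class and property names follow the text; the namespace and exact predicate spellings are illustrative, so consult geni.owl for the authoritative terms.

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix geni: <http://example.org/ndl-owl/geni.owl#> .   # illustrative namespace

geni:EC2M1Small  rdfs:subClassOf   geni:ClassifiedServer ;
                 geni:hasCPU       geni:EC2CPUCore ;     # 1.7 GHz AMD equivalent
                 geni:cpuCount     1 ;
                 geni:memorySize   "128MB" ;
                 geni:storageSize  "2GB" ;
                 geni:supportsVMM  geni:XEN , geni:KVM .
```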

  • ClassifiedServerInstance: has the ClassifiedServer subclasses above as its instance members. This enables a specific server cloud to declare which VM node types it can support.

  • ComputeComponentElement:
    • VMImage - Image that is or can be instantiated on the component.
    • CPU: {EC2CPUCore} - CPU type equivalent; e.g., an Amazon EC2 CPU core is equivalent to a 1.7 GHz AMD core.
    • DiskImage
    • Memory
  • NetworkElement
    • Server
      • ServerCloud
        • PlanetLabCluster
        • ProtoGeniCluster
        • EucalyptusCluster

A specific server cluster is defined by virtualize property constraints on ClassifiedServer. For example, PlanetLabCluster virtualize PlanetLabNode indicates that a cluster belonging to the class PlanetLabCluster can produce PlanetLabNode slivers.
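Such a constraint can be expressed as an OWL property restriction. The following Turtle sketch shows one plausible encoding; the namespace is illustrative and the exact axiom shape in geni.owl may differ.

```turtle
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix geni: <http://example.org/ndl-owl/geni.owl#> .   # illustrative namespace

# A PlanetLabCluster is a ServerCloud whose virtualize property ranges
# over PlanetLabNode, i.e., it can only produce PlanetLabNode slivers.
geni:PlanetLabCluster  rdfs:subClassOf  geni:ServerCloud ,
    [ rdf:type          owl:Restriction ;
      owl:onProperty    geni:virtualize ;
      owl:allValuesFrom geni:PlanetLabNode ] .
```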

c. Network topology and resource description: extension of the original NDL

2. Substrate delegation model

This is the abstract model for advertising an aggregate's resources and services externally. ORCA AMs use this representation to delegate advertised resources to ORCA broker(s). AMs may also use it to describe resources in response to queries (e.g., ListResources) or to advertise resources to a GENI clearinghouse. This model should allow multiple abstraction levels, since different AMs may want to expose different levels of resource and topology detail for their substrates. The model is generated automatically from the substrate description model (above) when a substrate stands up. It contains two types of information:

  • Domain network service.
    • AccessMethod: e.g. ORCAActor, or GENI AM API.
    • Topology: the topology abstraction level exposed externally. Currently, only the node abstraction is defined.
    • ResourceType: inferred from the list of available resource label sets (e.g. 32 VMs, 100 VLANs) that will be delegated to the broker.
    • AggregateManager: e.g. the URL of the AM.
  • Domain topology abstraction: currently, the domain's topology is abstracted to a single node, a network device with the following information:
    • Switching matrix: capability (ethernet, IP, etc.), label swapping capability (i.e., VLAN translation for ethernet switching)
    • Border interfaces: connectivity to neighboring domains, bandwidth and available label set (e.g. VLAN)
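A node-abstracted advertisement might therefore contain statements like the following. This is an illustrative sketch: the namespace, resource names, and predicates are stand-ins, not the exact NDL-OWL vocabulary.

```turtle
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix ndl: <http://example.org/ndl-owl/topology.owl#> .   # illustrative namespace

# The whole domain collapses to one node with a switching matrix
# and border interfaces toward neighboring domains.
ndl:renci.net  rdf:type             ndl:NetworkElement ;
               ndl:hasSwitchMatrix  ndl:renci.net.matrix ;
               ndl:hasInterface     ndl:renci.net.toBEN .

ndl:renci.net.matrix
               ndl:switchingCapability ndl:Ethernet ;
               ndl:swappingCapability  ndl:VLAN .      # VLAN translation supported

ndl:renci.net.toBEN
               ndl:connectedTo       ndl:ben.net.toRenci ;  # neighboring domain
               ndl:bandwidth         "1000000000" ;         # 1 Gbps
               ndl:availableLabelSet "1500-1600" .          # available VLAN tags
```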

3. Slice request model

This is the abstract model to represent user resource requests. A typical request might be a virtual topology with specific resources at the edges, generated by unspecified experiment control tools. In our implementation, the representation is produced by a controller module in the slice manager (SM) after interpreting the user's request in an ad hoc format.

In the current implementation for the multi-domain setting, this SM controller performs inter-domain path or topology computation based on a description of the global topology, and automatically breaks the user request into sub-requests for each domain.

  • The topology request is defined as a collection of bounded or unbounded connections. Each end node of a connection can specify the requested amount of an edge resource type (e.g. a number of VMs or other slivers).
  • When redeeming a ticket at a specific site, the SM controller dynamically creates a sub-request asking for a sliver.
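A simple request for a connection between two edge VM groups, one bounded to a domain and one left unbound, might be expressed along these lines. This is a sketch: the namespace and term names are illustrative rather than the actual request schema.

```turtle
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix req: <http://example.org/ndl-owl/request.owl#> .   # illustrative namespace

req:myslice.conn0  rdf:type       req:Connection ;
                   req:bandwidth  "100000000" ;            # 100 Mbps
                   req:hasEndNode req:myslice.nodeA , req:myslice.nodeB .

# Bounded end node: pinned to a specific domain, asking for 4 VMs.
req:myslice.nodeA  req:inDomain     <http://example.org/renci.vmsite> ;
                   req:resourceType req:VM ;
                   req:numUnits     4 .

# Unbounded end node: the SM controller chooses the domain.
req:myslice.nodeB  req:resourceType req:VM ;
                   req:numUnits     2 .
```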

4. Slice reservation model (Not implemented yet)

This is the abstract model used by ORCA brokers to return resource tickets to the SM controller. Each ticket contains a specifier for one or more slivers from a single AM named in the ticket. The SM controller obtains the slivers by redeeming these tickets at the AMs. This model must describe the interdependency relationships among the slivers so that the SM controller can drive stitching of each sliver into the slice. Currently, ORCA uses the substrate delegation model instead: the ticket contains a resource type and unit count for the resources promised in the ticket, together with the complete original advertisement for the containing resource pool from the AM.

5. Slice manifest model (Not implemented yet)

This is the abstract model describing the access method, state, and other post-configuration information of the reserved slivers. Currently, the required information (e.g., IP addresses, node keys) is returned as type-specific properties on the lease, and is not integrated into the semantic web representations.