Changes between Version 72 and Version 73 of NDL-OWL

Timestamp: 07/22/11 16:38:17
Author: ibaldin
ORCA actors (authority/AM, broker, and slice manager/SM) pass these representations as they interact to stand up reservations at multiple substrate aggregates and stitch them into an end-to-end slice.

== 1. Substrate description model ==
This is the detailed resource and topology model used to describe a physical substrate, including network topology and edge resources (compute and storage). ORCA AMs use this model to drive their internal resource allocation policies and resource configuration (setup/teardown handler invocations). The model also specifies which elements of the structure are exposed externally (e.g., in resource advertisements or delegations) and which are kept hidden.
 '''a. Domain Service Description ([source:orca/trunk/network/src/main/resources/orca/network/schema/domain.owl domain.owl])'''
      The class hierarchy is defined in the diagram below. A substrate (domain) is defined as a collection of !PoPs. Each PoP has a geographical location and a collection of network devices and/or edge resources (e.g., a cloud provider site). The class ''Domain'' also has a ''!NetworkService'' property, which may carry a number of ''!ServiceElement''s. These service elements are made visible in external advertisements (see below and the sketch after this list):
             * !AccessMethod: e.g., an ORCA actor or the GENI AM API.
             * Topology: the topology abstraction level exposed externally. Currently only the node abstraction is defined.
             * !ResourceType: inferred from the list of defined available resource label sets, e.g., 32 VMs or 100 VLANs.
             * !AggregateManager: e.g., the URL of the domain's aggregate manager.
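      As a concrete illustration, a minimal Turtle sketch of a domain advertising such a service is shown below. The prefix URI, property names (''hasService'', ''hasAccessMethod'', and so on), and instance names are assumptions made for the example, not verbatim terms from ''domain.owl''.
{{{
# Minimal illustrative sketch; prefix URIs, property names and instances
# are assumptions for the example, not verbatim domain.owl terms.
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix dom: <http://example.org/domain.owl#> .

# A substrate domain and the network service it advertises.
<#renci-domain>  rdf:type  dom:Domain ;
    dom:hasService  <#renci-service> .

<#renci-service>  rdf:type  dom:NetworkService ;
    dom:hasAccessMethod         dom:ORCAActor ;        # or a GENI AM API endpoint
    dom:hasTopologyAbstraction  dom:NodeAbstraction ;  # node-level abstraction
    dom:hasResourceType         <#vm-pool-32> ;        # e.g., 32 VMs available
    dom:hasAggregateManager     <https://am.example.org/orca> .  # assumed AM URL
}}}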
[[Image(ndl-domain.png)]]

   '''b. Compute Resource Description:'''
 * [source:orca/trunk/network/src/main/resources/orca/network/schema/compute.owl compute.owl]
 * [source:orca/trunk/network/src/main/resources/orca/network/schema/geni.owl geni.owl]

       The top-level class hierarchy is shown in the attached image ''ndl-compute.png''. Three subclass hierarchies are defined; GENI-related subclasses are defined in ''geni.owl'':
        * Features:
                * VMM (Xen, KVM, VServer, or another virtualization technology, including None)
                * OS (Linux, Windows)
                * !ServerSize: quantifies the size of a server; defined by the size of CPU, memory, and storage.
                      * !MediumServerSize
                      * !LargeServerSize
                      * !SmallServerSize
                * Vendor
        * !ComputeElement
                * !ClassifiedServer: defined by its server size and by popular cloud provisioning technologies. Each subclass is distinguished by the number and type of CPU cores, the amount of RAM or storage, and support for different virtualization technologies.
                     * !LargeServer
                     * !MediumServer
                     * !SmallServer
                     * !UnitServer
                     * EC2M1Large
                     * EC2C1Medium
                     * EC2M1Small
                     * !PlanetLabNode
                     * !ProtoGeniNode

                For example, ''EC2M1Small'' is defined to have an ''EC2CPUCore'' (an EC2 core, equivalent to a 1.7GHz AMD core), a CPU core count of 1, 128M of memory, 2G of storage, and support for both ''Xen'' and ''KVM'' virtualization; see [source:orca/trunk/network/src/main/resources/orca/network/schema/geni.owl geni.owl] for details, and the sketch below. This allows a server or server-cloud instance to host a VM instance via the property ''VM-in-Server'', which is a sub-property of ''!AdaptationProperty''.
                Note that these classes are both subclasses of the class ''!ClassifiedServer'' and instance members of the class ''!ClassifiedServerInstance''.
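                A rough Turtle rendering of this ''EC2M1Small'' description is sketched below. The prefixes and the exact property names (''hasCPU'', ''cpuCoreCount'', ''memorySize'', ''storageSize'') are assumptions for readability, not the verbatim content of ''geni.owl''.
{{{
# Rough illustrative sketch of EC2M1Small; prefixes and property names
# are assumptions, not verbatim geni.owl content.
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix comp: <http://example.org/compute.owl#> .
@prefix geni: <http://example.org/geni.owl#> .

geni:EC2M1Small  a  owl:Class ;
    rdfs:subClassOf  comp:ClassifiedServer ;
    # Value restrictions pinning down the server class:
    rdfs:subClassOf  [ a owl:Restriction ;
                       owl:onProperty comp:hasCPU ;
                       owl:hasValue   geni:EC2CPUCore ] ;     # 1.7GHz AMD equivalent
    rdfs:subClassOf  [ a owl:Restriction ;
                       owl:onProperty comp:cpuCoreCount ;
                       owl:hasValue   "1"^^xsd:integer ] ;
    rdfs:subClassOf  [ a owl:Restriction ;
                       owl:onProperty comp:memorySize ;
                       owl:hasValue   "128"^^xsd:integer ] ;  # MB
    rdfs:subClassOf  [ a owl:Restriction ;
                       owl:onProperty comp:storageSize ;
                       owl:hasValue   "2"^^xsd:integer ] .    # GB

# The same name is also an instance member of ClassifiedServerInstance
# (OWL-Full style punning), so a server cloud can refer to it directly.
geni:EC2M1Small  a  comp:ClassifiedServerInstance .
}}}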
   [[Image(ndl-geni-ComputeElement.png)]]

                * !ClassifiedServerInstance: has the ''!ClassifiedServer'' classes above as its instance members. This enables a specific server cloud to declare which VM node types it can support.

                * !ComputeComponentElement:
                    * VMImage: an image that is or can be instantiated on the component.
                    * CPU: {EC2CPUCore}, a CPU type equivalent; e.g., the Amazon/EC2 CPU core equivalent is a 1.7GHz AMD core.
                    * !DiskImage
                    * Memory
        *  !NetworkElement
              * Server
                  * !ServerCloud
                      * !PlanetLabCluster
                      * !ProtoGeniCluster
                      * !EucalyptusCluster

         A specific server cluster is defined by ''virtualize'' property constraints on the ''!ClassifiedServer''. For example, ''!PlanetLabCluster'' ''virtualize'' ''!PlanetLabNode'': this indicates that a cluster belonging to the class ''!PlanetLabCluster'' can produce nodes of type ''!PlanetLabNode'', as sketched below.
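         A minimal Turtle sketch of this constraint, again with assumed prefixes and property names, and with ''virtualize'' pointing at the node class via punning as above:
{{{
# Illustrative sketch of the virtualize constraint; prefixes assumed.
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix comp: <http://example.org/compute.owl#> .
@prefix geni: <http://example.org/geni.owl#> .

# A PlanetLab cluster is a server cloud that can produce PlanetLabNode slivers.
geni:PlanetLabCluster  rdfs:subClassOf  comp:ServerCloud ;
    rdfs:subClassOf  [ a owl:Restriction ;
                       owl:onProperty comp:virtualize ;
                       owl:hasValue   geni:PlanetLabNode ] .
}}}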
[[Image(ndl-geni-Server.png)]]
   '''c. Network topology and resource description''': an extension of the original NDL.
         * [source:orca/trunk/network/src/main/resources/orca/network/schema/topology.owl topology.owl]
         * [source:orca/trunk/network/src/main/resources/orca/network/schema/layer.owl layer.owl]
         * [source:orca/trunk/network/src/main/resources/orca/network/schema/ethernet.owl ethernet.owl], [source:orca/trunk/network/src/main/resources/orca/network/schema/ip4.owl ip4.owl], [source:orca/trunk/network/src/main/resources/orca/network/schema/dtn.owl dtn.owl]
== 2. Substrate delegation model ==
        This is the abstract model used to advertise an aggregate's resources and services externally. ORCA AMs use this representation to delegate advertised resources to ORCA broker(s). AMs may also use this representation to describe resources in response to queries (e.g., ListResources) or to advertise resources to a GENI clearinghouse. This model should allow multiple abstraction levels, as different AMs may want to expose different levels of resource and topology description of their substrates. The model is generated automatically from the substrate description model (above) when a substrate stands up. The model contains two types of information:
      * Domain network service.
             * !AccessMethod: e.g., an ORCA actor or the GENI AM API.
             * Topology: the topology abstraction level exposed externally. Currently only the node abstraction is defined.
             * !ResourceType: inferred from the list of defined available resource label sets (e.g., 32 VMs or 100 VLANs) that will be delegated to the broker.
             * !AggregateManager: e.g., the URL of the AM.
      * Domain topology abstraction: currently, the domain's topology is abstracted to a single node, a network device with the following information (see the sketch after this list):
               * Switching matrix: switching capability (Ethernet, IP, etc.) and label swapping capability (i.e., VLAN translation for Ethernet switching)
               * Border interfaces: connectivity to neighboring domains, bandwidth, and the available label set (e.g., VLANs)
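      The sketch below illustrates such a node abstraction in Turtle; the prefixes, property names, and values are assumptions chosen for the illustration, not verbatim schema terms.
{{{
# Illustrative sketch of a domain abstracted to one node; all names assumed.
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix topo: <http://example.org/topology.owl#> .
@prefix lay:  <http://example.org/layer.owl#> .
@prefix eth:  <http://example.org/ethernet.owl#> .

<#domain-node>  a  topo:Device ;                 # the whole domain as one device
    topo:hasSwitchMatrix  [ a topo:SwitchMatrix ;
        lay:switchingCapability  eth:EthernetElement ;   # Ethernet switching
        lay:swappingCapability   eth:EthernetElement ] ; # i.e., VLAN translation
    topo:hasInterface  <#if-to-neighbor> .

# A border interface toward a neighboring domain.
<#if-to-neighbor>  a  topo:Interface ;
    topo:connectedTo      <#neighbor-domain-if> ;
    lay:bandwidth         "1000000000"^^xsd:integer ;  # 1 Gb/s
    eth:availableLabelSet <#vlans-100-200> .           # e.g., VLAN range 100-200
}}}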
== 3. Slice request model ==
 * [source:orca/trunk/network/src/main/resources/orca/network/schema/request.owl request.owl]

        This is the abstract model used to represent user resource requests.
        A typical request might be a virtual topology with specific resources at the edges, generated by unspecified experiment control tools. In our implementation, the representation is produced by a controller module in the slice manager (SM) after interpreting the user's request in an ad hoc format.
        In the current implementation for the multi-domain setting, this SM controller performs inter-domain path or topology computation based on a description of the global topology and automatically breaks the user request into sub-requests for each domain.
        * The topology request is defined as a collection of bounded or unbounded connections. An end node of a connection can specify the amount of the requested edge resource type (e.g., a number of VMs or other slivers), as sketched below.
        * In redeeming a ticket at a specific site, the SM controller dynamically creates a sub-request asking for a sliver.
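        For illustration only, a bounded point-to-point request might look roughly like the following; ''request.owl'' defines the actual vocabulary, and all names here are assumptions.
{{{
# Illustrative request sketch; vocabulary names are assumptions.
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix req: <http://example.org/request.owl#> .

<#my-request>  a  req:Reservation ;
    req:hasConnection  <#c1> .

# One connection between two edge resource groups.
<#c1>  a  req:Connection ;
    req:bandwidth    "100000000"^^xsd:integer ;  # 100 Mb/s
    req:hasEndPoint  <#groupA> , <#groupB> .

<#groupA>  a  req:ComputeElement ;
    req:numUnits  "4"^^xsd:integer ;     # four VMs at this edge
    req:inDomain  <#some-site> .         # bounded: pinned to a domain

<#groupB>  a  req:ComputeElement ;
    req:numUnits  "2"^^xsd:integer .     # unbounded: site left to the controller
}}}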
== 4. Slice reservation model (Not implemented yet) ==
        This is the abstract model used by ORCA brokers to return resource tickets to the SM controller. Each ticket contains a specifier for one or more slivers from a single AM named in the ticket. The SM controller obtains the slivers by redeeming these tickets to the AMs ('''redeem tickets'''). This model must describe the interdependency relationships among the slivers so that the SM controller can drive the stitching of each sliver into the slice. Currently, ORCA uses the substrate delegation model: the ticket contains a resource type and unit count for the resources promised in the ticket, together with the complete original advertisement for the containing resource pool from the AM.
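        Because this model is not implemented, the following is purely a hypothetical sketch of a ticket carrying a resource type, a unit count, a pointer to the original advertisement, and a sliver dependency; every name in it is invented for the example.
{{{
# Hypothetical ticket sketch; the model is not implemented and all names invented.
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix tkt: <http://example.org/ticket.owl#> .

<#ticket-17>  a  tkt:Ticket ;
    tkt:issuedBy       <#broker> ;
    tkt:redeemableAt   <#site-am> ;             # the single AM named in the ticket
    tkt:resourceType   <#vm> ;
    tkt:unitCount      "4"^^xsd:integer ;
    tkt:advertisement  <#original-domain-ad> ;  # complete original AM advertisement
    tkt:dependsOn      <#ticket-16> .           # inter-sliver dependency for stitching
}}}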
== 5. Slice manifest model (Not implemented yet) ==
        This is the abstract model used to describe the access method, state, and other post-configuration information of the reserved slivers. Currently, the required information (e.g., IP addresses, node keys) is returned as type-specific properties on the lease and is not integrated into the semantic web representations.