
Deploying a Service Manager

Overview

The service manager is the actor representing ORCA users. It can run a number of controller plugins that implement different resource management policies. Controller plugins can expose different interfaces, programmatic or GUI; currently the two most common are the GENI AM API and the ProtoGENI AM API, both of which the ORCA XMLRPC Controller plugin implements. By installing an ORCA Service Manager actor in a container, you gain access to this controller and can request resources from other ORCA actors.

Installation can be performed on any host with public Internet access (to allow communication with other ORCA actors and the remote actor registry).

Deploying an ORCA Container with a Service Manager actor

Running controller plugins

Log in to the ORCA portal of the container you just deployed (typically http://hostname:11080/orca), go to the 'User' tab, and click 'Start Controller'. Select 'XMLRPC Controller' from the menu and click 'Create'.

Using GENI AM API controller plugin

This plugin exports a GENI AM API-compliant XMLRPC interface that can be exercised using the Python scripts located under $ORCA_SRC/controllers/xmlrpc/resources/scripts.
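
The interface can also be driven directly, without the bundled scripts. Below is a minimal sketch, assuming the controller's endpoint is at https://hostname:11443/orca/xmlrpc and that it accepts GENI-style SSL client-certificate authentication; the URL, port, and certificate paths are placeholders to substitute with the values for your container.

    #!/usr/bin/env python
    # Minimal sketch: connect to the controller's GENI AM API endpoint and
    # call GetVersion(). Endpoint URL and certificate paths are assumptions.
    import ssl
    import xmlrpc.client

    AM_URL = "https://hostname:11443/orca/xmlrpc"  # assumed endpoint

    # GENI AM API calls are typically authenticated with an SSL client certificate.
    ctx = ssl.create_default_context()
    ctx.load_cert_chain(certfile="user-cert.pem", keyfile="user-key.pem")  # placeholder paths

    am = xmlrpc.client.ServerProxy(AM_URL, context=ctx)
    print(am.GetVersion())  # standard GENI AM API call; reports supported API versions

The scripts below wrap the same calls and are the easiest way to get started.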

  1. Open a terminal and navigate to $ORCA_SRC/controllers/xmlrpc/resources/scripts, which contains simple Python client scripts that list resources, create slivers, check sliver status, delete slivers, etc. Relative to $ORCA_SRC, do
    $ cd controllers/xmlrpc/resources/scripts
    
  2. To list the available resources, run the following command. It outputs the available resources as an abstract NDL description of the Eucalyptus substrate.
    $ python ListResources.py
    
  3. To create a sliver, run the 'createSliver' script. The provided script reads an NDL resource request from a file called 'id-mp-Request2.rdf'. The example request creates three virtual machines connected by three internal VLANs in a triangle topology, with each VM having two interfaces and talking to each of the other two VMs over a separate interface. (For the equivalent direct XMLRPC calls, see the sketch after this list.)
    $ python createSliver.py
    ...
    [   Slice UID: 05d30971-2b1c-4a52-817c-bc192a878a8b | Reservation UID: 5063cd58-75cf-4c58-8824-cf86d329b9d9 | Resource Type: unc.vm | Resource Units: 1 ] 
    [   Slice UID: 05d30971-2b1c-4a52-817c-bc192a878a8b | Reservation UID: 7c1185e0-337b-490d-92f2-119d7f96a367 | Resource Type: unc.vm | Resource Units: 1 ] 
    [   Slice UID: 05d30971-2b1c-4a52-817c-bc192a878a8b | Reservation UID: b9550c8e-de60-4504-8eb2-563d96f18c35 | Resource Type: uncEuca.vlan | Resource Units: 1 ] 
    [   Slice UID: 05d30971-2b1c-4a52-817c-bc192a878a8b | Reservation UID: 4f0ddc63-6e1a-4b55-8547-c2522abd7be1 | Resource Type: uncEuca.vlan | Resource Units: 1 ] 
    [   Slice UID: 05d30971-2b1c-4a52-817c-bc192a878a8b | Reservation UID: c7816bf2-0555-4e59-8502-6ed203dede9d | Resource Type: unc.vm | Resource Units: 1 ] 
    [   Slice UID: 05d30971-2b1c-4a52-817c-bc192a878a8b | Reservation UID: 33f84b75-0874-444c-9809-a017b799a868 | Resource Type: uncEuca.vlan | Resource Units: 1 ]
    ...
    
  4. Once createSliver returns, it outputs the slice UID. Use this UID to operate on the instantiated sliver - to check its status, delete it, etc. Open a file called 'sliceID.txt' and paste the slice UID into it. For the example above,
    $ cat sliceID.txt
    05d30971-2b1c-4a52-817c-bc192a878a8b
    $
    
  5. To check the status of the sliver, run the 'sliverStatus' script. The script assumes that the slice UID is in the file 'sliceID.txt' and outputs the status of each individual resource as well as the overall sliver status.
    $ python sliverStatus.py
    
  6. To delete the sliver, run the 'deleteSliver' script. The script also assumes that the slice UID is in the file 'sliceID.txt'.
    $ python deleteSliver.py
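
The same create/status/delete cycle can be driven directly over XMLRPC. The sketch below assumes the controller follows the GENI AM API v1 call signatures (CreateSliver, SliverStatus, DeleteSliver); the endpoint URL, certificate paths, and slice URN are placeholders, and the bundled scripts remain the reference client.

    #!/usr/bin/env python
    # Sketch of the sliver lifecycle against the controller's GENI AM API
    # endpoint. All names marked as placeholders are assumptions.
    import ssl
    import xmlrpc.client

    AM_URL = "https://hostname:11443/orca/xmlrpc"             # assumed endpoint
    SLICE_URN = "urn:publicid:IDN+exogeni.net+slice+mySlice"  # hypothetical slice URN

    ctx = ssl.create_default_context()
    ctx.load_cert_chain(certfile="user-cert.pem", keyfile="user-key.pem")  # placeholder paths
    am = xmlrpc.client.ServerProxy(AM_URL, context=ctx)

    credentials = []  # GENI credentials, if your controller requires them

    # Submit the NDL request used in step 3 above.
    with open("id-mp-Request2.rdf") as f:
        request_ndl = f.read()
    print(am.CreateSliver(SLICE_URN, credentials, request_ndl, []))

    # Check the sliver state, then tear the sliver down.
    print(am.SliverStatus(SLICE_URN, credentials))
    print(am.DeleteSliver(SLICE_URN, credentials))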
    
