Version 99 (modified by anirban, 8 years ago)



Software prerequisites

Build ORCA from source

Understanding container configuration: ORCA_HOME, ORCA_LOCAL, and all that

Actor configuration

Deploying GENI-ORCA

ORCA actors run within containers (JVMs). We assume that your ORCA container is a Tomcat web application server.

Install Tomcat and create the initial database

1. Download the Tomcat tar file. ORCA uses a customized Tomcat, so using standard Tomcat is not recommended.

2. Create a directory for the install. We install into /opt/orca/tomcat and put the ORCA-related configuration files into /opt/orca. Since the configuration files are there, /opt/orca is ORCA_HOME. Note: the ORCA source may be somewhere else.

3. Untar the contents of the tomcat tar file into its install directory (e.g., $ORCA_HOME).
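Steps 2-3 can be sketched as follows. The tarball name below is hypothetical (use the file you actually downloaded), and the sketch uses a scratch directory so it can be tried safely; a real install would use ORCA_HOME=/opt/orca.

```shell
# Sketch of steps 2-3 using a scratch install directory.
# A real install would use ORCA_HOME=/opt/orca and the tarball you downloaded.
export ORCA_HOME=/tmp/orca-demo
mkdir -p "$ORCA_HOME"

# Stand-in for the downloaded customized Tomcat tar file (hypothetical name).
mkdir -p /tmp/fake-dist/tomcat/conf
touch /tmp/fake-dist/tomcat/conf/server.xml
tar -czf /tmp/orca-tomcat.tar.gz -C /tmp/fake-dist tomcat

# Step 3: untar the contents into the install directory ($ORCA_HOME).
tar -xzf /tmp/orca-tomcat.tar.gz -C "$ORCA_HOME"
ls "$ORCA_HOME/tomcat/conf"    # prints: server.xml
```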

4. Edit the Tomcat startup scripts (under tomcat/) to set CATALINA_HOME to point to the Tomcat directory. In the same scripts, also set ORCA_HOME to the directory with the container configuration files (e.g., /opt/orca):

export ORCA_HOME=/opt/orca

5. Edit $ORCA_HOME/tomcat/conf/server.xml. Ensure that port 11080 is used by tomcat and not the default 8080:

    <!-- Define a non-SSL HTTP/1.1 Connector on port 8080 -->
    <!-- Changing default port to 11080 for conflict with Eucalyptus -->
    <Connector port="11080" maxHttpHeaderSize="8192"
               maxThreads="150" minSpareThreads="5" maxSpareThreads="75"
               enableLookups="false" redirectPort="8443" acceptCount="100"
               connectionTimeout="20000" disableUploadTimeout="true" />
    <!-- Note : To disable connection timeouts, set connectionTimeout value
     to 0 -->
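If you prefer to script this edit, a sed one-liner can change the connector port. The sketch below demonstrates it on a minimal stand-in server.xml; in a real install you would point the sed command at $ORCA_HOME/tomcat/conf/server.xml.

```shell
# Create a minimal stand-in server.xml for demonstration purposes only;
# run the sed command against $ORCA_HOME/tomcat/conf/server.xml for real.
cat > /tmp/server.xml <<'EOF'
<Server>
  <Connector port="8080" maxHttpHeaderSize="8192" />
</Server>
EOF

# Change the default connector port 8080 to 11080 (in place, keeping a .bak backup).
sed -i.bak 's/port="8080"/port="11080"/' /tmp/server.xml
grep 'Connector port' /tmp/server.xml
```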

6. Edit tomcat/conf/server.xml and tomcat/server/classes/webauth.xml to change references to '/shirako' to the new location of the directory you created (e.g., /opt/orca). Only do this step if you are using webauth authentication. If you are not sure, then you are not. Webauth authentication requires significant setup from your identity provider.

7. Start Tomcat by executing its startup script (under tomcat/).

8. Make sure MySQL is running on your system.

NOTE: On Mac OS X you can use fink to install mysqld and then run:

$ cd /sw/bin
$ sudo ./mysqld_safe

This will keep mysqld running as long as you don't exit this shell.

9. Create a MySQL database and populate it. These instructions help set up a local Tomcat container with an example inventory of substrate that allows ORCA to run in emulation mode. This step is highly recommended before running with real resources, to make sure your setup is correct.
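A minimal sketch of creating the database follows. The database name, user name, and password here are assumptions, not ORCA defaults; consult the database setup instructions for the real schema and values.

```shell
# Hypothetical sketch: database name 'orca', user 'orca', and password
# 'changeme' are assumptions -- substitute the values from the real
# database setup instructions.
cat > /tmp/create-orca-db.sql <<'EOF'
CREATE DATABASE orca;
GRANT ALL ON orca.* TO 'orca'@'localhost' IDENTIFIED BY 'changeme';
EOF

# Feed the script to a running MySQL server, e.g.:
#   mysql -u root -p < /tmp/create-orca-db.sql
cat /tmp/create-orca-db.sql
```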

10. Give ANT more heap space. The following can be added to ~/.profile:

$ export ANT_OPTS="-XX:MaxPermSize=512m -Xms40m -Xmx1024m"

Generating the GENI-ORCA container configuration files

The actor-independent configuration files for the container reside in ORCA_HOME. These config files include a security configuration, with certificates and a keystore.

1.2 and following releases (Bella, Camano, etc.)

Starting from release 1.2, ORCA no longer packages the config/ or runtime/ directories into the webapp. Note also that the initial release of Bella (2.0) has a directory called webapp2/ for the new configuration framework. If your source tree has a directory webapp2/, use webapp2/ instead of webapp/ in the instructions below.

1. Generate a security configuration.

$ cd tools/config
$ ant security.create.admin.config

This command creates a directory called runtime/ in the current directory (tools/config) and places the generated files in there. NOTE: if there is an existing runtime/ directory, first move it out of the way.

2. Define ORCA_HOME and copy the configuration directories to $ORCA_HOME.

$ export ORCA_HOME=/opt/orca
$ cp -r runtime $ORCA_HOME
$ cp -r scripts $ORCA_HOME

3. Copy the container configuration files under webapp/config to $ORCA_HOME, then customize them if necessary (for emulation nothing should be necessary):

$ cd $ORCA_SRC/webapp
$ cp -r config $ORCA_HOME
$ vi $ORCA_HOME/config/

4. Configure actors for your container by customizing a config.xml. For emulation mode, copy config-all-local.xml to config.xml. The config.xml is then installed to ORCA_LOCAL by generating and deploying the webapp (see below). Edit the new config.xml so that the NDL substrate file property of the Euca site actor points to the right file: download the attached uncvmsite.rdf NDL file and put it under $ORCA_HOME/ndl (figuring out the property name in the file and the appropriate value for it is left as an exercise to the reader):

$ cd $ORCA_SRC/webapp/actor_configs
$ cp config-all-local.xml config.xml
$ vi config.xml

NOTE: the config-all-local.xml configuration file describes three actors (a broker, a slice manager, and a Eucalyptus site authority) that go into the container. The authority actor automatically delegates 10 instances of virtual machines and 1000 instances of internal vlans to the broker.

1.1-alpha and previous releases

If your filesystem supports symbolic links, simply use the provided ant task:

$ ant prepare.use

This task will generate the admin security configuration and place links to it in the relevant directories. It will also initialize and link other directories needed to build/test drivers.

If your file system does not support symbolic links, or you want more control over the process, try the following steps:

1. Create the security configuration by going to tools/config and running

$ cd tools/config
$ ant security.create.admin.config

NOTE: Make sure there is no existing runtime directory before you run this command unless you know what you are doing.

2. Copy or softlink the resulting runtime/ directory to the orca/webapp directory.

Generating the GENI-ORCA web application

1. Create the webapp with this command:

$ cd $ORCA_SRC/webapp
$ mvn package

2. Deploy into an already running tomcat instance on the local machine by typing

$ cd $ORCA_SRC/webapp
$ ant deploy

Before packaging the webapp, the webapp config directory should be populated with configuration files for ORCA_LOCAL, as described above. Before deploying the webapp, the tomcat container should be configured with a proper ORCA_HOME populated with configuration files as described above.

3. Log into the portal at http://localhost:11080/orca with username "admin" and no password, unless you changed the defaults.

NOTE: as of Bella 2.0 container administrator login/password credentials are located in

4. If the app did not come up as expected, then the container (ORCA_HOME) and actors (ORCA_LOCAL) are likely misconfigured. There is no easy way to debug configuration errors. Check the container logs in tomcat/logs/orca*.log. You can add more information to these logs by editing the log4j properties in ORCA_HOME/config/, e.g., to add "%C %M %L" to the ConversionPattern to print the class, method, and line number for each log message. It is also possible to load config.xml directives into the running web GUI, under the admin tab, which gives additional error reporting through the GUI. Config processing is not atomic or idempotent: if config processing fails, some directives may have completed successfully.
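For example, a PatternLayout ConversionPattern carrying the class, method, and line number might look like the fragment below. The appender name 'file' is an assumption; match it to the appender actually defined in your log4j properties file.

```properties
# Hypothetical appender name 'file'; use the name defined in your
# log4j properties file. %C %M %L add class, method, and line number.
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ISO8601} %p %C %M %L - %m%n
```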

Feature: in multi-container configurations, claim processing for exported tickets (advertisements/delegations) must be done manually after the container stands up, since it requires inter-actor communication. Go to the broker tab, create an inventory slice, and claim the delegation using the GUID for the exported ticket, which is visible under the site tab (view exported resources). This does not apply to the emulation mode example.

Feature: if you restart tomcat, it will recover your previous container state. If you redeploy, or undeploy and redeploy, it will reinitialize the container to its fresh state, and delete state about existing slices and reservations from the database. Note also that tomcat does not always stop cleanly: sometimes it is necessary to kill the process. This case may lead to problems on recovery due to an unidentified bug. If you don't need to recover, then re-deploy. If you want to be sure your tomcat is clean, you might first remove the tomcat/webapps/orca* directories before restarting the server and redeploying.
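The clean-start suggestion above can be sketched as follows, using a scratch ORCA_HOME for illustration; in a real install you would stop Tomcat first and operate on your actual container directory.

```shell
# Sketch of wiping deployed ORCA webapps before redeploying.
# Scratch ORCA_HOME for illustration; stop Tomcat before doing this for real.
export ORCA_HOME=/tmp/orca-clean-demo
mkdir -p "$ORCA_HOME/tomcat/webapps/orca" "$ORCA_HOME/tomcat/webapps/ROOT"

# Remove the deployed orca* webapp directories, leaving everything else.
rm -rf "$ORCA_HOME"/tomcat/webapps/orca*
ls "$ORCA_HOME/tomcat/webapps"    # prints: ROOT
```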

Quick Start emulation mode test

Videos attached to this page show how the system works once it is installed (no audio). A simple test with the default actor configuration described above is to:

  1. Login to the ORCA portal
  2. Go to Admin tab and click 'View Actors'. Verify that 3 actors are Online.
  3. Go to the User tab (Slice Manager). Click on 'Create Reservation'. Select 'Eucalyptus Virtual Machine' from the drop-down menu corresponding to 'Resource Pool'. Change the number of Units from 1 to something else (for anything greater than 10 you will receive only 10 units, as that is the size of the initial delegation).
  4. Click the 'Create' button.
  5. Click 'View All Reservations'. Click your browser's refresh button until the reservation goes into the 'Active' state.
  6. You can optionally close the reservation by selecting the check-box next to it and selecting 'Close' from the Action menu.

For a more sophisticated example that involves manually delegating resources from the site to the broker prior to creating a reservation through a slice manager:

  1. Stop tomcat
  2. Remove $ORCA_HOME/state_recovery.lock file (to make sure we wipe out inventory on the restart)
  3. Remove Orca webapp from $ORCA_HOME/tomcat/webapps
    $ cd $ORCA_HOME/tomcat/webapps
    $ rm -rf orca*
  4. Start tomcat again
  5. Edit $ORCA_SRC/webapp/actor_configs/config.xml file by commenting out the <rset> stanza in the topology section at the bottom:
                                    <from name="ndl-broker" type="broker" />
                                    <to name="euca-vm-site" type="site" />
  6. Package and deploy the webapp:
    $ cd $ORCA_SRC/webapp
    $ mvn package; ant deploy
  7. Login to ORCA portal
  8. Go to the 'site' tab. Click on 'Create Slice'. Give the slice a name (no whitespace). Click 'Create'.
  9. Click on 'Export Resources'. Enter 10 (the maximum available) for the number of units. Click 'Create'. This creates a delegation from the site to the broker.
  10. Click on 'View Exported Resources' - you should see your new slice. Click the 'manage' link on the right.
  11. In the screen with 'Reservation Details', copy the Reservation ID to the clipboard; we need to give this to the broker to claim this reservation.
  12. Claim the reservation on the broker side. Go to the 'broker' tab. Click on 'Claim Resources'. Paste the reservation ID into the appropriate field and click 'Claim'.
  13. Validate that the claim succeeded by clicking on 'View Inventory'.
  14. Follow steps 3, 4 and 5 of the previous example to create a user reservation.

Testing the sample XML-RPC controller in emulation mode

This section summarizes how to test the xml-rpc controller in emulation mode. It assumes that you have already deployed the orca webapp in emulation mode into a running tomcat container using the above instructions. The steps are the following.

BIG FAT NOTE: Starting with Camano 3.0, the Python tools have been revised to take command line parameters like server URL, slice ID, etc. These are no longer hard-coded or saved into files. Execute any of the scripts with the -h option to find out all possible command line parameters. The semantics of the scripts remain unchanged.

  1. Login to the ORCA portal
  2. Go to Admin tab and click 'View Actors'. Verify that 3 actors are Online. Go to Broker tab and click 'View Inventory'. Verify that there are 10 virtual machines and 1000 internal vlans.
  3. Go to User tab (Slice Manager). Click on 'Start Controller'. Select 'XML-RPC Controller' from the drop-down menu corresponding to 'Controller'.
  4. Click the 'Create' button. This should start the xml-rpc controller and the xml-rpc server that responds to xml-rpc clients on a port (default: 20001).
  5. Open a terminal and navigate to ORCA_HOME/controllers/xmlrpc/resources/scripts, which contains simple Python client scripts that invoke methods to list resources, create slivers, check sliver status, delete slivers, etc. Relative to ORCA_HOME, do
    $ cd controllers/xmlrpc/resources/scripts
  6. To list the available resources, run the following command. This should output the available resources in the form of an abstract NDL description of the Eucalyptus substrate.
    $ python
  7. To create a sliver, run the 'createSliver' script. The provided script reads an NDL resource request from a file called 'id-mp-Request2.rdf'. The example request creates 3 virtual machines connected by 3 internal vlans in a triangle topology; each VM has two interfaces and talks to each of the other two VMs on a separate interface.
    $ python
    [   Slice UID: 05d30971-2b1c-4a52-817c-bc192a878a8b | Reservation UID: 5063cd58-75cf-4c58-8824-cf86d329b9d9 | Resource Type: unc.vm | Resource Units: 1 ] 
    [   Slice UID: 05d30971-2b1c-4a52-817c-bc192a878a8b | Reservation UID: 7c1185e0-337b-490d-92f2-119d7f96a367 | Resource Type: unc.vm | Resource Units: 1 ] 
    [   Slice UID: 05d30971-2b1c-4a52-817c-bc192a878a8b | Reservation UID: b9550c8e-de60-4504-8eb2-563d96f18c35 | Resource Type: uncEuca.vlan | Resource Units: 1 ] 
    [   Slice UID: 05d30971-2b1c-4a52-817c-bc192a878a8b | Reservation UID: 4f0ddc63-6e1a-4b55-8547-c2522abd7be1 | Resource Type: uncEuca.vlan | Resource Units: 1 ] 
    [   Slice UID: 05d30971-2b1c-4a52-817c-bc192a878a8b | Reservation UID: c7816bf2-0555-4e59-8502-6ed203dede9d | Resource Type: unc.vm | Resource Units: 1 ] 
    [   Slice UID: 05d30971-2b1c-4a52-817c-bc192a878a8b | Reservation UID: 33f84b75-0874-444c-9809-a017b799a868 | Resource Type: uncEuca.vlan | Resource Units: 1 ]
  8. Once createSliver returns, it outputs the slice UID. Use the slice UID to operate on the instantiated sliver - to check status, delete it, etc. Open a file called 'sliceID.txt' and paste the slice UID into it. For the above example,
    $ cat sliceID.txt
  9. To check the status of the sliver, run the 'sliverStatus' script. This script assumes that the slice UID is in the file 'sliceID.txt'. This should output the status of each individual resource and the overall sliver status.
    $ python
  10. To delete the sliver, run the 'deleteSliver' script. This script assumes that the slice UID is in the file 'sliceID.txt'.
    $ python
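Step 8 above can be sketched as follows, using the slice UID from the example createSliver output; the scripts read the UID from sliceID.txt in the current directory.

```shell
# Save the slice UID printed by createSliver so the sliverStatus and
# deleteSliver scripts can read it from sliceID.txt.
# The UID below is the one from the example output above.
echo "05d30971-2b1c-4a52-817c-bc192a878a8b" > sliceID.txt
cat sliceID.txt    # prints: 05d30971-2b1c-4a52-817c-bc192a878a8b
```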

Different methods to test new controllers

1. From the portal: Deploy the webapp; the controller is packaged with it (look at the pom files in the existing xmlrpc controller to see how it is packaged). In the user tab of the portal, click "start controller". Your controller should show up on the menu; click OK, and your controller should start up. Run 'tail -f logs/orca.log' inside the tomcat directory to see what's going on. The log4j output will be in logs/orca.log and System.out output will be in logs/catalina.out (though the mandate is not to use System.out for debugging). This is the recommended way to test controllers.

2. From inside your IDE: a number of setup steps are needed:
  (a) In tools/config, run 'ant security.create.admin.config' if you haven't already done so.
  (b) cd tools/cmdline; ln -s ../config/runtime runtime
  (c) Run 'mvn install' at the root of the source tree, then 'ant get.packages' in tools/cmdline.
  (d) Prepare a config file. This is similar to config.xml in webapp/actor_configs/. Some old examples reside in tools/cmdline/tests/*.xml. You might first try the one in webapp/actor_configs/config.xml.
  (e) Inside your IDE, set the working directory to tools/cmdline and run the YourControllerTest file with the following arguments: config=<path_to_config_file> do.not.recover=true <how_long_you_want_the_test_to_run, e.g., 600000>. You should see program control reach the runTest() method in YourControllerTest.

3. From the command line: Follow steps 2(a) through 2(d). Add your controller test as an ant task in tools/cmdline/ant/tests.xml (see the examples there, like the test.ben.interdomain task). Then run 'ant test.yourcontroller' from tools/cmdline.