Version 14 (modified by yxin, 10 years ago)



1. Which directories/code to study? For instance, those for “mapper, setup/teardown, join/leave, policy plugins”?

ANSWER: All core Orca interfaces reside in the orca.shirako.api namespace.

Basic implementations of these interfaces can be found in the orca.shirako.common and orca.policy.common namespaces.

mapper == policy. Study the hierarchy of the orca.shirako.api.IPolicy interface.

setup/teardown and join/leave handlers -> Study orca.shirako.plugins.config.Config and its inheritors.

actors -> Study the IActor interface.

orca.shirako.api can be found here:


orca.shirako.plugins.config can be found here:


orca.policy.core can be found here:


The Orca kernel is implemented in the orca.shirako.kernel namespace.

handlers/* contains examples of handlers

drivers/* contains a set of drivers for the various hardware we use; drivers/sample contains a sample driver. We also provide a driver template and a script to simplify starting work on a driver. You can find instructions for using the template here:

controllers/* shows examples of controllers

2. How can we instantiate ORCA’s actors?


See the configuration guide. In general, the guides on this page provide essential information needed to start with the project:

Note: some of the information on this page may be stale. We are currently in the process of updating it to reflect our recent changes.


1. Protocol centric? How do we write actors, or borrow/extend ORCA’s customized interfaces, to be compatible with the ORCA implementation?


1) orca.shirako.api: actors and policy plugins
2) orca.controller.x: guest controllers
3) orca.handlers.x: handlers (handler.xml)
4) orca.driver.x: drivers


1. How can we customize the “property list” for a new substrate, e.g., to include an RSpec?

Each ResourceSet has a ResourceData object, which consists of four property lists:

  • request properties are sent to brokers
  • configuration properties are sent to site authorities
  • local properties are kept locally and used by policies and handlers
  • resource properties are pushed down

You can add your custom properties to some of the property lists. For example, you could serialize the whole resource specification into a String and attach it as a single property. Alternatively, you could break the specification into multiple properties.
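The two approaches above can be sketched with plain java.util.Properties (which is how ORCA-style property lists are commonly represented in Java). This is an illustrative sketch only: the property keys `request.rspec`, `request.node.count`, and `request.vlan.tag` are hypothetical names, not actual ORCA keys.

```java
import java.util.Properties;

public class PropertyDemo {
    // Option 1: attach the whole resource specification, serialized to a
    // String, as a single custom property. (Key name is hypothetical.)
    static Properties attachWhole(String rspecXml) {
        Properties request = new Properties();
        request.setProperty("request.rspec", rspecXml);
        return request;
    }

    // Option 2: break the specification into multiple individual properties.
    // (Key names are hypothetical.)
    static Properties attachSplit(int nodeCount, String vlanTag) {
        Properties request = new Properties();
        request.setProperty("request.node.count", Integer.toString(nodeCount));
        request.setProperty("request.vlan.tag", vlanTag);
        return request;
    }

    public static void main(String[] args) {
        Properties whole = attachWhole("<rspec><node count=\"2\"/></rspec>");
        Properties split = attachSplit(2, "1001");
        System.out.println(whole.getProperty("request.rspec"));
        System.out.println(split.getProperty("request.node.count"));
    }
}
```

The single-property approach keeps the specification opaque to brokers and sites that do not understand it; splitting it into multiple properties lets individual policies and handlers consume only the fields they care about.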

There are multiple examples of property passing in the code base:

  • some resource properties are initially extracted from the container XML configuration file.

Take a look at manage/boot to see how the initial source tickets are being created.

  • when a site exports resources to a broker, or a broker issues a ticket to a service manager, resource properties are included in the new ResourceSet. See the extract() method of orca.shirako.core.BrokerPolicy.
  • before a service manager makes a request to a broker, it can append request properties to its request as parameters to the broker policy.

For an example see createReservation in test/unit/src/main/java/orca/tests/unit/main/

  • broker policies extract request properties from incoming reservation requests and use them as additional parameters. To see how this is done, take a look at:




  • service managers can set configuration properties before redeeming a reservation from a site. Configuration properties are accessible to the site's policy and to its setup and teardown handlers.
  • service managers can set local properties on a reservation before invoking its join/leave handler, and these properties will become available to the handler.

2. How is slice managed in ORCA? For instance, when to start and stop a slice and what are operations performed in starting/stopping a slice?


Slivers start automatically as their setup completes (e.g., VM nodes are booted, VLANs are instantiated). The guest join handler is invoked for each sliver after its setup completes.

When a slice is stopped, the guest leave handler is invoked before each sliver is destroyed; the authority teardown handler is then invoked.
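The ordering described above can be summarized in a small sketch. This is purely illustrative: the real handlers live under handlers/* and are invoked by the Orca kernel as reservation state changes, not by application code like this.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the sliver lifecycle ordering only; class and method names
// are hypothetical and do not correspond to real ORCA types.
public class SliverLifecycle {
    final List<String> log = new ArrayList<>();

    void start() {
        log.add("authority setup"); // e.g., boot the VM, instantiate the VLAN
        log.add("guest join");      // guest join handler runs after setup completes
    }

    void stop() {
        log.add("guest leave");        // guest leave handler runs first...
        log.add("authority teardown"); // ...then the authority teardown handler
    }

    public static void main(String[] args) {
        SliverLifecycle s = new SliverLifecycle();
        s.start();
        s.stop();
        System.out.println(String.join(" -> ", s.log));
    }
}
```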

3. Is there a mapper policy module within each actor? (Found in the Shirako paper, but unable to find it in the code yet)


Mapper classes are all in the Shirako core. Look at everything that derives from IPolicy; the identifier "Mapper" has disappeared from the code.
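To make the mapper == policy idea concrete, here is a minimal sketch of a policy hierarchy. The interface and method below are hypothetical; the real interface is orca.shirako.api.IPolicy, whose actual methods differ from this toy version.

```java
// Hypothetical stand-in for the IPolicy idea: a policy (the old "mapper")
// decides how requested resources map onto available inventory.
interface Policy {
    // Return how many units to grant for a request against an inventory.
    int allocate(int requested, int available);
}

// A trivial broker-style policy: grant whatever fits the inventory.
class FirstFitPolicy implements Policy {
    public int allocate(int requested, int available) {
        return Math.min(requested, available);
    }
}

public class PolicyDemo {
    public static void main(String[] args) {
        Policy p = new FirstFitPolicy();
        System.out.println(p.allocate(5, 3)); // prints 3
    }
}
```

In the real code base, broker, site, and service manager policies all derive from the same IPolicy root in this fashion, each specializing the decision logic for its actor role.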

Also, see the control document.

4. How do handlers and drivers interact? Is a handler an abstract construct in ORCA? Can the configuration/parameters of an ORCA handler describe wireless sensor network slicing and resources? How can we reuse the current KanseiGenie implementations of the “slice manager” and “researcher portal”? As a driver?


Yes, this is easy: see the documentation.


Does the handler or nodeAgent/adapter act as the slice manager in the Kansei architecture? Is it necessary for Kansei to write a wrapper, or simply to implement ORCA-compatible drivers, to facilitate function calls between the handler and the slice manager? Can we implement specific drivers that expose their API to the standardized handler?


There are examples of handlers and drivers under drivers/. See the answer to the previous question.


How do we implement plugin policies at the three actors? How do we use or implement the plug-in extension modules in the ORCA core to respond to resource request events? How do we implement policy plugins for resource allocation and management at the broker and site authority, and for resource request management at the service manager?


The ORCA code base contains a number of policy examples. Start from the IPolicy interface and follow its hierarchy. Take a look at the test/unit project for examples that test a combination of the three actor policies. orca/policy/* contains most of our broker and service manager policies. Site-level policy components can be found under cod/src/main/java/orca/cod/control.


How does the service manager talk to the web/researcher portal (Velocity constructed web)?


Each Orca container provides a set of management APIs. The web portal interacts with the Orca container using the management API, whose design provides multiple ways to invoke it:

  • local function calls when running in the same JVM
  • SOAP when calling from another JVM.

At this point the local interface is fully supported, but the SOAP interface is rudimentary and not functional.

To get familiar with the management API, take a look at the ManagerObject class and its hierarchy.
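The two invocation paths can be sketched as one interface with interchangeable implementations. The names below are hypothetical, not the real ManagerObject API; the point is only that a local implementation and a remote (SOAP) proxy would expose the same interface to the portal.

```java
// Hypothetical management interface; the real entry point is the
// ManagerObject class hierarchy under core/manage.
interface ContainerManagement {
    int countActors();
}

// Local path: direct function calls inside the same JVM.
class LocalManager implements ContainerManagement {
    private final int actors;
    LocalManager(int actors) { this.actors = actors; }
    public int countActors() { return actors; }
}

public class ManagementDemo {
    public static void main(String[] args) {
        // A SOAP client would implement the same interface and forward
        // calls over the wire to another JVM; only the local path is
        // shown here, since only the local interface is fully supported.
        ContainerManagement mgmt = new LocalManager(3);
        System.out.println(mgmt.countActors());
    }
}
```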

The following projects implement the management functionality:

  • core/manage
  • manage/standard

You might also want to take a look at:

  • core/portal
  • webapp