Getting Started with Distributed Containers

This guide describes the process of setting up ORCA to run actors in multiple containers. This is necessary to allow remote slice controllers to access resources managed by your aggregate manager. In this example we will set up an ORCA system with a remote slice controller and a local aggregate manager plus clearinghouse.

Before You Start

This guide assumes you have a working local SOAP-enabled container, which is how ORCA is set up out of the box. If not, please follow the build instructions first.

The Default Configuration

ORCA ships with a config file that sets up three actors in the same container, communicating with each other over SOAP. Before we start, let's take a look at it. Only the sections relevant to us are reproduced below.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<configuration xmlns="http://issg.cs.duke.edu/sharp/boot">
  <global>
        <SNIPPED>
    <guid>583c10bfdbd326ba:-5cb5a50:114e1828ce0:-8000</guid>
    <locations>
      <location protocol="local" />
      <location protocol="soapaxis2" url="http://localhost:8080/orca" />
    </locations>
        <SNIPPED>
  </global>
  <actors>
    <actor>
      <name>site</name>
      <description>site</description>
              <SNIPPED>
    </actor>
    <actor>
      <name>broker</name>
      <description>broker</description>
              <SNIPPED>
    </actor>
    <actor>
      <name>service</name>
      <description>Service Manager</description>
        <SNIPPED>
    </actor>
  </actors>
  <topology>
    <edges>
      <edge>
        <from name="service" type="sm" />
        <to name="broker" type="agent">
          <location protocol="soapaxis2" url="http://localhost:8080/orca/services/broker" />
        </to>
      </edge>
      <edge>
        <from name="broker" type="agent" />
        <to name="site" type="authority" >
	  <location protocol="soapaxis2" url="http://localhost:8080/orca/services/site" />
	</to>
        <rset>
          <type>1</type>
          <units>3</units>
        </rset>
      </edge>
    </edges>
  </topology>
</configuration>

There are three major parts to the config file. Changes need to be made to each of them in order to set up a distributed instance. Let's look at each section independently.

<global>

Every container has a unique guid, specified by the <guid> element. The <location> elements tell the container to create SOAP proxies and stubs for all the actors in the container according to the protocol and URL specified. When running a distributed instance, you will have to use the container's real DNS name or its IP address instead of localhost.

<actors>

The <actors> section contains one <actor> clause for every actor that will be a part of this container. Each <actor> clause contains every piece of information the container needs to instantiate and describe the actor. Each actor is identified by a guid that must be unique system-wide. When running a distributed instance, any actor that will communicate over SOAP must have a known, static guid. More details are in the next section.

<topology>

The <topology> section defines one edge from every actor that is going to request a resource to every actor that will delegate control over that resource. For example, there is an edge from the slice controller to the clearinghouse and from the clearinghouse to the aggregate manager. Every edge specifies information about the <from> and <to> actors and, for SOAP, where to find them.

Overview

In order to run a distributed instance, you will need to make several changes:

  • You will need more than one container config file (two in our example). Each file will contain information about the actors local to that container and the links between them and any remote actors. In our example, let's call the files container1.xml and container2.xml, running on host1.example.com and host2.example.com respectively.
  • Each container config file must carry its container's DNS-resolvable host name in the <global> section.
  • Each actor must have a known, unique guid and a self-signed certificate with its public key.
  • Each actor must know the guid, name and certificate of any other actor it will communicate with. In our example, the slice controller will need to know the clearinghouse's guid, name and certificate, and vice versa. Note that the slice controller doesn't need to know anything about the aggregate manager; that information is passed along by the broker. In fact, it is an error to draw an <edge> between a slice controller and an aggregate manager.

The reason this was not necessary when all actors were local is that the container took care of generating and passing along guids and certificates for us.

Creating a Distributed Instance

Changes need to be made in all three major sections of the config file.

<global>

Make sure the two containers have different guids; change this in the <guid> element. Change the <location> line to use the machine's host name instead of localhost. Keep in mind that this host name must be resolvable by the other container. For our example, this would look like:

container1.xml:      <location protocol="soapaxis2" url="http://host1.example.com:8080/orca" />

and

container2.xml:      <location protocol="soapaxis2" url="http://host2.example.com:8080/orca" />
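
Putting the guid and location changes together, the <global> section of container1.xml might look roughly like the following sketch; the guid shown is only a placeholder, so generate and substitute your own:

  <global>
        <SNIPPED>
    <guid>REPLACE-WITH-A-UNIQUE-CONTAINER-GUID</guid>
    <locations>
      <location protocol="local" />
      <location protocol="soapaxis2" url="http://host1.example.com:8080/orca" />
    </locations>
        <SNIPPED>
  </global>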

<actors>

Make sure only the actors that will be local to this container are described; remove the others. In our example, the slice controller (service) lives in container1.xml, while the broker and the site live in container2.xml, as sketched below.
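
A minimal sketch of how the <actors> sections would be split between the two files, with the detailed actor settings elided just as in the default config:

container1.xml:

  <actors>
    <actor>
      <name>service</name>
      <description>Service Manager</description>
        <SNIPPED>
    </actor>
  </actors>

container2.xml:

  <actors>
    <actor>
      <name>broker</name>
      <description>broker</description>
        <SNIPPED>
    </actor>
    <actor>
      <name>site</name>
      <description>site</description>
        <SNIPPED>
    </actor>
  </actors>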

Give each actor in the system a system-wide unique guid. You can generate one by running

ant guid

in $ORCA/tools/cmdline
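
For instance (the guid printed below is purely illustrative, and the exact formatting of the ant output may differ):

cd $ORCA/tools/cmdline
ant guid
# prints a freshly generated guid, for example:
#   392a07ed-418c-4235-b4d2-c4fac0b4fbe0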

In addition to giving each actor a guid, you will have to create a self-signed certificate. In $ORCA/tools/config, run

ant security.create.actor.config -Dactor=guid_of_the_actor

The certificates and config files will be stored under the runtime/ directory in the current directory. Be sure to copy or link this directory over to the webapp directory before deploying the container to Tomcat.
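
For example, assuming the web application lives under $ORCA/webapp (adjust this path to match your deployment), either of the following, run from $ORCA/tools/config, would work:

# copy the generated certificates and config files into the webapp
cp -r runtime $ORCA/webapp/
# or keep a symbolic link so regenerated files are picked up automatically
ln -s $ORCA/tools/config/runtime $ORCA/webapp/runtime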

You can export the Base64-encoded certificate by running

ant -emacs export64.actor.certificate -Dactor=guid_of_the_actor

in the same $ORCA/tools/config directory. Keep track of this certificate; you will need it for the topology section.
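
One convenient way to keep track of it is to redirect the output into a scratch file per actor (the file name below is arbitrary); you may need to strip ant's own build messages from the file before pasting the certificate into the topology section:

ant -emacs export64.actor.certificate -Dactor=392a07ed-418c-4235-b4d2-c4fac0b4fbe0 > service.cert.b64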

Finally, under each <actor> element, add a <guid> element containing that actor's guid. For example, in container1.xml:

    <actor>
      <name>service</name>
      <description>Service Manager</description>

      <guid>392a07ed-418c-4235-b4d2-c4fac0b4fbe0</guid>

        <SNIPPED>
    </actor>

<topology>

In the topology section, you will have to create edges between any actor in the local container and any remote actor it talks to, as well as edges between actors entirely within the local container. In our example, this means that container1.xml will contain an edge between the local slice controller and the remote clearinghouse, while container2.xml will contain an edge between the remote slice controller and the local clearinghouse, as well as an edge between the local clearinghouse and the local aggregate manager.

Each edge that crosses to a different container must contain the remote actor's guid and certificate, whether the remote actor is in the <from> or the <to> element. For example, in container1.xml:

      <edge>
        <from name="service" type="sm" />
        <to name="broker" type="agent" guid="ba45089a-328c-1235-a3d2-b7fac0b4fbe1">
          <location protocol="soapaxis2" url="http://host2.example.com:8080/orca/services/broker" />
          <certificate>
CERTIFICATE\OF\REMOTE\ACTOR\IN\BASE64\ENCODING
          </certificate>
        </to>
      </edge>

And in container2.xml:

      <edge>
        <from name="service" type="sm" guid="392a07ed-418c-4235-b4d2-c4fac0b4fbe0">
          <location protocol="soapaxis2" url="http://host1.example.com:8080/orca/services/service" />
          <certificate>
CERTIFICATE\OF\REMOTE\ACTOR\IN\BASE64\ENCODING
          </certificate>
        </from>
        <to name="broker" type="agent" />
      </edge>
      <edge>
        <from name="broker" type="agent" />
        <to name="site" type="authority">
          <location protocol="soapaxis2" url="http://localhost:8080/orca/services/site" />
        </to>
        <rset>
          <type>1</type>
          <units>3</units>
        </rset>
      </edge>

Note that the edge between the clearinghouse and the aggregate manager is left unchanged, since both actors are local to container2.

Final Notes

The files container1.xml and container2.xml are checked into the default pool for ease of following this guide. DO NOT USE these files without changes; they are meant only as an example. You will still need to create guids and certificates and set the correct host names.

Also keep in mind that the container config file MUST be named config.xml. The container will not use your file if it is named anything else.
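
For example, on host1.example.com you might install container1.xml under the required name like this; the destination directory is only an illustration, so substitute the directory your container actually reads its boot configuration from:

# the target path below is an assumption -- use your container's actual config directory
cp container1.xml $ORCA/webapp/config/config.xml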