New asynchronous inter-actor communications model

Overview

New RPC model. All communication between orca actors now happens without holding the kernel lock. The RPCManager handles both inbound and outbound RPCs. Outbound RPCs are posted to the orca threadpool and sent asynchronously. Every actor has a queue for inbound RPCs: when an RPC arrives, it is validated (access control, missing fields, etc.) and posted onto the actor's RPC queue. Actors drain their RPC queue in their tickHandler. Failed RPCs are posted back to the actors that generated them. Orca attempts to determine the cause of an RPC failure (see RPCError). If the failure was caused by the network, the RPC is retried, potentially until the reservation fails, is closed, or expires. Failures not due to the network are considered fatal and fail the associated reservation.
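
A minimal sketch of this pattern, with hypothetical names (RpcRequest, enqueueRpc, post); the real RPCManager and actor classes differ:

    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Hypothetical stand-in for an inbound RPC after validation.
    class RpcRequest { /* payload, source actor, optional reservation */ }

    class ActorSketch {
        // Inbound RPCs wait here until the next tick.
        private final Queue<RpcRequest> rpcQueue = new ConcurrentLinkedQueue<>();

        // Called by the RPC layer once access control and field checks pass.
        void enqueueRpc(RpcRequest rpc) {
            rpcQueue.add(rpc);
        }

        // The tick handler drains the queue on the actor's own thread.
        void tickHandler() {
            RpcRequest rpc;
            while ((rpc = rpcQueue.poll()) != null) {
                process(rpc);
            }
        }

        private void process(RpcRequest rpc) { /* dispatch to a handler */ }
    }

    class RpcManagerSketch {
        // Outbound RPCs run on the thread pool, never on the caller's thread.
        private final ExecutorService pool = Executors.newFixedThreadPool(4);

        void post(Runnable outboundRpc) {
            pool.submit(outboundRpc);
        }
    }

Queueing decouples receipt from processing: the receiving side only validates and enqueues, and the actor does the real work during its tick, which lets communication proceed without holding the kernel lock.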

All communication between actors works on the following model:

  • A calls B
  • B validates and queues the RPC
  • B sends A an empty message that confirms receipt of the request
  • Some time later, B sends a message to A to complete the RPC request
  • If an RPC is not based on a reservation, it carries a sequence number that is echoed back in the response so that A can tie the response to the request that caused it (a sketch of this correlation follows the list)
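
A minimal sketch of the sequence-number bookkeeping on A's side, with hypothetical names; orca's actual correlation code differs:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicLong;

    // Hypothetical correlator for RPCs that are not based on a reservation.
    class RpcCorrelator {
        private final AtomicLong nextSequence = new AtomicLong();
        // Requests that have been acknowledged but not yet completed by B.
        private final Map<Long, PendingRequest> pending = new ConcurrentHashMap<>();

        // A calls B: stamp the request with a sequence number before sending.
        long register(PendingRequest request) {
            long seq = nextSequence.incrementAndGet();
            pending.put(seq, request);
            return seq;
        }

        // B's completion message echoes the sequence number, letting A tie
        // the response back to the request that caused it.
        void onComplete(long echoedSequence, Object result) {
            PendingRequest request = pending.remove(echoedSequence);
            if (request != null) {
                request.complete(result);
            }
        }

        interface PendingRequest {
            void complete(Object result);
        }
    }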

Significantly reworked the way proxies operate.

Changed the way we handle updateTicket and redeem:

  • updateTicket: no longer makes a call to the site to register the actor's certificate. Instead, the actor validates the incoming ticket, extracts the site's Certificate, and adds it to its keystore.

  • redeem: no longer assumes that the actor's cert is in the site's keystore. Instead, the site extracts the ticket, validates it, extracts the holder's Certificate, and adds it to its keystore. After doing this, the site checks the signature on the SOAP message. Both sides of this exchange are sketched below.
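
Both sides of this exchange can be sketched with the JDK's KeyStore and XML signature APIs; the Ticket interface and method names below are hypothetical, and orca's actual ticket and SOAP handling differ:

    import java.security.KeyStore;
    import java.security.cert.X509Certificate;
    import javax.xml.crypto.dsig.XMLSignature;
    import javax.xml.crypto.dsig.XMLSignatureFactory;
    import javax.xml.crypto.dsig.dom.DOMValidateContext;
    import org.w3c.dom.Node;

    // Hypothetical ticket interface; orca's real ticket type differs.
    interface Ticket {
        void validate() throws Exception;
        X509Certificate getIssuerCertificate(); // the site's certificate
        X509Certificate getHolderCertificate(); // the holder's certificate
    }

    class CertificateExchangeSketch {
        private final KeyStore keyStore;

        CertificateExchangeSketch(KeyStore keyStore) {
            this.keyStore = keyStore;
        }

        // updateTicket side: validate the incoming ticket, extract the
        // site's certificate, and register it locally -- no call to the site.
        void onUpdateTicket(Ticket ticket) throws Exception {
            ticket.validate();
            X509Certificate siteCert = ticket.getIssuerCertificate();
            keyStore.setCertificateEntry(
                    siteCert.getSubjectX500Principal().getName(), siteCert);
        }

        // redeem side: validate the ticket, register the holder's certificate,
        // and only then check the signature on the SOAP message.
        boolean onRedeem(Ticket ticket, Node signatureNode) throws Exception {
            ticket.validate();
            X509Certificate holderCert = ticket.getHolderCertificate();
            keyStore.setCertificateEntry(
                    holderCert.getSubjectX500Principal().getName(), holderCert);

            XMLSignatureFactory factory = XMLSignatureFactory.getInstance("DOM");
            DOMValidateContext context =
                    new DOMValidateContext(holderCert.getPublicKey(), signatureNode);
            XMLSignature signature = factory.unmarshalXMLSignature(context);
            return signature.validate(context);
        }
    }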

  • actor.query() had to change as it is no longer synchronous. A synchronous convenience method is provided, which should never be used on the actor main thread (see the sketch below).
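
A minimal sketch of the pattern, with hypothetical names (the real actor.query() signature differs). The asynchronous form returns immediately and is completed later by the response handler; the blocking wrapper simply waits on it. Calling the wrapper from the actor main thread would deadlock, because the completion is delivered by that same thread when it drains its RPC queue:

    import java.util.Properties;
    import java.util.concurrent.CompletableFuture;

    class QuerySketch {
        // Asynchronous form: the future is completed when the query
        // response RPC is processed, some time after this call returns.
        CompletableFuture<Properties> queryAsync(Properties query) {
            CompletableFuture<Properties> result = new CompletableFuture<>();
            // ... send the RPC; the response handler calls result.complete(...)
            return result;
        }

        // Blocking convenience wrapper: never call this on the actor
        // main thread, or the queue that delivers the response stalls.
        Properties querySync(Properties query) throws Exception {
            return queryAsync(query).get();
        }
    }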