Foreword
In this document we will describe how to use the Kompact component-actor hybrid framework. We begin by introducing the model, which is novel in its hybrid nature, but should feel sufficiently familiar to previous users of other Actor frameworks (such as Akka) or Kompics implementations. We then continue with a detailed tutorial for both local and distributed deployments of the Kompact implementation in the Rust language.
In addition to the tutorial-style presentation in this book, many examples of Kompact usage can be found in the docs and in the benchmarks.
Getting Started
Setting up Rust
It is recommended to run Kompact on a nightly version of the Rust toolchain, but since version 0.9
it also runs fine on stable Rust.
We recommend using the rustup tool to easily install the latest nightly version of Rust and keep it updated. Instructions are shown on screen once rustup is downloaded.
Using the nightly toolchain: Rustup can be configured to default to the nightly toolchain by running:
rustup default nightly
Cargo
Add Kompact to your cargo project as a dependency:
[dependencies]
kompact = "LATEST_VERSION"
The latest version can be found on crates.io.
GitHub master
You can also point cargo to the latest GitHub master version, instead of a release. To do so, add the following to your Cargo.toml instead:
[dependencies]
kompact = { git = "https://github.com/kompics/kompact" }
Hello World
With the above, you are set to run the simplest of Kompact projects, the venerable “Hello World”.
Create a new executable file, such as main.rs, and write a very simple component that just logs “Hello World” at the info level when it’s started and ignores all other messages and events:
#![allow(clippy::unused_unit)]
use kompact::prelude::*;
#[derive(ComponentDefinition, Actor)]
struct HelloWorldComponent {
ctx: ComponentContext<Self>,
}
impl HelloWorldComponent {
pub fn new() -> Self {
HelloWorldComponent {
ctx: ComponentContext::uninitialised(),
}
}
}
impl ComponentLifecycle for HelloWorldComponent {
fn on_start(&mut self) -> Handled {
info!(self.log(), "Hello World!");
self.ctx.system().shutdown_async();
Handled::Ok
}
}
pub fn main() {
let system = KompactConfig::default().build().expect("system");
let component = system.create(HelloWorldComponent::new);
system.start(&component);
system.await_termination();
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_helloworld() {
main();
}
}
In order to start our component, we need a Kompact system, which we create from a default configuration. Then we simply wait for the component to do its work and shut the system down again:
pub fn main() {
    let system = KompactConfig::default().build().expect("system");
    let component = system.create(HelloWorldComponent::new);
    system.start(&component);
    system.await_termination();
}
The await_termination() call blocks the main-thread while the Kompact system is operating on its own thread pool. We will get into more details on scheduling and thread pools a bit later in this tutorial. For now it is sufficient to know that once the Kompact system has been shut down by our HelloWorldComponent using shutdown_async(), the main-thread will eventually continue.
We can run this code, depending on how you set up your project, with:
cargo run --release
This will give us something like the following output:
lkroll $ cargo run --release --bin helloworld
Finished release [optimized] target(s) in 0.09s
Running `/Users/lkroll/Programming/Kompics/kompact/target/release/helloworld`
Jul 07 16:28:45.870 INFO Hello World!, ctype: HelloWorldComponent, cid: 804ed483-54d5-41ab-ad8f-145f90bc7b45, system: kompact-runtime-1, location: docs/examples/src/bin/helloworld.rs:17
We can see the “Hello World” being logged, alongside a bunch of other contextual information that is automatically inserted by the runtime, such as the type name of the component doing the logging (ctype), the unique id of the component (cid), which differentiates it from other instances of the same type, the name of the Kompact system, as well as the concrete location in the file where the logging statement occurs.
If we run in debug mode, instead of release, using the simple cargo run
we get a lot of additional output at the debug
level, concerning system and component lifecycle – more on that later.
Note: If you have checked out the examples folder and are trying to run from there, you need to specify the concrete binary with:
cargo run --release --bin helloworld
Introduction
In this section of the tutorial we will discuss the model assumptions and concepts that are underlying the implementation of Kompact. We will look at message-passing programming models, the ideas of actor references and channels with ports, and the notion of exclusive local state.
At a high level, Kompact is simply a merger of the Actor model of programming with the (Kompics) component model of programming. In both models light-weight processes with their own internal state communicate by exchanging discrete pieces of information (messages or events) instead of accessing shared memory structures or each other’s internal state.
While both models are formally equivalent, that is, each model can be expressed in terms of the other, their different semantics can have a significant impact on the performance of any implementation. Kompact thus allows programmers to express services and applications in a mix of both models, taking advantage of their respective strengths as appropriate.
Components
In the Kompics component model, the term for a light-weight process with internal state is a “component”. This notion can be further subdivided into the process-part of a component, a component core, and the state-part of a component, which is called a component definition. The core basically just interacts with the runtime of the system, while the definition contains the state variables and behaviours, in the form of ports and event handlers, which we will discuss below.
The execution model always ensures that the state variables of a component definition can be accessed safely without any synchronisation.
In the Kompact implementation, a component definition is simply a Rust struct that contains a ComponentContext
as a field and implements the ComponentDefinition
trait, which is typically just derived automatically, as we saw in the “Hello World”-example:
#[derive(ComponentDefinition, Actor)]
struct HelloWorldComponent {
    ctx: ComponentContext<Self>,
}
The component core itself is hidden from us in Kompact, but we can interact with it using the ComponentContext
field from within a component. When we actually instantiate a component as part of a Kompact system, we are given an Arc<Component>
, which is a combined reference to the component definition and core. The creation of this structure is what really happened when we invoked system.create(...)
in the “Hello World”-example:
pub fn main() {
    let system = KompactConfig::default().build().expect("system");
    let component = system.create(HelloWorldComponent::new);
    system.start(&component);
    system.await_termination();
}
Events and Ports
Components communicate via events that are propagated along channels to a set of target components. The Kompics model is very strict about which events may travel on which channels, which it formalises with the concept of a port. Ports basically just state which events may travel in which way through the channels connected to it. You can think of a port as an API specification. If a component C provides the API of a port P, then it will accept all events of types that are marked as request in P, and it will only send events of the types that are marked as indication in P. Since components communicate with each other, the dual notion to providing a port is requiring it, and channels may only connect opposite variants of ports. That is, if one end of a channel is connected to a provided port of type P then the other side must be connected to a required port of type P. This setup ensures that messages which are sent through the channel are also accepted on the other side.
In Kompact each port is limited to a single indication and a single request type. If more types are needed in either direction, they must be wrapped into an enum, which is facilitated easily in Rust using the From
and Into
traits.
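As a sketch of that pattern (all names here are hypothetical and not part of Kompact’s API), two request types can be folded into a single enum with From implementations, so callers can still pass the individual types ergonomically via Into:
```rust
// Hypothetical request variants folded into a single port request type.
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct Increment(pub u64);

#[derive(Clone, Debug, PartialEq, Eq)]
pub struct Reset;

// The single `Request` type a port definition would use.
#[derive(Clone, Debug, PartialEq, Eq)]
pub enum CounterRequest {
    Increment(Increment),
    Reset(Reset),
}

// From-implementations let callers convert each variant with `.into()`.
impl From<Increment> for CounterRequest {
    fn from(i: Increment) -> Self {
        CounterRequest::Increment(i)
    }
}

impl From<Reset> for CounterRequest {
    fn from(r: Reset) -> Self {
        CounterRequest::Reset(r)
    }
}
```
With this in place, a handler would simply match on the enum to dispatch to the logical sub-requests.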
Consider, for example, a simplified version of Kompact’s internal ControlPort, which could be defined like this:
#[derive(Clone, Debug, PartialEq, Eq)]
pub enum ControlEvent {
Start,
Stop,
Kill,
}
pub struct ControlPort;
impl Port for ControlPort {
type Indication = Never; // alias for the ! bottom type
type Request = ControlEvent;
}
It has a single request type, ControlEvent, with three variants that are triggered at particular points in a component’s lifecycle. It does not send any indication events, however, which is expressed by the Never type, which is uninhabited.
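Uninhabited types can be puzzling at first. A minimal stand-in in plain Rust (this is not Kompact’s actual definition, just an illustration) shows why no indication can ever be triggered on such a port:
```rust
// A minimal stand-in for an uninhabited type like Kompact's `Never`.
// An enum with no variants has no values, so no event of this type
// can ever be constructed, let alone triggered.
pub enum Never {}

// Any code path that receives a `Never` is provably unreachable:
// the empty match is exhaustive because there are no variants.
pub fn absurd(never: Never) -> ! {
    match never {}
}
```
Since no value of Never can exist, absurd can never actually be called, and the compiler gives the type zero size.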
In order to react to the events of a port, we must implement the appropriate trait for the direction of the events. For the control port above, for example, we might want to implement Provide<ControlPort> to react to ControlEvent instances. This could look as follows:
impl Provide<ControlPort> for HelloWorldComponent {
fn handle(&mut self, event: ControlEvent) -> Handled {
match event {
ControlEvent::Start => {
info!(self.log(), "Hello World!");
self.ctx.system().shutdown_async();
Handled::Ok
}
ControlEvent::Stop | ControlEvent::Kill => Handled::Ok,
}
}
}
This mechanism is similar to the concept of event handlers in the Kompics model, except that you can only have a single handler in Kompact and it is always (statically) subscribed. In this way the compiler can statically ensure that any component providing (or requiring) a port also accepts the appropriate events.
In Kompact, however, the ControlPort is no longer exposed (since version 0.10.0); instead we must implement the ComponentLifecycle trait to react to (some of) its events, as we did in the HelloWorldComponent example:
impl ComponentLifecycle for HelloWorldComponent {
    fn on_start(&mut self) -> Handled {
        info!(self.log(), "Hello World!");
        self.ctx.system().shutdown_async();
        Handled::Ok
    }
}
Channels
We haven’t seen an example of channels yet, but we will get there when we talk about local Kompact execution. Suffice it to say that channels do not actually have a corresponding Rust struct in Kompact; they are simply a mental model for how ports are connected to each other. Each port really just maintains a list of other ports it is connected to, and broadcasts all outgoing events to all of the connected ports. This is also why Kompact requires all events to implement the Clone trait. If cloning an event for each connected component is too expensive, a good alternative is often to share it immutably behind an Arc. Sharing events mutably behind an Arc<Mutex<_>> is of course also possible, but generally discouraged, as contention on the Mutex could drag down system performance significantly.
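As a sketch of the immutable-sharing approach (the types here are hypothetical), an event can carry its heavyweight payload behind an Arc, so that cloning the event for each subscriber only bumps a reference count instead of copying the data:
```rust
use std::sync::Arc;

// A large payload that would be expensive to clone once per subscriber.
#[derive(Debug)]
pub struct LargeBlob {
    pub bytes: Vec<u8>,
}

// The event itself stays cheap to clone: cloning copies only the Arc
// pointer, while the payload is shared immutably between all receivers.
#[derive(Clone, Debug)]
pub struct DataEvent {
    pub payload: Arc<LargeBlob>,
}

impl DataEvent {
    pub fn new(bytes: Vec<u8>) -> Self {
        DataEvent {
            payload: Arc::new(LargeBlob { bytes }),
        }
    }
}
```
Every connected port receives a clone of DataEvent, but all clones point at the same LargeBlob allocation.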
These “broadcast by default”-semantics are probably the most fundamental difference between the Kompics component model, and the Actor model we will talk about in the next section.
Actors
The Actor model is an old concept, introduced by Carl Hewitt in 1973, but it only really became popular with the Erlang language and, later, the Akka framework. In this model, the term for a light-weight process with internal state is an “actor”, though we will stick to calling the equivalents in Kompact a “component” to avoid having two names for the same thing.
Messages and References
Actors communicate via messages, which really are the same thing as events, except that they are addressed to a particular actor. This addressing is done via a concept called an actor reference: a shareable data structure that identifies an actor in a way that allows messages to be sent directly to it. In Erlang this is implemented as a “pid”, while in Akka there is a class called ActorRef. Kompact also has a struct called ActorRef, which fulfils the same purpose on a local Kompact system. However, as we will discuss later in more detail, Kompact explicitly differentiates possible remote actors at the type level; references to them are instances of ActorPath instead of ActorRef.
In Kompact, actors are statically typed with respect to the messages they can receive. That is, whenever you are implementing the Actor
trait in Kompact for a component, you must specify a concrete Message
type (as an associated type). Consequently, references to actors are also typed, so senders can only send valid messages to an actor. Thus a Kompact component which implements Actor
with type Message = M;
for some type M
is referenced locally by an ActorRef<M>
. The same is not true for ActorPath
, as it is generally not possible to control what things are sent over a network, and so networked actors may have to deal with a wider range of possible incoming messages.
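The effect of this typing can be sketched in plain Rust, independently of Kompact, by modelling a reference as a typed handle over a channel. Like ActorRef<M>, the hypothetical Ref<M> below accepts only messages of type M, so sending a value of the wrong type is a compile-time error:
```rust
use std::sync::mpsc;

// A minimal stand-in for a typed actor reference: it can only `tell`
// messages of its declared type `M`.
pub struct Ref<M> {
    sender: mpsc::Sender<M>,
}

impl<M> Ref<M> {
    pub fn tell(&self, msg: M) {
        // In a real runtime this would enqueue onto the actor's mailbox;
        // here we just push into a channel and ignore disconnects.
        let _ = self.sender.send(msg);
    }
}

// Create a typed reference together with the receiving end ("mailbox").
pub fn channel_pair<M>() -> (Ref<M>, mpsc::Receiver<M>) {
    let (tx, rx) = mpsc::channel();
    (Ref { sender: tx }, rx)
}
```
Trying to call tell with anything other than the declared message type simply does not compile, which is exactly the guarantee ActorRef<M> gives for local communication.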
Actors and Components
In Kompact every component is an actor and vice versa. Both the Actor
(actually ActorRaw
) and the ComponentDefinition
trait need to be implemented in either case. But as we saw in the “Hello World”-example, the Actor
trait can simply be derived when it’s not used by a component:
#[derive(ComponentDefinition, Actor)]
struct HelloWorldComponent {
    ctx: ComponentContext<Self>,
}
The derived code will produce an actor implementation with type Message = Never;, indicating that no local messages can be sent to it. Network messages, however, can still be sent, but they will simply be discarded. This avoids a common issue encountered in Erlang, where unhandled messages keep queuing up forever.
If, say, we wanted to implement an actor variant of the “Hello World”-example, we could do so by implementing the Actor
trait ourselves with some trivial type (e.g., the unit type ()
), as in:
#![allow(clippy::unused_unit)]
use kompact::prelude::*;
use std::sync::Arc;
#[derive(ComponentDefinition)]
struct HelloWorldActor {
ctx: ComponentContext<Self>,
}
impl HelloWorldActor {
pub fn new() -> Self {
HelloWorldActor {
ctx: ComponentContext::uninitialised(),
}
}
}
ignore_lifecycle!(HelloWorldActor);
impl Actor for HelloWorldActor {
type Message = ();
fn receive_local(&mut self, _msg: Self::Message) -> Handled {
info!(self.ctx.log(), "Hello World!");
self.ctx().system().shutdown_async();
Handled::Ok
}
fn receive_network(&mut self, _msg: NetMessage) -> Handled {
unimplemented!("We are ignoring network messages for now.");
}
}
pub fn main() {
let system = KompactConfig::default().build().expect("system");
let actor: Arc<Component<HelloWorldActor>> = system.create(HelloWorldActor::new);
system.start(&actor);
let actor_ref: ActorRef<()> = actor.actor_ref();
actor_ref.tell(()); // send a unit type message to our actor
system.await_termination();
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_actor_helloworld() {
main();
}
}
Of course, a unit message is not going to be produced by the Kompact runtime as a lifecycle event, so we must send it to our component after creating it, using the tell(...)
function:
pub fn main() {
    let system = KompactConfig::default().build().expect("system");
    let actor: Arc<Component<HelloWorldActor>> = system.create(HelloWorldActor::new);
    system.start(&actor);
    let actor_ref: ActorRef<()> = actor.actor_ref();
    actor_ref.tell(()); // send a unit type message to our actor
    system.await_termination();
}
Just to point out some of the particularities described above, we have annotated some types in the previous example. You can see that our HelloWorldActor
is still created as an Arc<Component<HelloWorldActor>>
, and also that the actor reference it produces is appropriately typed as ActorRef<()>
.
Note: As before, if you have checked out the examples folder you can run the concrete binary with:
cargo run --release --bin actor_helloworld
Internal State
Now that we have looked at the fundamental ideas of components and actors in isolation, let us look at something both our models share: The idea that every component/actor has its own internal state, which it has exclusive access to, without the need for synchronisation.
Access to internal state is what separates our components from being simple producers and consumers of messages and events, and makes them a powerful abstraction to build complicated systems, services, and applications with. But so far, our examples have not used any internal state at all – they simply terminated after the first event or message. In this chapter we will build something slightly less boring: a “Counter”.
A Counter Example
(The pun in the title is mostly intended ;)
In this example we will make use of the simplest of state variables, that is integer counters. We count both messages and events separately, to see how the models work together. Since state that is never read is totally useless, we will also allow the counters to be queried. In fact, we will simply consider any update also a query and always respond with the current count.
Messages
First we need to set up the message types and ports:
#![allow(clippy::unused_unit)]
use kompact::prelude::*;
use std::time::Duration;
#[derive(Clone, Debug, PartialEq, Eq)]
struct CurrentCount {
messages: u64,
events: u64,
}
#[derive(Clone, Debug, PartialEq, Eq)]
struct CountMe;
struct CounterPort;
impl Port for CounterPort {
type Indication = CurrentCount;
type Request = CountMe;
}
#[derive(ComponentDefinition)]
struct Counter {
ctx: ComponentContext<Self>,
counter_port: ProvidedPort<CounterPort>,
msg_count: u64,
event_count: u64,
}
impl Counter {
pub fn new() -> Self {
Counter {
ctx: ComponentContext::uninitialised(),
counter_port: ProvidedPort::uninitialised(),
msg_count: 0u64,
event_count: 0u64,
}
}
fn current_count(&self) -> CurrentCount {
CurrentCount {
messages: self.msg_count,
events: self.event_count,
}
}
}
impl ComponentLifecycle for Counter {
fn on_start(&mut self) -> Handled {
info!(self.ctx.log(), "Got a start event!");
self.event_count += 1u64;
Handled::Ok
}
fn on_stop(&mut self) -> Handled {
info!(self.ctx.log(), "Got a stop event!");
self.event_count += 1u64;
Handled::Ok
}
fn on_kill(&mut self) -> Handled {
info!(self.ctx.log(), "Got a kill event!");
self.event_count += 1u64;
Handled::Ok
}
}
impl Provide<CounterPort> for Counter {
fn handle(&mut self, _event: CountMe) -> Handled {
info!(self.ctx.log(), "Got a counter event!");
self.event_count += 1u64;
self.counter_port.trigger(self.current_count());
Handled::Ok
}
}
impl Actor for Counter {
type Message = Ask<CountMe, CurrentCount>;
fn receive_local(&mut self, msg: Self::Message) -> Handled {
msg.complete(|_request| {
info!(self.ctx.log(), "Got a message!");
self.msg_count += 1u64;
self.current_count()
})
.expect("complete");
Handled::Ok
}
fn receive_network(&mut self, _msg: NetMessage) -> Handled {
unimplemented!("We are still ignoring network messages.");
}
}
pub fn main() {
let system = KompactConfig::default().build().expect("system");
let counter = system.create(Counter::new);
system.start(&counter);
let actor_ref = counter.actor_ref();
let port_ref: ProvidedRef<CounterPort> = counter.provided_ref();
for _i in 0..100 {
let current_count = actor_ref.ask(CountMe).wait();
info!(system.logger(), "The current count is: {:?}", current_count);
}
for _i in 0..100 {
system.trigger_r(CountMe, &port_ref);
// Where do the answers go?
}
std::thread::sleep(Duration::from_millis(1000));
let current_count = actor_ref.ask(CountMe).wait();
info!(system.logger(), "The final count is: {:?}", current_count);
system.shutdown().expect("shutdown");
// Wait a bit longer, so all output is logged (asynchronously) before shutting down
std::thread::sleep(Duration::from_millis(10));
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_counters() {
main();
}
}
We will use the same types both for the port and actor communication, so CountMe
and CurrentCount
are both events and messages.
Since we want to provide a counter service, we’ll say that CountMe
is going to be a request on the CounterPort
, and CurrentCount
is considered an indication. We could also design things the other way around, but this way it matches better with our “service” metaphor.
State
Our internal state is going to be the two counters, plus the component context and a provided port instance for CounterPort
:
#[derive(ComponentDefinition)]
struct Counter {
    ctx: ComponentContext<Self>,
    counter_port: ProvidedPort<CounterPort>,
    msg_count: u64,
    event_count: u64,
}

impl Counter {
    pub fn new() -> Self {
        Counter {
            ctx: ComponentContext::uninitialised(),
            counter_port: ProvidedPort::uninitialised(),
            msg_count: 0u64,
            event_count: 0u64,
        }
    }

    fn current_count(&self) -> CurrentCount {
        CurrentCount {
            messages: self.msg_count,
            events: self.event_count,
        }
    }
}
We also added a quick current_count() function, which accesses our internal state and constructs a CurrentCount instance from it. This way, we can reuse the function for both event and message handling.
Counting Stuff
In addition to counting the CountMe events and messages, we will also count the control events arriving at the ControlPort. However, we will not respond to those. As mentioned previously, control events are handled indirectly via the ComponentLifecycle trait. For every CountMe event, on the other hand, we will respond with the current state of both counters.
impl ComponentLifecycle for Counter {
    fn on_start(&mut self) -> Handled {
        info!(self.ctx.log(), "Got a start event!");
        self.event_count += 1u64;
        Handled::Ok
    }

    fn on_stop(&mut self) -> Handled {
        info!(self.ctx.log(), "Got a stop event!");
        self.event_count += 1u64;
        Handled::Ok
    }

    fn on_kill(&mut self) -> Handled {
        info!(self.ctx.log(), "Got a kill event!");
        self.event_count += 1u64;
        Handled::Ok
    }
}

impl Provide<CounterPort> for Counter {
    fn handle(&mut self, _event: CountMe) -> Handled {
        info!(self.ctx.log(), "Got a counter event!");
        self.event_count += 1u64;
        self.counter_port.trigger(self.current_count());
        Handled::Ok
    }
}

impl Actor for Counter {
    type Message = Ask<CountMe, CurrentCount>;

    fn receive_local(&mut self, msg: Self::Message) -> Handled {
        msg.complete(|_request| {
            info!(self.ctx.log(), "Got a message!");
            self.msg_count += 1u64;
            self.current_count()
        })
        .expect("complete");
        Handled::Ok
    }

    fn receive_network(&mut self, _msg: NetMessage) -> Handled {
        unimplemented!("We are still ignoring network messages.");
    }
}
In the Kompics-style communication, we reply by simply triggering the CurrentCount event on our counter_port, to whoever may be listening. In the Actor-style, we need a reference to respond to. Since we are not responding to another component, but to the main-thread, we use the Ask-pattern provided by Kompact, which converts our response message into a future that can be blocked on until the result is available. We will describe this pattern in more detail in a later section.
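Conceptually, Ask pairs the request with a one-shot reply channel that the responder completes and the requester blocks on. A rough, simplified stand-in in plain Rust (this is not Kompact’s actual Ask API, whose signatures differ) looks like this:
```rust
use std::sync::mpsc;

// A rough stand-in for the Ask pattern: the request travels together
// with a sender on which exactly one response is expected.
pub struct Ask<Req, Resp> {
    pub request: Req,
    reply_to: mpsc::Sender<Resp>,
}

impl<Req, Resp> Ask<Req, Resp> {
    // Create the Ask plus the receiving end the requester will block on.
    pub fn new(request: Req) -> (Self, mpsc::Receiver<Resp>) {
        let (tx, rx) = mpsc::channel();
        (
            Ask {
                request,
                reply_to: tx,
            },
            rx,
        )
    }

    // The responder computes the reply from the request and sends it back.
    pub fn complete<F: FnOnce(&Req) -> Resp>(self, f: F) {
        let resp = f(&self.request);
        let _ = self.reply_to.send(resp);
    }
}
```
The requester would then call something like rx.recv() to wait for the reply, which corresponds to the .wait() call on the future in the Kompact example.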
Sending Stuff
In order to count something, we must of course send some events and messages. We could do so in Actor-style by using tell(...)
as before, but this time we want to wait for a response as well. So instead we will use ask(...)
to automatically wrap our CountMe
into an Ask
instance as required by our actor’s implementation. In the Kompics-style, we can trigger on a port reference using system.trigger_r(...)
instead. Whenever we get a response, we print it using the system’s logger:
pub fn main() {
    let system = KompactConfig::default().build().expect("system");
    let counter = system.create(Counter::new);
    system.start(&counter);
    let actor_ref = counter.actor_ref();
    let port_ref: ProvidedRef<CounterPort> = counter.provided_ref();
    for _i in 0..100 {
        let current_count = actor_ref.ask(CountMe).wait();
        info!(system.logger(), "The current count is: {:?}", current_count);
    }
    for _i in 0..100 {
        system.trigger_r(CountMe, &port_ref);
        // Where do the answers go?
    }
    std::thread::sleep(Duration::from_millis(1000));
    let current_count = actor_ref.ask(CountMe).wait();
    info!(system.logger(), "The final count is: {:?}", current_count);
    system.shutdown().expect("shutdown");
    // Wait a bit longer, so all output is logged (asynchronously) before shutting down
    std::thread::sleep(Duration::from_millis(10));
}
There are two things worth noting here:
- We never get any responses from the Kompics-style communication. There simply isn't anything subscribed to our port, so the responses we send are dropped immediately. Kompact does not provide an Ask
-equivalent for ports, since maintaining two mechanisms to achieve the same effect is inefficient, and this communication pattern is very unusual for the Kompics model.
- We also get no feedback when the events sent to the port are handled. In order to see them being handled at all, we added a thread::sleep(...)
invocation. Events and messages in Kompact do not share the same queues and there are no ordering guarantees between them. Quite the opposite, in fact: Kompact ensures a certain amount of fairness between the two mechanisms and by default will try to handle one message for every event it handles. Thus, without the sleep, we would see between one (the start event) and 101 events counted by the time the final Ask
returns. Even so, it is not guaranteed that any or all events are handled before the sleep expires. It is just very likely, if your computer isn't terribly slow.
Note: As before, if you have checked out the examples folder you can run the concrete binary with:
cargo run --release --bin counters
Conclusions
We have shown how Kompact handles internal state, and that it is automatically shared between the two different communication styles Kompact provides.
We have also seen that there are no ordering guarantees between port and message communication, something that is also true among different ports on the same component. It is thus important to remember that applications which require a certain sequence of events to be processed before proceeding must verify completion through the same communication style, and even through the same port.
We will go through all the new parts introduced in this chapter again in detail in the following sections.
Local Kompact
In this section we will introduce in detail those features of Kompact which do not use the networking subsystem.
In particular, we will talk about different styles of communication, as well as Kompact's built-in timer facilities, before describing some of the advanced options for Kompact systems, such as schedulers, logging, and configuration.
Communication
In this chapter we are going to introduce Kompact's communication mechanisms in detail. We will do so by building up a longer example: a worker pool that, given an array of data, aggregates the data with a given function, splitting the work over a predetermined number of worker components. The entry point to the pool is a Manager
component, which takes work requests, distributes them evenly over its worker pool, waits for the results to come in, aggregates the results, and finally responds to the original request. To keep things simple, we will only deal with u64
arrays and aggregation results for now, and we will assume that our aggregation functions are both associative and commutative. It should be easy to see how a more generic version could be implemented that accepts data types other than u64
and avoids the commutativity requirement.
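To see why associativity and commutativity matter here, note that partial results from the workers arrive in a nondeterministic order and are merged again by the manager. A quick illustration in plain Rust, independent of Kompact:

```rust
fn main() {
    let data: Vec<u64> = (1..=6).collect();
    let sum: fn(u64, &u64) -> u64 = |acc, x| acc + *x;

    // Each "worker" folds its slice starting from the neutral element 0.
    let left = data[..3].iter().fold(0u64, sum); // 1+2+3 = 6
    let right = data[3..].iter().fold(0u64, sum); // 4+5+6 = 15

    // Because + is associative and commutative, the manager can merge
    // the partial results in any arrival order and get the same total.
    assert_eq!(sum(left, &right), 21);
    assert_eq!(sum(right, &left), 21);
    assert_eq!(data.iter().fold(0u64, sum), 21);

    // A non-commutative merger like subtraction would not allow this:
    // splitting the work and merging partials changes the result.
    let sub: fn(u64, &u64) -> u64 = |acc, x| acc.wrapping_sub(*x);
    let l = data[..3].iter().fold(100u64, sub); // 100-1-2-3 = 94
    let r = data[3..].iter().fold(100u64, sub); // 100-4-5-6 = 85
    assert_ne!(sub(l, &r), data.iter().fold(100u64, sub)); // 9 vs 79
}
```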
Messages and Events
To begin with, we must decide what we want to send as events and what as messages, and how exactly our message and event types should look.
Messages
For the incoming work assignments, we want some kind of request-response-style communication pattern, so that we can reply once the aggregation is complete.
We will therefore use Actor communication for this part of the example, so that we can later use the "ask"-pattern again from the main thread. For now, we know that the result of the work is a u64
, which we will wrap in a WorkResult
struct for clarity of purpose.
#![allow(clippy::unused_unit)]
use kompact::prelude::*;
use std::{env, fmt, ops::Range, sync::Arc};
struct Work {
data: Arc<[u64]>,
merger: fn(u64, &u64) -> u64,
neutral: u64,
}
impl Work {
fn with(data: Vec<u64>, merger: fn(u64, &u64) -> u64, neutral: u64) -> Self {
let moved_data: Arc<[u64]> = data.into_boxed_slice().into();
Work {
data: moved_data,
merger,
neutral,
}
}
}
impl fmt::Debug for Work {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(
f,
"Work{{
data=<data of length={}>,
merger=<function>,
neutral={}
}}",
self.data.len(),
self.neutral
)
}
}
struct WorkPart {
data: Arc<[u64]>,
range: Range<usize>,
merger: fn(u64, &u64) -> u64,
neutral: u64,
}
impl WorkPart {
fn from(work: &Work, range: Range<usize>) -> Self {
WorkPart {
data: work.data.clone(),
range,
merger: work.merger,
neutral: work.neutral,
}
}
}
impl fmt::Debug for WorkPart {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(
f,
"WorkPart{{
data=<data of length={}>,
range={:?},
merger=<function>,
neutral={}
}}",
self.data.len(),
self.range,
self.neutral
)
}
}
#[derive(Clone, Debug)]
struct WorkResult(u64);
struct WorkerPort;
impl Port for WorkerPort {
type Indication = WorkResult;
type Request = Never;
}
#[derive(ComponentDefinition)]
struct Manager {
ctx: ComponentContext<Self>,
worker_port: RequiredPort<WorkerPort>,
num_workers: usize,
workers: Vec<Arc<Component<Worker>>>,
worker_refs: Vec<ActorRefStrong<WorkPart>>,
outstanding_request: Option<Ask<Work, WorkResult>>,
result_accumulator: Vec<u64>,
}
impl Manager {
fn new(num_workers: usize) -> Self {
Manager {
ctx: ComponentContext::uninitialised(),
worker_port: RequiredPort::uninitialised(),
num_workers,
workers: Vec::with_capacity(num_workers),
worker_refs: Vec::with_capacity(num_workers),
outstanding_request: None,
result_accumulator: Vec::with_capacity(num_workers + 1),
}
}
}
impl ComponentLifecycle for Manager {
fn on_start(&mut self) -> Handled {
// set up our workers
for _i in 0..self.num_workers {
let worker = self.ctx.system().create(Worker::new);
worker.connect_to_required(self.worker_port.share());
let worker_ref = worker.actor_ref().hold().expect("live");
self.ctx.system().start(&worker);
self.workers.push(worker);
self.worker_refs.push(worker_ref);
}
Handled::Ok
}
fn on_stop(&mut self) -> Handled {
// clean up after ourselves
self.worker_refs.clear();
let system = self.ctx.system();
self.workers.drain(..).for_each(|worker| {
system.stop(&worker);
});
Handled::Ok
}
fn on_kill(&mut self) -> Handled {
self.on_stop()
}
}
impl Require<WorkerPort> for Manager {
fn handle(&mut self, event: WorkResult) -> Handled {
if self.outstanding_request.is_some() {
self.result_accumulator.push(event.0);
if self.result_accumulator.len() == (self.num_workers + 1) {
let ask = self.outstanding_request.take().expect("ask");
let work: &Work = ask.request();
let res = self
.result_accumulator
.iter()
.fold(work.neutral, work.merger);
self.result_accumulator.clear();
let reply = WorkResult(res);
ask.reply(reply).expect("reply");
}
} else {
error!(
self.log(),
"Got a response without an outstanding promise: {:?}", event
);
}
Handled::Ok
}
}
impl Actor for Manager {
type Message = Ask<Work, WorkResult>;
fn receive_local(&mut self, msg: Self::Message) -> Handled {
assert!(
self.outstanding_request.is_none(),
"One request at a time, please!"
);
let work: &Work = msg.request();
if self.num_workers == 0 {
// manager gotta work itself -> very unhappy manager
let res = work.data.iter().fold(work.neutral, work.merger);
msg.reply(WorkResult(res)).expect("reply");
} else {
let len = work.data.len();
let stride = len / self.num_workers;
let mut start = 0usize;
let mut index = 0;
while start < len && index < self.num_workers {
let end = len.min(start + stride);
let range = start..end;
info!(self.log(), "Assigning {:?} to worker #{}", range, index);
let msg = WorkPart::from(work, range);
let worker = &self.worker_refs[index];
worker.tell(msg);
start += stride;
index += 1;
}
if start < len {
// manager just does the rest itself
let res = work.data[start..len].iter().fold(work.neutral, work.merger);
self.result_accumulator.push(res);
} else {
// just put a neutral element in there, so our count is right in the end
self.result_accumulator.push(work.neutral);
}
self.outstanding_request = Some(msg);
}
Handled::Ok
}
fn receive_network(&mut self, _msg: NetMessage) -> Handled {
unimplemented!("Still ignoring networking stuff.");
}
}
#[derive(ComponentDefinition)]
struct Worker {
ctx: ComponentContext<Self>,
worker_port: ProvidedPort<WorkerPort>,
}
impl Worker {
fn new() -> Self {
Worker {
ctx: ComponentContext::uninitialised(),
worker_port: ProvidedPort::uninitialised(),
}
}
}
ignore_lifecycle!(Worker);
ignore_requests!(WorkerPort, Worker);
impl Actor for Worker {
type Message = WorkPart;
fn receive_local(&mut self, msg: Self::Message) -> Handled {
let my_slice = &msg.data[msg.range];
let res = my_slice.iter().fold(msg.neutral, msg.merger);
self.worker_port.trigger(WorkResult(res));
Handled::Ok
}
fn receive_network(&mut self, _msg: NetMessage) -> Handled {
unimplemented!("Still ignoring networking stuff.");
}
}
pub fn main() {
let args: Vec<String> = env::args().collect();
assert_eq!(
3,
args.len(),
"Invalid arguments! Must give number of workers and size of the data array."
);
let num_workers: usize = args[1].parse().expect("number");
let data_size: usize = args[2].parse().expect("number");
run_task(num_workers, data_size);
}
fn run_task(num_workers: usize, data_size: usize) {
let system = KompactConfig::default().build().expect("system");
let manager = system.create(move || Manager::new(num_workers));
system.start(&manager);
let manager_ref = manager.actor_ref().hold().expect("live");
let data: Vec<u64> = (1..=data_size).map(|v| v as u64).collect();
let work = Work::with(data, overflowing_sum, 0u64);
println!("Sending request...");
let res = manager_ref.ask(work).wait();
println!("*******\nGot result: {}\n*******", res.0);
assert_eq!(triangular_number(data_size as u64), res.0);
system.shutdown().expect("shutdown");
}
fn triangular_number(n: u64) -> u64 {
(n * (n + 1u64)) / 2u64
}
fn overflowing_sum(lhs: u64, rhs: &u64) -> u64 {
lhs.overflowing_add(*rhs).0
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_workers() {
run_task(3, 1000);
}
}
We also know that we need to pass a data array and an aggregation function when we make a request for work to be done. Since we will later want to share the data with our workers, we put it into an atomically reference-counted slice, i.e. Arc<[u64]>
. For the aggregation function, we simply pass a function pointer of type fn(u64, &u64) -> u64
, which is the signature accepted by the fold
function on an iterator. However, in order to start a fold
we also need a neutral element, which depends on the aggregation function, so we add that to the work request as a field as well.
struct Work {
    data: Arc<[u64]>,
    merger: fn(u64, &u64) -> u64,
    neutral: u64,
}
impl Work {
    fn with(data: Vec<u64>, merger: fn(u64, &u64) -> u64, neutral: u64) -> Self {
        let moved_data: Arc<[u64]> = data.into_boxed_slice().into();
        Work {
            data: moved_data,
            merger,
            neutral,
        }
    }
}
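The pieces fit together as follows: the data lives behind a cheaply clonable Arc, and the merger is a plain function pointer usable directly with fold. A standalone sketch (the names mirror the example, but this runs without Kompact):

```rust
use std::sync::Arc;

fn overflowing_sum(lhs: u64, rhs: &u64) -> u64 {
    lhs.overflowing_add(*rhs).0
}

fn main() {
    // Moving a Vec into an Arc<[u64]> makes later sharing cheap:
    // cloning the Arc copies a pointer, not the data.
    let data: Vec<u64> = (1..=10).collect();
    let shared: Arc<[u64]> = data.into_boxed_slice().into();
    let for_worker = shared.clone(); // no data copy

    // `fold` takes the neutral element and the merger function pointer.
    let total = for_worker.iter().fold(0u64, overflowing_sum);
    assert_eq!(total, 55);
}
```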
We will also use message-based communication for the work assignments to the individual workers in the pool. Since we want to send a different message to each worker, message addressing is a better fit here than component broadcasting. The WorkPart
message is essentially the same as the Work
message, except that we add the range that this particular worker is supposed to aggregate.
struct WorkPart {
    data: Arc<[u64]>,
    range: Range<usize>,
    merger: fn(u64, &u64) -> u64,
    neutral: u64,
}
impl WorkPart {
    fn from(work: &Work, range: Range<usize>) -> Self {
        WorkPart {
            data: work.data.clone(),
            range,
            merger: work.merger,
            neutral: work.neutral,
        }
    }
}
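The manager's splitting logic in the full listing partitions the indices into stride-sized ranges, one per worker, with any leftover tail folded by the manager itself. Extracted as a standalone function (a sketch that mirrors the Manager's loop; it assumes at least one worker, the zero-worker case being handled separately in the listing):

```rust
use std::ops::Range;

// Mirrors the Manager's loop: `stride = len / num_workers`, one range
// per worker, plus the remainder the manager keeps for itself.
fn split(len: usize, num_workers: usize) -> (Vec<Range<usize>>, Range<usize>) {
    let stride = len / num_workers;
    let mut ranges = Vec::new();
    let mut start = 0usize;
    let mut index = 0usize;
    while start < len && index < num_workers {
        let end = len.min(start + stride);
        ranges.push(start..end);
        start += stride;
        index += 1;
    }
    (ranges, start.min(len)..len)
}

fn main() {
    // 10 elements over 3 workers: stride = 3, the manager takes the tail.
    let (ranges, rest) = split(10, 3);
    assert_eq!(ranges, vec![0..3, 3..6, 6..9]);
    assert_eq!(rest, 9..10);

    // 9 elements over 3 workers: divides evenly, nothing left over.
    let (ranges, rest) = split(9, 3);
    assert_eq!(ranges, vec![0..3, 3..6, 6..9]);
    assert!(rest.is_empty());
}
```

When the data divides evenly, the listing pushes the neutral element into the accumulator instead, so the expected count of `num_workers + 1` partial results always works out.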
Note: Both Actor messages and events must implement the
std::fmt::Debug
trait in Kompact. Since both Work
and WorkPart
contain function pointers, which do not have a sensible Debug
representation, we implement it manually instead of deriving and simply put a "<function>"
placeholder in its stead. Since our data arrays can be really big, we also only print their length.
Events
We will use Kompics-style communication for the results going from the workers to the manager. These happen to be of the same type as the final result: a u64
wrapped in a WorkResult
. So all we have to do is derive the std::clone::Clone
trait, which is required for events.
#[derive(Clone, Debug)]
struct WorkResult(u64);
We also need a port on which the WorkResult can travel; let's call it a WorkerPort. Based on the naming, say a Worker provides the WorkerPort service, so messages from Worker to Manager are indications. Since we are using messages for requesting work to be done, we don't need any request event on the WorkerPort and will just use the empty Never type again.
#[derive(Clone, Debug)]
struct WorkResult(u64);

struct WorkerPort;
impl Port for WorkerPort {
    type Indication = WorkResult;
    type Request = Never;
}
State
Of course, we are going to need some component state to make this aggregation pool work out correctly.
Worker
Our workers are pretty much stateless, apart from the component context and the provided WorkerPort instance. Both of these fields are simply initialised in the new() function.
#[derive(ComponentDefinition)]
struct Worker {
    ctx: ComponentContext<Self>,
    worker_port: ProvidedPort<WorkerPort>,
}
impl Worker {
    fn new() -> Self {
        Worker {
            ctx: ComponentContext::uninitialised(),
            worker_port: ProvidedPort::uninitialised(),
        }
    }
}
Manager
The manager is a bit more complicated, of course. First of all, it needs to know how many workers to start, and it must keep track of all their instances (Arc<Component<Worker>>) so it can shut them down again later. We will also have the manager hang on to an actor reference for each worker, so we don't have to create new ones from the instances later. In fact, we will hold on to strong references (ActorRefStrong<WorkPart>), since we know the workers are not going to be deallocated until we remove them from our vector of instances.
Note: Strong actor references are a bit more efficient than weak ones (ActorRef<_>), as they avoid upgrading some internal std::sync::Weak instances to std::sync::Arc instances on every message. However, they prevent deallocation of the target components, and should thus be used with care in dynamic Kompact systems.
Additionally, we need to keep track of which request, if any, we are currently working on, so we can answer it later. We'll make our lives easy for now and only deal with one request at a time, but this mechanism can easily be extended to multiple outstanding requests with some additional bookkeeping.
We also need to put the partial results from each worker somewhere and figure out when we have seen all of them, so we can combine them and reply to the request. We could keep a single u64 accumulator value around, into which we merge each result as soon as it arrives from a worker. In that case, we would also need a field counting how many responses we have already received, so we know when we are done. Since we won't have too many workers, however, we will simply put all results into a vector, and consider ourselves done when the vector contains number_of_workers + 1 entries (we'll get back to the +1 part later).
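The single-accumulator alternative mentioned above could be sketched like this. The Accumulator type is hypothetical and not part of the example code; it uses the same merger signature as Work:

```rust
// Hypothetical alternative to the result vector: merge each worker
// result into a single value and count responses instead.
struct Accumulator {
    value: u64,
    received: usize,
    expected: usize,
}

impl Accumulator {
    fn new(neutral: u64, expected: usize) -> Self {
        Accumulator { value: neutral, received: 0, expected }
    }

    /// Merge one result; returns `Some(final_value)` once all results are in.
    fn add(&mut self, result: u64, merger: fn(u64, &u64) -> u64) -> Option<u64> {
        self.value = merger(self.value, &result);
        self.received += 1;
        if self.received == self.expected {
            Some(self.value)
        } else {
            None
        }
    }
}

fn sum(lhs: u64, rhs: &u64) -> u64 {
    lhs + rhs
}

fn main() {
    let mut acc = Accumulator::new(0, 3);
    assert_eq!(acc.add(1, sum), None);
    assert_eq!(acc.add(2, sum), None);
    assert_eq!(acc.add(3, sum), Some(6));
}
```

This trades the vector for constant memory, which only matters with very many workers.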
And finally, of course, we also need a component context, and we need to require the WorkerPort.
#[derive(ComponentDefinition)]
struct Manager {
    ctx: ComponentContext<Self>,
    worker_port: RequiredPort<WorkerPort>,
    num_workers: usize,
    workers: Vec<Arc<Component<Worker>>>,
    worker_refs: Vec<ActorRefStrong<WorkPart>>,
    outstanding_request: Option<Ask<Work, WorkResult>>,
    result_accumulator: Vec<u64>,
}
impl Manager {
    fn new(num_workers: usize) -> Self {
        Manager {
            ctx: ComponentContext::uninitialised(),
            worker_port: RequiredPort::uninitialised(),
            num_workers,
            workers: Vec::with_capacity(num_workers),
            worker_refs: Vec::with_capacity(num_workers),
            outstanding_request: None,
            result_accumulator: Vec::with_capacity(num_workers + 1),
        }
    }
}
Handlers
Now that we have set up all the messages, events, and state, we need to actually implement the behaviours of the Manager and the Worker. That means we need to implement the Actor trait for both components, to handle the messages we are sending, and we also need to implement the appropriate event handling traits: ComponentLifecycle and Require<WorkerPort> for the Manager, and ComponentLifecycle and Provide<WorkerPort> for the Worker.
Worker
Actor
Since the worker is stateless, its implementation is really simple. It's basically just a fancy wrapper around a slice fold. That is, whenever we get a WorkPart message in our receive_local(...) function from the Actor trait, we simply take a slice of the range we were allocated to work on, and then call fold(msg.neutral, msg.merger) on it to produce the desired u64 result. We then wrap the result into a WorkResult, which we trigger on our instance of WorkerPort so it gets back to the manager.
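Stripped of the component machinery, the computation itself can be tried in plain Rust. The work_part helper below is hypothetical, but it uses the same merger signature and the same fold as the example code:

```rust
use std::{ops::Range, sync::Arc};

// Same merger signature as in the example code.
fn overflowing_sum(lhs: u64, rhs: &u64) -> u64 {
    lhs.overflowing_add(*rhs).0
}

// Hypothetical helper: fold one assigned sub-range of the shared data.
fn work_part(data: &Arc<[u64]>, range: Range<usize>) -> u64 {
    data[range].iter().fold(0u64, overflowing_sum)
}

fn main() {
    let data: Arc<[u64]> = (1..=10u64).collect::<Vec<_>>().into();
    // Indices 2..5 select the values 3, 4, 5, which sum to 12.
    assert_eq!(work_part(&data, 2..5), 12);
}
```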
impl Actor for Worker {
    type Message = WorkPart;

    fn receive_local(&mut self, msg: Self::Message) -> Handled {
        let my_slice = &msg.data[msg.range];
        let res = my_slice.iter().fold(msg.neutral, msg.merger);
        self.worker_port.trigger(WorkResult(res));
        Handled::Ok
    }

    fn receive_network(&mut self, _msg: NetMessage) -> Handled {
        unimplemented!("Still ignoring networking stuff.");
    }
}
Ports
We also need to provide implementations for ComponentLifecycle and WorkerPort, because they are expected by Kompact. However, we don't actually want to do anything interesting with them, so we will use the ignore_lifecycle!(Worker) macro to generate an empty ComponentLifecycle implementation, and similarly use the ignore_requests!(WorkerPort, Worker) macro to generate an empty Provide<WorkerPort> implementation.
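The reason an empty Provide<WorkerPort> implementation loses nothing can be illustrated without Kompact at all. The local Never below merely mimics the idea behind the library's type; the actual macro expansions are not shown here:

```rust
// An empty enum has no values at all, so a handler taking one can never
// actually be invoked; matching on it needs no arms.
enum Never {}

#[allow(dead_code)]
fn handle_request(event: Never) -> u64 {
    match event {} // uninhabited: compiles with zero match arms
}

fn main() {
    // Uninhabited types take up no space and cannot be constructed.
    assert_eq!(std::mem::size_of::<Never>(), 0);
}
```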
ignore_lifecycle!(Worker);
ignore_requests!(WorkerPort, Worker);
impl Actor for Worker {
type Message = WorkPart;
fn receive_local(&mut self, msg: Self::Message) -> Handled {
let my_slice = &msg.data[msg.range];
let res = my_slice.iter().fold(msg.neutral, msg.merger);
self.worker_port.trigger(WorkResult(res));
Handled::Ok
}
fn receive_network(&mut self, _msg: NetMessage) -> Handled {
unimplemented!("Still ignoring networking stuff.");
}
}
pub fn main() {
let args: Vec<String> = env::args().collect();
assert_eq!(
3,
args.len(),
"Invalid arguments! Must give number of workers and size of the data array."
);
let num_workers: usize = args[1].parse().expect("number");
let data_size: usize = args[2].parse().expect("number");
run_task(num_workers, data_size);
}
fn run_task(num_workers: usize, data_size: usize) {
let system = KompactConfig::default().build().expect("system");
let manager = system.create(move || Manager::new(num_workers));
system.start(&manager);
let manager_ref = manager.actor_ref().hold().expect("live");
let data: Vec<u64> = (1..=data_size).map(|v| v as u64).collect();
let work = Work::with(data, overflowing_sum, 0u64);
println!("Sending request...");
let res = manager_ref.ask(work).wait();
println!("*******\nGot result: {}\n*******", res.0);
assert_eq!(triangular_number(data_size as u64), res.0);
system.shutdown().expect("shutdown");
}
fn triangular_number(n: u64) -> u64 {
(n * (n + 1u64)) / 2u64
}
fn overflowing_sum(lhs: u64, rhs: &u64) -> u64 {
lhs.overflowing_add(*rhs).0
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_workers() {
run_task(3, 1000);
}
}
Manager
The manager needs to do three things: 1) manage the worker pool, 2) split up work requests and send chunks to workers, and 3) collect the results from the workers, combine them, and reply with the final result.
We will deal with 1) in a ComponentLifecycle handler, and with 3) in the handler for WorkerPort, of course. For 2), however, we want to use the "ask"-pattern again, so we will look at that in the next section.
ComponentLifecycle
Whenever the manager is started (or restarted after being paused) we must populate our pool of workers and connect them appropriately. We can create new components from within an actor by using the system() reference from the ComponentContext. Of course, we must also remember to actually start the new components, or nothing will happen when we send messages to them later. Additionally, we must fill the appropriate state with component instances and actor references, as we discussed in the previous section.
Note: As opposed to many other Actor or Component frameworks, Kompact does not produce a hierarchical structure when calling create(...) from within a component. This is because Kompact does not place as strong a focus on error handling and supervision as other systems do, and maintaining a hierarchical structure is more complicated than maintaining a flat one.
When the manager gets shut down or paused, we will clean up the worker pool completely. Even if we are only paused temporarily, it is better to reduce our footprint by cleaning up than to forget to do so and hang on to all the pool memory while not running anyway.
// ... (preceding definitions are unchanged from the full listing above) ...
impl ComponentLifecycle for Manager {
fn on_start(&mut self) -> Handled {
// set up our workers
for _i in 0..self.num_workers {
let worker = self.ctx.system().create(Worker::new);
worker.connect_to_required(self.worker_port.share());
let worker_ref = worker.actor_ref().hold().expect("live");
self.ctx.system().start(&worker);
self.workers.push(worker);
self.worker_refs.push(worker_ref);
}
Handled::Ok
}
fn on_stop(&mut self) -> Handled {
// clean up after ourselves
self.worker_refs.clear();
let system = self.ctx.system();
self.workers.drain(..).for_each(|worker| {
system.stop(&worker);
});
Handled::Ok
}
fn on_kill(&mut self) -> Handled {
self.on_stop()
}
}
// ... (remaining definitions are unchanged from the full listing above) ...
Note: We are getting the required ActorRefStrong&lt;WorkPart&gt; by first calling actor_ref() on a worker instance and then upgrading the result via hold() from an ActorRef&lt;WorkPart&gt; to an ActorRefStrong&lt;WorkPart&gt;. This returns a Result, as upgrading is impossible if the component has already been deallocated. However, since we are also holding on to the actual instance of the component here, we know it is not deallocated yet and hold() cannot fail, so we simply call expect(...) to unwrap it.
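The relationship between ActorRef and ActorRefStrong mirrors that of std::sync::Weak and Arc in the standard library. The upgrade semantics can be sketched with only std types (this is an analogy for the reference semantics, not Kompact's internals):

```rust
use std::sync::Arc;

/// Returns whether upgrading succeeds before and after the strong
/// handle has been dropped.
fn upgrade_demo() -> (bool, bool) {
    // Stands in for the Arc<Component<Worker>> we keep in `self.workers`.
    let component = Arc::new("worker state");
    // A plain ActorRef behaves like a Weak handle: it does not keep
    // the component alive on its own.
    let weak = Arc::downgrade(&component);
    // hold() corresponds to upgrading: it succeeds while a strong
    // handle still exists...
    let before = weak.upgrade().is_some();
    drop(component);
    // ...and fails once the component has been deallocated.
    let after = weak.upgrade().is_some();
    (before, after)
}

fn main() {
    assert_eq!(upgrade_demo(), (true, false));
    println!("upgrade works only while the component is alive");
}
```

This is exactly why expect(...) is safe in on_start: the workers vector retains a strong handle to each component, so the upgrade cannot fail at that point.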
Worker Port
Whenever we get a WorkResult from a worker, we temporarily store it in self.result_accumulator, as long as we have an outstanding request. After every new addition to the accumulator, we check whether we have received all responses via self.result_accumulator.len() == (self.num_workers + 1) (again, more on the + 1 later). If so, we perform the final aggregation on the accumulator via fold(work.neutral, work.merger) and then reply(...) to the outstanding request. Of course, we must also clean up after ourselves, i.e. reset self.outstanding_request to None and clear out self.result_accumulator.
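The final aggregation is nothing more than an ordinary fold over the accumulated partial results, using the job's neutral element and merger function. A standalone sketch, reusing the overflowing_sum merger from the listing:

```rust
// The merger used throughout the example: a wrapping sum.
fn overflowing_sum(lhs: u64, rhs: &u64) -> u64 {
    lhs.overflowing_add(*rhs).0
}

/// Combine the workers' partial results exactly as the manager does:
/// fold the accumulator with the job's merger, starting from `neutral`.
fn merge_results(acc: &[u64], neutral: u64, merger: fn(u64, &u64) -> u64) -> u64 {
    acc.iter().fold(neutral, merger)
}

fn main() {
    // Three workers returned partial sums; the manager's own share is 0.
    let accumulator = vec![6, 15, 24, 0];
    assert_eq!(merge_results(&accumulator, 0, overflowing_sum), 45);
}
```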
// ... (preceding definitions are unchanged from the full listing above) ...
impl Require<WorkerPort> for Manager {
fn handle(&mut self, event: WorkResult) -> Handled {
if self.outstanding_request.is_some() {
self.result_accumulator.push(event.0);
if self.result_accumulator.len() == (self.num_workers + 1) {
let ask = self.outstanding_request.take().expect("ask");
let work: &Work = ask.request();
let res = self
.result_accumulator
.iter()
.fold(work.neutral, work.merger);
self.result_accumulator.clear();
let reply = WorkResult(res);
ask.reply(reply).expect("reply");
}
} else {
error!(
self.log(),
"Got a response without an outstanding promise: {:?}", event
);
}
Handled::Ok
}
}
// ... (remaining definitions are unchanged from the full listing above) ...
Ask
We have now mentioned multiple times that we want to use the "ask"-pattern again, which we already encountered briefly in the introduction. The "ask"-pattern is simply a mechanism to translate from message-based communication into a thread- or future-based model. It does so by coupling a request message with a future that is to be fulfilled with a response to the request. On the sending side, the one holding the future, the result can then be waited for when it is needed, for example via blocking. The receiving actor, on the other hand, gets a combination of the request message with a promise, an Ask instance, which it can fulfil with the response at any later time. Thus, the Message type for such an actor is not Request but rather Ask&lt;Request, Response&gt;.
Note: The futures returned by Kompact's "ask"-API conform with Rust's built-in async/await mechanism. On top of that, Kompact offers some convenience methods that can be called on a KFuture to make the common case of blocking on the result easier.
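The mechanics of this request/promise coupling can be sketched with nothing but standard-library channels. Note that this is only an analogy for the pattern; Kompact's actual Ask and KFuture types are dedicated one-shot primitives:

```rust
use std::sync::mpsc;
use std::thread;

// A hand-rolled stand-in for Kompact's Ask<Request, Response>: the
// request travels together with the sending half of a one-shot reply
// channel, which plays the role of the promise.
struct Ask<Req, Resp> {
    request: Req,
    promise: mpsc::Sender<Resp>,
}

/// Sends a request to an "actor" thread and blocks on the reply,
/// mirroring ask(...).wait() on the Kompact side.
fn ask_roundtrip(request: u64) -> u64 {
    let (work_tx, work_rx) = mpsc::channel::<Ask<u64, u64>>();
    // The receiving actor: take the request and fulfil the promise.
    let actor = thread::spawn(move || {
        let ask = work_rx.recv().expect("request");
        ask.promise.send(ask.request * 2).expect("reply");
    });
    // The asking side: couple the request with a fresh reply channel,
    // send it off, and block until the response arrives.
    let (reply_tx, reply_rx) = mpsc::channel();
    work_tx
        .send(Ask { request, promise: reply_tx })
        .expect("send");
    let response = reply_rx.recv().expect("response");
    actor.join().expect("join");
    response
}

fn main() {
    assert_eq!(ask_roundtrip(21), 42);
}
```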
Manager
The Message type for the manager is thus Ask&lt;Work, WorkResult&gt;, which we already saw when describing its state. In order to access the actual Work instance in our receive_local(...) implementation, we use the Ask::request() function.
We then must distribute the work more or less evenly over the available workers. If no workers are available, the manager simply does all the work itself. Otherwise, we figure out what constitutes an "equal share" (i.e. the stride) and then use it to step through the indices into the data, producing sub-ranges, which we send to each worker immediately. Since it may sometimes happen, due to rounding, that a small amount of work is left over at the end, we just do that part at the manager and put the result directly into self.result_accumulator. This extra work is the reason for the previously mentioned + 1 whenever we consider the length of self.result_accumulator; it is simply the manager's share of the work. In order to keep this length consistent, we simply push the work.neutral element whenever the manager does not actually do any work. Finally, we must remember to store the request in self.outstanding_request so we can reply to it later, once all responses have arrived.
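The splitting arithmetic can be checked in isolation. The sketch below mirrors the manager's loop: with 10 data elements and 3 workers the stride is 10 / 3 = 3, the workers receive the ranges 0..3, 3..6, and 6..9, and the manager keeps the leftover 9..10 for itself:

```rust
use std::ops::Range;

/// Reproduces the manager's splitting logic: the ranges sent to the
/// workers plus the leftover range the manager processes itself
/// (empty if the data divides evenly).
fn split(len: usize, num_workers: usize) -> (Vec<Range<usize>>, Range<usize>) {
    let stride = len / num_workers;
    let mut ranges = Vec::new();
    let mut start = 0usize;
    let mut index = 0usize;
    while start < len && index < num_workers {
        let end = len.min(start + stride);
        ranges.push(start..end);
        start += stride;
        index += 1;
    }
    // Whatever is left after the last full stride is the manager's share.
    (ranges, start.min(len)..len)
}

fn main() {
    let (workers, leftover) = split(10, 3);
    assert_eq!(workers, vec![0..3, 3..6, 6..9]);
    assert_eq!(leftover, 9..10);
}
```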
// ... (preceding definitions are unchanged from the full listing above) ...
impl Actor for Manager {
type Message = Ask<Work, WorkResult>;
fn receive_local(&mut self, msg: Self::Message) -> Handled {
assert!(
self.outstanding_request.is_none(),
"One request at a time, please!"
);
let work: &Work = msg.request();
if self.num_workers == 0 {
// manager gotta work itself -> very unhappy manager
let res = work.data.iter().fold(work.neutral, work.merger);
msg.reply(WorkResult(res)).expect("reply");
} else {
let len = work.data.len();
let stride = len / self.num_workers;
let mut start = 0usize;
let mut index = 0;
while start < len && index < self.num_workers {
let end = len.min(start + stride);
let range = start..end;
info!(self.log(), "Assigning {:?} to worker #{}", range, index);
let msg = WorkPart::from(work, range);
let worker = &self.worker_refs[index];
worker.tell(msg);
start += stride;
index += 1;
}
if start < len {
// manager just does the rest itself
let res = work.data[start..len].iter().fold(work.neutral, work.merger);
self.result_accumulator.push(res);
} else {
// just put a neutral element in there, so our count is right in the end
self.result_accumulator.push(work.neutral);
}
self.outstanding_request = Some(msg);
}
Handled::Ok
}
fn receive_network(&mut self, _msg: NetMessage) -> Handled {
unimplemented!("Still ignoring networking stuff.");
}
}
// ... (remaining definitions are unchanged from the full listing above) ...
Sending Work
When sending work to the manager from the main thread, we can construct the required Ask instance with ActorRef::ask(...). Since we only want to handle a single request at a time, we immediately wait() for the result of the future.
// ... (preceding definitions are unchanged from the full listing above) ...
pub fn main() {
let args: Vec<String> = env::args().collect();
assert_eq!(
3,
args.len(),
"Invalid arguments! Must give number of workers and size of the data array."
);
let num_workers: usize = args[1].parse().expect("number");
let data_size: usize = args[2].parse().expect("number");
run_task(num_workers, data_size);
}
fn run_task(num_workers: usize, data_size: usize) {
let system = KompactConfig::default().build().expect("system");
let manager = system.create(move || Manager::new(num_workers));
system.start(&manager);
let manager_ref = manager.actor_ref().hold().expect("live");
let data: Vec<u64> = (1..=data_size).map(|v| v as u64).collect();
let work = Work::with(data, overflowing_sum, 0u64);
println!("Sending request...");
let res = manager_ref.ask(work).wait();
println!("*******\nGot result: {}\n*******", res.0);
assert_eq!(triangular_number(data_size as u64), res.0);
system.shutdown().expect("shutdown");
}
fn triangular_number(n: u64) -> u64 {
(n * (n + 1u64)) / 2u64
}
fn overflowing_sum(lhs: u64, rhs: &u64) -> u64 {
lhs.overflowing_add(*rhs).0
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_workers() {
run_task(3, 1000);
}
}
Note: For situations where the Ask instance is nested, for example into an enum, Kompact offers the ActorRef::ask_with function. Instead of a Request value, ask_with expects a function that takes a KPromise<Result> and produces the Actor's Message type. This also allows for custom Ask variants with more fields, for example.
System
In order to run any Kompact component, we need a KompactSystem. The system manages the runtime variables, the thread pool, logging, and many other aspects of Kompact. Such a system is created from a KompactConfig via the build() function. The config instance allows customisation of many parameters of the runtime, which we will discuss in upcoming sections. For now, the default() instance will do just fine. It creates a thread pool with one thread for each CPU core, as reported by num_cpus, schedules fairly between messages and events, and does some internal message/event batching to improve performance.
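To get a feel for what the default pool size would be on your machine, the standard library offers a similar core count to the num_cpus crate that Kompact queries (the two can differ, e.g. under cgroup limits, so treat this as an approximation):

```rust
use std::thread;

fn main() {
    // available_parallelism reports how many threads the system can run
    // concurrently; this is the kind of value the default pool is sized by.
    let cores = thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(1);
    println!("a default pool here would use about {} threads", cores);
    assert!(cores >= 1);
}
```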
Note: As opposed to Kompics, in Kompact it is perfectly viable to have multiple systems running in the same process, for example with different configurations.
When a Kompact system is no longer needed, it should be shut down via the shutdown() function. Sometimes it is a component, rather than the main thread, that must decide when to shut down. In that case, it can use self.ctx.system().shutdown_async(), and the main thread can wait for this to complete with await_termination().
Note: Neither shutdown_async() nor await_termination() has a particularly efficient implementation, as shutting down should be a relatively rare event in the lifetime of a Kompact system and thus doesn't warrant optimisation at this point. That also means, though, that await_termination() should definitely not be used as a timing marker in a benchmark, as some people have done with the equivalent Akka API.
Tying Things Together
For our worker pool example, we will simply use the default configuration and start a Manager component with a configurable number of workers. We then create a data array of configurable size and send a work request with it and an aggregation function to the manager instance. We'll use addition with overflow as our aggregation function, which means our neutral element is 0u64. The data array we'll generate is simply the integers from 1 to data_size, which means our aggregate will actually calculate a triangular number (modulo overflows, for which we probably don't have enough memory for the data array anyway). Since that particular number has a much simpler closed form, i.e. \( \sum_{k=1}^n k = \frac{n\cdot(n+1)}{2} \), we will also use an assertion to verify we are actually producing the right result (again, this probably won't work if we actually do overflow during aggregation, but oh well...details ;).
#![allow(clippy::unused_unit)]
use kompact::prelude::*;
use std::{env, fmt, ops::Range, sync::Arc};
struct Work {
data: Arc<[u64]>,
merger: fn(u64, &u64) -> u64,
neutral: u64,
}
impl Work {
fn with(data: Vec<u64>, merger: fn(u64, &u64) -> u64, neutral: u64) -> Self {
let moved_data: Arc<[u64]> = data.into_boxed_slice().into();
Work {
data: moved_data,
merger,
neutral,
}
}
}
impl fmt::Debug for Work {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(
f,
"Work{{
data=<data of length={}>,
merger=<function>,
neutral={}
}}",
self.data.len(),
self.neutral
)
}
}
struct WorkPart {
data: Arc<[u64]>,
range: Range<usize>,
merger: fn(u64, &u64) -> u64,
neutral: u64,
}
impl WorkPart {
fn from(work: &Work, range: Range<usize>) -> Self {
WorkPart {
data: work.data.clone(),
range,
merger: work.merger,
neutral: work.neutral,
}
}
}
impl fmt::Debug for WorkPart {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(
f,
"WorkPart{{
data=<data of length={}>,
range={:?},
merger=<function>,
neutral={}
}}",
self.data.len(),
self.range,
self.neutral
)
}
}
#[derive(Clone, Debug)]
struct WorkResult(u64);
struct WorkerPort;
impl Port for WorkerPort {
type Indication = WorkResult;
type Request = Never;
}
#[derive(ComponentDefinition)]
struct Manager {
ctx: ComponentContext<Self>,
worker_port: RequiredPort<WorkerPort>,
num_workers: usize,
workers: Vec<Arc<Component<Worker>>>,
worker_refs: Vec<ActorRefStrong<WorkPart>>,
outstanding_request: Option<Ask<Work, WorkResult>>,
result_accumulator: Vec<u64>,
}
impl Manager {
fn new(num_workers: usize) -> Self {
Manager {
ctx: ComponentContext::uninitialised(),
worker_port: RequiredPort::uninitialised(),
num_workers,
workers: Vec::with_capacity(num_workers),
worker_refs: Vec::with_capacity(num_workers),
outstanding_request: None,
result_accumulator: Vec::with_capacity(num_workers + 1),
}
}
}
impl ComponentLifecycle for Manager {
fn on_start(&mut self) -> Handled {
// set up our workers
for _i in 0..self.num_workers {
let worker = self.ctx.system().create(Worker::new);
worker.connect_to_required(self.worker_port.share());
let worker_ref = worker.actor_ref().hold().expect("live");
self.ctx.system().start(&worker);
self.workers.push(worker);
self.worker_refs.push(worker_ref);
}
Handled::Ok
}
fn on_stop(&mut self) -> Handled {
// clean up after ourselves
self.worker_refs.clear();
let system = self.ctx.system();
self.workers.drain(..).for_each(|worker| {
system.stop(&worker);
});
Handled::Ok
}
fn on_kill(&mut self) -> Handled {
self.on_stop()
}
}
impl Require<WorkerPort> for Manager {
fn handle(&mut self, event: WorkResult) -> Handled {
if self.outstanding_request.is_some() {
self.result_accumulator.push(event.0);
if self.result_accumulator.len() == (self.num_workers + 1) {
let ask = self.outstanding_request.take().expect("ask");
let work: &Work = ask.request();
let res = self
.result_accumulator
.iter()
.fold(work.neutral, work.merger);
self.result_accumulator.clear();
let reply = WorkResult(res);
ask.reply(reply).expect("reply");
}
} else {
error!(
self.log(),
"Got a response without an outstanding promise: {:?}", event
);
}
Handled::Ok
}
}
impl Actor for Manager {
type Message = Ask<Work, WorkResult>;
fn receive_local(&mut self, msg: Self::Message) -> Handled {
assert!(
self.outstanding_request.is_none(),
"One request at a time, please!"
);
let work: &Work = msg.request();
if self.num_workers == 0 {
// manager gotta work itself -> very unhappy manager
let res = work.data.iter().fold(work.neutral, work.merger);
msg.reply(WorkResult(res)).expect("reply");
} else {
let len = work.data.len();
let stride = len / self.num_workers;
let mut start = 0usize;
let mut index = 0;
while start < len && index < self.num_workers {
let end = len.min(start + stride);
let range = start..end;
info!(self.log(), "Assigning {:?} to worker #{}", range, index);
let msg = WorkPart::from(work, range);
let worker = &self.worker_refs[index];
worker.tell(msg);
start += stride;
index += 1;
}
if start < len {
// manager just does the rest itself
let res = work.data[start..len].iter().fold(work.neutral, work.merger);
self.result_accumulator.push(res);
} else {
// just put a neutral element in there, so our count is right in the end
self.result_accumulator.push(work.neutral);
}
self.outstanding_request = Some(msg);
}
Handled::Ok
}
fn receive_network(&mut self, _msg: NetMessage) -> Handled {
unimplemented!("Still ignoring networking stuff.");
}
}
#[derive(ComponentDefinition)]
struct Worker {
ctx: ComponentContext<Self>,
worker_port: ProvidedPort<WorkerPort>,
}
impl Worker {
fn new() -> Self {
Worker {
ctx: ComponentContext::uninitialised(),
worker_port: ProvidedPort::uninitialised(),
}
}
}
ignore_lifecycle!(Worker);
ignore_requests!(WorkerPort, Worker);
impl Actor for Worker {
type Message = WorkPart;
fn receive_local(&mut self, msg: Self::Message) -> Handled {
let my_slice = &msg.data[msg.range];
let res = my_slice.iter().fold(msg.neutral, msg.merger);
self.worker_port.trigger(WorkResult(res));
Handled::Ok
}
fn receive_network(&mut self, _msg: NetMessage) -> Handled {
unimplemented!("Still ignoring networking stuff.");
}
}
pub fn main() {
let args: Vec<String> = env::args().collect();
assert_eq!(
3,
args.len(),
"Invalid arguments! Must give number of workers and size of the data array."
);
let num_workers: usize = args[1].parse().expect("number");
let data_size: usize = args[2].parse().expect("number");
run_task(num_workers, data_size);
}
fn run_task(num_workers: usize, data_size: usize) {
let system = KompactConfig::default().build().expect("system");
let manager = system.create(move || Manager::new(num_workers));
system.start(&manager);
let manager_ref = manager.actor_ref().hold().expect("live");
let data: Vec<u64> = (1..=data_size).map(|v| v as u64).collect();
let work = Work::with(data, overflowing_sum, 0u64);
println!("Sending request...");
let res = manager_ref.ask(work).wait();
println!("*******\nGot result: {}\n*******", res.0);
assert_eq!(triangular_number(data_size as u64), res.0);
system.shutdown().expect("shutdown");
}
fn triangular_number(n: u64) -> u64 {
(n * (n + 1u64)) / 2u64
}
fn overflowing_sum(lhs: u64, rhs: &u64) -> u64 {
lhs.overflowing_add(*rhs).0
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_workers() {
run_task(3, 1000);
}
}
Now all we are missing are values for the two parameters: num_workers and data_size. We'll read those from the command line so we can play around with them.
Now we can run our example by giving it some parameters, say 4 100000, to run 4 workers and calculate the 100000th triangular number. If you play with larger numbers you'll see that a) it uses more and more memory and b) it spends most of its time creating the original array, as our aggregation function is very simple and parallelisable, while data creation is done sequentially. Of course, in a real worker pool we'd probably read data from disk or from an already memory-resident set. But this is good enough for our little example.
Note: As before, if you have checked out the examples folder you can run the concrete binary with:
cargo run --release --bin workers 4 100000
Senders
The one communication-related thing we haven't touched yet is how to do request-response style communication among Actors. The "ask"-pattern gave us request-response between an Actor and some arbitrary (non-pool) thread, and ports basically give us a form of request-response between request and indication events (with some broadcasting semantic caveats, of course). But for Actor-to-Actor communication, we have not seen anything of this sort yet. In fact, you may have noticed that receive_local(...) does not actually give us any sender information, such as an ActorRef. Neither is this available via the component context, as would be the case in Akka.
In Kompact, for local messages at least, sender information must be passed explicitly. This is for two reasons:
- It avoids creating an ActorRef for every message when it's not needed, since actor references are not trivially cheap to create.
- It allows the sender reference to be typed with the appropriate message type.
This design gives us basically two variants for request-response. If we know we are always going to respond to the same component instance, the most efficient approach is to get a reference to it once and then keep it around as part of our internal state. This avoids constantly creating actor references. If, however, we must respond to multiple different actors, which is often the case, we must make the sender reference part of the request message. We can do that either by adding a field to our custom message type, or simply by wrapping our custom message type in the Kompact-provided WithSender struct. WithSender is really the same idea as Ask, replacing the KPromise<Response> with an ActorRef<Response> (yes, there is also WithSenderStrong, using an ActorRefStrong instead).
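The underlying idea of a typed reply-to handle travelling with the request can be sketched in plain Rust using std channels; the WithReplyTo type below is a hypothetical stand-in for illustration, not Kompact's actual WithSender implementation:

```rust
use std::sync::mpsc;

// A request bundled with a typed reply-to handle, analogous in spirit to
// Kompact's WithSender (this sketch uses std channels instead of actor refs).
struct WithReplyTo<M, R> {
    content: M,
    reply_to: mpsc::Sender<R>,
}

impl<M, R> WithReplyTo<M, R> {
    fn reply(&self, response: R) {
        // Ignore send errors in this sketch; a real actor ref would log them.
        let _ = self.reply_to.send(response);
    }
}

fn main() {
    let (tx, rx) = mpsc::channel::<u64>();
    let request = WithReplyTo { content: 21u64, reply_to: tx };
    // The responder needs no global sender context: the handle in the
    // message is enough, and its response type is statically checked.
    request.reply(request.content * 2);
    assert_eq!(rx.recv().unwrap(), 42);
    println!("got reply: 42");
}
```

The same shape explains why the reply type is part of the reference: sending anything other than an R through reply_to simply does not typecheck.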
Workers with Senders
To illustrate this mechanism, we are going to rewrite the Workers example from the previous sections to use WithSender instead of the WorkerPort communication. We will use WithSender here, rather than a stored manager actor reference, to illustrate the point, but it should be clear that the latter would be more efficient, as we always reply to the manager.
First we remove all mentions of WorkerPort, of course. Then we change the worker's Message type to WithSender<WorkPart, ManagerMessage>. Why ManagerMessage and not WorkResult? Well, since all communication with the manager now happens via messages, we need to differentiate between messages from the main thread, which are of type Ask<Work, WorkResult>, and messages from the workers, which are of type WorkResult. Since we can only have a single Message type, ManagerMessage is simply an enum of both options.
#![allow(clippy::unused_unit)]
use kompact::prelude::*;
use std::{env, fmt, ops::Range, sync::Arc};
struct Work {
data: Arc<[u64]>,
merger: fn(u64, &u64) -> u64,
neutral: u64,
}
impl Work {
fn with(data: Vec<u64>, merger: fn(u64, &u64) -> u64, neutral: u64) -> Self {
let moved_data: Arc<[u64]> = data.into_boxed_slice().into();
Work {
data: moved_data,
merger,
neutral,
}
}
}
impl fmt::Debug for Work {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(
f,
"Work{{
data=<data of length={}>,
merger=<function>,
neutral={}
}}",
self.data.len(),
self.neutral
)
}
}
struct WorkPart {
data: Arc<[u64]>,
range: Range<usize>,
merger: fn(u64, &u64) -> u64,
neutral: u64,
}
impl WorkPart {
fn from(work: &Work, range: Range<usize>) -> Self {
WorkPart {
data: work.data.clone(),
range,
merger: work.merger,
neutral: work.neutral,
}
}
}
impl fmt::Debug for WorkPart {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(
f,
"WorkPart{{
data=<data of length={}>,
range={:?},
merger=<function>,
neutral={}
}}",
self.data.len(),
self.range,
self.neutral
)
}
}
#[derive(Debug)]
struct WorkResult(u64);
#[derive(Debug)]
enum ManagerMessage {
Work(Ask<Work, WorkResult>),
Result(WorkResult),
}
#[derive(ComponentDefinition)]
struct Manager {
ctx: ComponentContext<Self>,
num_workers: usize,
workers: Vec<Arc<Component<Worker>>>,
worker_refs: Vec<ActorRefStrong<WithSender<WorkPart, ManagerMessage>>>,
outstanding_request: Option<Ask<Work, WorkResult>>,
result_accumulator: Vec<u64>,
}
impl Manager {
fn new(num_workers: usize) -> Self {
Manager {
ctx: ComponentContext::uninitialised(),
num_workers,
workers: Vec::with_capacity(num_workers),
worker_refs: Vec::with_capacity(num_workers),
outstanding_request: None,
result_accumulator: Vec::with_capacity(num_workers + 1),
}
}
}
impl ComponentLifecycle for Manager {
fn on_start(&mut self) -> Handled {
// set up our workers
for _i in 0..self.num_workers {
let worker = self.ctx.system().create(Worker::new);
let worker_ref = worker.actor_ref().hold().expect("live");
self.ctx.system().start(&worker);
self.workers.push(worker);
self.worker_refs.push(worker_ref);
}
Handled::Ok
}
fn on_stop(&mut self) -> Handled {
// clean up after ourselves
self.worker_refs.clear();
let system = self.ctx.system();
self.workers.drain(..).for_each(|worker| {
system.stop(&worker);
});
Handled::Ok
}
fn on_kill(&mut self) -> Handled {
self.on_stop()
}
}
impl Actor for Manager {
type Message = ManagerMessage;
fn receive_local(&mut self, msg: Self::Message) -> Handled {
match msg {
ManagerMessage::Work(msg) => {
assert!(
self.outstanding_request.is_none(),
"One request at a time, please!"
);
let work: &Work = msg.request();
if self.num_workers == 0 {
// manager gotta work itself -> very unhappy manager
let res = work.data.iter().fold(work.neutral, work.merger);
msg.reply(WorkResult(res)).expect("reply");
} else {
let len = work.data.len();
let stride = len / self.num_workers;
let mut start = 0usize;
let mut index = 0;
while start < len && index < self.num_workers {
let end = len.min(start + stride);
let range = start..end;
info!(self.log(), "Assigning {:?} to worker #{}", range, index);
let msg = WorkPart::from(work, range);
let worker = &self.worker_refs[index];
worker.tell(WithSender::from(msg, self));
start += stride;
index += 1;
}
if start < len {
// manager just does the rest itself
let res = work.data[start..len].iter().fold(work.neutral, work.merger);
self.result_accumulator.push(res);
} else {
// just put a neutral element in there, so our count is right in the end
self.result_accumulator.push(work.neutral);
}
self.outstanding_request = Some(msg);
}
}
ManagerMessage::Result(msg) => {
if self.outstanding_request.is_some() {
self.result_accumulator.push(msg.0);
if self.result_accumulator.len() == (self.num_workers + 1) {
let ask = self.outstanding_request.take().expect("ask");
let work: &Work = ask.request();
let res = self
.result_accumulator
.iter()
.fold(work.neutral, work.merger);
self.result_accumulator.clear();
let reply = WorkResult(res);
ask.reply(reply).expect("reply");
}
} else {
error!(
self.log(),
"Got a response without an outstanding promise: {:?}", msg
);
}
}
}
Handled::Ok
}
fn receive_network(&mut self, _msg: NetMessage) -> Handled {
unimplemented!("Still ignoring networking stuff.");
}
}
#[derive(ComponentDefinition)]
struct Worker {
ctx: ComponentContext<Self>,
}
impl Worker {
fn new() -> Self {
Worker {
ctx: ComponentContext::uninitialised(),
}
}
}
ignore_lifecycle!(Worker);
impl Actor for Worker {
type Message = WithSender<WorkPart, ManagerMessage>;
fn receive_local(&mut self, msg: Self::Message) -> Handled {
let my_slice = &msg.data[msg.range.clone()];
let res = my_slice.iter().fold(msg.neutral, msg.merger);
msg.reply(ManagerMessage::Result(WorkResult(res)));
Handled::Ok
}
fn receive_network(&mut self, _msg: NetMessage) -> Handled {
unimplemented!("Still ignoring networking stuff.");
}
}
pub fn main() {
let args: Vec<String> = env::args().collect();
assert_eq!(
3,
args.len(),
"Invalid arguments! Must give number of workers and size of the data array."
);
let num_workers: usize = args[1].parse().expect("number");
let data_size: usize = args[2].parse().expect("number");
run_task(num_workers, data_size);
}
fn run_task(num_workers: usize, data_size: usize) {
let system = KompactConfig::default().build().expect("system");
let manager = system.create(move || Manager::new(num_workers));
system.start(&manager);
let manager_ref = manager.actor_ref().hold().expect("live");
let data: Vec<u64> = (1..=data_size).map(|v| v as u64).collect();
let work = Work::with(data, overflowing_sum, 0u64);
println!("Sending request...");
let res = manager_ref
.ask_with(|promise| ManagerMessage::Work(Ask::new(promise, work)))
.wait();
println!("*******\nGot result: {}\n*******", res.0);
assert_eq!(triangular_number(data_size as u64), res.0);
system.shutdown().expect("shutdown");
}
fn triangular_number(n: u64) -> u64 {
(n * (n + 1u64)) / 2u64
}
fn overflowing_sum(lhs: u64, rhs: &u64) -> u64 {
lhs.overflowing_add(*rhs).0
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_workers() {
run_task(3, 1000);
}
}
Thus, when the worker wants to reply(...) with a WorkResult, it actually needs to wrap it in a ManagerMessage instance, or the compiler is going to reject it.
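This wrapping requirement is just ordinary enum typing; a minimal standalone illustration (with the Work variant simplified to a plain u64 payload for this sketch):

```rust
#[derive(Debug, PartialEq)]
struct WorkResult(u64);

// One enum covers both message sources, mirroring ManagerMessage in the
// example (the real Work variant carries an Ask, simplified to u64 here).
#[derive(Debug, PartialEq)]
enum ManagerMessage {
    Work(u64),
    Result(WorkResult),
}

fn main() {
    // A bare WorkResult would not typecheck where a ManagerMessage is
    // expected; the worker must wrap its reply explicitly.
    let reply = ManagerMessage::Result(WorkResult(42));
    match reply {
        ManagerMessage::Result(WorkResult(v)) => assert_eq!(v, 42),
        ManagerMessage::Work(_) => unreachable!("no work sent in this sketch"),
    }
    println!("wrapped reply matched");
}
```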
In the manager we must first update our state to reflect the new message (and thus reference) types.
#![allow(clippy::unused_unit)]
use kompact::prelude::*;
use std::{env, fmt, ops::Range, sync::Arc};
struct Work {
data: Arc<[u64]>,
merger: fn(u64, &u64) -> u64,
neutral: u64,
}
impl Work {
fn with(data: Vec<u64>, merger: fn(u64, &u64) -> u64, neutral: u64) -> Self {
let moved_data: Arc<[u64]> = data.into_boxed_slice().into();
Work {
data: moved_data,
merger,
neutral,
}
}
}
impl fmt::Debug for Work {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(
f,
"Work{{
data=<data of length={}>,
merger=<function>,
neutral={}
}}",
self.data.len(),
self.neutral
)
}
}
struct WorkPart {
data: Arc<[u64]>,
range: Range<usize>,
merger: fn(u64, &u64) -> u64,
neutral: u64,
}
impl WorkPart {
fn from(work: &Work, range: Range<usize>) -> Self {
WorkPart {
data: work.data.clone(),
range,
merger: work.merger,
neutral: work.neutral,
}
}
}
impl fmt::Debug for WorkPart {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(
f,
"WorkPart{{
data=<data of length={}>,
range={:?},
merger=<function>,
neutral={}
}}",
self.data.len(),
self.range,
self.neutral
)
}
}
#[derive(Debug)]
struct WorkResult(u64);
#[derive(Debug)]
enum ManagerMessage {
Work(Ask<Work, WorkResult>),
Result(WorkResult),
}
#[derive(ComponentDefinition)]
struct Manager {
ctx: ComponentContext<Self>,
num_workers: usize,
workers: Vec<Arc<Component<Worker>>>,
worker_refs: Vec<ActorRefStrong<WithSender<WorkPart, ManagerMessage>>>,
outstanding_request: Option<Ask<Work, WorkResult>>,
result_accumulator: Vec<u64>,
}
impl Manager {
fn new(num_workers: usize) -> Self {
Manager {
ctx: ComponentContext::uninitialised(),
num_workers,
workers: Vec::with_capacity(num_workers),
worker_refs: Vec::with_capacity(num_workers),
outstanding_request: None,
result_accumulator: Vec::with_capacity(num_workers + 1),
}
}
}
impl ComponentLifecycle for Manager {
fn on_start(&mut self) -> Handled {
// set up our workers
for _i in 0..self.num_workers {
let worker = self.ctx.system().create(Worker::new);
let worker_ref = worker.actor_ref().hold().expect("live");
self.ctx.system().start(&worker);
self.workers.push(worker);
self.worker_refs.push(worker_ref);
}
Handled::Ok
}
fn on_stop(&mut self) -> Handled {
// clean up after ourselves
self.worker_refs.clear();
let system = self.ctx.system();
self.workers.drain(..).for_each(|worker| {
system.stop(&worker);
});
Handled::Ok
}
fn on_kill(&mut self) -> Handled {
self.on_stop()
}
}
impl Actor for Manager {
type Message = ManagerMessage;
fn receive_local(&mut self, msg: Self::Message) -> Handled {
match msg {
ManagerMessage::Work(msg) => {
assert!(
self.outstanding_request.is_none(),
"One request at a time, please!"
);
let work: &Work = msg.request();
if self.num_workers == 0 {
// manager gotta work itself -> very unhappy manager
let res = work.data.iter().fold(work.neutral, work.merger);
msg.reply(WorkResult(res)).expect("reply");
} else {
let len = work.data.len();
let stride = len / self.num_workers;
let mut start = 0usize;
let mut index = 0;
while start < len && index < self.num_workers {
let end = len.min(start + stride);
let range = start..end;
info!(self.log(), "Assigning {:?} to worker #{}", range, index);
let msg = WorkPart::from(work, range);
let worker = &self.worker_refs[index];
worker.tell(WithSender::from(msg, self));
start += stride;
index += 1;
}
if start < len {
// manager just does the rest itself
let res = work.data[start..len].iter().fold(work.neutral, work.merger);
self.result_accumulator.push(res);
} else {
// just put a neutral element in there, so our count is right in the end
self.result_accumulator.push(work.neutral);
}
self.outstanding_request = Some(msg);
}
}
ManagerMessage::Result(msg) => {
if self.outstanding_request.is_some() {
self.result_accumulator.push(msg.0);
if self.result_accumulator.len() == (self.num_workers + 1) {
let ask = self.outstanding_request.take().expect("ask");
let work: &Work = ask.request();
let res = self
.result_accumulator
.iter()
.fold(work.neutral, work.merger);
self.result_accumulator.clear();
let reply = WorkResult(res);
ask.reply(reply).expect("reply");
}
} else {
error!(
self.log(),
"Got a response without an outstanding promise: {:?}", msg
);
}
}
}
Handled::Ok
}
fn receive_network(&mut self, _msg: NetMessage) -> Handled {
unimplemented!("Still ignoring networking stuff.");
}
}
#[derive(ComponentDefinition)]
struct Worker {
ctx: ComponentContext<Self>,
}
impl Worker {
fn new() -> Self {
Worker {
ctx: ComponentContext::uninitialised(),
}
}
}
ignore_lifecycle!(Worker);
impl Actor for Worker {
type Message = WithSender<WorkPart, ManagerMessage>;
fn receive_local(&mut self, msg: Self::Message) -> Handled {
let my_slice = &msg.data[msg.range.clone()];
let res = my_slice.iter().fold(msg.neutral, msg.merger);
msg.reply(ManagerMessage::Result(WorkResult(res)));
Handled::Ok
}
fn receive_network(&mut self, _msg: NetMessage) -> Handled {
unimplemented!("Still ignoring networking stuff.");
}
}
pub fn main() {
let args: Vec<String> = env::args().collect();
assert_eq!(
3,
args.len(),
"Invalid arguments! Must give number of workers and size of the data array."
);
let num_workers: usize = args[1].parse().expect("number");
let data_size: usize = args[2].parse().expect("number");
run_task(num_workers, data_size);
}
fn run_task(num_workers: usize, data_size: usize) {
let system = KompactConfig::default().build().expect("system");
let manager = system.create(move || Manager::new(num_workers));
system.start(&manager);
let manager_ref = manager.actor_ref().hold().expect("live");
let data: Vec<u64> = (1..=data_size).map(|v| v as u64).collect();
let work = Work::with(data, overflowing_sum, 0u64);
println!("Sending request...");
let res = manager_ref
.ask_with(|promise| ManagerMessage::Work(Ask::new(promise, work)))
.wait();
println!("*******\nGot result: {}\n*******", res.0);
assert_eq!(triangular_number(data_size as u64), res.0);
system.shutdown().expect("shutdown");
}
fn triangular_number(n: u64) -> u64 {
(n * (n + 1u64)) / 2u64
}
fn overflowing_sum(lhs: u64, rhs: &u64) -> u64 {
lhs.overflowing_add(*rhs).0
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_workers() {
run_task(3, 1000);
}
}
We also remove the port connection logic from the ComponentLifecycle
handler. Then we change the Message
type of the manager to ManagerMessage
and match on the ManagerMessage
variant in the receive_local(...)
function. For the ManagerMessage::Work
variant, we basically do the same thing as in the old receive_local(...)
function, except that we construct a WithSender
instance from the WorkPart
instead of sending it directly to the worker. We then simply copy the code from the old WorkResult
handler into the branch for ManagerMessage::Result
.
#![allow(clippy::unused_unit)]
use kompact::prelude::*;
use std::{env, fmt, ops::Range, sync::Arc};
struct Work {
data: Arc<[u64]>,
merger: fn(u64, &u64) -> u64,
neutral: u64,
}
impl Work {
fn with(data: Vec<u64>, merger: fn(u64, &u64) -> u64, neutral: u64) -> Self {
let moved_data: Arc<[u64]> = data.into_boxed_slice().into();
Work {
data: moved_data,
merger,
neutral,
}
}
}
impl fmt::Debug for Work {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(
f,
"Work{{
data=<data of length={}>,
merger=<function>,
neutral={}
}}",
self.data.len(),
self.neutral
)
}
}
struct WorkPart {
data: Arc<[u64]>,
range: Range<usize>,
merger: fn(u64, &u64) -> u64,
neutral: u64,
}
impl WorkPart {
fn from(work: &Work, range: Range<usize>) -> Self {
WorkPart {
data: work.data.clone(),
range,
merger: work.merger,
neutral: work.neutral,
}
}
}
impl fmt::Debug for WorkPart {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(
f,
"WorkPart{{
data=<data of length={}>,
range={:?},
merger=<function>,
neutral={}
}}",
self.data.len(),
self.range,
self.neutral
)
}
}
#[derive(Debug)]
struct WorkResult(u64);
#[derive(Debug)]
enum ManagerMessage {
Work(Ask<Work, WorkResult>),
Result(WorkResult),
}
#[derive(ComponentDefinition)]
struct Manager {
ctx: ComponentContext<Self>,
num_workers: usize,
workers: Vec<Arc<Component<Worker>>>,
worker_refs: Vec<ActorRefStrong<WithSender<WorkPart, ManagerMessage>>>,
outstanding_request: Option<Ask<Work, WorkResult>>,
result_accumulator: Vec<u64>,
}
impl Manager {
fn new(num_workers: usize) -> Self {
Manager {
ctx: ComponentContext::uninitialised(),
num_workers,
workers: Vec::with_capacity(num_workers),
worker_refs: Vec::with_capacity(num_workers),
outstanding_request: None,
result_accumulator: Vec::with_capacity(num_workers + 1),
}
}
}
impl ComponentLifecycle for Manager {
fn on_start(&mut self) -> Handled {
// set up our workers
for _i in 0..self.num_workers {
let worker = self.ctx.system().create(Worker::new);
let worker_ref = worker.actor_ref().hold().expect("live");
self.ctx.system().start(&worker);
self.workers.push(worker);
self.worker_refs.push(worker_ref);
}
Handled::Ok
}
fn on_stop(&mut self) -> Handled {
// clean up after ourselves
self.worker_refs.clear();
let system = self.ctx.system();
self.workers.drain(..).for_each(|worker| {
system.stop(&worker);
});
Handled::Ok
}
fn on_kill(&mut self) -> Handled {
self.on_stop()
}
}
impl Actor for Manager {
type Message = ManagerMessage;
fn receive_local(&mut self, msg: Self::Message) -> Handled {
match msg {
ManagerMessage::Work(msg) => {
assert!(
self.outstanding_request.is_none(),
"One request at a time, please!"
);
let work: &Work = msg.request();
if self.num_workers == 0 {
// manager gotta work itself -> very unhappy manager
let res = work.data.iter().fold(work.neutral, work.merger);
msg.reply(WorkResult(res)).expect("reply");
} else {
let len = work.data.len();
let stride = len / self.num_workers;
let mut start = 0usize;
let mut index = 0;
while start < len && index < self.num_workers {
let end = len.min(start + stride);
let range = start..end;
info!(self.log(), "Assigning {:?} to worker #{}", range, index);
let msg = WorkPart::from(work, range);
let worker = &self.worker_refs[index];
worker.tell(WithSender::from(msg, self));
start += stride;
index += 1;
}
if start < len {
// manager just does the rest itself
let res = work.data[start..len].iter().fold(work.neutral, work.merger);
self.result_accumulator.push(res);
} else {
// just put a neutral element in there, so our count is right in the end
self.result_accumulator.push(work.neutral);
}
self.outstanding_request = Some(msg);
}
}
ManagerMessage::Result(msg) => {
if self.outstanding_request.is_some() {
self.result_accumulator.push(msg.0);
if self.result_accumulator.len() == (self.num_workers + 1) {
let ask = self.outstanding_request.take().expect("ask");
let work: &Work = ask.request();
let res = self
.result_accumulator
.iter()
.fold(work.neutral, work.merger);
self.result_accumulator.clear();
let reply = WorkResult(res);
ask.reply(reply).expect("reply");
}
} else {
error!(
self.log(),
"Got a response without an outstanding promise: {:?}", msg
);
}
}
}
Handled::Ok
}
fn receive_network(&mut self, _msg: NetMessage) -> Handled {
unimplemented!("Still ignoring networking stuff.");
}
}
#[derive(ComponentDefinition)]
struct Worker {
ctx: ComponentContext<Self>,
}
impl Worker {
fn new() -> Self {
Worker {
ctx: ComponentContext::uninitialised(),
}
}
}
ignore_lifecycle!(Worker);
impl Actor for Worker {
type Message = WithSender<WorkPart, ManagerMessage>;
fn receive_local(&mut self, msg: Self::Message) -> Handled {
let my_slice = &msg.data[msg.range.clone()];
let res = my_slice.iter().fold(msg.neutral, msg.merger);
msg.reply(ManagerMessage::Result(WorkResult(res)));
Handled::Ok
}
fn receive_network(&mut self, _msg: NetMessage) -> Handled {
unimplemented!("Still ignoring networking stuff.");
}
}
pub fn main() {
let args: Vec<String> = env::args().collect();
assert_eq!(
3,
args.len(),
"Invalid arguments! Must give number of workers and size of the data array."
);
let num_workers: usize = args[1].parse().expect("number");
let data_size: usize = args[2].parse().expect("number");
run_task(num_workers, data_size);
}
fn run_task(num_workers: usize, data_size: usize) {
let system = KompactConfig::default().build().expect("system");
let manager = system.create(move || Manager::new(num_workers));
system.start(&manager);
let manager_ref = manager.actor_ref().hold().expect("live");
let data: Vec<u64> = (1..=data_size).map(|v| v as u64).collect();
let work = Work::with(data, overflowing_sum, 0u64);
println!("Sending request...");
let res = manager_ref
.ask_with(|promise| ManagerMessage::Work(Ask::new(promise, work)))
.wait();
println!("*******\nGot result: {}\n*******", res.0);
assert_eq!(triangular_number(data_size as u64), res.0);
system.shutdown().expect("shutdown");
}
fn triangular_number(n: u64) -> u64 {
(n * (n + 1u64)) / 2u64
}
fn overflowing_sum(lhs: u64, rhs: &u64) -> u64 {
lhs.overflowing_add(*rhs).0
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_workers() {
run_task(3, 1000);
}
}
The receive_local(...)
function is getting pretty long, so we should probably decompose it into smaller private functions if we actually wanted to maintain this code.
Now finally, when we want to send the Ask
from the main-thread, we also need to wrap it into ManagerMessage::Work
. This prevents us from simply using ActorRef::ask
, as it only produces an Ask
instance, not our wrapper ManagerMessage
. This gets us back to previously mentioned ActorRef::ask_with
function, which allows us to construct our Ask
instance and put it into our wrapper ourselves. If we were to use this construction in many places throughout our code, it would likely be a good idea to use a constructor function on ManagerMessage
to map the promise
and the work
values to the proper structure.
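Such a constructor could look roughly like the following sketch. Note that the Promise and Ask types here are simplified stand-ins just to keep the sketch self-contained (the real ones come from kompact::prelude), and the work_with name is purely illustrative, not part of the example code:

```rust
use std::marker::PhantomData;

// Stand-ins for Kompact's promise and Ask types, only to make this sketch
// compile on its own; the real definitions live in kompact::prelude.
struct Promise<T>(PhantomData<T>);
struct Ask<Req, Resp> {
    _request: Req,
    _promise: Promise<Resp>,
}
impl<Req, Resp> Ask<Req, Resp> {
    fn new(promise: Promise<Resp>, request: Req) -> Self {
        Ask {
            _request: request,
            _promise: promise,
        }
    }
}

struct Work;
struct WorkResult(u64);

enum ManagerMessage {
    Work(Ask<Work, WorkResult>),
    Result(WorkResult),
}

impl ManagerMessage {
    // Hypothetical convenience constructor: maps a promise and a work item
    // into the proper wrapper variant in one place.
    fn work_with(promise: Promise<WorkResult>, work: Work) -> Self {
        ManagerMessage::Work(Ask::new(promise, work))
    }
}

fn main() {
    let msg = ManagerMessage::work_with(Promise(PhantomData), Work);
    assert!(matches!(msg, ManagerMessage::Work(_)));
    // With such a constructor, the ask site would shrink to something like:
    //     manager_ref.ask_with(|promise| ManagerMessage::work_with(promise, work))
    println!("constructed ManagerMessage::Work");
}
```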
#![allow(clippy::unused_unit)]
use kompact::prelude::*;
use std::{env, fmt, ops::Range, sync::Arc};
struct Work {
data: Arc<[u64]>,
merger: fn(u64, &u64) -> u64,
neutral: u64,
}
impl Work {
fn with(data: Vec<u64>, merger: fn(u64, &u64) -> u64, neutral: u64) -> Self {
let moved_data: Arc<[u64]> = data.into_boxed_slice().into();
Work {
data: moved_data,
merger,
neutral,
}
}
}
impl fmt::Debug for Work {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(
f,
"Work{{
data=<data of length={}>,
merger=<function>,
neutral={}
}}",
self.data.len(),
self.neutral
)
}
}
struct WorkPart {
data: Arc<[u64]>,
range: Range<usize>,
merger: fn(u64, &u64) -> u64,
neutral: u64,
}
impl WorkPart {
fn from(work: &Work, range: Range<usize>) -> Self {
WorkPart {
data: work.data.clone(),
range,
merger: work.merger,
neutral: work.neutral,
}
}
}
impl fmt::Debug for WorkPart {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(
f,
"WorkPart{{
data=<data of length={}>,
range={:?},
merger=<function>,
neutral={}
}}",
self.data.len(),
self.range,
self.neutral
)
}
}
#[derive(Debug)]
struct WorkResult(u64);
#[derive(Debug)]
enum ManagerMessage {
Work(Ask<Work, WorkResult>),
Result(WorkResult),
}
#[derive(ComponentDefinition)]
struct Manager {
ctx: ComponentContext<Self>,
num_workers: usize,
workers: Vec<Arc<Component<Worker>>>,
worker_refs: Vec<ActorRefStrong<WithSender<WorkPart, ManagerMessage>>>,
outstanding_request: Option<Ask<Work, WorkResult>>,
result_accumulator: Vec<u64>,
}
impl Manager {
fn new(num_workers: usize) -> Self {
Manager {
ctx: ComponentContext::uninitialised(),
num_workers,
workers: Vec::with_capacity(num_workers),
worker_refs: Vec::with_capacity(num_workers),
outstanding_request: None,
result_accumulator: Vec::with_capacity(num_workers + 1),
}
}
}
impl ComponentLifecycle for Manager {
fn on_start(&mut self) -> Handled {
// set up our workers
for _i in 0..self.num_workers {
let worker = self.ctx.system().create(Worker::new);
let worker_ref = worker.actor_ref().hold().expect("live");
self.ctx.system().start(&worker);
self.workers.push(worker);
self.worker_refs.push(worker_ref);
}
Handled::Ok
}
fn on_stop(&mut self) -> Handled {
// clean up after ourselves
self.worker_refs.clear();
let system = self.ctx.system();
self.workers.drain(..).for_each(|worker| {
system.stop(&worker);
});
Handled::Ok
}
fn on_kill(&mut self) -> Handled {
self.on_stop()
}
}
impl Actor for Manager {
type Message = ManagerMessage;
fn receive_local(&mut self, msg: Self::Message) -> Handled {
match msg {
ManagerMessage::Work(msg) => {
assert!(
self.outstanding_request.is_none(),
"One request at a time, please!"
);
let work: &Work = msg.request();
if self.num_workers == 0 {
// manager gotta work itself -> very unhappy manager
let res = work.data.iter().fold(work.neutral, work.merger);
msg.reply(WorkResult(res)).expect("reply");
} else {
let len = work.data.len();
let stride = len / self.num_workers;
let mut start = 0usize;
let mut index = 0;
while start < len && index < self.num_workers {
let end = len.min(start + stride);
let range = start..end;
info!(self.log(), "Assigning {:?} to worker #{}", range, index);
let msg = WorkPart::from(work, range);
let worker = &self.worker_refs[index];
worker.tell(WithSender::from(msg, self));
start += stride;
index += 1;
}
if start < len {
// manager just does the rest itself
let res = work.data[start..len].iter().fold(work.neutral, work.merger);
self.result_accumulator.push(res);
} else {
// just put a neutral element in there, so our count is right in the end
self.result_accumulator.push(work.neutral);
}
self.outstanding_request = Some(msg);
}
}
ManagerMessage::Result(msg) => {
if self.outstanding_request.is_some() {
self.result_accumulator.push(msg.0);
if self.result_accumulator.len() == (self.num_workers + 1) {
let ask = self.outstanding_request.take().expect("ask");
let work: &Work = ask.request();
let res = self
.result_accumulator
.iter()
.fold(work.neutral, work.merger);
self.result_accumulator.clear();
let reply = WorkResult(res);
ask.reply(reply).expect("reply");
}
} else {
error!(
self.log(),
"Got a response without an outstanding promise: {:?}", msg
);
}
}
}
Handled::Ok
}
fn receive_network(&mut self, _msg: NetMessage) -> Handled {
unimplemented!("Still ignoring networking stuff.");
}
}
#[derive(ComponentDefinition)]
struct Worker {
ctx: ComponentContext<Self>,
}
impl Worker {
fn new() -> Self {
Worker {
ctx: ComponentContext::uninitialised(),
}
}
}
ignore_lifecycle!(Worker);
impl Actor for Worker {
type Message = WithSender<WorkPart, ManagerMessage>;
fn receive_local(&mut self, msg: Self::Message) -> Handled {
let my_slice = &msg.data[msg.range.clone()];
let res = my_slice.iter().fold(msg.neutral, msg.merger);
msg.reply(ManagerMessage::Result(WorkResult(res)));
Handled::Ok
}
fn receive_network(&mut self, _msg: NetMessage) -> Handled {
unimplemented!("Still ignoring networking stuff.");
}
}
pub fn main() {
let args: Vec<String> = env::args().collect();
assert_eq!(
3,
args.len(),
"Invalid arguments! Must give number of workers and size of the data array."
);
let num_workers: usize = args[1].parse().expect("number");
let data_size: usize = args[2].parse().expect("number");
run_task(num_workers, data_size);
}
fn run_task(num_workers: usize, data_size: usize) {
let system = KompactConfig::default().build().expect("system");
let manager = system.create(move || Manager::new(num_workers));
system.start(&manager);
let manager_ref = manager.actor_ref().hold().expect("live");
let data: Vec<u64> = (1..=data_size).map(|v| v as u64).collect();
let work = Work::with(data, overflowing_sum, 0u64);
println!("Sending request...");
let res = manager_ref
.ask_with(|promise| ManagerMessage::Work(Ask::new(promise, work)))
.wait();
println!("*******\nGot result: {}\n*******", res.0);
assert_eq!(triangular_number(data_size as u64), res.0);
system.shutdown().expect("shutdown");
}
fn triangular_number(n: u64) -> u64 {
(n * (n + 1u64)) / 2u64
}
fn overflowing_sum(lhs: u64, rhs: &u64) -> u64 {
lhs.overflowing_add(*rhs).0
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_workers() {
run_task(3, 1000);
}
}
At this point we should be able to run the example again, and see the same behaviour as before.
Note: As before, if you have checked out the examples folder you can run the concrete binary with:
cargo run --release --bin workers_sender 4 100000
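As a quick sanity check of what such a run should report: the example sums the numbers 1..=data_size, so for a data size of 100000 the result is the 100000th triangular number, which we can compute directly with the same formula the test uses:

```rust
// Same closed-form formula as in the example's triangular_number function.
fn triangular_number(n: u64) -> u64 {
    (n * (n + 1)) / 2
}

fn main() {
    // A run with data size 100000 should print this value as its result:
    assert_eq!(triangular_number(100_000), 5_000_050_000);
    println!("Got result: {}", triangular_number(100_000));
}
```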
Timers
Kompact comes with built-in support for scheduling some execution to happen in the future. Such scheduled execution can be either one-off or periodically repeating. Concretely, the scheduling API allows developers to subscribe a handler closure to the firing of a timeout after some Duration
. This closure takes two arguments:
- A new mutable reference to the scheduling component, so that its state can be accessed safely from within the closure, and
- a handle to the timeout being triggered, so that different timeouts can be differentiated. The handle is an opaque type named
ScheduledTimer
, but currently it is simply a wrapper around a Uuid
instance assigned (and returned) when the timeout is originally scheduled.
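The shape of this API can be sketched with simplified stand-in types. The schedule_once below is a hypothetical stand-in that invokes the handler immediately instead of actually scheduling it; it exists purely to show the closure's two arguments, and none of these types match Kompact's real definitions:

```rust
use std::time::Duration;

// Stand-ins to keep the sketch self-contained; Kompact's real ScheduledTimer
// wraps a Uuid, and Handled is Kompact's handler return type.
#[derive(Debug, Clone, PartialEq)]
struct ScheduledTimer(u128);
enum Handled {
    Ok,
}

struct MyComponent {
    fired: u32,
}

impl MyComponent {
    // Hypothetical stand-in for scheduling: records the API shape, but calls
    // the handler right away instead of after the delay.
    fn schedule_once<F>(&mut self, _delay: Duration, handler: F) -> ScheduledTimer
    where
        F: FnOnce(&mut Self, ScheduledTimer) -> Handled,
    {
        let handle = ScheduledTimer(42);
        handler(self, handle.clone());
        handle
    }
}

fn main() {
    let mut c = MyComponent { fired: 0 };
    let handle = c.schedule_once(Duration::from_millis(100), |component, _timeout_id| {
        // first argument: a fresh mutable reference to the scheduling component
        // second argument: the handle identifying which timeout fired
        component.fired += 1;
        Handled::Ok
    });
    assert_eq!(c.fired, 1);
    assert_eq!(handle, ScheduledTimer(42));
}
```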
Batching Example
In order to show the scheduling API, we will develop a batching component, called a Buncher
, that collects received events locally until either a pre-configured batch size is reached or a defined timeout expires, whichever happens first. Once the batch is closed by either condition, a new Batch
event is triggered on the port containing all the collected events.
Since there are two variants of scheduled execution, we will also implement two variants of the batching component:
- The regular variant simply schedules a periodic timeout once, and then fires a batch whenever the timeout expires, no matter how long ago the last batch was triggered (which could be fairly recently if it was triggered by the batch size condition).
- The adaptive variant schedules a new one-off timeout for every batch. If a batch is triggered by size instead of time, this variant will cancel the current timeout and schedule a new one with the full duration again. This approach is more practical, as it results in more evenly sized batches than the regular variant.
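To see why the adaptive variant produces more evenly sized batches, here is a small plain-Rust simulation of the two timeout policies. This is not Kompact code; it assumes idealised conditions (events arrive on an exact grid and timeouts fire exactly when due) and uses the same batch size (100) and timeout (150 ms) as the example below:

```rust
// A batch closes when it reaches batch_size events or when the deadline
// passes, whichever comes first. The regular variant keeps a fixed periodic
// timeout grid; the adaptive variant resets the deadline to the full
// duration whenever a batch closes by size.
fn simulate(adaptive: bool, event_gap_ms: u64) -> Vec<usize> {
    let (batch_size, timeout_ms) = (100usize, 150u64);
    let mut batches = Vec::new();
    let mut current = 0usize;
    let mut deadline = timeout_ms;
    let mut now = 0u64;
    for _ in 0..500 {
        now += event_gap_ms;
        while now >= deadline {
            // the timeout fires first: close the (possibly small) batch
            batches.push(current);
            current = 0;
            deadline += timeout_ms; // a periodic timeout keeps its fixed grid
        }
        current += 1;
        if current == batch_size {
            batches.push(current);
            current = 0;
            if adaptive {
                // adaptive variant: restart the timeout with its full duration
                deadline = now + timeout_ms;
            }
        }
    }
    batches
}

fn main() {
    let regular = simulate(false, 1);
    let adaptive = simulate(true, 1);
    // With one event per millisecond, a batch fills in ~100 ms, well inside
    // the 150 ms timeout: the regular variant's fixed grid still fires and
    // emits undersized batches, while the adaptive one only emits full ones.
    assert!(regular.iter().any(|&b| b < 100));
    assert!(adaptive.iter().all(|&b| b == 100));
}
```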
Shared Code
Both implementations share the basic events and ports involved. They also both use a printer component for Batch
events, which simply logs the size of each batch so we can see it during execution.
use kompact::prelude::*;
#[derive(Clone, Debug)]
pub struct Ping(pub u64);
#[derive(Clone, Debug)]
pub struct Batch(pub Vec<Ping>);
pub struct Batching;
impl Port for Batching {
type Indication = Batch;
type Request = Ping;
}
#[derive(ComponentDefinition, Actor)]
pub struct BatchPrinter {
ctx: ComponentContext<Self>,
batch_port: RequiredPort<Batching>,
}
impl BatchPrinter {
pub fn new() -> Self {
BatchPrinter {
ctx: ComponentContext::uninitialised(),
batch_port: RequiredPort::uninitialised(),
}
}
}
ignore_lifecycle!(BatchPrinter);
impl Require<Batching> for BatchPrinter {
fn handle(&mut self, batch: Batch) -> Handled {
info!(self.log(), "Got a batch with {} Pings.", batch.0.len());
Handled::Ok
}
}
They both also use the same driver code, even though it is repeated in each file so that it picks up the correct implementation. In either case, we set up the Buncher
and the BatchPrinter
in a default system, connect them via biconnect_components(...)
and then send them two waves of Ping
events. The first wave arrives at roughly one Ping per millisecond, depending on concrete thread scheduling by the OS, while the second arrives at roughly one every two milliseconds.
With a batch size of 100 and a timeout of 150 ms, we will mostly see size-triggered (full) batches during the first wave, and mostly time-triggered batches during the second.
#![allow(clippy::unused_unit)]
use kompact::prelude::*;
use kompact_examples::batching::*;
use std::time::Duration;
#[derive(ComponentDefinition, Actor)]
struct Buncher {
ctx: ComponentContext<Self>,
batch_port: ProvidedPort<Batching>,
batch_size: usize,
timeout: Duration,
current_batch: Vec<Ping>,
outstanding_timeout: Option<ScheduledTimer>,
}
impl Buncher {
fn new(batch_size: usize, timeout: Duration) -> Buncher {
Buncher {
ctx: ComponentContext::uninitialised(),
batch_port: ProvidedPort::uninitialised(),
batch_size,
timeout,
current_batch: Vec::with_capacity(batch_size),
outstanding_timeout: None,
}
}
fn trigger_batch(&mut self) -> () {
let mut new_batch = Vec::with_capacity(self.batch_size);
std::mem::swap(&mut new_batch, &mut self.current_batch);
self.batch_port.trigger(Batch(new_batch))
}
fn handle_timeout(&mut self, timeout_id: ScheduledTimer) -> Handled {
match self.outstanding_timeout {
Some(ref timeout) if *timeout == timeout_id => {
self.trigger_batch();
Handled::Ok
}
Some(_) => Handled::Ok, // just ignore outdated timeouts
None => {
warn!(self.log(), "Got unexpected timeout: {:?}", timeout_id);
Handled::Ok
} // can happen during restart or teardown
}
}
}
impl ComponentLifecycle for Buncher {
fn on_start(&mut self) -> Handled {
let timeout = self.schedule_periodic(self.timeout, self.timeout, Self::handle_timeout);
self.outstanding_timeout = Some(timeout);
Handled::Ok
}
fn on_stop(&mut self) -> Handled {
if let Some(timeout) = self.outstanding_timeout.take() {
self.cancel_timer(timeout);
}
Handled::Ok
}
fn on_kill(&mut self) -> Handled {
self.on_stop()
}
}
impl Provide<Batching> for Buncher {
fn handle(&mut self, event: Ping) -> Handled {
self.current_batch.push(event);
if self.current_batch.len() >= self.batch_size {
self.trigger_batch();
}
Handled::Ok
}
}
pub fn main() {
let system = KompactConfig::default().build().expect("system");
let printer = system.create(BatchPrinter::new);
let buncher = system.create(move || Buncher::new(100, Duration::from_millis(150)));
biconnect_components::<Batching, _, _>(&buncher, &printer).expect("connection");
let batching = buncher.on_definition(|cd| cd.batch_port.share());
system.start(&printer);
system.start(&buncher);
// these should usually trigger due to full batches
let sleep_dur = Duration::from_millis(1);
for i in 0..500 {
let ping = Ping(i);
system.trigger_r(ping, &batching);
std::thread::sleep(sleep_dur);
}
// these should usually trigger due to timeout
let sleep_dur = Duration::from_millis(2);
for i in 0..500 {
let ping = Ping(i);
system.trigger_r(ping, &batching);
std::thread::sleep(sleep_dur);
}
system.shutdown().expect("shutdown");
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_buncher() {
main();
}
}
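The expectations for the two waves follow from a back-of-the-envelope calculation, ignoring scheduling jitter:

```rust
fn main() {
    let (batch_size, timeout_ms) = (100u64, 150u64);
    // First wave: ~1 ms per Ping, so a batch fills in about 100 ms,
    // before the 150 ms timeout can fire -> mostly size-triggered batches.
    assert!(batch_size * 1 < timeout_ms);
    // Second wave: ~2 ms per Ping, so filling a batch would take about
    // 200 ms, longer than the timeout -> mostly time-triggered batches
    // of roughly 150 / 2 = 75 Pings each.
    assert!(batch_size * 2 > timeout_ms);
    println!(
        "wave 1 fill time: ~{} ms, wave 2 fill time: ~{} ms",
        batch_size,
        batch_size * 2
    );
}
```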
Regular Buncher
The state of the Buncher
consists of the two configuration values, batch size and timeout, as well as the Vec
storing the currently collecting batch and the handle for the currently scheduled timeout (ScheduledTimer
).
#![allow(clippy::unused_unit)]
use kompact::prelude::*;
use kompact_examples::batching::*;
use std::time::Duration;
#[derive(ComponentDefinition, Actor)]
struct Buncher {
ctx: ComponentContext<Self>,
batch_port: ProvidedPort<Batching>,
batch_size: usize,
timeout: Duration,
current_batch: Vec<Ping>,
outstanding_timeout: Option<ScheduledTimer>,
}
impl Buncher {
fn new(batch_size: usize, timeout: Duration) -> Buncher {
Buncher {
ctx: ComponentContext::uninitialised(),
batch_port: ProvidedPort::uninitialised(),
batch_size,
timeout,
current_batch: Vec::with_capacity(batch_size),
outstanding_timeout: None,
}
}
fn trigger_batch(&mut self) -> () {
let mut new_batch = Vec::with_capacity(self.batch_size);
std::mem::swap(&mut new_batch, &mut self.current_batch);
self.batch_port.trigger(Batch(new_batch))
}
fn handle_timeout(&mut self, timeout_id: ScheduledTimer) -> Handled {
match self.outstanding_timeout {
Some(ref timeout) if *timeout == timeout_id => {
self.trigger_batch();
Handled::Ok
}
Some(_) => Handled::Ok, // just ignore outdated timeouts
None => {
warn!(self.log(), "Got unexpected timeout: {:?}", timeout_id);
Handled::Ok
} // can happen during restart or teardown
}
}
}
impl ComponentLifecycle for Buncher {
fn on_start(&mut self) -> Handled {
let timeout = self.schedule_periodic(self.timeout, self.timeout, Self::handle_timeout);
self.outstanding_timeout = Some(timeout);
Handled::Ok
}
fn on_stop(&mut self) -> Handled {
if let Some(timeout) = self.outstanding_timeout.take() {
self.cancel_timer(timeout);
}
Handled::Ok
}
fn on_kill(&mut self) -> Handled {
self.on_stop()
}
}
impl Provide<Batching> for Buncher {
fn handle(&mut self, event: Ping) -> Handled {
self.current_batch.push(event);
if self.current_batch.len() >= self.batch_size {
self.trigger_batch();
}
Handled::Ok
}
}
pub fn main() {
let system = KompactConfig::default().build().expect("system");
let printer = system.create(BatchPrinter::new);
let buncher = system.create(move || Buncher::new(100, Duration::from_millis(150)));
biconnect_components::<Batching, _, _>(&buncher, &printer).expect("connection");
let batching = buncher.on_definition(|cd| cd.batch_port.share());
system.start(&printer);
system.start(&buncher);
// these should usually trigger due to full batches
let sleep_dur = Duration::from_millis(1);
for i in 0..500 {
let ping = Ping(i);
system.trigger_r(ping, &batching);
std::thread::sleep(sleep_dur);
}
// these should usually trigger due to timeout
let sleep_dur = Duration::from_millis(2);
for i in 0..500 {
let ping = Ping(i);
system.trigger_r(ping, &batching);
std::thread::sleep(sleep_dur);
}
system.shutdown().expect("shutdown");
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_buncher() {
main();
}
}
As part of the lifecycle we must set up the timer, but we must also make sure to clean it up once we are done. To be able to do so, we must store the ScheduledTimer
handle that the schedule_periodic(...)
function returns in a local field, so that we can access it when we are stopped or killed and pass it as a parameter to cancel_timer(...)
.
#![allow(clippy::unused_unit)]
use kompact::prelude::*;
use kompact_examples::batching::*;
use std::time::Duration;
#[derive(ComponentDefinition, Actor)]
struct Buncher {
ctx: ComponentContext<Self>,
batch_port: ProvidedPort<Batching>,
batch_size: usize,
timeout: Duration,
current_batch: Vec<Ping>,
outstanding_timeout: Option<ScheduledTimer>,
}
impl Buncher {
fn new(batch_size: usize, timeout: Duration) -> Buncher {
Buncher {
ctx: ComponentContext::uninitialised(),
batch_port: ProvidedPort::uninitialised(),
batch_size,
timeout,
current_batch: Vec::with_capacity(batch_size),
outstanding_timeout: None,
}
}
fn trigger_batch(&mut self) -> () {
let mut new_batch = Vec::with_capacity(self.batch_size);
std::mem::swap(&mut new_batch, &mut self.current_batch);
self.batch_port.trigger(Batch(new_batch))
}
fn handle_timeout(&mut self, timeout_id: ScheduledTimer) -> Handled {
match self.outstanding_timeout {
Some(ref timeout) if *timeout == timeout_id => {
self.trigger_batch();
Handled::Ok
}
Some(_) => Handled::Ok, // just ignore outdated timeouts
None => {
warn!(self.log(), "Got unexpected timeout: {:?}", timeout_id);
Handled::Ok
} // can happen during restart or teardown
}
}
}
impl ComponentLifecycle for Buncher {
fn on_start(&mut self) -> Handled {
let timeout = self.schedule_periodic(self.timeout, self.timeout, Self::handle_timeout);
self.outstanding_timeout = Some(timeout);
Handled::Ok
}
fn on_stop(&mut self) -> Handled {
if let Some(timeout) = self.outstanding_timeout.take() {
self.cancel_timer(timeout);
}
Handled::Ok
}
fn on_kill(&mut self) -> Handled {
self.on_stop()
}
}
impl Provide<Batching> for Buncher {
fn handle(&mut self, event: Ping) -> Handled {
self.current_batch.push(event);
if self.current_batch.len() >= self.batch_size {
self.trigger_batch();
}
Handled::Ok
}
}
pub fn main() {
let system = KompactConfig::default().build().expect("system");
let printer = system.create(BatchPrinter::new);
let buncher = system.create(move || Buncher::new(100, Duration::from_millis(150)));
biconnect_components::<Batching, _, _>(&buncher, &printer).expect("connection");
let batching = buncher.on_definition(|cd| cd.batch_port.share());
system.start(&printer);
system.start(&buncher);
// these should usually trigger due to full batches
let sleep_dur = Duration::from_millis(1);
for i in 0..500 {
let ping = Ping(i);
system.trigger_r(ping, &batching);
std::thread::sleep(sleep_dur);
}
// these should usually trigger due to timeout
let sleep_dur = Duration::from_millis(2);
for i in 0..500 {
let ping = Ping(i);
system.trigger_r(ping, &batching);
std::thread::sleep(sleep_dur);
}
system.shutdown().expect("shutdown");
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_buncher() {
main();
}
}
Warning: Not cleaning up timeouts will cause them to be triggered over and over again. Not only will this slow down the timer facilities, but it may also cause a lot of logging, depending on the logging level you are compiling with. Make sure to always clean up scheduled timeouts, especially periodic ones.
The first parameter of the schedule_periodic(...) function is the time until the timeout is triggered for the first time. The second parameter gives the periodicity; we'll use the same value for both here.
The actual code we want to run whenever our periodic timeout is triggered is a private function called handle_timeout(...), which has the signature expected by the schedule_periodic(...) function. It checks that the timeout we got is actually the one we expect, before invoking the actual trigger_batch(...) function.
The actual handler for the Ping events on the Buncher is straightforward: we simply add the event to our active batch, then check whether the batch is full, and if it is we again call trigger_batch(...).
If we go and run this implementation with the main function from above, we will see that for the first wave we often get a full batch followed by a very small batch, e.g.:
Mar 10 15:42:03.734 INFO Got a batch with 100 Pings., ctype: BatchPrinter, cid: 4c79e0b1-1d74-455b-987a-14f66bcd4025, system: kompact-runtime-1, location: docs/examples/src/batching.rs:33
Mar 10 15:42:03.762 INFO Got a batch with 22 Pings., ctype: BatchPrinter, cid: 4c79e0b1-1d74-455b-987a-14f66bcd4025, system: kompact-runtime-1, location: docs/examples/src/batching.rs:33
Mar 10 15:42:03.890 INFO Got a batch with 100 Pings., ctype: BatchPrinter, cid: 4c79e0b1-1d74-455b-987a-14f66bcd4025, system: kompact-runtime-1, location: docs/examples/src/batching.rs:33
Mar 10 15:42:03.912 INFO Got a batch with 16 Pings., ctype: BatchPrinter, cid: 4c79e0b1-1d74-455b-987a-14f66bcd4025, system: kompact-runtime-1, location: docs/examples/src/batching.rs:33
This happens because we hit 100 Pings somewhere around 120ms into the timeout, leaving only around 30ms to collect events for the next batch. This, of course, isn't particularly great behaviour for a batching abstraction; we would much rather have regular batches when the input arrives regularly.
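We can reproduce this effect with a small deterministic model of the periodic strategy (plain Rust, no Kompact; the simulate_periodic function is illustrative only). Assuming one ping exactly every millisecond, a batch size of 100, and a fixed 150ms period, full batches alternate with small remainder batches; in a real run the remainders are even smaller, since thread sleeps and scheduling add overhead on top of the nominal 1ms spacing:

```rust
// Deterministic model of the periodic strategy: pings arrive every
// `ping_every_ms`, a batch is emitted when it reaches `batch_size` or when
// the fixed periodic timer fires. Returns the emitted batch sizes.
fn simulate_periodic(batch_size: usize, timeout_ms: u64, ping_every_ms: u64, pings: u64) -> Vec<usize> {
    let mut batches = Vec::new();
    let mut current = 0usize;
    let mut next_timeout = timeout_ms;
    for i in 0..pings {
        let now = i * ping_every_ms;
        while now >= next_timeout {
            // the timer fires regardless of how full the current batch is
            batches.push(current);
            current = 0;
            next_timeout += timeout_ms;
        }
        current += 1;
        if current == batch_size {
            batches.push(current);
            current = 0;
        }
    }
    batches
}

fn main() {
    let batches = simulate_periodic(100, 150, 1, 500);
    println!("{:?}", batches); // [100, 50, 100, 50, 100, 50]
}
```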
Note: As before, if you have checked out the examples folder you can run the concrete binary with:
cargo run --release --bin buncher_regular
Adaptive Buncher
In order to get more regularly sized batches, we need to reset our timeout whenever we trigger a batch based on size. Since this makes our timeouts irregular anyway, we skip periodic timeouts altogether and always schedule a new one-off timer whenever we trigger a batch, no matter which condition triggered it.
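The effect of resetting the timer can be sketched with the same kind of deterministic model as before (plain Rust, no Kompact; simulate_adaptive is illustrative only): with the timer pushed back after every size-triggered batch, a steady one-ping-per-millisecond input now yields only full batches.

```rust
// Deterministic model of the adaptive strategy: every emitted batch,
// whether size- or timeout-triggered, reschedules the one-off timer.
fn simulate_adaptive(batch_size: usize, timeout_ms: u64, ping_every_ms: u64, pings: u64) -> Vec<usize> {
    let mut batches = Vec::new();
    let mut current = 0usize;
    let mut next_timeout = timeout_ms;
    for i in 0..pings {
        let now = i * ping_every_ms;
        if now >= next_timeout {
            batches.push(current);
            current = 0;
            next_timeout = now + timeout_ms; // schedule_once again
        }
        current += 1;
        if current == batch_size {
            batches.push(current);
            current = 0;
            next_timeout = now + timeout_ms; // cancel and reschedule on size-triggered batches
        }
    }
    batches
}

fn main() {
    println!("{:?}", simulate_adaptive(100, 150, 1, 500)); // five full batches of 100
}
```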
To do so, we must first change the lifecycle handler to use schedule_once(...) instead of schedule_periodic(...).
#![allow(clippy::unused_unit)]
use kompact::prelude::*;
use kompact_examples::batching::*;
use std::time::Duration;
#[derive(ComponentDefinition, Actor)]
struct Buncher {
ctx: ComponentContext<Self>,
batch_port: ProvidedPort<Batching>,
batch_size: usize,
timeout: Duration,
current_batch: Vec<Ping>,
outstanding_timeout: Option<ScheduledTimer>,
}
impl Buncher {
fn new(batch_size: usize, timeout: Duration) -> Buncher {
Buncher {
ctx: ComponentContext::uninitialised(),
batch_port: ProvidedPort::uninitialised(),
batch_size,
timeout,
current_batch: Vec::with_capacity(batch_size),
outstanding_timeout: None,
}
}
fn trigger_batch(&mut self) -> () {
let mut new_batch = Vec::with_capacity(self.batch_size);
std::mem::swap(&mut new_batch, &mut self.current_batch);
self.batch_port.trigger(Batch(new_batch))
}
fn handle_timeout(&mut self, timeout_id: ScheduledTimer) -> Handled {
match self.outstanding_timeout {
Some(ref timeout) if *timeout == timeout_id => {
self.trigger_batch();
let new_timeout = self.schedule_once(self.timeout, Self::handle_timeout);
self.outstanding_timeout = Some(new_timeout);
Handled::Ok
}
Some(_) => Handled::Ok, // just ignore outdated timeouts
None => {
warn!(self.log(), "Got unexpected timeout: {:?}", timeout_id);
Handled::Ok
} // can happen during restart or teardown
}
}
}
impl ComponentLifecycle for Buncher {
fn on_start(&mut self) -> Handled {
let timeout = self.schedule_once(self.timeout, Buncher::handle_timeout);
self.outstanding_timeout = Some(timeout);
Handled::Ok
}
fn on_stop(&mut self) -> Handled {
if let Some(timeout) = self.outstanding_timeout.take() {
self.cancel_timer(timeout);
}
Handled::Ok
}
fn on_kill(&mut self) -> Handled {
self.on_stop()
}
}
impl Provide<Batching> for Buncher {
fn handle(&mut self, event: Ping) -> Handled {
self.current_batch.push(event);
if self.current_batch.len() >= self.batch_size {
self.trigger_batch();
if let Some(timeout) = self.outstanding_timeout.take() {
self.cancel_timer(timeout);
}
let new_timeout = self.schedule_once(self.timeout, Buncher::handle_timeout);
self.outstanding_timeout = Some(new_timeout);
}
Handled::Ok
}
}
pub fn main() {
let system = KompactConfig::default().build().expect("system");
let printer = system.create(BatchPrinter::new);
let buncher = system.create(move || Buncher::new(100, Duration::from_millis(150)));
biconnect_components::<Batching, _, _>(&buncher, &printer).expect("connection");
let batching = buncher.on_definition(|cd| cd.batch_port.share());
system.start(&printer);
system.start(&buncher);
// these should usually trigger due to full batches
let sleep_dur = Duration::from_millis(1);
for i in 0..500 {
let ping = Ping(i);
system.trigger_r(ping, &batching);
std::thread::sleep(sleep_dur);
}
// these should usually trigger due to timeout
let sleep_dur = Duration::from_millis(2);
for i in 0..500 {
let ping = Ping(i);
system.trigger_r(ping, &batching);
std::thread::sleep(sleep_dur);
}
system.shutdown().expect("shutdown");
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_buncher() {
main();
}
}
We must also remember to schedule a new timeout whenever we handle the current one. It's important to correctly replace the stored handle, so that we never accidentally act on an outdated timeout.
Finally, when we trigger a batch based on size, we must proactively cancel the current timeout and schedule a new one. Note that this cancellation API is asynchronous, so it can very well happen that an already cancelled timeout is still invoked, because it was already queued up. That is why we must always check for a matching timeout handle before acting on a received timeout.
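The matching logic itself can be modelled in isolation. In this plain-Rust sketch (no Kompact; handles are modelled as plain u64 ids rather than ScheduledTimer values, and rescheduling just picks a fresh id) only the currently outstanding handle triggers a batch:

```rust
// Stripped-down model of the stale-timeout guard.
struct Buncher {
    outstanding: Option<u64>, // the one handle we are willing to act on
    batches_triggered: u32,
}

impl Buncher {
    fn handle_timeout(&mut self, id: u64) {
        match self.outstanding {
            // only the currently outstanding handle may trigger a batch
            Some(expected) if expected == id => {
                self.batches_triggered += 1;
                self.outstanding = Some(id + 1); // model of rescheduling a fresh timeout
            }
            Some(_) => (), // stale timeout: cancelled, but was already queued
            None => (),    // can happen during restart or teardown
        }
    }
}

fn main() {
    let mut b = Buncher { outstanding: Some(7), batches_triggered: 0 };
    b.handle_timeout(3); // stale handle: ignored
    b.handle_timeout(7); // current handle: triggers a batch and reschedules
    println!("batches triggered: {}", b.batches_triggered);
}
```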
If we run again now, we can see that the first wave of pings is pretty much always triggered based on size, while the second wave is always triggered based on timeout, giving us much more regular batches.
Note: As before, if you have checked out the examples folder you can run the concrete binary with:
cargo run --release --bin buncher_adaptive
Schedulers
Kompact allows the core component scheduler to be exchanged, in order to support different kinds of workloads.
The default crossbeam_workstealing_pool scheduler from the executors crate, for example, is designed for fork-join workloads; that is, workloads where a small number of (pool) external events spawns a large number of (pool) internal events.
But not all workloads are of this type. Sometimes the majority of events are (pool) external, and there is little communication between components running on the thread pool. Our somewhat contrived “counter” example from the introduction was of this nature: we were sending events and messages from the main thread to the Counter, which was running on Kompact's thread pool, but we never sent any messages or events to any other component on that pool. In fact, we only had a single component, and running it on a large thread pool seems rather silly. (Kompact's default thread pool has one thread for each CPU core, as reported by num_cpus.)
Changing Pool Size
We will first change just the pool size for the “counter”-example, since that is easily done.
The number of threads in Kompact's thread pool is configured with the config value at system::THREADS, using the set_config_value function on a KompactConfig instance. We simply pass in 1usize there, before constructing our Kompact system.
That is, we change this line
#![allow(clippy::unused_unit)]
use kompact::prelude::*;
use std::time::Duration;
#[derive(Clone, Debug, PartialEq, Eq)]
struct CurrentCount {
messages: u64,
events: u64,
}
#[derive(Clone, Debug, PartialEq, Eq)]
struct CountMe;
struct CounterPort;
impl Port for CounterPort {
type Indication = CurrentCount;
type Request = CountMe;
}
#[derive(ComponentDefinition)]
struct Counter {
ctx: ComponentContext<Self>,
counter_port: ProvidedPort<CounterPort>,
msg_count: u64,
event_count: u64,
}
impl Counter {
pub fn new() -> Self {
Counter {
ctx: ComponentContext::uninitialised(),
counter_port: ProvidedPort::uninitialised(),
msg_count: 0u64,
event_count: 0u64,
}
}
fn current_count(&self) -> CurrentCount {
CurrentCount {
messages: self.msg_count,
events: self.event_count,
}
}
}
impl ComponentLifecycle for Counter {
fn on_start(&mut self) -> Handled {
info!(self.ctx.log(), "Got a start event!");
self.event_count += 1u64;
Handled::Ok
}
fn on_stop(&mut self) -> Handled {
info!(self.ctx.log(), "Got a stop event!");
self.event_count += 1u64;
Handled::Ok
}
fn on_kill(&mut self) -> Handled {
info!(self.ctx.log(), "Got a kill event!");
self.event_count += 1u64;
Handled::Ok
}
}
impl Provide<CounterPort> for Counter {
fn handle(&mut self, _event: CountMe) -> Handled {
info!(self.ctx.log(), "Got a counter event!");
self.event_count += 1u64;
self.counter_port.trigger(self.current_count());
Handled::Ok
}
}
impl Actor for Counter {
type Message = Ask<CountMe, CurrentCount>;
fn receive_local(&mut self, msg: Self::Message) -> Handled {
msg.complete(|_request| {
info!(self.ctx.log(), "Got a message!");
self.msg_count += 1u64;
self.current_count()
})
.expect("complete");
Handled::Ok
}
fn receive_network(&mut self, _msg: NetMessage) -> Handled {
unimplemented!("We are still ignoring network messages.");
}
}
pub fn main() {
let system = KompactConfig::default().build().expect("system");
let counter = system.create(Counter::new);
system.start(&counter);
let actor_ref = counter.actor_ref();
let port_ref: ProvidedRef<CounterPort> = counter.provided_ref();
for _i in 0..100 {
let current_count = actor_ref.ask(CountMe).wait();
info!(system.logger(), "The current count is: {:?}", current_count);
}
for _i in 0..100 {
system.trigger_r(CountMe, &port_ref);
// Where do the answers go?
}
std::thread::sleep(Duration::from_millis(1000));
let current_count = actor_ref.ask(CountMe).wait();
info!(system.logger(), "The final count is: {:?}", current_count);
system.shutdown().expect("shutdown");
// Wait a bit longer, so all output is logged (asynchronously) before shutting down
std::thread::sleep(Duration::from_millis(10));
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_counters() {
main();
}
}
to this:
#![allow(clippy::unused_unit)]
use kompact::prelude::*;
use std::time::Duration;
#[derive(Clone, Debug, PartialEq, Eq)]
struct CurrentCount {
messages: u64,
events: u64,
}
#[derive(Clone, Debug, PartialEq, Eq)]
struct CountMe;
struct CounterPort;
impl Port for CounterPort {
type Indication = CurrentCount;
type Request = CountMe;
}
#[derive(ComponentDefinition)]
struct Counter {
ctx: ComponentContext<Self>,
counter_port: ProvidedPort<CounterPort>,
msg_count: u64,
event_count: u64,
}
impl Counter {
pub fn new() -> Self {
Counter {
ctx: ComponentContext::uninitialised(),
counter_port: ProvidedPort::uninitialised(),
msg_count: 0u64,
event_count: 0u64,
}
}
fn current_count(&self) -> CurrentCount {
CurrentCount {
messages: self.msg_count,
events: self.event_count,
}
}
}
impl ComponentLifecycle for Counter {
fn on_start(&mut self) -> Handled {
info!(self.ctx.log(), "Got a start event!");
self.event_count += 1u64;
Handled::Ok
}
fn on_stop(&mut self) -> Handled {
info!(self.ctx.log(), "Got a stop event!");
self.event_count += 1u64;
Handled::Ok
}
fn on_kill(&mut self) -> Handled {
info!(self.ctx.log(), "Got a kill event!");
self.event_count += 1u64;
Handled::Ok
}
}
impl Provide<CounterPort> for Counter {
fn handle(&mut self, _event: CountMe) -> Handled {
info!(self.ctx.log(), "Got a counter event!");
self.event_count += 1u64;
self.counter_port.trigger(self.current_count());
Handled::Ok
}
}
impl Actor for Counter {
type Message = Ask<CountMe, CurrentCount>;
fn receive_local(&mut self, msg: Self::Message) -> Handled {
msg.complete(|_request| {
info!(self.ctx.log(), "Got a message!");
self.msg_count += 1u64;
self.current_count()
})
.expect("complete");
Handled::Ok
}
fn receive_network(&mut self, _msg: NetMessage) -> Handled {
unimplemented!("We are still ignoring network messages.");
}
}
pub fn main() {
use kompact::config_keys::system;
let mut conf = KompactConfig::default();
conf.set_config_value(&system::THREADS, 1usize);
let system = conf.build().expect("system");
let counter = system.create(Counter::new);
system.start(&counter);
let actor_ref = counter.actor_ref();
let port_ref: ProvidedRef<CounterPort> = counter.provided_ref();
for _i in 0..100 {
let current_count = actor_ref.ask(CountMe).wait();
info!(system.logger(), "The current count is: {:?}", current_count);
}
for _i in 0..100 {
system.trigger_r(CountMe, &port_ref);
// Where do the answers go?
}
std::thread::sleep(Duration::from_millis(1000));
let current_count = actor_ref.ask(CountMe).wait();
info!(system.logger(), "The final count is: {:?}", current_count);
system.shutdown().expect("shutdown");
// Wait a bit longer, so all output is logged (asynchronously) before shutting down
std::thread::sleep(Duration::from_millis(10));
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_counters() {
main();
}
}
The same effect could be achieved via a configuration file by setting kompact.runtime.threads = 1.
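For example, a minimal HOCON configuration file could look as follows (the file name application.conf and its location are placeholders, not requirements):

```hocon
# application.conf (file name is just an example)
kompact.runtime.threads = 1
```

Such a file would then be loaded into the KompactConfig instance before building the system, for example via its load_config_file(...) function.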
If we run this, we will see exactly (modulo event timing) the same output as when running on the larger pool with Kompact’s default settings.
Note: As before, if you have checked out the examples folder you can run the concrete binary with:
cargo run --release --bin counters_pool
Changing Scheduler Implementation
Ok, so now we'll switch to a pool that is designed for external events: crossbeam_channel_pool, also from the executors crate.
We can set the scheduler implementation used by our Kompact system with the executor(...) function on a KompactConfig instance. That function expects a closure from the number of threads (usize) to something that implements the executors::FuturesExecutor trait.
Note: There is actually a more general API for changing the scheduler, the scheduler(...) function, which expects a function returning a Box<dyn kompact::runtime::Scheduler>. The executor(...) function is simply a shortcut for using schedulers that are compatible with the executors crate.
In order to use the crossbeam_channel_pool scheduler, we need to import the kompact::executors module, which is simply a re-export from the executors crate:
use kompact::executors;
With that, all we need to add is the following line of code, which selects the ThreadPool implementation from the crossbeam_channel_pool module instead of the default one from the crossbeam_workstealing_pool module.
#![allow(clippy::unused_unit)]
use kompact::{executors, prelude::*};
use std::time::Duration;
#[derive(Clone, Debug, PartialEq, Eq)]
struct CurrentCount {
messages: u64,
events: u64,
}
#[derive(Clone, Debug, PartialEq, Eq)]
struct CountMe;
struct CounterPort;
impl Port for CounterPort {
type Indication = CurrentCount;
type Request = CountMe;
}
#[derive(ComponentDefinition)]
struct Counter {
ctx: ComponentContext<Self>,
counter_port: ProvidedPort<CounterPort>,
msg_count: u64,
event_count: u64,
}
impl Counter {
pub fn new() -> Self {
Counter {
ctx: ComponentContext::uninitialised(),
counter_port: ProvidedPort::uninitialised(),
msg_count: 0u64,
event_count: 0u64,
}
}
fn current_count(&self) -> CurrentCount {
CurrentCount {
messages: self.msg_count,
events: self.event_count,
}
}
}
impl ComponentLifecycle for Counter {
fn on_start(&mut self) -> Handled {
info!(self.ctx.log(), "Got a start event!");
self.event_count += 1u64;
Handled::Ok
}
fn on_stop(&mut self) -> Handled {
info!(self.ctx.log(), "Got a stop event!");
self.event_count += 1u64;
Handled::Ok
}
fn on_kill(&mut self) -> Handled {
info!(self.ctx.log(), "Got a kill event!");
self.event_count += 1u64;
Handled::Ok
}
}
impl Provide<CounterPort> for Counter {
fn handle(&mut self, _event: CountMe) -> Handled {
info!(self.ctx.log(), "Got a counter event!");
self.event_count += 1u64;
self.counter_port.trigger(self.current_count());
Handled::Ok
}
}
impl Actor for Counter {
type Message = Ask<CountMe, CurrentCount>;
fn receive_local(&mut self, msg: Self::Message) -> Handled {
msg.complete(|_request| {
info!(self.ctx.log(), "Got a message!");
self.msg_count += 1u64;
self.current_count()
})
.expect("complete");
Handled::Ok
}
fn receive_network(&mut self, _msg: NetMessage) -> Handled {
unimplemented!("We are still ignoring network messages.");
}
}
pub fn main() {
use kompact::config_keys::system;
let mut conf = KompactConfig::default();
conf.set_config_value(&system::THREADS, 1usize);
conf.executor(executors::crossbeam_channel_pool::ThreadPool::new);
let system = conf.build().expect("system");
let counter = system.create(Counter::new);
system.start(&counter);
let actor_ref = counter.actor_ref();
let port_ref: ProvidedRef<CounterPort> = counter.provided_ref();
for _i in 0..100 {
let current_count = actor_ref.ask(CountMe).wait();
info!(system.logger(), "The current count is: {:?}", current_count);
}
for _i in 0..100 {
system.trigger_r(CountMe, &port_ref);
// Where do the answers go?
}
std::thread::sleep(Duration::from_millis(1000));
let current_count = actor_ref.ask(CountMe).wait();
info!(system.logger(), "The final count is: {:?}", current_count);
system.shutdown().expect("shutdown");
// Wait a bit longer, so all output is logged (asynchronously) before shutting down
std::thread::sleep(Duration::from_millis(10));
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_counters() {
main();
}
}
If we run this, again, we will see exactly (modulo event timing) the same output as when running on the larger pool with Kompact’s default settings.
Note: As before, if you have checked out the examples folder you can run the concrete binary with:
cargo run --release --bin counters_channel_pool
Logging
Kompact uses the slog crate to provide system-wide logging facilities.
The basic macros for this, slog::{crit, debug, error, info, o, trace, warn}, are re-exported in the prelude for convenience. Logging works out of the box with a default asynchronous console logger, which roughly corresponds to the following setup code:
let decorator = slog_term::TermDecorator::new().stdout().build();
let drain = slog_term::FullFormat::new(decorator).build().fuse();
let drain = slog_async::Async::new(drain).chan_size(1024).build().fuse();
let logger = slog::Logger::root_typed(Arc::new(drain));
The actual logging levels are controlled via build features. The default features correspond to max_level_trace and release_max_level_info; that is, in debug builds all levels are shown, while in the release profile only info and more severe messages are shown. Alternatively, Kompact provides a slightly less verbose feature variant called silent_logging, which is equivalent to max_level_info and release_max_level_error.
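In Cargo terms this is a feature selection on the kompact dependency. The following fragment is only a sketch: the exact interplay with Kompact’s default feature set depends on its Cargo.toml (slog’s level features are additive, so the verbose defaults may need to be disabled first), so treat this as illustrative rather than a verified recipe.

```toml
[dependencies]
# Hypothetical: opt out of the default (verbose) logging features and select
# the quieter silent_logging variant instead. Check kompact's Cargo.toml for
# other default features you may want to re-enable alongside it.
kompact = { version = "LATEST_VERSION", default-features = false, features = ["silent_logging"] }
```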
This is exemplified in the following very simple code example:
#![allow(clippy::unused_unit)]
use kompact::prelude::*;
use std::time::Duration;
pub fn main() {
let system = KompactConfig::default().build().expect("system");
trace!(
system.logger(),
"You will only see this in debug builds with default features"
);
debug!(
system.logger(),
"You will only see this in debug builds with default features"
);
info!(system.logger(), "You will only see this in debug builds with silent_logging or in release builds with default features");
warn!(system.logger(), "You will only see this in debug builds with silent_logging or in release builds with default features");
error!(system.logger(), "You will always see this");
// remember that logging is asynchronous and won't happen if the system is shut down already
std::thread::sleep(Duration::from_millis(100));
system.shutdown().expect("shutdown");
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_logging() {
main();
}
}
Try to run it with a few different build settings and see what you get.
Note: As before, if you have checked out the examples folder you can run the concrete binary with:
cargo run --release --bin logging
Custom Logger
Sometimes the default logging configuration is not sufficient for a particular application. For example, you might need a larger queue size in the Async drain, or you may want to write to a file instead of the terminal.
In the following example we replace the default terminal logger with a file logger, logging to /tmp/myloggingfile instead. We also increase the queue size in the Async drain to 2048, so that it fits the 2048 logging events we are sending it in short succession later. In order to replace the default logger, we use the KompactConfig::logger(...) function.
#![allow(clippy::unused_unit)]
use kompact::prelude::*;
use std::{fs::OpenOptions, sync::Arc, time::Duration};
const FILE_NAME: &str = "/tmp/myloggingfile";
pub fn main() {
let mut conf = KompactConfig::default();
let logger = {
let file = OpenOptions::new()
.create(true)
.write(true)
.truncate(true)
.open(FILE_NAME)
.expect("logging file");
// create logger
let decorator = slog_term::PlainSyncDecorator::new(file);
let drain = slog_term::FullFormat::new(decorator).build().fuse();
let drain = slog_async::Async::new(drain).chan_size(2048).build().fuse();
slog::Logger::root_typed(
Arc::new(drain),
o!(
"location" => slog::PushFnValue(|r: &slog::Record<'_>, ser: slog::PushFnValueSerializer<'_>| {
ser.emit(format_args!("{}:{}", r.file(), r.line()))
})),
)
};
conf.logger(logger);
let system = conf.build().expect("system");
trace!(
system.logger(),
"You will only see this in debug builds with default features"
);
debug!(
system.logger(),
"You will only see this in debug builds with default features"
);
info!(system.logger(), "You will only see this in debug builds with silent_logging or in release builds with default features");
warn!(system.logger(), "You will only see this in debug builds with silent_logging or in release builds with default features");
error!(system.logger(), "You will always see this");
for i in 0..2048 {
info!(system.logger(), "Logging number {}.", i);
}
// remember that logging is asynchronous and won't happen if the system is shut down already
std::thread::sleep(Duration::from_millis(1000));
system.shutdown().expect("shutdown");
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_logging() {
main();
std::fs::remove_file(FILE_NAME).expect("remove log file");
}
}
Note: As before, if you have checked out the examples folder you can run the concrete binary and then show the logging file with:
cargo run --release --bin logging_custom
cat /tmp/myloggingfile
Configuration
Since it is often inconvenient to pass around a large number of parameters when setting up a component system, Kompact also offers a configuration system, allowing parameters to be loaded from a file or provided as a string at the top level, for example. This system is powered by the Hocon crate and uses its APIs with very little additional support.
Configuration options must be set on the KompactConfig instance before the system is started, and the resulting configuration remains immutable for the lifetime of the system. A configuration can be loaded from a file by passing a path to the file to the load_config_file(...) function. Alternatively, configuration values can be loaded directly from a string using load_config_str(...).
Within each component, the Hocon configuration instance can be accessed via the context, and individual keys via bracket notation, e.g. self.ctx.config()["my-key"]. The configuration can also be accessed outside a component via KompactSystem::config().
In addition to component configuration, many parts of Kompact’s runtime can also be configured via this mechanism. The complete set of available configuration keys and their effects is described in the modules below kompact::config_keys.
Example
We are going to reuse the Buncher from the timers section and pass its two parameters, batch_size and timeout, via configuration instead of the constructor.
We’ll start off by creating a configuration file application.conf in the working directory, so it’s easy to find later. Something like this:
buncher {
batch-size = 100
timeout = 100 ms
}
omega {
initial-period = 10 ms
delta = 1 ms
}
We can then add this file to the KompactConfig instance using the load_config_file(...) function:
#![allow(clippy::unused_unit)]
use kompact::prelude::*;
use kompact_examples::batching::*;
use std::time::Duration;
#[derive(ComponentDefinition, Actor)]
struct Buncher {
ctx: ComponentContext<Self>,
batch_port: ProvidedPort<Batching>,
batch_size: usize,
timeout: Duration,
current_batch: Vec<Ping>,
outstanding_timeout: Option<ScheduledTimer>,
}
impl Buncher {
fn new() -> Buncher {
Buncher {
ctx: ComponentContext::uninitialised(),
batch_port: ProvidedPort::uninitialised(),
batch_size: 0,
timeout: Duration::from_millis(1),
current_batch: Vec::new(),
outstanding_timeout: None,
}
}
fn trigger_batch(&mut self) -> () {
let mut new_batch = Vec::with_capacity(self.batch_size);
std::mem::swap(&mut new_batch, &mut self.current_batch);
self.batch_port.trigger(Batch(new_batch))
}
fn handle_timeout(&mut self, timeout_id: ScheduledTimer) -> Handled {
match self.outstanding_timeout {
Some(ref timeout) if *timeout == timeout_id => {
self.trigger_batch();
let new_timeout = self.schedule_once(self.timeout, Self::handle_timeout);
self.outstanding_timeout = Some(new_timeout);
Handled::Ok
}
Some(_) => Handled::Ok, // just ignore outdated timeouts
None => {
warn!(self.log(), "Got unexpected timeout: {:?}", timeout_id);
Handled::Ok
} // can happen during restart or teardown
}
}
}
impl ComponentLifecycle for Buncher {
fn on_start(&mut self) -> Handled {
self.batch_size = self.ctx.config()["buncher"]["batch-size"]
.as_i64()
.expect("batch size") as usize;
self.timeout = self.ctx.config()["buncher"]["timeout"]
.as_duration()
.expect("timeout");
self.current_batch.reserve(self.batch_size);
let timeout = self.schedule_once(self.timeout, Buncher::handle_timeout);
self.outstanding_timeout = Some(timeout);
Handled::Ok
}
fn on_stop(&mut self) -> Handled {
if let Some(timeout) = self.outstanding_timeout.take() {
self.cancel_timer(timeout);
}
Handled::Ok
}
fn on_kill(&mut self) -> Handled {
self.on_stop()
}
}
impl Provide<Batching> for Buncher {
fn handle(&mut self, event: Ping) -> Handled {
self.current_batch.push(event);
if self.current_batch.len() >= self.batch_size {
self.trigger_batch();
if let Some(timeout) = self.outstanding_timeout.take() {
self.cancel_timer(timeout);
}
let new_timeout = self.schedule_once(self.timeout, Buncher::handle_timeout);
self.outstanding_timeout = Some(new_timeout);
}
Handled::Ok
}
}
pub fn main() {
let mut conf = KompactConfig::default();
conf.load_config_file("./application.conf")
.load_config_str("buncher.batch-size = 50");
let system = conf.build().expect("system");
let printer = system.create(BatchPrinter::new);
let buncher = system.create(Buncher::new);
biconnect_components::<Batching, _, _>(&buncher, &printer).expect("connection");
let batching = buncher.on_definition(|cd| cd.batch_port.share());
system.start(&printer);
system.start(&buncher);
// these should usually trigger due to full batches
let sleep_dur = Duration::from_millis(1);
for i in 0..500 {
let ping = Ping(i);
system.trigger_r(ping, &batching);
std::thread::sleep(sleep_dur);
}
// these should usually trigger due to timeout
let sleep_dur = Duration::from_millis(2);
for i in 0..500 {
let ping = Ping(i);
system.trigger_r(ping, &batching);
std::thread::sleep(sleep_dur);
}
system.shutdown().expect("shutdown");
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_buncher() {
main();
}
}
To show off how multiple configuration sources can be combined, we also override the batch-size value from the main function with a literal string after the file has been loaded; this is the load_config_str("buncher.batch-size = 50") call in the main function of the listing above.
Now we change the Buncher constructor to not take any arguments anymore; this is the new() shown in the listing above. Since we still need to put some values into the struct fields, we use placeholder defaults, say a batch size of 0 and a timeout of 1 ms. We could also go with an Option, if it is important to know whether the component was initialised properly or not. We also no longer know the required capacity for the vector, so we just create an empty one and extend it later, once we have read the batch size from the config file.
And, of course, the matching create(...) call in the main function must match the argument-less constructor; since new() no longer takes arguments, it is simply system.create(Buncher::new), as already shown in the listing above.
Finally, the actual config access happens in the on_start code of the listing above. At this point the component is properly initialised and we have access to the configuration values. The Hocon type has a number of very convenient conversion functions, so we can, for example, get a Duration directly from the 100 ms string in the file. Once we have read the values for batch_size and timeout, we can also go ahead and reserve the required additional space in the current_batch vector.
At this point we can run the example, and we can see from the regular “50 event”-sized batches in the beginning that our overriding of the batch size worked just fine.
Note: As before, if you have checked out the examples folder you can run the concrete binary with:
cargo run --release --bin buncher_config
Fault Recovery
Sometimes panics can happen in components that provide crucial services to the rest of the system. And while it is, of course, better to have Result::Err branches in place for every anticipated problem, with the use of third-party libraries, and even some standard library functions, not every possible panic can be prevented. Thus components can “fault” unexpectedly, and the service they provide is suddenly unavailable.
In order to deal with cases where simply letting a component die is not good enough, Kompact provides a simple mechanism for recovering from faults: for every individual component, users can register a RecoveryFunction, that is, basically a function which takes a FaultContext and produces a mechanism to recover from that fault, called a RecoveryHandler. This recovery handler is executed by the system’s ComponentSupervisor when it is informed of the fault.
Note: Fault recovery only works when the executed binary is compiled with panic unwinding. In binaries set to panic=abort, none of this applies and fault handling must be dealt with outside the process running the Kompact system.
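In Cargo terms this is the panic setting in the build profile; unwinding is the default, so fault recovery only breaks if a profile explicitly opts out, as in the following fragment:

```toml
# With this setting any panic aborts the whole process,
# so Kompact's RecoveryFunction mechanism never gets to run.
[profile.release]
panic = "abort"
```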
Warning: Panicking within the RecoveryHandler will destroy the ComponentSupervisor, which is unrecoverable, and thus leads to “poisoning” of the whole Kompact system.
A RecoveryFunction can be registered via Component::set_recovery_function(...) from outside a component, or via ComponentContext::set_recovery_function(...) from within. Either way causes a Mutex to be locked, so be aware of the performance cost and the risk of deadlock when using the latter function (since you are already holding the Mutex on the ComponentDefinition at that point). That being said, set_recovery_function(...) can be called repeatedly to update the state stored in the function. This is particularly useful as a very simple snapshotting mechanism, allowing a replacement component to later be started from this earlier state snapshot, instead of starting from scratch.
Apart from inspecting the FaultContext
the recovery function must produce some kind of recovery handler. The simplest (and default) handler is FaultContext::ignore()
which performs no additional action on the supervisor to recover the faulted component. If custom handling is required, it can be provided via FaultContext::recover_with(...)
, where the user can provide a closure that may use the FaultContext
, the supervisor’s SystemHandle
, and the supervisor’s KompactLogger
to react to the fault. What happens in this function is completely up to the user and the needs of the application. A common case might be to log some particular message, or create a new component via system.create(...)
and start it with system.start(...)
, for example.
Warning: Do not block within the RecoveryHandler, as that will prevent the ComponentSupervisor from doing its job. In particular, absolutely do not block on lifecycle events (e.g., start_notify), as that will deadlock the supervisor! If you need to execute a complicated sequence of asynchronous commands to recover from a fault, it is recommended to use a temporary component for this sequence, which can simply be started from the recovery handler.
Note: After recovery, all component references (Arc&lt;Component&lt;CD&gt;&gt;) and actor references to the old component will be invalid. If your application needs their functionality, you need to devise a mechanism to share the new references (e.g., concurrent queues, Arc&lt;Mutex&lt;...&gt;&gt;, etc.). If the component provides a named service, the alias must be re-registered to point to the new instance.
Unstable Counter Example
In order to showcase the recovery mechanism, we write a timer-based counter which occasionally overflows and thus causes the component to crash. So as not to lose everything we have already counted, we occasionally store the current count in the recovery function, and during recovery we start from that point, i.e. from a slightly outdated count, but at least not from 0.
In addition to the current count, we store references to two scheduled timers: on every count_timeout we increase our count by 1, and on every state_timeout we update the recovery function.
#![allow(clippy::unused_unit)]
use kompact::prelude::*;
use std::time::Duration;
const COUNT_TIMEOUT: Duration = Duration::from_millis(10);
const STATE_TIMEOUT: Duration = Duration::from_millis(1000);
#[derive(ComponentDefinition, Actor)]
struct UnstableCounter {
ctx: ComponentContext<Self>,
count: u8,
count_timeout: Option<ScheduledTimer>,
state_timeout: Option<ScheduledTimer>,
}
impl UnstableCounter {
fn with_state(count: u8) -> Self {
UnstableCounter {
ctx: ComponentContext::uninitialised(),
count,
count_timeout: None,
state_timeout: None,
}
}
fn handle_count_timeout(&mut self, _timeout_id: ScheduledTimer) -> Handled {
info!(self.log(), "Incrementing count of {}", self.count);
self.count = self.count.checked_add(1).expect("Count overflowed!");
Handled::Ok
}
fn handle_state_timeout(&mut self, _timeout_id: ScheduledTimer) -> Handled {
info!(
self.log(),
"Saving recovery state with count of {}", self.count
);
let mut count_timeout = self.count_timeout.clone();
let mut state_timeout = self.state_timeout.clone();
let count = self.count;
self.ctx.set_recovery_function(move |fault| {
fault.recover_with(move |_ctx, system, logger| {
warn!(
logger,
"Recovering UnstableCounter based on last state count={}", count
);
// Clean up now invalid timers
if let Some(timeout) = count_timeout.take() {
system.cancel_timer(timeout);
}
if let Some(timeout) = state_timeout.take() {
system.cancel_timer(timeout);
}
let counter_component = system.create(move || Self::with_state(count));
system.start(&counter_component);
})
});
Handled::Ok
}
}
impl Default for UnstableCounter {
fn default() -> Self {
UnstableCounter {
ctx: ComponentContext::uninitialised(),
count: 0,
count_timeout: None,
state_timeout: None,
}
}
}
impl ComponentLifecycle for UnstableCounter {
fn on_start(&mut self) -> Handled {
let count_timeout = self.schedule_periodic(
COUNT_TIMEOUT,
COUNT_TIMEOUT,
UnstableCounter::handle_count_timeout,
);
self.count_timeout = Some(count_timeout.clone());
let state_timeout = self.schedule_periodic(
STATE_TIMEOUT,
STATE_TIMEOUT,
UnstableCounter::handle_state_timeout,
);
self.state_timeout = Some(state_timeout.clone());
let count = self.count;
self.ctx.set_recovery_function(move |fault| {
fault.recover_with(move |_ctx, system, logger| {
warn!(
logger,
"Recovering UnstableCounter based on last state count={}", count
);
// Clean up now invalid timers
system.cancel_timer(count_timeout);
system.cancel_timer(state_timeout);
let counter_component = system.create(move || Self::with_state(count));
system.start(&counter_component);
})
});
Handled::Ok
}
fn on_stop(&mut self) -> Handled {
if let Some(timeout) = self.count_timeout.take() {
self.cancel_timer(timeout);
}
if let Some(timeout) = self.state_timeout.take() {
self.cancel_timer(timeout);
}
Handled::Ok
}
fn on_kill(&mut self) -> Handled {
self.on_stop()
}
}
pub fn main() {
let system = KompactConfig::default().build().expect("system");
let component = system.create(UnstableCounter::default);
system.start(&component);
drop(component); // avoid it from holding on to memory after crashing
std::thread::sleep(Duration::from_millis(5000));
println!("Shutting down system");
system.shutdown().expect("shutdown");
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_unstable_counter() {
main();
}
}
By default, we just initialise the count to 0 and leave the timeouts unset until we are started.
impl Default for UnstableCounter {
fn default() -> Self {
UnstableCounter {
ctx: ComponentContext::uninitialised(),
count: 0,
count_timeout: None,
state_timeout: None,
}
}
}
During start, we schedule the two timers and also set our recovery function. The recovery function simply captures the state we want to remember, i.e. the two timeouts and the count. When it is invoked, it produces a recovery handler from this state, which cancels the old timeouts and then starts a new UnstableCounter, passing in the last count we stored.
As usual, we also cancel our timeouts when we are stopped or killed.
impl ComponentLifecycle for UnstableCounter {
fn on_start(&mut self) -> Handled {
let count_timeout = self.schedule_periodic(
COUNT_TIMEOUT,
COUNT_TIMEOUT,
UnstableCounter::handle_count_timeout,
);
self.count_timeout = Some(count_timeout.clone());
let state_timeout = self.schedule_periodic(
STATE_TIMEOUT,
STATE_TIMEOUT,
UnstableCounter::handle_state_timeout,
);
self.state_timeout = Some(state_timeout.clone());
let count = self.count;
self.ctx.set_recovery_function(move |fault| {
fault.recover_with(move |_ctx, system, logger| {
warn!(
logger,
"Recovering UnstableCounter based on last state count={}", count
);
// Clean up now invalid timers
system.cancel_timer(count_timeout);
system.cancel_timer(state_timeout);
let counter_component = system.create(move || Self::with_state(count));
system.start(&counter_component);
})
});
Handled::Ok
}
}
impl ComponentLifecycle for UnstableCounter {
fn on_stop(&mut self) -> Handled {
if let Some(timeout) = self.count_timeout.take() {
self.cancel_timer(timeout);
}
if let Some(timeout) = self.state_timeout.take() {
self.cancel_timer(timeout);
}
Handled::Ok
}
fn on_kill(&mut self) -> Handled {
self.on_stop()
}
}
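The `Option::take` pattern used in `on_stop` above can be illustrated in plain Rust. This is a minimal sketch, independent of Kompact (the `Handle` and `Holder` types are hypothetical stand-ins): taking the value out of the `Option` leaves `None` behind, so the cancellation logic runs at most once even if `on_stop` is reached again via `on_kill`.

```rust
// Minimal sketch (plain Rust, hypothetical `Handle` type, not Kompact's
// ScheduledTimer): `Option::take` moves the value out, leaving `None`,
// so the cancellation logic runs at most once.
struct Handle(u32);

struct Holder {
    timeout: Option<Handle>,
}

impl Holder {
    /// Returns true if a cleanup was actually performed.
    fn on_stop(&mut self) -> bool {
        if let Some(handle) = self.timeout.take() {
            // here we would cancel the timer behind `handle`
            drop(handle);
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut h = Holder { timeout: Some(Handle(7)) };
    assert!(h.on_stop()); // first call performs the cancellation
    assert!(!h.on_stop()); // second call finds None and does nothing
    println!("cleanup ran exactly once");
}
```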
Note: Cancelling the timeouts during faults is not strictly necessary, as they will be cleaned up automatically when the faulty component is dropped. Since we don’t control who is holding on to component references, though, cancelling them eagerly, as shown here, may avoid some unnecessary overhead on a heavily loaded timer. It is included here mostly as an example of possible cleanup code in a recovery handler.
When our timeouts are triggered we must handle them. The count timeout is easy: we simply increment the self.count variable using checked_add, which, combined with expect, causes a panic on overflow even in release builds. During the state timeout, we essentially re-register the recovery function from the on_start lifecycle handler, so that the state it closes over is updated.
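The behaviour of checked_add can be demonstrated in isolation. This is a plain-Rust sketch, independent of Kompact: unlike `+`, which wraps silently in release builds (where overflow checks are off by default), `checked_add` reports overflow explicitly.

```rust
// Plain-Rust illustration of why `checked_add` is used here:
// unlike `+`, it never silently wraps in release builds.
fn main() {
    let count: u8 = 254;
    // `checked_add` returns `None` instead of wrapping on overflow...
    assert_eq!(count.checked_add(1), Some(255));
    assert_eq!(u8::MAX.checked_add(1), None);
    // ...so `expect` converts an overflow into a panic, even with
    // overflow checks disabled (the default in release mode).
    let next = count.checked_add(1).expect("Count overflowed!");
    println!("next = {}", next); // next = 255
}
```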
impl UnstableCounter {
fn handle_count_timeout(&mut self, _timeout_id: ScheduledTimer) -> Handled {
info!(self.log(), "Incrementing count of {}", self.count);
self.count = self.count.checked_add(1).expect("Count overflowed!");
Handled::Ok
}
fn handle_state_timeout(&mut self, _timeout_id: ScheduledTimer) -> Handled {
info!(
self.log(),
"Saving recovery state with count of {}", self.count
);
let mut count_timeout = self.count_timeout.clone();
let mut state_timeout = self.state_timeout.clone();
let count = self.count;
self.ctx.set_recovery_function(move |fault| {
fault.recover_with(move |_ctx, system, logger| {
warn!(
logger,
"Recovering UnstableCounter based on last state count={}", count
);
// Clean up now invalid timers
if let Some(timeout) = count_timeout.take() {
system.cancel_timer(timeout);
}
if let Some(timeout) = state_timeout.take() {
system.cancel_timer(timeout);
}
let counter_component = system.create(move || Self::with_state(count));
system.start(&counter_component);
})
});
Handled::Ok
}
}
In order to run this, we simply start the default instance of the UnstableCounter on a Kompact system and then wait for a bit to let it count. The output will show the counting and the crashes. We can see that after a crash we do not start counting from 0, but instead from something much higher, around 199 depending on your exact timing. Also notice that we crash much sooner after the first time, since it doesn’t take as long to reach 255 again.
pub fn main() {
let system = KompactConfig::default().build().expect("system");
let component = system.create(UnstableCounter::default);
system.start(&component);
drop(component); // avoid it from holding on to memory after crashing
std::thread::sleep(Duration::from_millis(5000));
println!("Shutting down system");
system.shutdown().expect("shutdown");
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_unstable_counter() {
main();
}
}
Note: As before, if you have checked out the examples folder you can run the concrete binary with:
cargo run --release --bin unstable_counter
Dynamic Components
Kompact is a strictly and statically typed framework. Sometimes, however, it is beneficial to be a little more dynamic. There are many reasons you might want to introduce some dynamism into your component system: modularity, ease of modeling, or sometimes even performance: static dispatch in Rust often involves monomorphising substantial amounts of generic code, which leads to code bloat. The more instructions the CPU has to load, the more likely it is that something won’t fit in the cache, which can incur performance penalties.
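To see the trade-off in plain Rust (a sketch independent of Kompact’s API): a generic function is monomorphised once per concrete type it is used with, while a trait-object function is compiled once and dispatched through a vtable.

```rust
use std::fmt::Display;

// Static dispatch: the compiler emits one monomorphised copy of this
// function for every concrete `T` it is instantiated with.
fn describe_static<T: Display>(value: T) -> String {
    format!("value = {}", value)
}

// Dynamic dispatch: a single compiled copy, called through a vtable,
// serves all types implementing `Display`.
fn describe_dyn(value: &dyn Display) -> String {
    format!("value = {}", value)
}

fn main() {
    // Two instantiations of `describe_static` (i32 and &str)...
    println!("{}", describe_static(42));
    println!("{}", describe_static("hello"));
    // ...but only one instance of `describe_dyn` handles both.
    println!("{}", describe_dyn(&42));
    println!("{}", describe_dyn(&"hello"));
}
```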
Because of this, we introduced a way to deal with components with a little bit of dynamic typing. Namely, you are able to create components from type-erased definitions with {System,SystemHandle}::create_erased (nightly only), and to query type-erased components for the ports they may provide and/or require with on_dyn_definition and get_{provided,required}_port.
Note: While creating type-erased components from type-erased definitions is nightly-only, you can create a component normally and then cast it to a type-erased component on stable.
Let’s create a dynamic interactive system showcasing these features. We’ll build a little REPL which the user can use to spawn some components, set their settings, and send them some data to process.
First some basic components:
#![allow(clippy::unused_unit)]
use kompact::{component::AbstractComponent, prelude::*};
use std::{
error::Error,
fmt,
io::{stdin, BufRead},
sync::Arc,
};
#[derive(ComponentDefinition)]
struct Adder {
ctx: ComponentContext<Self>,
offset: f32,
set_offset: ProvidedPort<SetOffset>,
}
info_lifecycle!(Adder);
impl Actor for Adder {
type Message = f32;
fn receive_local(&mut self, a: Self::Message) -> Handled {
let res = a + self.offset;
info!(self.log(), "Adder result = {}", res);
Handled::Ok
}
fn receive_network(&mut self, _msg: NetMessage) -> Handled {
unimplemented!()
}
}
struct SetOffset;
impl Port for SetOffset {
type Indication = Never;
type Request = f32;
}
impl Provide<SetOffset> for Adder {
fn handle(&mut self, value: f32) -> Handled {
self.offset = value;
Handled::Ok
}
}
impl Adder {
pub fn new() -> Self {
Adder {
ctx: ComponentContext::uninitialised(),
offset: 0f32,
set_offset: ProvidedPort::uninitialised(),
}
}
}
#[derive(ComponentDefinition)]
struct Multiplier {
ctx: ComponentContext<Self>,
scale: f32,
set_scale: ProvidedPort<SetScale>,
}
info_lifecycle!(Multiplier);
impl Actor for Multiplier {
type Message = f32;
fn receive_local(&mut self, a: Self::Message) -> Handled {
let res = a * self.scale;
info!(self.log(), "Multiplier result = {}", res);
Handled::Ok
}
fn receive_network(&mut self, _msg: NetMessage) -> Handled {
unimplemented!()
}
}
struct SetScale;
impl Port for SetScale {
type Indication = Never;
type Request = f32;
}
impl Provide<SetScale> for Multiplier {
fn handle(&mut self, value: f32) -> Handled {
self.scale = value;
Handled::Ok
}
}
impl Multiplier {
fn new() -> Multiplier {
Multiplier {
ctx: ComponentContext::uninitialised(),
scale: 1.0,
set_scale: ProvidedPort::uninitialised(),
}
}
}
Our components perform simple arithmetic operations on the incoming message and log the results (as well as their lifecycle events). The internal state of the components can be set via the Set{Offset,Scale} ports. So far we just have components with either a scale or an offset. Let’s add something slightly more interesting, which uses both.
#[derive(ComponentDefinition)]
struct Linear {
ctx: ComponentContext<Self>,
scale: f32,
offset: f32,
set_scale: ProvidedPort<SetScale>,
set_offset: ProvidedPort<SetOffset>,
}
info_lifecycle!(Linear);
impl Actor for Linear {
type Message = f32;
fn receive_local(&mut self, a: Self::Message) -> Handled {
let res = a * self.scale + self.offset;
info!(self.log(), "Linear result = {}", res);
Handled::Ok
}
fn receive_network(&mut self, _msg: NetMessage) -> Handled {
unimplemented!()
}
}
impl Provide<SetOffset> for Linear {
fn handle(&mut self, value: f32) -> Handled {
self.offset = value;
Handled::Ok
}
}
impl Provide<SetScale> for Linear {
fn handle(&mut self, value: f32) -> Handled {
self.scale = value;
Handled::Ok
}
}
impl Linear {
fn new() -> Linear {
Linear {
ctx: ComponentContext::uninitialised(),
scale: 1.0,
offset: 0.0,
set_scale: ProvidedPort::uninitialised(),
set_offset: ProvidedPort::uninitialised(),
}
}
}
Now let’s write a manager component, which will take care of creating the components described above, killing them, modifying their settings, and sending them data to process. In this case we have just three different types of worker components, but imagine we had tens (still sharing the same message type and some subsets of “settings”). In that case it would be very tedious to manage all these component types explicitly.
#![allow(clippy::unused_unit)]
use kompact::{component::AbstractComponent, prelude::*};
use std::{
error::Error,
fmt,
io::{stdin, BufRead},
sync::Arc,
};
#[derive(ComponentDefinition)]
struct Linear {
ctx: ComponentContext<Self>,
scale: f32,
offset: f32,
set_scale: ProvidedPort<SetScale>,
set_offset: ProvidedPort<SetOffset>,
}
info_lifecycle!(Linear);
impl Actor for Linear {
type Message = f32;
fn receive_local(&mut self, a: Self::Message) -> Handled {
let res = a * self.scale + self.offset;
info!(self.log(), "Linear result = {}", res);
Handled::Ok
}
fn receive_network(&mut self, _msg: NetMessage) -> Handled {
unimplemented!()
}
}
impl Provide<SetOffset> for Linear {
fn handle(&mut self, value: f32) -> Handled {
self.offset = value;
Handled::Ok
}
}
impl Provide<SetScale> for Linear {
fn handle(&mut self, value: f32) -> Handled {
self.scale = value;
Handled::Ok
}
}
impl Linear {
fn new() -> Linear {
Linear {
ctx: ComponentContext::uninitialised(),
scale: 1.0,
offset: 0.0,
set_scale: ProvidedPort::uninitialised(),
set_offset: ProvidedPort::uninitialised(),
}
}
}
#[derive(ComponentDefinition)]
struct DynamicManager {
ctx: ComponentContext<Self>,
arithmetic_units: Vec<Arc<dyn AbstractComponent<Message = f32>>>,
set_offsets: RequiredPort<SetOffset>,
set_scales: RequiredPort<SetScale>,
}
ignore_indications!(SetOffset, DynamicManager);
ignore_indications!(SetScale, DynamicManager);
ignore_lifecycle!(DynamicManager);
enum ManagerMessage {
Spawn(Box<dyn CreateErased<f32> + Send>),
Compute(f32),
SetScales(f32),
SetOffsets(f32),
KillAll,
Quit,
}
impl fmt::Debug for ManagerMessage {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
ManagerMessage::Spawn(_) => {
write!(f, "Spawn(_)")
}
ManagerMessage::Compute(val) => {
write!(f, "Compute({})", *val)
}
ManagerMessage::SetScales(scale) => {
write!(f, "SetScales({})", *scale)
}
ManagerMessage::SetOffsets(offset) => {
write!(f, "SetOffsets({})", *offset)
}
ManagerMessage::KillAll => {
write!(f, "KillAll")
}
ManagerMessage::Quit => {
write!(f, "Quit")
}
}
}
}
impl Actor for DynamicManager {
type Message = ManagerMessage;
fn receive_local(&mut self, msg: ManagerMessage) -> Handled {
match msg {
ManagerMessage::Spawn(definition) => {
let system = self.ctx.system();
let component = system.create_erased(definition);
component.on_dyn_definition(|def| {
if let Some(set_scale) = def.get_provided_port::<SetScale>() {
biconnect_ports(set_scale, &mut self.set_scales);
}
if let Some(set_offset) = def.get_provided_port::<SetOffset>() {
biconnect_ports(set_offset, &mut self.set_offsets);
}
});
system.start(&component);
self.arithmetic_units.push(component);
}
ManagerMessage::Compute(val) => {
for unit in &self.arithmetic_units {
unit.actor_ref().tell(val);
}
}
ManagerMessage::SetScales(scale) => self.set_scales.trigger(scale),
ManagerMessage::SetOffsets(offset) => self.set_offsets.trigger(offset),
ManagerMessage::KillAll => {
self.kill_all();
}
ManagerMessage::Quit => {
self.kill_all();
self.ctx.system().shutdown_async();
}
}
Handled::Ok
}
fn receive_network(&mut self, _: NetMessage) -> Handled {
unimplemented!()
}
}
impl DynamicManager {
fn kill_all(&mut self) {
let system = self.ctx.system();
for unit in self.arithmetic_units.drain(..) {
system.kill(unit);
}
}
}
fn main() {
let system = KompactConfig::default().build().expect("system");
let manager: Arc<Component<DynamicManager>> = system.create(|| DynamicManager {
ctx: ComponentContext::uninitialised(),
arithmetic_units: vec![],
set_offsets: RequiredPort::uninitialised(),
set_scales: RequiredPort::uninitialised(),
});
system.start(&manager);
let manager_ref = manager.actor_ref();
std::thread::spawn(move || {
for line in stdin().lock().lines() {
let res = (|| -> Result<(), Box<dyn Error>> {
let line = line?;
let message = match line.trim() {
"spawn adder" => ManagerMessage::Spawn(Box::new(Adder::new())),
"spawn multiplier" => ManagerMessage::Spawn(Box::new(Multiplier::new())),
"spawn linear" => ManagerMessage::Spawn(Box::new(Linear::new())),
"kill all" => ManagerMessage::KillAll,
"quit" => ManagerMessage::Quit,
other => {
if let Some(offset) = other.strip_prefix("set offset ") {
ManagerMessage::SetOffsets(offset.parse()?)
} else if let Some(scale) = other.strip_prefix("set scale ") {
ManagerMessage::SetScales(scale.parse()?)
} else if let Some(val) = other.strip_prefix("compute ") {
ManagerMessage::Compute(val.parse()?)
} else {
Err("unknown command!")?
}
}
};
manager_ref.tell(message);
Ok(())
})();
if let Err(e) = res {
println!("{}", e);
}
}
});
system.await_termination();
}
Using Arc<dyn AbstractComponent<Message = M>> we can mix different components that take the same type of message in one collection. Now to fill that Vec with something useful: we’ll define some messages for the manager and start creating some components.
enum ManagerMessage {
    Spawn(Box<dyn CreateErased<f32> + Send>),
    Compute(f32),
    SetScales(f32),
    SetOffsets(f32),
    KillAll,
    Quit,
}
impl fmt::Debug for ManagerMessage {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ManagerMessage::Spawn(_) => write!(f, "Spawn(_)"),
            ManagerMessage::Compute(val) => write!(f, "Compute({})", *val),
            ManagerMessage::SetScales(scale) => write!(f, "SetScales({})", *scale),
            ManagerMessage::SetOffsets(offset) => write!(f, "SetOffsets({})", *offset),
            ManagerMessage::KillAll => write!(f, "KillAll"),
            ManagerMessage::Quit => write!(f, "Quit"),
        }
    }
}
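The heterogeneous Vec works because AbstractComponent is a trait and Rust trait objects erase the concrete component type. The same pattern can be seen in plain Rust, without any Kompact machinery (all names below are illustrative, not part of the Kompact API):

```rust
// Plain-Rust analogy (no Kompact): a common trait lets us keep different
// "worker" types in one Vec and feed them the same message type.
trait Unit {
    fn process(&self, input: f32) -> f32;
}

struct Add(f32);
struct Mul(f32);

impl Unit for Add {
    fn process(&self, input: f32) -> f32 {
        input + self.0
    }
}
impl Unit for Mul {
    fn process(&self, input: f32) -> f32 {
        input * self.0
    }
}

fn main() {
    // Mirrors `Vec<Arc<dyn AbstractComponent<Message = f32>>>` in the manager.
    let units: Vec<Box<dyn Unit>> = vec![Box::new(Add(5.0)), Box::new(Mul(2.0))];
    for u in &units {
        println!("{}", u.process(1.0));
    }
}
```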
As we don’t want the manager type to know about the concrete component types at all, the Spawn message above contains a boxed, type-erased component definition, which we then turn into a component using create_erased.
Normally, after creating the components, we would connect the ports to each other using connect_to_required, or perhaps on_definition and direct port access. However, all of those require concrete types, such as Arc<Component<Adder>> or Arc<Component<Linear>>, which is not what we have here (Arc<dyn AbstractComponent<Message = f32>>). Instead we can use on_dyn_definition together with the Option-returning get_{provided,required}_port to dynamically check whether a given port exists on the abstract component and, if so, fetch it.
impl Actor for DynamicManager {
    type Message = ManagerMessage;
    fn receive_local(&mut self, msg: ManagerMessage) -> Handled {
        match msg {
            ManagerMessage::Spawn(definition) => {
                let system = self.ctx.system();
                let component = system.create_erased(definition);
                component.on_dyn_definition(|def| {
                    if let Some(set_scale) = def.get_provided_port::<SetScale>() {
                        biconnect_ports(set_scale, &mut self.set_scales);
                    }
                    if let Some(set_offset) = def.get_provided_port::<SetOffset>() {
                        biconnect_ports(set_offset, &mut self.set_offsets);
                    }
                });
                system.start(&component);
                self.arithmetic_units.push(component);
            }
            ManagerMessage::Compute(val) => {
                for unit in &self.arithmetic_units {
                    unit.actor_ref().tell(val);
                }
            }
            ManagerMessage::SetScales(scale) => self.set_scales.trigger(scale),
            ManagerMessage::SetOffsets(offset) => self.set_offsets.trigger(offset),
            ManagerMessage::KillAll => {
                self.kill_all();
            }
            ManagerMessage::Quit => {
                self.kill_all();
                self.ctx.system().shutdown_async();
            }
        }
        Handled::Ok
    }
    fn receive_network(&mut self, _: NetMessage) -> Handled {
        unimplemented!()
    }
}
impl DynamicManager {
    fn kill_all(&mut self) {
        let system = self.ctx.system();
        for unit in self.arithmetic_units.drain(..) {
            system.kill(unit);
        }
    }
}
Now that we have the dynamic component part done, we can write a very simple repl. We’ll start the Kompact system in the
main thread, create the manager there, and await system termination. In a separate thread we’ll continuously read stdin
and interpret the lines as commands to send to the manager.
fn main() {
    let system = KompactConfig::default().build().expect("system");
    let manager: Arc<Component<DynamicManager>> = system.create(|| DynamicManager {
        ctx: ComponentContext::uninitialised(),
        arithmetic_units: vec![],
        set_offsets: RequiredPort::uninitialised(),
        set_scales: RequiredPort::uninitialised(),
    });
    system.start(&manager);
    let manager_ref = manager.actor_ref();
    std::thread::spawn(move || {
        for line in stdin().lock().lines() {
            let res = (|| -> Result<(), Box<dyn Error>> {
                let line = line?;
                let message = match line.trim() {
                    "spawn adder" => ManagerMessage::Spawn(Box::new(Adder::new())),
                    "spawn multiplier" => ManagerMessage::Spawn(Box::new(Multiplier::new())),
                    "spawn linear" => ManagerMessage::Spawn(Box::new(Linear::new())),
                    "kill all" => ManagerMessage::KillAll,
                    "quit" => ManagerMessage::Quit,
                    other => {
                        if let Some(offset) = other.strip_prefix("set offset ") {
                            ManagerMessage::SetOffsets(offset.parse()?)
                        } else if let Some(scale) = other.strip_prefix("set scale ") {
                            ManagerMessage::SetScales(scale.parse()?)
                        } else if let Some(val) = other.strip_prefix("compute ") {
                            ManagerMessage::Compute(val.parse()?)
                        } else {
                            Err("unknown command!")?
                        }
                    }
                };
                manager_ref.tell(message);
                Ok(())
            })();
            if let Err(e) = res {
                println!("{}", e);
            }
        }
    });
    system.await_termination();
}
When run, it looks something like this:
❯ cargo run --features=type_erasure,silent_logging --bin dynamic_components
Compiling kompact-examples v0.10.0 (/home/mrobakowski/projects/kompact/docs/examples)
Finished dev [unoptimized + debuginfo] target(s) in 2.56s
Running `/home/mrobakowski/projects/kompact/target/debug/dynamic_components`
compute 1
spawn adder
Nov 09 00:55:59.917 INFO Starting..., ctype: Adder, cid: 79bd396b-de75-4284-bc57-e0cf8193f72f, system: kompact-runtime-1, location: docs/examples/src/bin/dynamic_components.rs:39
set offset 5
compute 1
Nov 09 00:56:43.465 INFO Adder result = 6, ctype: Adder, cid: 79bd396b-de75-4284-bc57-e0cf8193f72f, system: kompact-runtime-1, location: docs/examples/src/bin/dynamic_components.rs:46
spawn multiplier
Nov 09 00:56:55.518 INFO Starting..., ctype: Multiplier, cid: 47dd4827-8d35-4351-a717-344ec7fe70fe, system: kompact-runtime-1, location: docs/examples/src/bin/dynamic_components.rs:85
set scale 2
compute 2
Nov 09 00:57:09.684 INFO Adder result = 7, ctype: Adder, cid: 79bd396b-de75-4284-bc57-e0cf8193f72f, system: kompact-runtime-1, location: docs/examples/src/bin/dynamic_components.rs:46
Nov 09 00:57:09.684 INFO Multiplier result = 4, ctype: Multiplier, cid: 47dd4827-8d35-4351-a717-344ec7fe70fe, system: kompact-runtime-1, location: docs/examples/src/bin/dynamic_components.rs:92
kill all
Nov 09 00:57:17.769 INFO Killing..., ctype: Adder, cid: 79bd396b-de75-4284-bc57-e0cf8193f72f, system: kompact-runtime-1, location: docs/examples/src/bin/dynamic_components.rs:39
Nov 09 00:57:17.769 INFO Killing..., ctype: Multiplier, cid: 47dd4827-8d35-4351-a717-344ec7fe70fe, system: kompact-runtime-1, location: docs/examples/src/bin/dynamic_components.rs:85
spawn linear
Nov 09 00:57:24.840 INFO Starting..., ctype: Linear, cid: d0f01d1a-b448-4b5f-bddd-701d764992ea, system: kompact-runtime-1, location: docs/examples/src/bin/dynamic_components.rs:135
spawn adder
Nov 09 00:57:32.136 INFO Starting..., ctype: Adder, cid: c3b9a0c5-875d-4e1d-8c70-3b414fe2a7bb, system: kompact-runtime-1, location: docs/examples/src/bin/dynamic_components.rs:39
set offset 2
set scale 3
compute 4
Nov 09 00:57:41.558 INFO Linear result = 14, ctype: Linear, cid: d0f01d1a-b448-4b5f-bddd-701d764992ea, system: kompact-runtime-1, location: docs/examples/src/bin/dynamic_components.rs:142
Nov 09 00:57:41.558 INFO Adder result = 6, ctype: Adder, cid: c3b9a0c5-875d-4e1d-8c70-3b414fe2a7bb, system: kompact-runtime-1, location: docs/examples/src/bin/dynamic_components.rs:46
quit
Nov 09 00:57:51.351 INFO Killing..., ctype: Linear, cid: d0f01d1a-b448-4b5f-bddd-701d764992ea, system: kompact-runtime-1, location: docs/examples/src/bin/dynamic_components.rs:135
Nov 09 00:57:51.352 INFO Killing..., ctype: Adder, cid: c3b9a0c5-875d-4e1d-8c70-3b414fe2a7bb, system: kompact-runtime-1, location: docs/examples/src/bin/dynamic_components.rs:39
Distributed Kompact
Each Kompact system can be configured to use a networking library, called a Dispatcher, to communicate with other remote systems. In order to send messages to remote components, a special kind of actor reference is needed: an ActorPath. It differs from an ActorRef in that it contains the information necessary to route a message to the target component, rather than a reference to a queue. The queue for this kind of message is the system-wide Dispatcher, which is responsible for figuring out how to get the message to the target indicated in the ActorPath. The NetworkDispatcher implementation that ships with Kompact automatically establishes and maintains the network links needed for any target you send to.
Actor Paths
In addition to serving as opaque handles to remote components, actor paths can also be treated as human-readable resource identifiers. Internally, they are divided into two major parts:
- a SystemPath, which identifies the Kompact system we are trying to send to, and
- the actual actor path tail, which identifies the actor within that system.
Kompact provides two flavours of actor paths:
- A Unique Path identifies exactly one concrete instance of a component.
- A Named Path can identify one or more instances of a component providing some service. The component(s) that a named path points to can be changed dynamically over time.
Examples of actor path string representations are:
- tcp://127.0.0.1:63482#c6a799f0-77ff-4548-9726-744b90556ce7 (unique)
- tcp://127.0.0.1:63482/my-service/instance1 (named)
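Both flavours share the system-path prefix; only the tail differs. The structure can be illustrated with a small hand-rolled parser (purely illustrative, not Kompact’s actual ActorPath parsing):

```rust
// Illustrative only: split the actor-path string formats shown above
// into their parts. Kompact's real parsing lives in the library itself.

#[derive(Debug, PartialEq)]
enum PathTail {
    Unique(String),     // component id after '#'
    Named(Vec<String>), // '/'-separated path segments
}

/// Returns (protocol, system address, tail) for a well-formed path string.
fn split_actor_path(s: &str) -> Option<(String, String, PathTail)> {
    // The system part has the shape `protocol://host:port`.
    let (proto, rest) = s.split_once("://")?;
    if let Some((system, id)) = rest.split_once('#') {
        // Unique path: system path and component id separated by '#'.
        Some((proto.to_string(), system.to_string(), PathTail::Unique(id.to_string())))
    } else {
        // Named path: system path followed by '/'-separated segments.
        let (system, path) = rest.split_once('/')?;
        let segments = path.split('/').map(str::to_string).collect();
        Some((proto.to_string(), system.to_string(), PathTail::Named(segments)))
    }
}

fn main() {
    println!(
        "{:?}",
        split_actor_path("tcp://127.0.0.1:63482#c6a799f0-77ff-4548-9726-744b90556ce7")
    );
    println!("{:?}", split_actor_path("tcp://127.0.0.1:63482/my-service/instance1"));
}
```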
System Paths
A system path is essentially the same as the address you would use for a server on a network: it specifies the transport protocol, the IP address, and the port. Different dispatchers are free to implement whichever set of transport protocols they wish to support. The provided NetworkDispatcher currently offers only TCP, in addition to the “fake” local protocol, which simply routes to an actor path within the same system via the dispatcher.
The SystemPath type specifies a system path alone and can be acquired, for example, via KompactSystem::system_path(). It has no function by itself, but can be used to build up a full actor path or for comparisons.
Unique Paths
The ActorPath::Unique variant identifies a concrete instance of a component by its unique identifier, the same one you would get from self.ctx.id(). A unique path is thus just a system path combined with a component id, separated in the string representation by a # character, e.g.: tcp://127.0.0.1:63482#c6a799f0-77ff-4548-9726-744b90556ce7
This means that once the target component is destroyed (due to a fault, for example), its unique actor path becomes invalid and cannot be reassigned, even if a component of the same type is started to take its place. That makes unique paths relatively inflexible. However, the dispatcher is significantly faster at resolving unique paths than named paths, so they are still recommended for performance-critical communication.
Named Paths
The ActorPath::Named
variant is more flexible than a unique path, in that it can be reassigned later. It also allows the specification of an actual path, that is a sequence of strings, which could be hierarchical like in a filesystem. This opens up the possibilities for things like broadcast or routing semantics over path subtrees, for example.
In human-readable format a named path is represented by a system path followed by a sequence of strings beginning with and separated by forward slash (/
) characters, just like a unix filesystem path would, e.g.: tcp://127.0.0.1:63482/my-service/instance1
Multiple named paths can be registered to the same component.
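As a plain-Rust illustration of the two string formats above: a unique path separates the component id with a # character, while a named path appends /-separated segments. The classify function and PathKind type below are made up for this sketch; this is not Kompact’s actual parser.

```rust
/// The two actor-path flavours, in their string representations.
/// Illustrative only; Kompact has its own ActorPath type.
#[derive(Debug, PartialEq)]
enum PathKind {
    /// e.g. "tcp://127.0.0.1:63482#c6a799f0-77ff-4548-9726-744b90556ce7"
    Unique { system: String, component_id: String },
    /// e.g. "tcp://127.0.0.1:63482/my-service/instance1"
    Named { system: String, segments: Vec<String> },
}

fn classify(path: &str) -> Option<PathKind> {
    // Skip past the "proto://" scheme prefix first.
    let after_scheme = path.find("://")? + 3;
    if let Some(hash) = path[after_scheme..].find('#') {
        // Unique path: system path, then '#', then the component id.
        let (system, id) = path.split_at(after_scheme + hash);
        Some(PathKind::Unique {
            system: system.to_string(),
            component_id: id[1..].to_string(),
        })
    } else if let Some(slash) = path[after_scheme..].find('/') {
        // Named path: system path, then '/'-separated segments.
        let (system, rest) = path.split_at(after_scheme + slash);
        Some(PathKind::Named {
            system: system.to_string(),
            segments: rest[1..].split('/').map(str::to_string).collect(),
        })
    } else {
        None // just a bare system path, no actor component
    }
}

fn main() {
    let unique = classify("tcp://127.0.0.1:63482#c6a799f0-77ff-4548-9726-744b90556ce7");
    let named = classify("tcp://127.0.0.1:63482/my-service/instance1");
    println!("{:?}\n{:?}", unique, named);
}
```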
Basic Communication
In order to use remote communication with Kompact, we need to replace the default Dispatcher
implementation with the provided NetworkDispatcher
. Custom dispatchers in general are set with the KompactConfig::system_components(...)
function, which also allows replacement of the system’s deadletter box, that is, the component that handles messages for which no recipient could be resolved. An instance of the NetworkDispatcher
should be created via its configuration struct using NetworkConfig::build()
. The NetworkConfig
also allows the listening socket for the system to be specified. The default implementation binds to 127.0.0.1
on a random free port. Attempting to bind to an occupied port, or without appropriate rights to a reserved port such as 80, will cause the creation of the KompactSystem
instance to fail.
Once a Kompact system with a network dispatcher is created, we need to acquire actor paths for each component we want to be addressable. Kompact requires components to be explicitly registered with a dispatcher and returns an appropriate actor path as the result of a successful registration. The easiest way to acquire a registered component and a unique actor path for it is to call KompactSystem::create_and_register(...)
instead of KompactSystem::create(...)
when creating it. This returns both the component and a future with the actor path, which completes once registration has succeeded. It is typically recommended not to start a component before registration completes, as messages it sends with its unique path as the source might not be answerable until then.
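The register-then-start ordering can be mimicked with a plain channel standing in for the registration future. This is illustrative only: the register function and the thread below are stand-ins, not Kompact’s API, which returns an actual future you can wait on.

```rust
use std::sync::mpsc;
use std::thread;

// A stand-in for the registration future: a "dispatcher" thread sends the
// assigned actor path once registration work is complete.
fn register() -> mpsc::Receiver<String> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // ... registration work would happen here ...
        tx.send("tcp://127.0.0.1:63482#c6a799f0".to_string()).unwrap();
    });
    rx
}

fn main() {
    let registration = register();
    // Block until registration has succeeded before "starting" the component,
    // so that any message it sends carries a resolvable source path.
    let path = registration.recv().expect("registration failed");
    println!("registered as {}", path);
}
```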
Sending messages is achieved by calling ActorPath::tell(...)
with something that is serialisable (i.e. implements the Serialisable
trait) and something that can produce a source address as well as a reference to the Dispatcher
, typically just self
from within a component.
In order to receive messages, a component must implement (some variant of) the Actor
trait, and in particular its receive_network(...)
function. Deserialisation happens lazily in Kompact; that is, components are passed the serialised data together with a serialisation identifier, in the form of a NetworkMessage
. Based on that identifier, they must then decide whether to try to deserialise the content into a message. This can be done using the NetworkMessage::try_deserialise::<TargetType, Deserialiser>()
function, or, more conveniently for multiple message types, via the match_deser!
macro. We will get back to serialisation in more detail later.
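The lazy-deserialisation pattern itself can be sketched without Kompact. Here Frame stands in for NetworkMessage, and all names and the wire encoding are illustrative only: the receiver inspects the serialisation id first and only pays the decoding cost when it recognises it.

```rust
use std::convert::TryInto;

// Hypothetical id for the Heartbeat message type, mirroring SerialisationId.
const HEARTBEAT_SER_ID: u64 = 1234;

// Stand-in for NetworkMessage: raw bytes plus a serialisation id.
struct Frame {
    ser_id: u64,
    data: Vec<u8>,
}

#[derive(Debug, PartialEq)]
struct Heartbeat {
    counter: u64,
}

impl Frame {
    /// Only decode if the id matches what we expect; otherwise leave the
    /// bytes untouched, which is the essence of lazy deserialisation.
    fn try_deserialise_heartbeat(&self) -> Option<Heartbeat> {
        if self.ser_id != HEARTBEAT_SER_ID {
            return None; // not for us
        }
        let bytes: [u8; 8] = self.data.as_slice().try_into().ok()?;
        Some(Heartbeat { counter: u64::from_be_bytes(bytes) })
    }
}

fn main() {
    let frame = Frame { ser_id: HEARTBEAT_SER_ID, data: 42u64.to_be_bytes().to_vec() };
    assert_eq!(frame.try_deserialise_heartbeat(), Some(Heartbeat { counter: 42 }));

    let other = Frame { ser_id: 99, data: vec![] };
    assert_eq!(other.try_deserialise_heartbeat(), None);
    println!("lazy deserialisation ok");
}
```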
Example
In this section we will go through a concrete example of a distributed service in Kompact. In particular, we are going to develop a distributed leader election abstraction, which internally uses heartbeats to establish a “candidate set” of live nodes, and then deterministically picks one node from the set to be the “leader”.
Local Abstraction
Locally we want to expose a port abstraction called EventualLeaderDetection
, which has no requests and only a single indication: The Trust
event indicates the selection of a new leader.
use kompact::prelude::*;

#[derive(Clone, Debug)]
pub struct Trust(pub ActorPath);

pub struct EventualLeaderDetection;
impl Port for EventualLeaderDetection {
    type Indication = Trust;
    type Request = Never;
}
In order to see some results later when we run it, we will also add a quick printer component for these Trust
events:
use kompact::prelude::*;

#[derive(ComponentDefinition, Actor)]
pub struct TrustPrinter {
    ctx: ComponentContext<Self>,
    omega_port: RequiredPort<EventualLeaderDetection>,
}

impl TrustPrinter {
    pub fn new() -> Self {
        TrustPrinter {
            ctx: ComponentContext::uninitialised(),
            omega_port: RequiredPort::uninitialised(),
        }
    }
}

ignore_lifecycle!(TrustPrinter);

impl Require<EventualLeaderDetection> for TrustPrinter {
    fn handle(&mut self, event: Trust) -> Handled {
        info!(self.log(), "Got leader: {}.", event.0);
        Handled::Ok
    }
}
Messages
We have two ways to interact with our leader election implementation: different instances will send Heartbeat
messages over the network among themselves. For simplicity, we will use Serde as the serialisation mechanism for now. For Serde serialisation to work correctly with Kompact, we have to assign a serialisation id to Heartbeat
, that is, a unique number that can be used to identify it during deserialisation. It’s very similar to a TypeId
, except that it’s guaranteed to be the same in any binary that includes the code, since the constant is hardcoded. For this example, we’ll simply use 1234
, since that isn’t taken yet. In a larger project, however, it’s important to keep track of these ids to prevent duplicates.
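Since nothing enforces uniqueness of these constants across a code base, one simple way to keep track of them is a single module listing all ids together with a self-check. The module, names, and ids below are hypothetical, for illustration only.

```rust
// Hypothetical central registry of serialisation ids for a project.
mod ser_ids {
    pub const HEARTBEAT: u64 = 1234;
    pub const TRUST: u64 = 1235;
    pub const UPDATE_PROCESSES: u64 = 1236;

    /// All ids in one place, so collisions can be detected mechanically.
    pub const ALL: &[(&str, u64)] = &[
        ("HEARTBEAT", HEARTBEAT),
        ("TRUST", TRUST),
        ("UPDATE_PROCESSES", UPDATE_PROCESSES),
    ];
}

/// Returns the first colliding pair of names, if any two ids are equal.
fn find_duplicate() -> Option<(&'static str, &'static str)> {
    for (i, &(name_a, id_a)) in ser_ids::ALL.iter().enumerate() {
        for &(name_b, id_b) in &ser_ids::ALL[i + 1..] {
            if id_a == id_b {
                return Some((name_a, name_b));
            }
        }
    }
    None
}

fn main() {
    assert_eq!(find_duplicate(), None);
    println!("all serialisation ids are unique");
}
```

Running this check in a unit test catches accidental duplicates at build time rather than as confusing deserialisation failures at runtime.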
use kompact::prelude::*;
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug, Clone, Copy)]
pub struct Heartbeat;
impl SerialisationId for Heartbeat {
    const SER_ID: SerId = 1234;
}
Additionally, we want to be able to change the set of involved processes at runtime. This is primarily because we use unique paths for now, and we simply don’t know the full set of unique paths at the time the actors they refer to are created.
#![allow(clippy::unused_unit)]
use kompact::{prelude::*, serde_serialisers::*};
use kompact_examples::trusting::*;
use std::{collections::HashSet, sync::Arc, time::Duration};

#[derive(Debug)]
struct UpdateProcesses(Arc<[ActorPath]>);

#[derive(ComponentDefinition)]
struct EventualLeaderElector {
    ctx: ComponentContext<Self>,
    omega_port: ProvidedPort<EventualLeaderDetection>,
    processes: Arc<[ActorPath]>,
    candidates: HashSet<ActorPath>,
    period: Duration,
    delta: Duration,
    timer_handle: Option<ScheduledTimer>,
    leader: Option<ActorPath>,
}

impl EventualLeaderElector {
    fn new() -> Self {
        let minimal_period = Duration::from_millis(1);
        EventualLeaderElector {
            ctx: ComponentContext::uninitialised(),
            omega_port: ProvidedPort::uninitialised(),
            processes: Vec::new().into_boxed_slice().into(),
            candidates: HashSet::new(),
            period: minimal_period,
            delta: minimal_period,
            timer_handle: None,
            leader: None,
        }
    }

    fn select_leader(&mut self) -> Option<ActorPath> {
        let mut candidates: Vec<ActorPath> = self.candidates.drain().collect();
        candidates.sort_unstable();
        candidates.reverse(); // pick smallest instead of largest
        candidates.pop()
    }

    fn handle_timeout(&mut self, timeout_id: ScheduledTimer) -> Handled {
        match self.timer_handle.take() {
            Some(timeout) if timeout == timeout_id => {
                let new_leader = self.select_leader();
                if new_leader != self.leader {
                    self.period += self.delta;
                    self.leader = new_leader;
                    if let Some(ref leader) = self.leader {
                        self.omega_port.trigger(Trust(leader.clone()));
                    }
                    self.cancel_timer(timeout);
                    let new_timer =
                        self.schedule_periodic(self.period, self.period, Self::handle_timeout);
                    self.timer_handle = Some(new_timer);
                } else {
                    // just put it back
                    self.timer_handle = Some(timeout);
                }
                self.send_heartbeats();
                Handled::Ok
            }
            Some(_) => Handled::Ok, // just ignore outdated timeouts
            None => {
                warn!(self.log(), "Got unexpected timeout: {:?}", timeout_id);
                Handled::Ok
            } // can happen during restart or teardown
        }
    }

    fn send_heartbeats(&self) -> () {
        self.processes.iter().for_each(|process| {
            process.tell((Heartbeat, Serde), self);
        });
    }
}

impl ComponentLifecycle for EventualLeaderElector {
    fn on_start(&mut self) -> Handled {
        self.period = self.ctx.config()["omega"]["initial-period"]
            .as_duration()
            .expect("initial period");
        self.delta = self.ctx.config()["omega"]["delta"]
            .as_duration()
            .expect("delta");
        let timeout = self.schedule_periodic(self.period, self.period, Self::handle_timeout);
        self.timer_handle = Some(timeout);
        Handled::Ok
    }

    fn on_stop(&mut self) -> Handled {
        if let Some(timeout) = self.timer_handle.take() {
            self.cancel_timer(timeout);
        }
        Handled::Ok
    }

    fn on_kill(&mut self) -> Handled {
        self.on_stop()
    }
}

// Doesn't have any requests
ignore_requests!(EventualLeaderDetection, EventualLeaderElector);

impl Actor for EventualLeaderElector {
    type Message = UpdateProcesses;

    fn receive_local(&mut self, msg: Self::Message) -> Handled {
        info!(
            self.log(),
            "Received new process set with {} processes",
            msg.0.len()
        );
        let UpdateProcesses(processes) = msg;
        self.processes = processes;
        Handled::Ok
    }

    fn receive_network(&mut self, msg: NetMessage) -> Handled {
        let sender = msg.sender;
        match msg.data.try_deserialise::<Heartbeat, Serde>() {
            Ok(_heartbeat) => {
                self.candidates.insert(sender);
            }
            Err(e) => warn!(self.log(), "Invalid data: {:?}", e),
        }
        Handled::Ok
    }
}

pub fn main() {
    let args: Vec<String> = std::env::args().collect();
    assert_eq!(
        2,
        args.len(),
        "Invalid arguments! Must give number of systems."
    );
    let num_systems: usize = args[1].parse().expect("number");
    run_systems(num_systems);
}

pub fn run_systems(num_systems: usize) {
    let mut systems: Vec<KompactSystem> = {
        let system = || {
            let mut cfg = KompactConfig::default();
            cfg.load_config_file("./application.conf");
            cfg.system_components(DeadletterBox::new, NetworkConfig::default().build());
            cfg.build().expect("KompactSystem")
        };
        let mut data = Vec::with_capacity(num_systems);
        for _i in 0..num_systems {
            let sys = system();
            data.push(sys);
        }
        data
    };
    let (processes, actors): (Vec<ActorPath>, Vec<ActorRef<UpdateProcesses>>) = systems
        .iter()
        .map(|sys| {
            let printer = sys.create(TrustPrinter::new);
            let (detector, registration) = sys.create_and_register(EventualLeaderElector::new);
            biconnect_components::<EventualLeaderDetection, _, _>(&detector, &printer)
                .expect("connection");
            let path =
                registration.wait_expect(Duration::from_millis(1000), "actor never registered");
            sys.start(&printer);
            sys.start(&detector);
            (path, detector.actor_ref())
        })
        .unzip();
    let shared_processes: Arc<[ActorPath]> = processes.into_boxed_slice().into();
    actors.iter().for_each(|actor| {
        let update = UpdateProcesses(shared_processes.clone());
        actor.tell(update);
    });
    // let them settle
    std::thread::sleep(Duration::from_millis(1000));
    // shut down systems one by one
    for sys in systems.drain(..) {
        std::thread::sleep(Duration::from_millis(1000));
        sys.shutdown().expect("shutdown");
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_omega() {
        run_systems(3);
    }
}
State
There is a bit of state we need to keep track of in our EventualLeaderElector
component:
- First we must provide the EventualLeaderDetection
port, of course. - We also need to track the current process set, which we will handle as a boxed slice shared behind an Arc
, since all components should have the same set anyway. Of course, if this were truly distributed, and not just multiple systems in a single process, we would probably only run a single instance per process, and a plain boxed slice (or just a normal vector) would probably be more sensible. - Further, we must track the current candidate set, for which we will use a standard HashSet
to avoid adding duplicates. - We also need to know how often to check the candidate set and update our leader. Since this time needs to adjust dynamically to network conditions, we keep two values in our state: the current period
and a delta
value, which we use whenever we need to adjust the period. The delta
is technically immutable and could be a constant, but we want to make both values configurable, so we need to store the loaded values somewhere. - Finally, we need to keep track of the current timer handle and the current leader, if any.
#[derive(ComponentDefinition)]
struct EventualLeaderElector {
    ctx: ComponentContext<Self>,
    omega_port: ProvidedPort<EventualLeaderDetection>,
    processes: Arc<[ActorPath]>,
    candidates: HashSet<ActorPath>,
    period: Duration,
    delta: Duration,
    timer_handle: Option<ScheduledTimer>,
    leader: Option<ActorPath>,
}

impl EventualLeaderElector {
    fn new() -> Self {
        let minimal_period = Duration::from_millis(1);
        EventualLeaderElector {
            ctx: ComponentContext::uninitialised(),
            omega_port: ProvidedPort::uninitialised(),
            processes: Vec::new().into_boxed_slice().into(),
            candidates: HashSet::new(),
            period: minimal_period,
            delta: minimal_period,
            timer_handle: None,
            leader: None,
        }
    }
    // ...
}
In order to load our configuration values from a file, we need to put something like the following into an application.conf
file in the current working directory:
omega {
initial-period = 10 ms
delta = 1 ms
}
And then we can load it and start the initial timeout in the on_start
handler as before:
impl ComponentLifecycle for EventualLeaderElector {
    fn on_start(&mut self) -> Handled {
        self.period = self.ctx.config()["omega"]["initial-period"]
            .as_duration()
            .expect("initial period");
        self.delta = self.ctx.config()["omega"]["delta"]
            .as_duration()
            .expect("delta");
        let timeout = self.schedule_periodic(self.period, self.period, Self::handle_timeout);
        self.timer_handle = Some(timeout);
        Handled::Ok
    }

    fn on_stop(&mut self) -> Handled {
        if let Some(timeout) = self.timer_handle.take() {
            self.cancel_timer(timeout);
        }
        Handled::Ok
    }

    fn on_kill(&mut self) -> Handled {
        self.on_stop()
    }
}
Leader Election Algorithm
This part isn’t very specific to networking; the election algorithm works as follows: every time the timeout fires, we drain the current candidate set into a temporary vector. We then sort the vector and deterministically pick the smallest element, if any, as the potential new leader. If that new leader is not the same as the current one, then either our current leader has failed, or the timeout period is wrong. For simplicity we will assume both are true: we replace the leader and update the scheduled timeout by adding the delta
to the current period
. We then announce our new leader choice via a trigger on the EventualLeaderDetection
port. Whether or not we replaced the leader, we always send heartbeats to everyone in the process set.
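The selection step can be sketched on plain strings instead of ActorPaths to see why it is deterministic: every node that has observed the same candidate set sorts it the same way and therefore picks the same leader. This is a standalone illustration of the technique, not the component code itself.

```rust
use std::collections::HashSet;

/// Drain the candidate set, sort, and deterministically pick the smallest
/// element. Mirrors the select_leader logic, but over plain strings.
fn select_leader(candidates: &mut HashSet<String>) -> Option<String> {
    let mut sorted: Vec<String> = candidates.drain().collect();
    sorted.sort_unstable();
    sorted.reverse(); // descending order, so pop() yields the smallest
    sorted.pop()
}

fn main() {
    let mut candidates: HashSet<String> =
        ["tcp://127.0.0.1:3000#b", "tcp://127.0.0.1:3000#a", "tcp://127.0.0.1:3000#c"]
            .iter()
            .map(|s| s.to_string())
            .collect();
    let leader = select_leader(&mut candidates);
    assert_eq!(leader.as_deref(), Some("tcp://127.0.0.1:3000#a"));
    // The set was drained, so a fresh round of heartbeats must refill it
    // before the next timeout fires.
    assert!(candidates.is_empty());
    println!("leader: {:?}", leader);
}
```

Draining the set each round is what makes the abstraction eventual: a crashed node stops sending heartbeats, drops out of the next candidate set, and a new smallest element wins.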
impl EventualLeaderElector {
    fn select_leader(&mut self) -> Option<ActorPath> {
        let mut candidates: Vec<ActorPath> = self.candidates.drain().collect();
        candidates.sort_unstable();
        candidates.reverse(); // pick smallest instead of largest
        candidates.pop()
    }

    fn handle_timeout(&mut self, timeout_id: ScheduledTimer) -> Handled {
        match self.timer_handle.take() {
            Some(timeout) if timeout == timeout_id => {
                let new_leader = self.select_leader();
                if new_leader != self.leader {
                    self.period += self.delta;
                    self.leader = new_leader;
                    if let Some(ref leader) = self.leader {
                        self.omega_port.trigger(Trust(leader.clone()));
                    }
                    self.cancel_timer(timeout);
                    let new_timer =
                        self.schedule_periodic(self.period, self.period, Self::handle_timeout);
                    self.timer_handle = Some(new_timer);
                } else {
                    // just put it back
                    self.timer_handle = Some(timeout);
                }
                self.send_heartbeats();
                Handled::Ok
            }
            Some(_) => Handled::Ok, // just ignore outdated timeouts
            None => {
                warn!(self.log(), "Got unexpected timeout: {:?}", timeout_id);
                Handled::Ok
            } // can happen during restart or teardown
        }
    }
    // ...
}
Sending Network Messages
The only place in this example where we are sending remote messages is when we are sending heartbeats:
impl EventualLeaderElector {
    // ...
    fn send_heartbeats(&self) -> () {
        self.processes.iter().for_each(|process| {
            process.tell((Heartbeat, Serde), self);
        });
    }
}
for sys in systems.drain(..) {
std::thread::sleep(Duration::from_millis(1000));
sys.shutdown().expect("shutdown");
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_omega() {
run_systems(3);
}
}
We invoke the ActorPath::tell(...) method with a tuple of the actual Heartbeat together with the serialiser we want to use, which is kompact::serde_serialisers::Serde. We also pass a reference to self, which automatically inserts our unique actor path into the message as the source and sends everything to our system's dispatcher. The dispatcher then takes care of serialisation, as well as network channel creation and selection, for us.
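The tuple form of tell can be pictured with a small std-only sketch. Note that the trait and type names below are invented for illustration and are not Kompact's real Serialiser API; the idea is simply that pairing a value with a serialiser lets the send site choose the wire format:

```rust
// Illustrative sketch only: a message paired with a chosen serialiser.
// These names are made up; Kompact's actual Serialiser trait differs.
struct Heartbeat;

trait Serialiser<M> {
    fn serialise(&self, msg: &M) -> Vec<u8>;
}

// A stand-in serialiser that writes a fixed tag byte for Heartbeat.
struct TagSerialiser;

impl Serialiser<Heartbeat> for TagSerialiser {
    fn serialise(&self, _msg: &Heartbeat) -> Vec<u8> {
        vec![0x01]
    }
}

// The "tell" site takes the (message, serialiser) pair and produces
// the bytes a dispatcher would put on the wire.
fn tell<M, S: Serialiser<M>>((msg, ser): (M, S)) -> Vec<u8> {
    ser.serialise(&msg)
}
```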
Handling Network Messages
In order to handle (network) messages, we must implement the Actor trait as described previously. The local message type we are handling is UpdateProcesses, and whenever we receive it, we simply replace our current processes with the new value.
For network messages, on the other hand, we generally don't know what we are being given, so we get a NetMessage. This is basically a wrapper around a sender ActorPath, a serialisation id, and a byte buffer with the serialised data. In our example, we know we only want to handle messages that deserialise to Heartbeat. We also know we need to use Serde as the deserialiser, since that's what we used for serialisation in the first place. Thus, we use NetMessage::try_deserialise::<Heartbeat, Serde>() to attempt to deserialise a Heartbeat from the buffer with the Serde deserialiser. This call automatically checks whether the serialisation id matches Heartbeat::SER_ID and, if so, attempts the deserialisation using Serde. If that fails, we get a Result::Err instead. If it succeeds, we don't actually care about the Heartbeat itself; we simply insert the sender from the NetMessage into self.candidates.
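To make the id check concrete, here is a small std-only sketch of the pattern. This is not Kompact's actual NetMessage API; the `SerId` value and all type names are made up for illustration:

```rust
// Std-only sketch of id-checked deserialisation. All names here are
// illustrative; Kompact's real NetMessage/try_deserialise differ.
type SerId = u64;

// Hypothetical id, playing the role of Heartbeat::SER_ID.
const HEARTBEAT_SER_ID: SerId = 1234;

struct RawMessage {
    ser_id: SerId,
    payload: Vec<u8>,
}

// Only attempt to parse the payload when the id matches; otherwise
// report an error, mirroring the Result::Err case described above.
fn try_deserialise_heartbeat(msg: &RawMessage) -> Result<(), String> {
    if msg.ser_id == HEARTBEAT_SER_ID {
        // A real implementation would decode `payload` here.
        let _ = &msg.payload;
        Ok(())
    } else {
        Err(format!("unexpected serialisation id: {}", msg.ser_id))
    }
}
```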
impl Actor for EventualLeaderElector {
type Message = UpdateProcesses;
fn receive_local(&mut self, msg: Self::Message) -> Handled {
info!(
self.log(),
"Received new process set with {} processes",
msg.0.len()
);
let UpdateProcesses(processes) = msg;
self.processes = processes;
Handled::Ok
}
fn receive_network(&mut self, msg: NetMessage) -> Handled {
let sender = msg.sender;
match msg.data.try_deserialise::<Heartbeat, Serde>() {
Ok(_heartbeat) => {
self.candidates.insert(sender);
}
Err(e) => warn!(self.log(), "Invalid data: {:?}", e),
}
Handled::Ok
}
}
System Setup
In this example, we need to set up multiple systems in the same process for the first time, since we want them to communicate via the network instead of directly, in preparation for actually running distributed. We are going to take the number of systems (and thus leader election components) as a command line argument. We start each system with the same configuration file and give each a NetworkDispatcher with default settings. This way we don't have to manually pick a bunch of ports and hope they happen to be free. On the other hand, that means, of course, that we can't predict what the system addresses are going to look like. So in order to give everyone a set of processes to talk to, we need to wait until all systems are set up and all the leader elector components are started and registered, collect all the registrations into a vector, and then send an update with the complete set to every component.
At this point the system is running just fine, and we give it some time to settle on timeouts and elect a leader. We will see the result in the logging messages eventually. Now, to see the leader election responding to actual changes, we are going to kill one system at a time, always giving it a second to settle. This way we can watch the electors on the remaining systems update the trust values one by one.
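The collect-then-distribute step can be sketched with plain std types, with strings standing in for actor paths (the `distribute` function name is made up for illustration): collect one address per system, freeze the list into a shared immutable slice, and hand every component a cheap clone of the same allocation:

```rust
use std::sync::Arc;

// Illustrative sketch of sharing one immutable process list, as
// run_systems below does with Arc<[ActorPath]>.
fn distribute(addresses: Vec<String>, receivers: usize) -> Vec<Arc<[String]>> {
    // Freeze the collected addresses into a shared, immutable slice.
    let shared: Arc<[String]> = addresses.into_boxed_slice().into();
    // Every receiver gets a clone of the Arc, not of the data.
    (0..receivers).map(|_| shared.clone()).collect()
}
```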
pub fn main() {
let args: Vec<String> = std::env::args().collect();
assert_eq!(
2,
args.len(),
"Invalid arguments! Must give number of systems."
);
let num_systems: usize = args[1].parse().expect("number");
run_systems(num_systems);
}
pub fn run_systems(num_systems: usize) {
let mut systems: Vec<KompactSystem> = {
let system = || {
let mut cfg = KompactConfig::default();
cfg.load_config_file("./application.conf");
cfg.system_components(DeadletterBox::new, NetworkConfig::default().build());
cfg.build().expect("KompactSystem")
};
let mut data = Vec::with_capacity(num_systems);
for _i in 0..num_systems {
let sys = system();
data.push(sys);
}
data
};
let (processes, actors): (Vec<ActorPath>, Vec<ActorRef<UpdateProcesses>>) = systems
.iter()
.map(|sys| {
let printer = sys.create(TrustPrinter::new);
let (detector, registration) = sys.create_and_register(EventualLeaderElector::new);
biconnect_components::<EventualLeaderDetection, _, _>(&detector, &printer)
.expect("connection");
let path =
registration.wait_expect(Duration::from_millis(1000), "actor never registered");
sys.start(&printer);
sys.start(&detector);
(path, detector.actor_ref())
})
.unzip();
let shared_processes: Arc<[ActorPath]> = processes.into_boxed_slice().into();
actors.iter().for_each(|actor| {
let update = UpdateProcesses(shared_processes.clone());
actor.tell(update);
});
// let them settle
std::thread::sleep(Duration::from_millis(1000));
// shut down systems one by one
for sys in systems.drain(..) {
std::thread::sleep(Duration::from_millis(1000));
sys.shutdown().expect("shutdown");
}
}
Note: As before, if you have checked out the examples folder you can run the concrete binary with:
cargo run --release --bin leader_election 3
Note that running in debug mode will produce a lot of output now as it will trace all the network messages.
Named Services
In the last section we discussed how to build a leader election mechanism with a bunch of networked Kompact systems. But we couldn’t actually run it in deployment, because we couldn’t really figure out how to collect a list of actor paths for all the processes and then distribute that list to every process. This happens because we can only know the actor path of an actor after we have created it. We could have manually distributed the actor paths, by writing the assigned path to a file, then collecting it externally, and finally parsing paths from said collected file and passing them to each elector component. But that wouldn’t be a very nice system now, would it?
What we are missing here is a way to predict an ActorPath
for a particular actor on a particular system. If we can know even a single path on a single host in the distributed actor system, we can have everyone send a message there, which will give that special component the unique paths for everyone that sends there, which it can in turn distribute back to everyone who has “checked in” in this manner. This process is often referred to as “bootstrapping”. In this section we are going to use named actor paths, which we can predict given some information about the system, to build a bootstrapping “service” for our leader election group.
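The check-in bookkeeping described here can be sketched with std types alone. Plain strings stand in for actor paths, and `Bootstrap`/`check_in` are made-up names, not part of the example that follows: a new sender triggers a broadcast of the full set, while a known one does not:

```rust
use std::collections::HashSet;

// Illustrative bookkeeping for the bootstrapping pattern above; not
// Kompact code, just the set logic a bootstrap service implements.
struct Bootstrap {
    processes: HashSet<String>,
}

impl Bootstrap {
    fn new() -> Self {
        Bootstrap { processes: HashSet::new() }
    }

    // Returns Some(full set) only when the sender is new, i.e. when an
    // update must be broadcast to everyone who has checked in so far.
    fn check_in(&mut self, sender: &str) -> Option<Vec<String>> {
        if self.processes.insert(sender.to_string()) {
            let mut all: Vec<String> = self.processes.iter().cloned().collect();
            all.sort(); // deterministic order for the broadcast
            Some(all)
        } else {
            None
        }
    }
}
```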
Messages
For the bootstrapping communication we require a new CheckIn
message. It doesn’t actually need any content, since we really only care about the ActorPath
of the sender. We will reply to this message with our UpdateProcesses
message from the previous section. However, since that has to go over the network now, we need to make it serialisable. We also aren’t locally sharing the process set anymore, so we turn the Arc<[ActorPath]>
into a simple Vec<ActorPath>
.
#![allow(clippy::unused_unit)]
use kompact::{prelude::*, serde_serialisers::*};
use kompact_examples::trusting::*;
use serde::{Deserialize, Serialize};
use std::{
collections::HashSet,
net::{IpAddr, Ipv4Addr, SocketAddr},
time::Duration,
};
#[derive(Serialize, Deserialize, Debug, Clone, Copy)]
struct CheckIn;
impl SerialisationId for CheckIn {
const SER_ID: SerId = 2345;
}
#[derive(Serialize, Deserialize, Debug, Clone)]
struct UpdateProcesses(Vec<ActorPath>);
impl SerialisationId for UpdateProcesses {
const SER_ID: SerId = 3456;
}
#[derive(ComponentDefinition)]
struct BootstrapServer {
ctx: ComponentContext<Self>,
processes: HashSet<ActorPath>,
}
impl BootstrapServer {
fn new() -> Self {
BootstrapServer {
ctx: ComponentContext::uninitialised(),
processes: HashSet::new(),
}
}
fn broadcast_processes(&self) -> () {
let procs: Vec<ActorPath> = self.processes.iter().cloned().collect();
let msg = UpdateProcesses(procs);
self.processes.iter().for_each(|process| {
process.tell((msg.clone(), Serde), self);
});
}
}
ignore_lifecycle!(BootstrapServer);
impl NetworkActor for BootstrapServer {
type Deserialiser = Serde;
type Message = CheckIn;
fn receive(&mut self, source: Option<ActorPath>, _msg: Self::Message) -> Handled {
if let Some(process) = source {
if self.processes.insert(process) {
self.broadcast_processes();
}
}
Handled::Ok
}
}
#[derive(ComponentDefinition)]
struct EventualLeaderElector {
ctx: ComponentContext<Self>,
omega_port: ProvidedPort<EventualLeaderDetection>,
bootstrap_server: ActorPath,
processes: Box<[ActorPath]>,
candidates: HashSet<ActorPath>,
period: Duration,
delta: Duration,
timer_handle: Option<ScheduledTimer>,
leader: Option<ActorPath>,
}
impl EventualLeaderElector {
fn new(bootstrap_server: ActorPath) -> Self {
let minimal_period = Duration::from_millis(1);
EventualLeaderElector {
ctx: ComponentContext::uninitialised(),
omega_port: ProvidedPort::uninitialised(),
bootstrap_server,
processes: Vec::new().into_boxed_slice(),
candidates: HashSet::new(),
period: minimal_period,
delta: minimal_period,
timer_handle: None,
leader: None,
}
}
fn select_leader(&mut self) -> Option<ActorPath> {
let mut candidates: Vec<ActorPath> = self.candidates.drain().collect();
candidates.sort_unstable();
candidates.reverse(); // pick smallest instead of largest
candidates.pop()
}
fn handle_timeout(&mut self, timeout_id: ScheduledTimer) -> Handled {
match self.timer_handle.take() {
Some(timeout) if timeout == timeout_id => {
let new_leader = self.select_leader();
if new_leader != self.leader {
self.period += self.delta;
self.leader = new_leader;
if let Some(ref leader) = self.leader {
self.omega_port.trigger(Trust(leader.clone()));
}
self.cancel_timer(timeout);
let new_timer =
self.schedule_periodic(self.period, self.period, Self::handle_timeout);
self.timer_handle = Some(new_timer);
} else {
// just put it back
self.timer_handle = Some(timeout);
}
self.send_heartbeats();
Handled::Ok
}
Some(_) => Handled::Ok, // just ignore outdated timeouts
None => {
warn!(self.log(), "Got unexpected timeout: {:?}", timeout_id);
Handled::Ok
} // can happen during restart or teardown
}
}
fn send_heartbeats(&self) {
self.processes.iter().for_each(|process| {
process.tell((Heartbeat, Serde), self);
});
}
}
impl ComponentLifecycle for EventualLeaderElector {
fn on_start(&mut self) -> Handled {
self.bootstrap_server.tell((CheckIn, Serde), self);
self.period = self.ctx.config()["omega"]["initial-period"]
.as_duration()
.expect("initial period");
self.delta = self.ctx.config()["omega"]["delta"]
.as_duration()
.expect("delta");
let timeout = self.schedule_periodic(self.period, self.period, Self::handle_timeout);
self.timer_handle = Some(timeout);
Handled::Ok
}
fn on_stop(&mut self) -> Handled {
if let Some(timeout) = self.timer_handle.take() {
self.cancel_timer(timeout);
}
Handled::Ok
}
fn on_kill(&mut self) -> Handled {
self.on_stop()
}
}
// Doesn't have any requests
ignore_requests!(EventualLeaderDetection, EventualLeaderElector);
impl Actor for EventualLeaderElector {
type Message = Never;
fn receive_local(&mut self, _msg: Self::Message) -> Handled {
unreachable!();
}
fn receive_network(&mut self, msg: NetMessage) -> Handled {
let sender = msg.sender;
match_deser! {
(msg.data) {
msg(_heartbeat): Heartbeat [using Serde] => {
self.candidates.insert(sender);
},
msg(UpdateProcesses(processes)): UpdateProcesses [using Serde] => {
info!(
self.log(),
"Received new process set with {} processes",
processes.len()
);
self.processes = processes.into_boxed_slice();
},
}
};
Handled::Ok
}
}
pub fn main() {
let args: Vec<String> = std::env::args().collect();
match args.len() {
2 => {
let bootstrap_port: u16 = args[1].parse().expect("port number");
let bootstrap_socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), bootstrap_port);
let system = run_server(bootstrap_socket);
system.await_termination(); // gotta quit it from command line
}
3 => {
let bootstrap_port: u16 = args[1].parse().expect("port number");
let bootstrap_socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), bootstrap_port);
let client_port: u16 = args[2].parse().expect("port number");
let client_socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), client_port);
let system = run_client(bootstrap_socket, client_socket);
system.await_termination(); // gotta quit it from command line
}
x => panic!("Expected either 1 argument (the port for the bootstrap server to bind on) or 2 arguments (bootstrap server and client port), but got {} instead!", x-1),
}
}
const BOOTSTRAP_PATH: &str = "bootstrap";
pub fn run_server(socket: SocketAddr) -> KompactSystem {
let mut cfg = KompactConfig::default();
cfg.load_config_file("./application.conf");
cfg.system_components(DeadletterBox::new, NetworkConfig::new(socket).build());
let system = cfg.build().expect("KompactSystem");
let (bootstrap, bootstrap_registration) = system.create_and_register(BootstrapServer::new);
let bootstrap_service_registration = system.register_by_alias(&bootstrap, BOOTSTRAP_PATH);
let _bootstrap_unique = bootstrap_registration
.wait_expect(Duration::from_millis(1000), "bootstrap never registered");
let bootstrap_service = bootstrap_service_registration
.wait_expect(Duration::from_millis(1000), "bootstrap never registered");
system.start(&bootstrap);
let printer = system.create(TrustPrinter::new);
let (detector, registration) =
system.create_and_register(|| EventualLeaderElector::new(bootstrap_service));
biconnect_components::<EventualLeaderDetection, _, _>(&detector, &printer).expect("connection");
let _path = registration.wait_expect(Duration::from_millis(1000), "detector never registered");
system.start(&printer);
system.start(&detector);
system
}
pub fn run_client(bootstrap_socket: SocketAddr, client_socket: SocketAddr) -> KompactSystem {
let mut cfg = KompactConfig::default();
cfg.load_config_file("./application.conf");
cfg.system_components(
DeadletterBox::new,
NetworkConfig::new(client_socket).build(),
);
let system = cfg.build().expect("KompactSystem");
let bootstrap_service: ActorPath = NamedPath::with_socket(
Transport::Tcp,
bootstrap_socket,
vec![BOOTSTRAP_PATH.into()],
)
.into();
let printer = system.create(TrustPrinter::new);
let (detector, registration) =
system.create_and_register(|| EventualLeaderElector::new(bootstrap_service));
biconnect_components::<EventualLeaderDetection, _, _>(&detector, &printer).expect("connection");
let _path = registration.wait_expect(Duration::from_millis(1000), "detector never registered");
system.start(&printer);
system.start(&detector);
system
}
#[cfg(test)]
mod tests {
use super::*;
const SERVER_SOCKET: &str = "127.0.0.1:12345";
const CLIENT_SOCKET: &str = "127.0.0.1:0";
#[test]
fn test_bootstrapping() {
let server_socket: SocketAddr = SERVER_SOCKET.parse().unwrap();
let server_system = run_server(server_socket);
let client_socket: SocketAddr = CLIENT_SOCKET.parse().unwrap();
let mut clients_systems: Vec<KompactSystem> = (0..3)
.map(|_i| run_client(server_socket, client_socket))
.collect();
// let them settle
std::thread::sleep(Duration::from_millis(1000));
// shut down systems one by one
for sys in clients_systems.drain(..) {
std::thread::sleep(Duration::from_millis(1000));
sys.shutdown().expect("shutdown");
}
std::thread::sleep(Duration::from_millis(1000));
server_system.shutdown().expect("shutdown");
}
}
State
Our bootstrap server’s state is almost trivial. All it needs to keep track of is the current process set.
#[derive(ComponentDefinition)]
struct BootstrapServer {
ctx: ComponentContext<Self>,
processes: HashSet<ActorPath>,
}
impl BootstrapServer {
fn new() -> Self {
BootstrapServer {
ctx: ComponentContext::uninitialised(),
processes: HashSet::new(),
}
}
fn broadcast_processess(&self) -> () {
let procs: Vec<ActorPath> = self.processes.iter().cloned().collect();
let msg = UpdateProcesses(procs);
self.processes.iter().for_each(|process| {
process.tell((msg.clone(), Serde), self);
});
}
}
ignore_lifecycle!(BootstrapServer);
impl NetworkActor for BootstrapServer {
type Deserialiser = Serde;
type Message = CheckIn;
fn receive(&mut self, source: Option<ActorPath>, _msg: Self::Message) -> Handled {
if let Some(process) = source {
if self.processes.insert(process) {
self.broadcast_processess();
}
}
Handled::Ok
}
}
#[derive(ComponentDefinition)]
struct EventualLeaderElector {
ctx: ComponentContext<Self>,
omega_port: ProvidedPort<EventualLeaderDetection>,
bootstrap_server: ActorPath,
processes: Box<[ActorPath]>,
candidates: HashSet<ActorPath>,
period: Duration,
delta: Duration,
timer_handle: Option<ScheduledTimer>,
leader: Option<ActorPath>,
}
impl EventualLeaderElector {
fn new(bootstrap_server: ActorPath) -> Self {
let minimal_period = Duration::from_millis(1);
EventualLeaderElector {
ctx: ComponentContext::uninitialised(),
omega_port: ProvidedPort::uninitialised(),
bootstrap_server,
processes: Vec::new().into_boxed_slice(),
candidates: HashSet::new(),
period: minimal_period,
delta: minimal_period,
timer_handle: None,
leader: None,
}
}
fn select_leader(&mut self) -> Option<ActorPath> {
let mut candidates: Vec<ActorPath> = self.candidates.drain().collect();
candidates.sort_unstable();
candidates.reverse(); // pick smallest instead of largest
candidates.pop()
}
fn handle_timeout(&mut self, timeout_id: ScheduledTimer) -> Handled {
match self.timer_handle.take() {
Some(timeout) if timeout == timeout_id => {
let new_leader = self.select_leader();
if new_leader != self.leader {
self.period += self.delta;
self.leader = new_leader;
if let Some(ref leader) = self.leader {
self.omega_port.trigger(Trust(leader.clone()));
}
self.cancel_timer(timeout);
let new_timer =
self.schedule_periodic(self.period, self.period, Self::handle_timeout);
self.timer_handle = Some(new_timer);
} else {
// just put it back
self.timer_handle = Some(timeout);
}
self.send_heartbeats();
Handled::Ok
}
Some(_) => Handled::Ok, // just ignore outdated timeouts
None => {
warn!(self.log(), "Got unexpected timeout: {:?}", timeout_id);
Handled::Ok
} // can happen during restart or teardown
}
}
fn send_heartbeats(&self) {
self.processes.iter().for_each(|process| {
process.tell((Heartbeat, Serde), self);
});
}
}
impl ComponentLifecycle for EventualLeaderElector {
fn on_start(&mut self) -> Handled {
self.bootstrap_server.tell((CheckIn, Serde), self);
self.period = self.ctx.config()["omega"]["initial-period"]
.as_duration()
.expect("initial period");
self.delta = self.ctx.config()["omega"]["delta"]
.as_duration()
.expect("delta");
let timeout = self.schedule_periodic(self.period, self.period, Self::handle_timeout);
self.timer_handle = Some(timeout);
Handled::Ok
}
fn on_stop(&mut self) -> Handled {
if let Some(timeout) = self.timer_handle.take() {
self.cancel_timer(timeout);
}
Handled::Ok
}
fn on_kill(&mut self) -> Handled {
self.on_stop()
}
}
// Doesn't have any requests
ignore_requests!(EventualLeaderDetection, EventualLeaderElector);
impl Actor for EventualLeaderElector {
type Message = Never;
fn receive_local(&mut self, _msg: Self::Message) -> Handled {
unreachable!();
}
fn receive_network(&mut self, msg: NetMessage) -> Handled {
let sender = msg.sender;
match_deser! {
(msg.data) {
msg(_heartbeat): Heartbeat [using Serde] => {
self.candidates.insert(sender);
},
msg(UpdateProcesses(processes)): UpdateProcesses [using Serde] => {
info!(
self.log(),
"Received new process set with {} processes",
processes.len()
);
self.processes = processes.into_boxed_slice();
},
}
};
Handled::Ok
}
}
pub fn main() {
let args: Vec<String> = std::env::args().collect();
match args.len() {
2 => {
let bootstrap_port: u16 = args[1].parse().expect("port number");
let bootstrap_socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), bootstrap_port);
let system = run_server(bootstrap_socket);
system.await_termination(); // gotta quit it from command line
}
3 => {
let bootstrap_port: u16 = args[1].parse().expect("port number");
let bootstrap_socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), bootstrap_port);
let client_port: u16 = args[2].parse().expect("port number");
let client_socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), client_port);
let system = run_client(bootstrap_socket, client_socket);
system.await_termination(); // gotta quit it from command line
}
x => panic!("Expected either 1 argument (the port for the bootstrap server to bind on) or 2 arguments (boostrap server and client port), but got {} instead!", x-1),
}
}
const BOOTSTRAP_PATH: &str = "bootstrap";
pub fn run_server(socket: SocketAddr) -> KompactSystem {
let mut cfg = KompactConfig::default();
cfg.load_config_file("./application.conf");
cfg.system_components(DeadletterBox::new, NetworkConfig::new(socket).build());
let system = cfg.build().expect("KompactSystem");
let (bootstrap, bootstrap_registration) = system.create_and_register(BootstrapServer::new);
let bootstrap_service_registration = system.register_by_alias(&bootstrap, BOOTSTRAP_PATH);
let _bootstrap_unique = bootstrap_registration
.wait_expect(Duration::from_millis(1000), "bootstrap never registered");
let bootstrap_service = bootstrap_service_registration
.wait_expect(Duration::from_millis(1000), "bootstrap never registered");
system.start(&bootstrap);
let printer = system.create(TrustPrinter::new);
let (detector, registration) =
system.create_and_register(|| EventualLeaderElector::new(bootstrap_service));
biconnect_components::<EventualLeaderDetection, _, _>(&detector, &printer).expect("connection");
let _path = registration.wait_expect(Duration::from_millis(1000), "detector never registered");
system.start(&printer);
system.start(&detector);
system
}
pub fn run_client(bootstrap_socket: SocketAddr, client_socket: SocketAddr) -> KompactSystem {
let mut cfg = KompactConfig::default();
cfg.load_config_file("./application.conf");
cfg.system_components(
DeadletterBox::new,
NetworkConfig::new(client_socket).build(),
);
let system = cfg.build().expect("KompactSystem");
let bootstrap_service: ActorPath = NamedPath::with_socket(
Transport::Tcp,
bootstrap_socket,
vec![BOOTSTRAP_PATH.into()],
)
.into();
let printer = system.create(TrustPrinter::new);
let (detector, registration) =
system.create_and_register(|| EventualLeaderElector::new(bootstrap_service));
biconnect_components::<EventualLeaderDetection, _, _>(&detector, &printer).expect("connection");
let _path = registration.wait_expect(Duration::from_millis(1000), "detector never registered");
system.start(&printer);
system.start(&detector);
system
}
#[cfg(test)]
mod tests {
use super::*;
const SERVER_SOCKET: &str = "127.0.0.1:12345";
const CLIENT_SOCKET: &str = "127.0.0.1:0";
#[test]
fn test_bootstrapping() {
let server_socket: SocketAddr = SERVER_SOCKET.parse().unwrap();
let server_system = run_server(server_socket);
let client_socket: SocketAddr = CLIENT_SOCKET.parse().unwrap();
let mut client_systems: Vec<KompactSystem> = (0..3)
.map(|_i| run_client(server_socket, client_socket))
.collect();
// let them settle
std::thread::sleep(Duration::from_millis(1000));
// shut down systems one by one
for sys in client_systems.drain(..) {
std::thread::sleep(Duration::from_millis(1000));
sys.shutdown().expect("shutdown");
}
std::thread::sleep(Duration::from_millis(1000));
server_system.shutdown().expect("shutdown");
}
}
We also need to alter our leader elector a bit. First, it needs to know the actor path of the bootstrap server, so that it can actually check in. Second, we need to adapt the type of processes to match our changes to UpdateProcesses: we'll make it a Box<[ActorPath]> instead of an Arc<[ActorPath]> and convert from the incoming Vec<ActorPath> whenever we receive an update.
#[derive(ComponentDefinition)]
struct EventualLeaderElector {
ctx: ComponentContext<Self>,
omega_port: ProvidedPort<EventualLeaderDetection>,
bootstrap_server: ActorPath,
processes: Box<[ActorPath]>,
candidates: HashSet<ActorPath>,
period: Duration,
delta: Duration,
timer_handle: Option<ScheduledTimer>,
leader: Option<ActorPath>,
}
impl EventualLeaderElector {
fn new(bootstrap_server: ActorPath) -> Self {
let minimal_period = Duration::from_millis(1);
EventualLeaderElector {
ctx: ComponentContext::uninitialised(),
omega_port: ProvidedPort::uninitialised(),
bootstrap_server,
processes: Vec::new().into_boxed_slice(),
candidates: HashSet::new(),
period: minimal_period,
delta: minimal_period,
timer_handle: None,
leader: None,
}
}
}
Behaviours
The behaviour of the bootstrap server is very simple. Whenever it gets a CheckIn, it adds the source of the message to its process set and then broadcasts the new process set to every process in the set. We will use the NetworkActor trait to implement the actor part here, instead of Actor. NetworkActor is a convenience trait for actors that handle the same set of messages locally and remotely and ignore all other remote messages. It handles the deserialisation part for us, but we must tell it both the Message type and the Deserialiser type to use. Of course, in this case we don't actually do anything for local messages, since we only need the sender, and local messages simply don't have a sender attached.
impl BootstrapServer {
fn broadcast_processess(&self) -> () {
let procs: Vec<ActorPath> = self.processes.iter().cloned().collect();
let msg = UpdateProcesses(procs);
self.processes.iter().for_each(|process| {
process.tell((msg.clone(), Serde), self);
});
}
}
ignore_lifecycle!(BootstrapServer);
impl NetworkActor for BootstrapServer {
type Deserialiser = Serde;
type Message = CheckIn;
fn receive(&mut self, source: Option<ActorPath>, _msg: Self::Message) -> Handled {
if let Some(process) = source {
if self.processes.insert(process) {
self.broadcast_processess();
}
}
Handled::Ok
}
}
We must also make some small changes to the behaviour of the leader elector itself. First of all, we must now send the CheckIn when we are being started. As before, we are using Serde as the serialisation mechanism, so we really only have to add the following line to the on_start function:
self.bootstrap_server.tell((CheckIn, Serde), self);
We also have to change how we handle UpdateProcesses slightly, since it now arrives over the network; we thus have to move the code from receive_local to receive_network. But now there are two different network messages we could deserialise whenever we get a NetMessage: it could be either a Heartbeat or an UpdateProcesses. Since trying the deserialisers one by one is somewhat inefficient, what we really want is a dispatch on the serialisation id, something like this:
match msg.ser_id() {
Heartbeat::SER_ID => // deserialise and handle Heartbeat
UpdateProcesses::SER_ID => // deserialise and handle UpdateProcesses
}
Since this is very common behaviour, and writing such dispatch code by hand quickly becomes tedious, Kompact provides the match_deser! macro to generate it for you. The overall syntax for the macro is:
match_deser! {
(<message expression>) {
<message case 1>,
<message case 2>,
...
}
}
Here <message expression> is an expression that produces the message (data) to be deserialised. If the expression is simply an identifier, like msg, the parentheses may be omitted.
The syntax for each message case in the macro is:
msg(variable_name): MessageType [using DeserialiserType] => <body>
Where MessageType = DeserialiserType, the [using DeserialiserType] block can be omitted. The macro also supports default and error branches; an example can be seen in the API docs. It is also possible to destructure the deserialised message immediately by replacing variable_name with a pattern, as in the case of UpdateProcesses below.
Using this macro, our new actor implementation becomes the following:
impl Actor for EventualLeaderElector {
type Message = Never;
fn receive_local(&mut self, _msg: Self::Message) -> Handled {
unreachable!();
}
fn receive_network(&mut self, msg: NetMessage) -> Handled {
let sender = msg.sender;
match_deser! {
(msg.data) {
msg(_heartbeat): Heartbeat [using Serde] => {
self.candidates.insert(sender);
},
msg(UpdateProcesses(processes)): UpdateProcesses [using Serde] => {
info!(
self.log(),
"Received new process set with {} processes",
processes.len()
);
self.processes = processes.into_boxed_slice();
},
}
};
Handled::Ok
}
}
pub fn main() {
let args: Vec<String> = std::env::args().collect();
match args.len() {
2 => {
let bootstrap_port: u16 = args[1].parse().expect("port number");
let bootstrap_socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), bootstrap_port);
let system = run_server(bootstrap_socket);
system.await_termination(); // gotta quit it from command line
}
3 => {
let bootstrap_port: u16 = args[1].parse().expect("port number");
let bootstrap_socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), bootstrap_port);
let client_port: u16 = args[2].parse().expect("port number");
let client_socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), client_port);
let system = run_client(bootstrap_socket, client_socket);
system.await_termination(); // gotta quit it from command line
}
        x => panic!("Expected either 1 argument (the port for the bootstrap server to bind on) or 2 arguments (bootstrap server and client port), but got {} instead!", x-1),
}
}
const BOOTSTRAP_PATH: &str = "bootstrap";
pub fn run_server(socket: SocketAddr) -> KompactSystem {
let mut cfg = KompactConfig::default();
cfg.load_config_file("./application.conf");
cfg.system_components(DeadletterBox::new, NetworkConfig::new(socket).build());
let system = cfg.build().expect("KompactSystem");
let (bootstrap, bootstrap_registration) = system.create_and_register(BootstrapServer::new);
let bootstrap_service_registration = system.register_by_alias(&bootstrap, BOOTSTRAP_PATH);
let _bootstrap_unique = bootstrap_registration
.wait_expect(Duration::from_millis(1000), "bootstrap never registered");
let bootstrap_service = bootstrap_service_registration
.wait_expect(Duration::from_millis(1000), "bootstrap never registered");
system.start(&bootstrap);
let printer = system.create(TrustPrinter::new);
let (detector, registration) =
system.create_and_register(|| EventualLeaderElector::new(bootstrap_service));
biconnect_components::<EventualLeaderDetection, _, _>(&detector, &printer).expect("connection");
let _path = registration.wait_expect(Duration::from_millis(1000), "detector never registered");
system.start(&printer);
system.start(&detector);
system
}
pub fn run_client(bootstrap_socket: SocketAddr, client_socket: SocketAddr) -> KompactSystem {
let mut cfg = KompactConfig::default();
cfg.load_config_file("./application.conf");
cfg.system_components(
DeadletterBox::new,
NetworkConfig::new(client_socket).build(),
);
let system = cfg.build().expect("KompactSystem");
let bootstrap_service: ActorPath = NamedPath::with_socket(
Transport::Tcp,
bootstrap_socket,
vec![BOOTSTRAP_PATH.into()],
)
.into();
let printer = system.create(TrustPrinter::new);
let (detector, registration) =
system.create_and_register(|| EventualLeaderElector::new(bootstrap_service));
biconnect_components::<EventualLeaderDetection, _, _>(&detector, &printer).expect("connection");
let _path = registration.wait_expect(Duration::from_millis(1000), "detector never registered");
system.start(&printer);
system.start(&detector);
system
}
#[cfg(test)]
mod tests {
use super::*;
const SERVER_SOCKET: &str = "127.0.0.1:12345";
const CLIENT_SOCKET: &str = "127.0.0.1:0";
#[test]
fn test_bootstrapping() {
let server_socket: SocketAddr = SERVER_SOCKET.parse().unwrap();
let server_system = run_server(server_socket);
let client_socket: SocketAddr = CLIENT_SOCKET.parse().unwrap();
let mut clients_systems: Vec<KompactSystem> = (0..3)
.map(|_i| run_client(server_socket, client_socket))
.collect();
// let them settle
std::thread::sleep(Duration::from_millis(1000));
// shut down systems one by one
for sys in clients_systems.drain(..) {
std::thread::sleep(Duration::from_millis(1000));
sys.shutdown().expect("shutdown");
}
std::thread::sleep(Duration::from_millis(1000));
server_system.shutdown().expect("shutdown");
}
}
System
Now the real difference lies in how we set up the Kompact systems. In the previous section we ran a configurable number of identical systems within a single process. Now we run only a single system per process, and we have two different setups as well: most processes are "clients" that only run the leader elector and the trust printer, while one process additionally runs the BootstrapServer.
Server
The one thing that sets our bootstrap server apart from any other actor we have created so far is that we want a named actor path for it. Essentially, we want any other process to be able to construct a valid ActorPath instance for the bootstrap server, such as tcp://127.0.0.1:<port>/bootstrap, given only its port. In order to make Kompact resolve that path to the correct component we must do two things:
- Make sure that the Kompact system actually runs on localhost at the given port, and
- register a named path alias for the BootstrapServer with the name "bootstrap".
To achieve the first part, we create the NetworkDispatcher from a SocketAddr instance that contains the correct IP and port, instead of using the default value as we did before. To register a component with a named path, we call KompactSystem::register_by_alias(...) with the target component and the path to register. The rest is more or less as before.
const BOOTSTRAP_PATH: &str = "bootstrap";

pub fn run_server(socket: SocketAddr) -> KompactSystem {
    let mut cfg = KompactConfig::default();
    cfg.load_config_file("./application.conf");
    cfg.system_components(DeadletterBox::new, NetworkConfig::new(socket).build());
    let system = cfg.build().expect("KompactSystem");
    let (bootstrap, bootstrap_registration) = system.create_and_register(BootstrapServer::new);
    let bootstrap_service_registration = system.register_by_alias(&bootstrap, BOOTSTRAP_PATH);
    let _bootstrap_unique = bootstrap_registration
        .wait_expect(Duration::from_millis(1000), "bootstrap never registered");
    let bootstrap_service = bootstrap_service_registration
        .wait_expect(Duration::from_millis(1000), "bootstrap never registered");
    system.start(&bootstrap);
    let printer = system.create(TrustPrinter::new);
    let (detector, registration) =
        system.create_and_register(|| EventualLeaderElector::new(bootstrap_service));
    biconnect_components::<EventualLeaderDetection, _, _>(&detector, &printer).expect("connection");
    let _path = registration.wait_expect(Duration::from_millis(1000), "detector never registered");
    system.start(&printer);
    system.start(&detector);
    system
}
Client
The client setup works almost the same as in the previous section, except that we now need to construct the required ActorPath instance for the bootstrap server from its SocketAddr. We can do so using NamedPath::with_socket(...), which constructs a NamedPath instance that is easily converted into an ActorPath. We pass this instance to the leader elector component during construction.
pub fn run_client(bootstrap_socket: SocketAddr, client_socket: SocketAddr) -> KompactSystem {
    let mut cfg = KompactConfig::default();
    cfg.load_config_file("./application.conf");
    cfg.system_components(
        DeadletterBox::new,
        NetworkConfig::new(client_socket).build(),
    );
    let system = cfg.build().expect("KompactSystem");
    let bootstrap_service: ActorPath = NamedPath::with_socket(
        Transport::Tcp,
        bootstrap_socket,
        vec![BOOTSTRAP_PATH.into()],
    )
    .into();
    let printer = system.create(TrustPrinter::new);
    let (detector, registration) =
        system.create_and_register(|| EventualLeaderElector::new(bootstrap_service));
    biconnect_components::<EventualLeaderDetection, _, _>(&detector, &printer).expect("connection");
    let _path = registration.wait_expect(Duration::from_millis(1000), "detector never registered");
    system.start(&printer);
    system.start(&detector);
    system
}
Running with Commandline Arguments
All that is left to do is to convert the port numbers given on the command line into the required SocketAddr instances and call the appropriate function. Given a single argument (a port number), we start a bootstrap server; given two arguments (server port and client port), we start a client instead.
pub fn main() {
    let args: Vec<String> = std::env::args().collect();
    match args.len() {
        2 => {
            let bootstrap_port: u16 = args[1].parse().expect("port number");
            let bootstrap_socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), bootstrap_port);
            let system = run_server(bootstrap_socket);
            system.await_termination(); // gotta quit it from command line
        }
        3 => {
            let bootstrap_port: u16 = args[1].parse().expect("port number");
            let bootstrap_socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), bootstrap_port);
            let client_port: u16 = args[2].parse().expect("port number");
            let client_socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), client_port);
            let system = run_client(bootstrap_socket, client_socket);
            system.await_termination(); // gotta quit it from command line
        }
        x => panic!("Expected either 1 argument (the port for the bootstrap server to bind on) or 2 arguments (bootstrap server and client port), but got {} instead!", x-1),
    }
}
Now we can run this by first starting a server in one shell and then a few clients in a few other shells. We can also see changes in trust events as we kill and add processes.
Note: As before, if you have checked out the examples folder you can build a binary with:
cargo build --release
You can run the bootstrap server on port 12345 with:
../../target/release/bootstrapping 12345
Similarly, you can run a matching client on some free port with:
../../target/release/bootstrapping 12345 0
Path Routing
In the previous section on Named Services we saw that we can register components at named paths, such as tcp://127.0.0.1:<port>/bootstrap
. These paths look very much like a URL, and indeed, just like in REST APIs, Kompact named paths form a tree-like hierarchy. For example tcp://127.0.0.1:<port>/bootstrap/server1
would be a sub-path of tcp://127.0.0.1:<port>/bootstrap
. This hierarchy is reflected in the way Kompact stores these actor aliases internally, which forms a structure much like a directory tree.
This approach to named paths opens up the possibility of exploiting the hierarchy for implicit and explicit routing of messages over sub-trees (directories, in a sense), which we explore in this section.
Routing Policies
In general, a routing policy is something that takes a message and a set of references, and selects one or more of those references to which the message will be sent. In the concrete case of routing within the named path tree, the type of the message must be NetMessage
and the references are DynActorRef
. The set of references we give to a policy is going to be the set of all registered nodes under a particular prefix in the named actor tree, which we will call the routing path.
Example: If
tcp://127.0.0.1:<port>/bootstrap
is a routing path with some policy P, then whenever we send something to it, we will pass the set containing the actor ref registered at tcp://127.0.0.1:<port>/bootstrap/server1
to P. If there were another registration at tcp://127.0.0.1:<port>/bootstrap/servers/server1
we would add that to the set as well.
Types of Routing Paths
Kompact supports two different types of routing paths: explicit paths and implicit paths.
In order to explain this in the following paragraphs, consider a system where the following three actors are registered:
tcp://127.0.0.1:1234/parent/child1
tcp://127.0.0.1:1234/parent/child2
tcp://127.0.0.1:1234/parent/child1/grandchild
Implicit Routing
Routing in Kompact can be used without any (routing specific) setup at all. If we simply construct an ActorPath
of the form tcp://127.0.0.1:1234/parent/*
and send a message there, Kompact will automatically broadcast this message to all three nodes registered above, since all of them have tcp://127.0.0.1:1234/parent
as their prefix. This kind of implicit routing path is called a broadcast path. The other type of implicit routing supported by Kompact is called a select path and takes the form tcp://127.0.0.1:1234/parent/?
. Sending a message to this select path will cause the message to be sent to exactly one of the actors. Exactly which node is subject to the routing policy at tcp://127.0.0.1:1234/parent
, which is not guaranteed to be stable by the runtime. The current default policy for select paths is based on hash buckets over the message's sender field.
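The hash-bucket idea behind the default select policy can be illustrated with a self-contained sketch, independent of Kompact's actual types: hash a routing key (standing in for the message's sender field) and map it onto one of the registered members. The select_bucket function and the string keys here are illustrative, not Kompact API.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hash the key and reduce it modulo the member count. The same key
// always lands in the same bucket, as long as the member set (and
// thus `members`) does not change.
fn select_bucket<K: Hash>(key: &K, members: usize) -> usize {
    let mut hasher = DefaultHasher::new();
    key.hash(&mut hasher);
    (hasher.finish() % members as u64) as usize
}

fn main() {
    let members = 3;
    let a = select_bucket(&"tcp://127.0.0.1:1234/client-1", members);
    let b = select_bucket(&"tcp://127.0.0.1:1234/client-1", members);
    // Stable for the same sender, but any change to `members`
    // reshuffles the assignments.
    assert_eq!(a, b);
    assert!(a < members);
    println!("bucket = {}", a);
}
```

This also makes the caveat above concrete: the assignment is only stable relative to a fixed member set, since the modulus changes whenever registrations are added or removed.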
Warning: In certain deployments, allowing implicit routing can become a security risk with respect to DoS attacks, since an attacker can essentially force the system to broadcast a message to every registered node, causing a lot of unnecessary load.
If this is a concern for your deployment scenario, you can compile Kompact without default features, which will remove implicit routing completely.
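For instance, assuming a release from crates.io, such a Cargo.toml entry might look like the following sketch (the version is a placeholder, as earlier in this book; note that this opts out of all default features, not only implicit routing):

```toml
[dependencies]
kompact = { version = "LATEST_VERSION", default-features = false }
```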
Explicit Routing
If implicit routing is not a good match for your use case, Kompact allows you to explicitly set a policy at a particular point in the named tree via the KompactSystem::set_routing_policy(...)
method. Not only does this allow you to customise the behaviour of routing for a particular sub-tree, it also enables you to hide the fact that a tree is routing at all: with an explicit policy, both tcp://127.0.0.1:1234/parent
(where the routing policy is set) and one of tcp://127.0.0.1:1234/parent/*
and tcp://127.0.0.1:1234/parent/?
(depending on whether your policy is of broadcast or select type) will exhibit the same behaviour.
Explicit routing works even if implicit routing is disabled.
Provided Policies
Kompact comes with three routing policies built in:
kompact::routing::groups::BroadcastRouting
is the default policy for broadcast paths. As the name implies, it will simply send a copy of each message to every member of the routing set. In order to improve the efficiency of broadcasting, you may want to override the default implementation of Serialisable::cloned()
for the types you are broadcasting, at least when you know that local delivery can happen.
kompact::routing::groups::SenderDefaultHashBucketRouting
is the default policy for select paths. It will use the hash of the message's sender field to determine the member to send the message to. Changing the member set in any way will thus also change the assignments. SenderDefaultHashBucketRouting
is actually just a type alias for a more customisable hash-based routing policy called kompact::routing::groups::FieldHashBucketRouting
, which lets you decide the field(s) to use for hashing and the actual hashing algorithm.
kompact::routing::groups::RoundRobinRouting
uses a mutable index (an AtomicUsize
to be exact) to select exactly one member in a round-robin manner.
Custom Policies
In addition to the already provided routing policies, users can easily implement their own by implementing RoutingPolicy<DynActorRef, NetMessage>
for their custom type. It is important to note that policy lookups happen concurrently in the store and hence routing must be implemented with a &self
reference instead of &mut self
. Thus, routing policies that must update state for each message have to rely on atomics or, if really necessary, on mutexes or similar concurrent structures as appropriate for their access pattern.
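To illustrate the &self constraint, the following stand-alone sketch keeps round-robin state in an AtomicUsize, in the spirit of the RoundRobinRouting policy described above. It deliberately elides Kompact's actual RoutingPolicy trait; the RoundRobin type and its select method are illustrative assumptions, with a plain slice standing in for the routing set.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// A stand-alone round-robin selector. Because selection only takes
/// `&self`, the rotating index must live in an atomic rather than in
/// a plain `usize` behind `&mut self`.
struct RoundRobin {
    next: AtomicUsize,
}

impl RoundRobin {
    fn new() -> Self {
        RoundRobin {
            next: AtomicUsize::new(0),
        }
    }

    /// Pick the next member, wrapping around at the end of the set.
    fn select<'a, T>(&self, members: &'a [T]) -> Option<&'a T> {
        if members.is_empty() {
            return None;
        }
        // fetch_add hands every concurrent caller a distinct ticket,
        // so routing stays correct under concurrent lookups.
        let index = self.next.fetch_add(1, Ordering::Relaxed) % members.len();
        members.get(index)
    }
}

fn main() {
    let rr = RoundRobin::new();
    let members = ["a", "b", "c"];
    let picks: Vec<&str> = (0..4).map(|_| *rr.select(&members).unwrap()).collect();
    // Wraps around after the last member.
    assert_eq!(picks, vec!["a", "b", "c", "a"]);
    println!("{:?}", picks);
}
```

The same pattern (interior mutability via atomics) is what makes a custom policy safe to call from the concurrent store lookup described above.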
Example
To showcase the path routing feature of Kompact, we will sketch a simple client-server application, where the server holds a “database” (just a large slice of strings in our case) and the client sends “queries” against this database. The queries are simply going to be shorter strings, which we will try to find as substrings in the database, returning all matching strings. Since our database is actually immutable, we will share it among multiple server components and use select routing with the round-robin policy to spread out the load. Since the queries are expensive, we will also cache the results on the clients. To provide an example of broadcast routing, we will cache the responses for any client at every client via broadcast. For simplicity, this example is going to be completely local within a single Kompact system, but the mechanisms involved are primarily designed for remote use, with local paths normally only being an optimisation.
Messages
We only have two messages, the Query
with a unique request id and the actual pattern we want to match against, and the QueryResponse
which has all the fields of the Query
plus a vector of strings that matched the pattern. For convenience, we will use Serde
as serialisation mechanism again.
#![allow(clippy::unused_unit)]
use kompact::{prelude::*, serde_serialisers::*};
use lru::LruCache;
use rand::{distributions::Alphanumeric, rngs::SmallRng, thread_rng, Rng, SeedableRng};
use serde::{Deserialize, Serialize};
use std::{num::NonZeroUsize, sync::Arc, time::Duration};
use uuid::Uuid;
#[derive(Serialize, Deserialize, Debug, Clone)]
struct Query {
id: Uuid,
pattern: String,
}
impl SerialisationId for Query {
const SER_ID: SerId = 4242;
}
#[derive(Serialize, Deserialize, Debug, Clone)]
struct QueryResponse {
id: Uuid,
pattern: String,
matches: Vec<String>,
}
impl SerialisationId for QueryResponse {
const SER_ID: SerId = 4243;
}
#[derive(ComponentDefinition)]
struct QueryServer {
ctx: ComponentContext<Self>,
database: Arc<[String]>,
handled_requests: usize,
}
impl QueryServer {
fn new(database: Arc<[String]>) -> Self {
QueryServer {
ctx: ComponentContext::uninitialised(),
database,
handled_requests: 0,
}
}
fn find_matches(&self, pattern: &str) -> Vec<String> {
self.database
.iter()
.filter(|e| e.contains(pattern))
.cloned()
.collect()
}
}
impl ComponentLifecycle for QueryServer {
fn on_kill(&mut self) -> Handled {
info!(
self.log(),
"Shutting down a Server that handled {} requests", self.handled_requests
);
Handled::Ok
}
}
impl Actor for QueryServer {
type Message = Never;
fn receive_local(&mut self, _msg: Self::Message) -> Handled {
unreachable!("Can't instantiate Never type");
}
fn receive_network(&mut self, msg: NetMessage) -> Handled {
let sender = msg.sender;
match_deser! {
(msg.data) {
msg(query): Query [using Serde] => {
let matches = self.find_matches(&query.pattern);
let response = QueryResponse { id: query.id, pattern: query.pattern, matches };
sender.tell((response, Serde), self);
self.handled_requests += 1;
}
}
}
Handled::Ok
}
}
#[derive(ComponentDefinition)]
struct Client {
ctx: ComponentContext<Self>,
server_path: ActorPath,
broadcast_path: ActorPath,
request_count: usize,
cache_hits: usize,
cache: LruCache<String, Vec<String>>,
current_query: Option<Query>,
rng: SmallRng,
}
impl Client {
fn new(server_path: ActorPath, broadcast_path: ActorPath) -> Self {
Client {
ctx: ComponentContext::uninitialised(),
server_path,
broadcast_path,
request_count: 0,
cache_hits: 0,
cache: LruCache::new(NonZeroUsize::new(20).unwrap()),
current_query: None,
rng: SmallRng::from_entropy(),
}
}
fn send_request(&mut self) -> () {
while self.current_query.is_none() {
let pattern = generate_string(&mut self.rng, PATTERN_LENGTH);
self.request_count += 1;
let res = self.cache.get(&pattern).map(|result| result.len());
if let Some(result) = res {
self.cache_hits += 1;
debug!(
self.log(),
"Answered query #{} ({}) with {} matches from cache.",
self.request_count,
pattern,
result
);
} else {
let id = Uuid::new_v4();
trace!(
self.log(),
"Sending query #{} ({}) with id={}",
self.request_count,
pattern,
id
);
let query = Query { id, pattern };
self.current_query = Some(query.clone());
self.server_path
.tell((query, Serde), &self.broadcast_path.using_dispatcher(self));
}
}
}
}
impl ComponentLifecycle for Client {
fn on_start(&mut self) -> Handled {
self.send_request();
Handled::Ok
}
fn on_kill(&mut self) -> Handled {
let hit_ratio = (self.cache_hits as f64) / (self.request_count as f64);
info!(
self.log(),
"Shutting down a Client that ran {} requests with {} cache hits ({}%)",
self.request_count,
self.cache_hits,
hit_ratio
);
Handled::Ok
}
}
impl Actor for Client {
type Message = Never;
fn receive_local(&mut self, _msg: Self::Message) -> Handled {
unreachable!("Can't instantiate Never type");
}
fn receive_network(&mut self, msg: NetMessage) -> Handled {
match_deser! {
msg {
msg(response): QueryResponse [using Serde] => {
trace!(self.log(), "Got response for query id={}: {:?}", response.id, response.matches);
if let Some(current_query) = self.current_query.take() {
if current_query.id == response.id {
debug!(self.log(), "Got response with {} matches for query: {}", response.matches.len(), current_query.pattern);
self.send_request();
} else {
// wrong id, put it back
self.current_query = Some(current_query);
}
}
// in any case, put it in the cache
self.cache.put(response.pattern, response.matches);
},
}
}
Handled::Ok
}
}
const ENTRY_LENGTH: usize = 20;
const PATTERN_LENGTH: usize = 2;
const BALANCER_PATH: &str = "server";
const CLIENT_PATH: &str = "client";
const NUM_SERVERS: usize = 3;
const NUM_CLIENTS: usize = 12;
const DATABASE_SIZE: usize = 10000;
const TIMEOUT: Duration = Duration::from_millis(100);
fn generate_string<R: Rng>(rng: &mut R, length: usize) -> String {
std::iter::repeat(())
.map(|_| rng.sample(Alphanumeric) as char)
.take(length)
.collect()
}
fn generate_database(size: usize) -> Arc<[String]> {
let mut data: Vec<String> = Vec::with_capacity(size);
let mut rng = thread_rng();
for _i in 0..size {
let entry = generate_string(&mut rng, ENTRY_LENGTH);
data.push(entry);
}
data.into()
}
pub fn main() {
let mut cfg = KompactConfig::default();
cfg.load_config_str(kompact::runtime::MINIMAL_CONFIG);
cfg.system_components(DeadletterBox::new, NetworkConfig::default().build());
let system = cfg.build().expect("KompactSystem");
// use implicit policy
let broadcast_path: ActorPath = system
.system_path()
.into_named_with_string("client/*")
.expect("path")
.into();
// set explicit policy
let balancer_path = system
.set_routing_policy(
kompact::routing::groups::RoundRobinRouting::default(),
BALANCER_PATH,
false,
)
.wait_expect(TIMEOUT, "balancing policy");
let database = generate_database(DATABASE_SIZE);
let servers: Vec<Arc<Component<QueryServer>>> = (0..NUM_SERVERS)
.map(|_| {
let db = database.clone();
system.create(move || QueryServer::new(db))
})
.collect();
let registration_futures: Vec<KFuture<RegistrationResult>> = servers
.iter()
.enumerate()
.map(|(index, server)| {
system.register_by_alias(server, format!("{}/server-{}", BALANCER_PATH, index))
})
.collect();
// We don't actually need the paths,
// just need to be sure they finished registering
registration_futures.expect_ok(TIMEOUT, "server path");
let clients: Vec<Arc<Component<Client>>> = (0..NUM_CLIENTS)
.map(|_| {
let server_path = balancer_path.clone();
let client_path = broadcast_path.clone();
system.create(move || Client::new(server_path, client_path))
})
.collect();
let registration_futures: Vec<KFuture<RegistrationResult>> = clients
.iter()
.enumerate()
.map(|(index, client)| {
system.register_by_alias(client, format!("{}/client-{}", CLIENT_PATH, index))
})
.collect();
// We don't actually need the paths,
// just need to be sure they finished registering
registration_futures.expect_ok(TIMEOUT, "client path");
// Start everything
servers
.iter()
.map(|s| system.start_notify(s))
.expect_completion(TIMEOUT, "server start");
clients
.iter()
.map(|c| system.start_notify(c))
.expect_completion(TIMEOUT, "client start");
// Let them work for a while
std::thread::sleep(Duration::from_secs(5));
// Shut down clients nicely.
clients
.into_iter()
.map(|c| system.kill_notify(c))
.collect::<Vec<_>>()
.expect_completion(TIMEOUT, "client kill");
// Shut down servers nicely.
servers
.into_iter()
.map(|s| system.kill_notify(s))
.collect::<Vec<_>>()
.expect_completion(TIMEOUT, "server kill");
system.shutdown().expect("shutdown");
// Wait a bit longer, so all output is logged (asynchronously) before shutting down
std::thread::sleep(Duration::from_millis(10));
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_load_balancer() {
main();
}
}
State and Behaviour
Since the exact implementation of the servers and clients is not really crucial for this example, we won’t describe it in detail here. The important things to note are that the Client
uses its server_path
field to send requests, which we will initialise later with a select path of the form tcp://127.0.0.1:<port>/server/?
. It also replaces its unique response path with a broadcast_path
, which we will initialise later with a broadcast path of the form tcp://127.0.0.1:<port>/client/*
.
#[derive(ComponentDefinition)]
struct Client {
ctx: ComponentContext<Self>,
server_path: ActorPath,
broadcast_path: ActorPath,
request_count: usize,
cache_hits: usize,
cache: LruCache<String, Vec<String>>,
current_query: Option<Query>,
rng: SmallRng,
}
impl Client {
fn new(server_path: ActorPath, broadcast_path: ActorPath) -> Self {
Client {
ctx: ComponentContext::uninitialised(),
server_path,
broadcast_path,
request_count: 0,
cache_hits: 0,
cache: LruCache::new(NonZeroUsize::new(20).unwrap()),
current_query: None,
rng: SmallRng::from_entropy(),
}
}
fn send_request(&mut self) -> () {
while self.current_query.is_none() {
let pattern = generate_string(&mut self.rng, PATTERN_LENGTH);
self.request_count += 1;
let res = self.cache.get(&pattern).map(|result| result.len());
if let Some(result) = res {
self.cache_hits += 1;
debug!(
self.log(),
"Answered query #{} ({}) with {} matches from cache.",
self.request_count,
pattern,
result
);
} else {
let id = Uuid::new_v4();
trace!(
self.log(),
"Sending query #{} ({}) with id={}",
self.request_count,
pattern,
id
);
let query = Query { id, pattern };
self.current_query = Some(query.clone());
self.server_path
.tell((query, Serde), &self.broadcast_path.using_dispatcher(self));
}
}
}
}
impl ComponentLifecycle for Client {
fn on_start(&mut self) -> Handled {
self.send_request();
Handled::Ok
}
fn on_kill(&mut self) -> Handled {
let hit_ratio = (self.cache_hits as f64) / (self.request_count as f64);
info!(
self.log(),
"Shutting down a Client that ran {} requests with {} cache hits ({}%)",
self.request_count,
self.cache_hits,
hit_ratio
);
Handled::Ok
}
}
impl Actor for Client {
type Message = Never;
fn receive_local(&mut self, _msg: Self::Message) -> Handled {
unreachable!("Can't instantiate Never type");
}
fn receive_network(&mut self, msg: NetMessage) -> Handled {
match_deser! {
msg {
msg(response): QueryResponse [using Serde] => {
trace!(self.log(), "Got response for query id={}: {:?}", response.id, response.matches);
if let Some(current_query) = self.current_query.take() {
if current_query.id == response.id {
debug!(self.log(), "Got response with {} matches for query: {}", response.matches.len(), current_query.pattern);
self.send_request();
} else {
// wrong id, put it back
self.current_query = Some(current_query);
}
}
// in any case, put it in the cache
self.cache.put(response.pattern, response.matches);
},
}
}
Handled::Ok
}
}
System Setup
When setting up the Kompact system in the main, we will use the following constants, which essentially represent configuration of our scenario:
const ENTRY_LENGTH: usize = 20;
const PATTERN_LENGTH: usize = 2;
const BALANCER_PATH: &str = "server";
const CLIENT_PATH: &str = "client";
const NUM_SERVERS: usize = 3;
const NUM_CLIENTS: usize = 12;
const DATABASE_SIZE: usize = 10000;
const TIMEOUT: Duration = Duration::from_millis(100);
fn generate_string<R: Rng>(rng: &mut R, length: usize) -> String {
std::iter::repeat(())
.map(|_| rng.sample(Alphanumeric) as char)
.take(length)
.collect()
}
fn generate_database(size: usize) -> Arc<[String]> {
let mut data: Vec<String> = Vec::with_capacity(size);
let mut rng = thread_rng();
for _i in 0..size {
let entry = generate_string(&mut rng, ENTRY_LENGTH);
data.push(entry);
}
data.into()
}
pub fn main() {
let mut cfg = KompactConfig::default();
cfg.load_config_str(kompact::runtime::MINIMAL_CONFIG);
cfg.system_components(DeadletterBox::new, NetworkConfig::default().build());
let system = cfg.build().expect("KompactSystem");
// use implicit policy
let broadcast_path: ActorPath = system
.system_path()
.into_named_with_string("client/*")
.expect("path")
.into();
// set explicit policy
let balancer_path = system
.set_routing_policy(
kompact::routing::groups::RoundRobinRouting::default(),
BALANCER_PATH,
false,
)
.wait_expect(TIMEOUT, "balancing policy");
let database = generate_database(DATABASE_SIZE);
let servers: Vec<Arc<Component<QueryServer>>> = (0..NUM_SERVERS)
.map(|_| {
let db = database.clone();
system.create(move || QueryServer::new(db))
})
.collect();
let registration_futures: Vec<KFuture<RegistrationResult>> = servers
.iter()
.enumerate()
.map(|(index, server)| {
system.register_by_alias(server, format!("{}/server-{}", BALANCER_PATH, index))
})
.collect();
// We don't actually need the paths,
// just need to be sure they finished registering
registration_futures.expect_ok(TIMEOUT, "server path");
let clients: Vec<Arc<Component<Client>>> = (0..NUM_CLIENTS)
.map(|_| {
let server_path = balancer_path.clone();
let client_path = broadcast_path.clone();
system.create(move || Client::new(server_path, client_path))
})
.collect();
let registration_futures: Vec<KFuture<RegistrationResult>> = clients
.iter()
.enumerate()
.map(|(index, client)| {
system.register_by_alias(client, format!("{}/client-{}", CLIENT_PATH, index))
})
.collect();
// We don't actually need the paths,
// just need to be sure they finished registering
registration_futures.expect_ok(TIMEOUT, "client path");
// Start everything
servers
.iter()
.map(|s| system.start_notify(s))
.expect_completion(TIMEOUT, "server start");
clients
.iter()
.map(|c| system.start_notify(c))
.expect_completion(TIMEOUT, "client start");
// Let them work for a while
std::thread::sleep(Duration::from_secs(5));
// Shut down clients nicely.
clients
.into_iter()
.map(|c| system.kill_notify(c))
.collect::<Vec<_>>()
.expect_completion(TIMEOUT, "client kill");
// Shut down servers nicely.
servers
.into_iter()
.map(|s| system.kill_notify(s))
.collect::<Vec<_>>()
.expect_completion(TIMEOUT, "server kill");
system.shutdown().expect("shutdown");
// Wait a bit longer, so all output is logged (asynchronously) before shutting down
std::thread::sleep(Duration::from_millis(10));
}
First of all, we set up the routing policies and their associated paths. In order to show off both variants, we will use implicit routing for the client broadcast path and explicit routing for the server load-balancing path. As mentioned before, implicit routing does not require any specific setup: we simply construct the appropriate path, which in this case is our system path followed by client/*. For the server load-balancing, we want to use the round-robin policy, which we register under the server alias using KompactSystem::set_routing_policy(...). Like a normal actor registration, this call returns a future with the actual path for the policy. Since the policy is set explicitly, this path will be of the form tcp://127.0.0.1:<port>/server, but sending a message to tcp://127.0.0.1:<port>/server/? would behave in the same manner.
// use implicit policy
let broadcast_path: ActorPath = system
.system_path()
.into_named_with_string("client/*")
.expect("path")
.into();
// set explicit policy
let balancer_path = system
.set_routing_policy(
kompact::routing::groups::RoundRobinRouting::default(),
BALANCER_PATH,
false,
)
.wait_expect(TIMEOUT, "balancing policy");
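The round-robin policy itself is conceptually simple: each incoming message goes to the next member in the group, wrapping around at the end. The following stand-alone sketch illustrates only the selection behaviour; it is not Kompact's actual RoundRobinRouting implementation, and the names are made up for illustration.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// A toy round-robin selector over a fixed-size member group.
/// Illustrates the policy's behaviour only; Kompact's real
/// implementation lives in kompact::routing::groups.
struct RoundRobin {
    next: AtomicUsize,
}

impl RoundRobin {
    fn new() -> Self {
        RoundRobin {
            next: AtomicUsize::new(0),
        }
    }

    /// Index of the member that should receive the next message.
    fn select(&self, num_members: usize) -> usize {
        // fetch_add returns the previous counter value,
        // so successive calls cycle 0, 1, 2, 0, 1, 2, ...
        self.next.fetch_add(1, Ordering::Relaxed) % num_members
    }
}

fn main() {
    let rr = RoundRobin::new();
    let picks: Vec<usize> = (0..6).map(|_| rr.select(3)).collect();
    println!("{:?}", picks); // prints [0, 1, 2, 0, 1, 2]
}
```

With three members, six consecutive selections visit each member exactly twice, which is why the server request counts in the final statistics come out nearly identical.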
We then create and register both the servers and the clients, making sure to register each with a unique name (based on its index) under the correct path prefix.
Servers
let database = generate_database(DATABASE_SIZE);
let servers: Vec<Arc<Component<QueryServer>>> = (0..NUM_SERVERS)
.map(|_| {
let db = database.clone();
system.create(move || QueryServer::new(db))
})
.collect();
let registration_futures: Vec<KFuture<RegistrationResult>> = servers
.iter()
.enumerate()
.map(|(index, server)| {
system.register_by_alias(server, format!("{}/server-{}", BALANCER_PATH, index))
})
.collect();
// We don't actually need the paths,
// just need to be sure they finished registering
registration_futures.expect_ok(TIMEOUT, "server path");
Clients
let clients: Vec<Arc<Component<Client>>> = (0..NUM_CLIENTS)
.map(|_| {
let server_path = balancer_path.clone();
let client_path = broadcast_path.clone();
system.create(move || Client::new(server_path, client_path))
})
.collect();
let registration_futures: Vec<KFuture<RegistrationResult>> = clients
.iter()
.enumerate()
.map(|(index, client)| {
system.register_by_alias(client, format!("{}/client-{}", CLIENT_PATH, index))
})
.collect();
// We don't actually need the paths,
// just need to be sure they finished registering
registration_futures.expect_ok(TIMEOUT, "client path");
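Registering every client below the client prefix is what makes the implicit client/* broadcast path reach all of them. The following toy matcher sketches the intended semantics of the * suffix; the function and its logic are illustrative assumptions, not how Kompact resolves paths internally.

```rust
/// Toy prefix matcher illustrating broadcast-path semantics:
/// "client/*" addresses every actor registered under "client/".
/// Purely illustrative; Kompact performs path resolution itself.
fn matches_broadcast(pattern: &str, path: &str) -> bool {
    match pattern.strip_suffix("/*") {
        // A "<prefix>/*" pattern matches any path directly below <prefix>.
        Some(prefix) => path.starts_with(prefix) && path[prefix.len()..].starts_with('/'),
        // Without a wildcard, only an exact match counts.
        None => pattern == path,
    }
}

fn main() {
    let registered = ["client/client-0", "client/client-1", "server/server-0"];
    let targets: Vec<&str> = registered
        .iter()
        .copied()
        .filter(|p| matches_broadcast("client/*", p))
        .collect();
    println!("{:?}", targets); // prints ["client/client-0", "client/client-1"]
}
```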
Running
Finally, we start the servers and the clients, let them run for a few seconds, and then shut them down again before shutting down the system itself.
// Start everything
servers
.iter()
.map(|s| system.start_notify(s))
.expect_completion(TIMEOUT, "server start");
clients
.iter()
.map(|c| system.start_notify(c))
.expect_completion(TIMEOUT, "client start");
// Let them work for a while
std::thread::sleep(Duration::from_secs(5));
// Shut down clients nicely.
clients
.into_iter()
.map(|c| system.kill_notify(c))
.collect::<Vec<_>>()
.expect_completion(TIMEOUT, "client kill");
// Shut down servers nicely.
servers
.into_iter()
.map(|s| system.kill_notify(s))
.collect::<Vec<_>>()
.expect_completion(TIMEOUT, "server kill");
system.shutdown().expect("shutdown");
// Wait a bit longer, so all output is logged (asynchronously) before shutting down
std::thread::sleep(Duration::from_millis(10));
If we inspect the output in release mode, we can see that both clients and servers print some final statistics about their run. In particular, the server results show that the requests were balanced very evenly, thanks to our round-robin policy:
Oct 23 18:15:58.869 INFO Shutting down a Client that ran 1060 requests with 6 cache hits (0.005660377358490566%), ctype: Client, cid: 07739284-1171-43c7-b547-198f9adf31e2, system: kompact-runtime-1, location: docs/examples/src/bin/load_balancer.rs:152
Oct 23 18:15:58.869 INFO Shutting down a Client that ran 1055 requests with 7 cache hits (0.006635071090047393%), ctype: Client, cid: 7a33e17c-042f-4271-95ea-a725ee471dae, system: kompact-runtime-1, location: docs/examples/src/bin/load_balancer.rs:152
Oct 23 18:15:58.869 INFO Shutting down a Client that ran 1052 requests with 4 cache hits (0.0038022813688212928%), ctype: Client, cid: 9b3c3c57-8246-4456-a7b8-0d200086df8d, system: kompact-runtime-1, location: docs/examples/src/bin/load_balancer.rs:152
Oct 23 18:15:58.869 INFO Shutting down a Client that ran 1050 requests with 3 cache hits (0.002857142857142857%), ctype: Client, cid: 1ecdef68-43af-46b4-8a40-a8ad4147b811, system: kompact-runtime-1, location: docs/examples/src/bin/load_balancer.rs:152
Oct 23 18:15:58.869 INFO Shutting down a Client that ran 1051 requests with 5 cache hits (0.004757373929590866%), ctype: Client, cid: 034f5dcc-a0ba-4bc2-aca0-6f1ab12be139, system: kompact-runtime-1, location: docs/examples/src/bin/load_balancer.rs:152
Oct 23 18:15:58.870 INFO Shutting down a Client that ran 1047 requests with 2 cache hits (0.0019102196752626551%), ctype: Client, cid: 59679577-6e9a-44ef-9739-08ca1b32b03f, system: kompact-runtime-1, location: docs/examples/src/bin/load_balancer.rs:152
Oct 23 18:15:58.870 INFO Shutting down a Client that ran 1048 requests with 3 cache hits (0.0028625954198473282%), ctype: Client, cid: ef76ddd0-e240-4ad6-8a10-b98da9ba41ff, system: kompact-runtime-1, location: docs/examples/src/bin/load_balancer.rs:152
Oct 23 18:15:58.870 INFO Shutting down a Client that ran 1044 requests with 0 cache hits (0%), ctype: Client, cid: ddf7d77a-4987-4411-81a5-bc4841200c32, system: kompact-runtime-1, location: docs/examples/src/bin/load_balancer.rs:152
Oct 23 18:15:58.870 INFO Shutting down a Client that ran 1051 requests with 7 cache hits (0.006660323501427212%), ctype: Client, cid: 12b65a83-c443-4853-8337-47ba5c45f60d, system: kompact-runtime-1, location: docs/examples/src/bin/load_balancer.rs:152
Oct 23 18:15:58.871 INFO Shutting down a Client that ran 1046 requests with 3 cache hits (0.0028680688336520078%), ctype: Client, cid: c7978b3f-9cf2-44d2-b93f-fc32ad90c941, system: kompact-runtime-1, location: docs/examples/src/bin/load_balancer.rs:152
Oct 23 18:15:58.872 INFO Shutting down a Client that ran 1049 requests with 6 cache hits (0.005719733079122974%), ctype: Client, cid: af389f4d-bc93-4f37-8f50-a70e054651e0, system: kompact-runtime-1, location: docs/examples/src/bin/load_balancer.rs:152
Oct 23 18:15:58.872 INFO Shutting down a Client that ran 1047 requests with 4 cache hits (0.0038204393505253103%), ctype: Client, cid: ad20509a-dbab-4dd3-a497-99a8488101b3, system: kompact-runtime-1, location: docs/examples/src/bin/load_balancer.rs:152
Oct 23 18:15:58.873 INFO Shutting down a Server that handled 4183 requests, ctype: QueryServer, cid: 35309404-a989-4b18-848f-5cc719b19a76, system: kompact-runtime-1, location: docs/examples/src/bin/load_balancer.rs:56
Oct 23 18:15:58.873 INFO Shutting down a Server that handled 4184 requests, ctype: QueryServer, cid: 2a2ed2cb-36bb-4df0-ac0e-0204e12417bd, system: kompact-runtime-1, location: docs/examples/src/bin/load_balancer.rs:56
Oct 23 18:15:58.873 INFO Shutting down a Server that handled 4183 requests, ctype: QueryServer, cid: a3d6d94a-ff9c-4749-9b6a-db2bfa2ac3e2, system: kompact-runtime-1, location: docs/examples/src/bin/load_balancer.rs:56
Note: As before, if you have checked out the examples folder you can run the concrete binary with:
cargo run --release --bin load_balancer
Note that running in debug mode will produce a lot of output as it will trace all the network messages.
Serialisation
In this section we are going to take a closer look at the various serialisation options offered by Kompact. In particular, we will look at how to write different kinds of custom serialiser implementations, as well as how to handle messages that are all but guaranteed to actually go over the network more efficiently.
Custom Serialisation
At the centre of Kompact’s serialisation mechanisms are the Serialisable
and Deserialiser
traits, whose signatures look roughly like this:
pub trait Serialisable: Send + Debug {
/// The serialisation id for this serialisable
fn ser_id(&self) -> SerId;
/// An indicator of how many bytes must be reserved in a buffer for a value to be
/// serialised into it with this serialiser
fn size_hint(&self) -> Option<usize>;
/// Serialises this object (`self`) into `buf`
fn serialise(&self, buf: &mut dyn BufMut) -> Result<(), SerError>;
/// Try to move this object onto the heap for reflection, instead of serialising
fn local(self: Box<Self>) -> Result<Box<dyn Any + Send>, Box<dyn Serialisable>>;
}
pub trait Deserialiser<T>: Send {
/// The serialisation id for which this deserialiser is to be invoked
const SER_ID: SerId;
/// Try to deserialise a `T` from the given `buf`
fn deserialise(buf: &mut dyn Buf) -> Result<T, SerError>;
}
Outgoing Path
When ActorPath::tell(...)
is invoked with a type that is Serialisable
, it will create a boxed trait object from the given instance and send it to the network layer. Only when the network layer has determined that the destination must be accessed via a network channel will the runtime serialise the instance into the network channel’s buffer. If it turns out the destination is on the same actor system as the source, it will simply call Serialisable::local(...)
to get a boxed instance of the Any
trait and then send it directly to the target component, without ever serialising. This approach is called lazy serialisation. For the vast majority of Serialisable
implementations, Serialisable::local(...)
is implemented simply as Ok(self)
. However, for some more advanced usages (e.g., serialisation proxies) the implementation may have to run some additional code.
Once it is determined that an instance does indeed need to be serialised, the runtime will reserve some buffer memory for it to be serialised into. It does so by querying the Serialisable::size_hint(...)
function for an estimate of how much space the type is likely going to take. For some types this is easy to know statically, but for others it is not so clear. In any case, this is just an optimisation: serialisation will proceed correctly even if the estimate is terribly wrong or no estimate is given at all.
The first thing in the new serialisation buffer is typically the serialisation id obtained via Serialisable::ser_id(...)
. Typically, Kompact will only require a single serialisation id for the message to be written into the buffer, even if the message uses other serialisers internally, as long as all the internal types are statically known. This top-level serialisation id must match the Deserialiser::SER_ID
for the deserialiser to be used for this instance. For types that implement both Serialisable
and Deserialiser
, as most do, it is recommended to simply use Self::SER_ID
as the implementation for Serialisable::ser_id(...)
to make sure the ids match later.
The actual serialisation of the instance is handled by Serialisable::serialise(...)
, which should use the functions provided by BufMut to serialise the individual parts of the instance into the buffer.
Serialiser
Instead of implementing Serialisable
we can also implement the Serialiser
trait:
pub trait Serialiser<T>: Send {
/// The serialisation id for this serialiser
fn ser_id(&self) -> SerId;
/// An indicator of how many bytes must be reserved in a buffer for a value to be
/// serialised into it with this serialiser
fn size_hint(&self) -> Option<usize>;
/// Serialise `v` into `buf`.
fn serialise(&self, v: &T, buf: &mut dyn BufMut) -> Result<(), SerError>;
}
This behaves essentially the same, except that it doesn’t serialise itself, but rather an instance of another type T
. In order to use an instance t: T
with a Serialiser<T>
we can simply pass a pair of the two to the ActorPath::tell(...)
function, as we have already seen in the previous section, for example with Serde
:
#![allow(clippy::unused_unit)]
use kompact::{prelude::*, serde_serialisers::*};
use kompact_examples::trusting::*;
use std::{
collections::HashSet,
convert::TryInto,
net::{IpAddr, Ipv4Addr, SocketAddr},
time::Duration,
};
struct ZstSerialiser<T>(T)
where
T: Send + Sync + Default + Copy + SerialisationId;
impl<T> Serialiser<T> for &ZstSerialiser<T>
where
T: Send + Sync + Default + Copy + SerialisationId,
{
fn ser_id(&self) -> SerId {
T::SER_ID
}
fn size_hint(&self) -> Option<usize> {
Some(0)
}
fn serialise(&self, _v: &T, _buf: &mut dyn BufMut) -> Result<(), SerError> {
Ok(())
}
}
impl<T> Deserialiser<T> for ZstSerialiser<T>
where
T: Send + Sync + Default + Copy + SerialisationId,
{
const SER_ID: SerId = T::SER_ID;
fn deserialise(_buf: &mut dyn Buf) -> Result<T, SerError> {
Ok(T::default())
}
}
#[derive(Debug, Clone, Copy, Default)]
struct CheckIn;
impl SerialisationId for CheckIn {
const SER_ID: SerId = 2345;
}
static CHECK_IN_SER: ZstSerialiser<CheckIn> = ZstSerialiser(CheckIn);
#[derive(Debug, Clone)]
struct UpdateProcesses(Vec<ActorPath>);
impl Serialisable for UpdateProcesses {
fn ser_id(&self) -> SerId {
Self::SER_ID
}
fn size_hint(&self) -> Option<usize> {
let procs_size = self.0.len() * 23; // 23 bytes is the size of a unique actor path
Some(8 + procs_size)
}
fn serialise(&self, buf: &mut dyn BufMut) -> Result<(), SerError> {
let len = self.0.len() as u64;
buf.put_u64(len);
for path in self.0.iter() {
path.serialise(buf)?;
}
Ok(())
}
fn local(self: Box<Self>) -> Result<Box<dyn Any + Send>, Box<dyn Serialisable>> {
Ok(self)
}
}
impl Deserialiser<UpdateProcesses> for UpdateProcesses {
const SER_ID: SerId = 3456;
fn deserialise(buf: &mut dyn Buf) -> Result<UpdateProcesses, SerError> {
let len_u64 = buf.get_u64();
let len: usize = len_u64.try_into().map_err(SerError::from_debug)?;
let mut data: Vec<ActorPath> = Vec::with_capacity(len);
for _i in 0..len {
let path = ActorPath::deserialise(buf)?;
data.push(path);
}
Ok(UpdateProcesses(data))
}
}
#[derive(ComponentDefinition)]
struct BootstrapServer {
ctx: ComponentContext<Self>,
processes: HashSet<ActorPath>,
}
impl BootstrapServer {
fn new() -> Self {
BootstrapServer {
ctx: ComponentContext::uninitialised(),
processes: HashSet::new(),
}
}
fn broadcast_processess(&self) -> Handled {
let procs: Vec<ActorPath> = self.processes.iter().cloned().collect();
let msg = UpdateProcesses(procs);
self.processes.iter().for_each(|process| {
process
.tell_serialised(msg.clone(), self)
.unwrap_or_else(|e| warn!(self.log(), "Error during serialisation: {}", e));
});
Handled::Ok
}
}
ignore_lifecycle!(BootstrapServer);
impl NetworkActor for BootstrapServer {
type Deserialiser = ZstSerialiser<CheckIn>;
type Message = CheckIn;
fn receive(&mut self, source: Option<ActorPath>, _msg: Self::Message) -> Handled {
if let Some(process) = source {
if self.processes.insert(process) {
self.broadcast_processess()
} else {
Handled::Ok
}
} else {
Handled::Ok
}
}
}
#[derive(ComponentDefinition)]
struct EventualLeaderElector {
ctx: ComponentContext<Self>,
omega_port: ProvidedPort<EventualLeaderDetection>,
bootstrap_server: ActorPath,
processes: Box<[ActorPath]>,
candidates: HashSet<ActorPath>,
period: Duration,
delta: Duration,
timer_handle: Option<ScheduledTimer>,
leader: Option<ActorPath>,
}
impl EventualLeaderElector {
fn new(bootstrap_server: ActorPath) -> Self {
let minimal_period = Duration::from_millis(1);
EventualLeaderElector {
ctx: ComponentContext::uninitialised(),
omega_port: ProvidedPort::uninitialised(),
bootstrap_server,
processes: Vec::new().into_boxed_slice(),
candidates: HashSet::new(),
period: minimal_period,
delta: minimal_period,
timer_handle: None,
leader: None,
}
}
fn select_leader(&mut self) -> Option<ActorPath> {
let mut candidates: Vec<ActorPath> = self.candidates.drain().collect();
candidates.sort_unstable();
candidates.reverse(); // pick smallest instead of largest
candidates.pop()
}
fn handle_timeout(&mut self, timeout_id: ScheduledTimer) -> Handled {
match self.timer_handle.take() {
Some(timeout) if timeout == timeout_id => {
let new_leader = self.select_leader();
if new_leader != self.leader {
self.period += self.delta;
self.leader = new_leader;
if let Some(ref leader) = self.leader {
self.omega_port.trigger(Trust(leader.clone()));
}
self.cancel_timer(timeout);
let new_timer = self.schedule_periodic(
self.period,
self.period,
EventualLeaderElector::handle_timeout,
);
self.timer_handle = Some(new_timer);
} else {
// just put it back
self.timer_handle = Some(timeout);
}
self.send_heartbeats()
}
Some(_) => Handled::Ok, // just ignore outdated timeouts
None => {
warn!(self.log(), "Got unexpected timeout: {:?}", timeout_id);
Handled::Ok
} // can happen during restart or teardown
}
}
fn send_heartbeats(&self) -> Handled {
self.processes.iter().for_each(|process| {
process.tell((Heartbeat, Serde), self);
});
Handled::Ok
}
}
impl ComponentLifecycle for EventualLeaderElector {
fn on_start(&mut self) -> Handled {
self.bootstrap_server.tell((CheckIn, &CHECK_IN_SER), self);
self.period = self.ctx.config()["omega"]["initial-period"]
.as_duration()
.expect("initial period");
self.delta = self.ctx.config()["omega"]["delta"]
.as_duration()
.expect("delta");
let timeout = self.schedule_periodic(
self.period,
self.period,
EventualLeaderElector::handle_timeout,
);
self.timer_handle = Some(timeout);
Handled::Ok
}
fn on_stop(&mut self) -> Handled {
if let Some(timeout) = self.timer_handle.take() {
self.cancel_timer(timeout);
}
Handled::Ok
}
fn on_kill(&mut self) -> Handled {
self.on_stop()
}
}
// Doesn't have any requests
ignore_requests!(EventualLeaderDetection, EventualLeaderElector);
impl Actor for EventualLeaderElector {
type Message = Never;
fn receive_local(&mut self, _msg: Self::Message) -> Handled {
unreachable!();
}
fn receive_network(&mut self, msg: NetMessage) -> Handled {
let sender = msg.sender;
match_deser! {
(msg.data) {
msg(_heartbeat): Heartbeat [using Serde] => {
self.candidates.insert(sender);
},
msg(update): UpdateProcesses => {
let UpdateProcesses(processes) = update;
info!(
self.log(),
"Received new process set with {} processes",
processes.len()
);
self.processes = processes.into_boxed_slice();
},
}
}
Handled::Ok
}
}
pub fn main() {
let args: Vec<String> = std::env::args().collect();
match args.len() {
2 => {
let bootstrap_port: u16 = args[1].parse().expect("port number");
let bootstrap_socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), bootstrap_port);
let system = run_server(bootstrap_socket);
system.await_termination(); // gotta quit it from command line
}
3 => {
let bootstrap_port: u16 = args[1].parse().expect("port number");
let bootstrap_socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), bootstrap_port);
let client_port: u16 = args[2].parse().expect("port number");
let client_socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), client_port);
let system = run_client(bootstrap_socket, client_socket);
system.await_termination(); // gotta quit it from command line
}
x => panic!("Expected either 1 argument (the port for the bootstrap server to bind on) or 2 arguments (bootstrap server and client port), but got {} instead!", x - 1),
}
}
const BOOTSTRAP_PATH: &str = "bootstrap";
pub fn run_server(socket: SocketAddr) -> KompactSystem {
let mut cfg = KompactConfig::default();
cfg.load_config_file("./application.conf");
cfg.system_components(DeadletterBox::new, NetworkConfig::new(socket).build());
let system = cfg.build().expect("KompactSystem");
let (bootstrap, bootstrap_registration) = system.create_and_register(BootstrapServer::new);
let bootstrap_service_registration = system.register_by_alias(&bootstrap, BOOTSTRAP_PATH);
let _bootstrap_unique = bootstrap_registration
.wait_expect(Duration::from_millis(1000), "bootstrap never registered");
let bootstrap_service = bootstrap_service_registration
.wait_expect(Duration::from_millis(1000), "bootstrap never registered");
system.start(&bootstrap);
let printer = system.create(TrustPrinter::new);
let (detector, registration) =
system.create_and_register(|| EventualLeaderElector::new(bootstrap_service));
biconnect_components::<EventualLeaderDetection, _, _>(&detector, &printer).expect("connection");
let _path = registration.wait_expect(Duration::from_millis(1000), "detector never registered");
system.start(&printer);
system.start(&detector);
system
}
pub fn run_client(bootstrap_socket: SocketAddr, client_socket: SocketAddr) -> KompactSystem {
let mut cfg = KompactConfig::default();
cfg.load_config_file("./application.conf");
cfg.system_components(
DeadletterBox::new,
NetworkConfig::new(client_socket).build(),
);
let system = cfg.build().expect("KompactSystem");
let bootstrap_service: ActorPath = NamedPath::with_socket(
Transport::Tcp,
bootstrap_socket,
vec![BOOTSTRAP_PATH.into()],
)
.into();
let printer = system.create(TrustPrinter::new);
let (detector, registration) =
system.create_and_register(|| EventualLeaderElector::new(bootstrap_service));
biconnect_components::<EventualLeaderDetection, _, _>(&detector, &printer).expect("connection");
let _path = registration.wait_expect(Duration::from_millis(1000), "detector never registered");
system.start(&printer);
system.start(&detector);
system
}
#[cfg(test)]
mod tests {
use super::*;
const SERVER_SOCKET: &str = "127.0.0.1:12345";
const CLIENT_SOCKET: &str = "127.0.0.1:0";
#[test]
fn test_bootstrapping_serialisation() {
let server_socket: SocketAddr = SERVER_SOCKET.parse().unwrap();
let server_system = run_server(server_socket);
let client_socket: SocketAddr = CLIENT_SOCKET.parse().unwrap();
let mut clients_systems: Vec<KompactSystem> = (0..3)
.map(|_i| run_client(server_socket, client_socket))
.collect();
// let them settle
std::thread::sleep(Duration::from_millis(1000));
// shut down systems one by one
for sys in clients_systems.drain(..) {
std::thread::sleep(Duration::from_millis(1000));
sys.shutdown().expect("shutdown");
}
std::thread::sleep(Duration::from_millis(1000));
server_system.shutdown().expect("shutdown");
}
}
Incoming Path
For any incoming network message, the Kompact framework will buffer all the data and, once the message is complete, read out the serialisation id and create a NetMessage
from it and the remaining buffer. It will then send the NetMessage
directly to the destination component without any further processing. This approach is called lazy deserialisation and is quite different from most other actor/component frameworks, which tend to deserialise eagerly and then type match later at the destination component. However, in Rust the lazy approach is more efficient as it avoids unnecessary heap allocations for the deserialised instance.
When the NetMessage::try_deserialise
function is called on the destination component, the serialisation ids of the message and the given Deserialiser
will be checked, and if they match up, the Deserialiser::deserialise(...)
function is called with the message’s data. For custom deserialisers, this method must use the Buf API to implement essentially the inverse path of what the serialisable did before.
Example
To show how custom serialisers can be implemented, we will walk through two examples, re-using the bootstrapping leader election from the previous sections.
Serialiser
In our example, CheckIn
is a zero-sized type (ZST), since we don’t really care about the message, only about the sender. Since ZSTs have no content, we can uniquely identify them by their serialisation id alone and all the serialisers for them are basically identical, in that their serialise(...)
function consists simply of Ok(())
. For this example, instead of using Serde
for CheckIn
, we will write our own Serialiser
implementation for ZSTs and then use it for CheckIn
. We could also use it for Heartbeat
, but we won’t, so as to leave it as a reference for the other approach.
struct ZstSerialiser<T>(T)
where
T: Send + Sync + Default + Copy + SerialisationId;
impl<T> Serialiser<T> for &ZstSerialiser<T>
where
T: Send + Sync + Default + Copy + SerialisationId,
{
fn ser_id(&self) -> SerId {
T::SER_ID
}
fn size_hint(&self) -> Option<usize> {
Some(0)
}
fn serialise(&self, _v: &T, _buf: &mut dyn BufMut) -> Result<(), SerError> {
Ok(())
}
}
impl<T> Deserialiser<T> for ZstSerialiser<T>
where
T: Send + Sync + Default + Copy + SerialisationId,
{
const SER_ID: SerId = T::SER_ID;
fn deserialise(_buf: &mut dyn Buf) -> Result<T, SerError> {
Ok(T::default())
}
}
#[derive(Debug, Clone, Copy, Default)]
struct CheckIn;
impl SerialisationId for CheckIn {
const SER_ID: SerId = 2345;
}
static CHECK_IN_SER: ZstSerialiser<CheckIn> = ZstSerialiser(CheckIn);
We continue using the SerialisationId
trait like we did for Serde, because we need to write the id of the ZST, not of the ZstSerialiser
, which can serialise and deserialise many different ZSTs.
In order to create the correct type instance during deserialisation, we use the Default
trait, which can be trivially derived for ZSTs.
It is clear that this serialiser is basically trivial. We can use it by creating a pair of CheckIn
with a reference to our static instance CHECK_IN_SER
, which simply specialises the ZstSerialiser
for CheckIn
, as we did before:
#![allow(clippy::unused_unit)]
use kompact::{prelude::*, serde_serialisers::*};
use kompact_examples::trusting::*;
use std::{
collections::HashSet,
convert::TryInto,
net::{IpAddr, Ipv4Addr, SocketAddr},
time::Duration,
};
struct ZstSerialiser<T>(T)
where
T: Send + Sync + Default + Copy + SerialisationId;
impl<T> Serialiser<T> for &ZstSerialiser<T>
where
T: Send + Sync + Default + Copy + SerialisationId,
{
fn ser_id(&self) -> SerId {
T::SER_ID
}
fn size_hint(&self) -> Option<usize> {
Some(0)
}
fn serialise(&self, _v: &T, _buf: &mut dyn BufMut) -> Result<(), SerError> {
Ok(())
}
}
impl<T> Deserialiser<T> for ZstSerialiser<T>
where
T: Send + Sync + Default + Copy + SerialisationId,
{
const SER_ID: SerId = T::SER_ID;
fn deserialise(_buf: &mut dyn Buf) -> Result<T, SerError> {
Ok(T::default())
}
}
#[derive(Debug, Clone, Copy, Default)]
struct CheckIn;
impl SerialisationId for CheckIn {
const SER_ID: SerId = 2345;
}
static CHECK_IN_SER: ZstSerialiser<CheckIn> = ZstSerialiser(CheckIn);
#[derive(Debug, Clone)]
struct UpdateProcesses(Vec<ActorPath>);
impl Serialisable for UpdateProcesses {
fn ser_id(&self) -> SerId {
Self::SER_ID
}
fn size_hint(&self) -> Option<usize> {
let procs_size = self.0.len() * 23; // 23 bytes is the size of a unique actor path
Some(8 + procs_size)
}
fn serialise(&self, buf: &mut dyn BufMut) -> Result<(), SerError> {
let len = self.0.len() as u64;
buf.put_u64(len);
for path in self.0.iter() {
path.serialise(buf)?;
}
Ok(())
}
fn local(self: Box<Self>) -> Result<Box<dyn Any + Send>, Box<dyn Serialisable>> {
Ok(self)
}
}
impl Deserialiser<UpdateProcesses> for UpdateProcesses {
const SER_ID: SerId = 3456;
fn deserialise(buf: &mut dyn Buf) -> Result<UpdateProcesses, SerError> {
let len_u64 = buf.get_u64();
let len: usize = len_u64.try_into().map_err(SerError::from_debug)?;
let mut data: Vec<ActorPath> = Vec::with_capacity(len);
for _i in 0..len {
let path = ActorPath::deserialise(buf)?;
data.push(path);
}
Ok(UpdateProcesses(data))
}
}
#[derive(ComponentDefinition)]
struct BootstrapServer {
ctx: ComponentContext<Self>,
processes: HashSet<ActorPath>,
}
impl BootstrapServer {
fn new() -> Self {
BootstrapServer {
ctx: ComponentContext::uninitialised(),
processes: HashSet::new(),
}
}
fn broadcast_processess(&self) -> Handled {
let procs: Vec<ActorPath> = self.processes.iter().cloned().collect();
let msg = UpdateProcesses(procs);
self.processes.iter().for_each(|process| {
process
.tell_serialised(msg.clone(), self)
.unwrap_or_else(|e| warn!(self.log(), "Error during serialisation: {}", e));
});
Handled::Ok
}
}
ignore_lifecycle!(BootstrapServer);
impl NetworkActor for BootstrapServer {
type Deserialiser = ZstSerialiser<CheckIn>;
type Message = CheckIn;
fn receive(&mut self, source: Option<ActorPath>, _msg: Self::Message) -> Handled {
if let Some(process) = source {
if self.processes.insert(process) {
self.broadcast_processess()
} else {
Handled::Ok
}
} else {
Handled::Ok
}
}
}
#[derive(ComponentDefinition)]
struct EventualLeaderElector {
ctx: ComponentContext<Self>,
omega_port: ProvidedPort<EventualLeaderDetection>,
bootstrap_server: ActorPath,
processes: Box<[ActorPath]>,
candidates: HashSet<ActorPath>,
period: Duration,
delta: Duration,
timer_handle: Option<ScheduledTimer>,
leader: Option<ActorPath>,
}
impl EventualLeaderElector {
fn new(bootstrap_server: ActorPath) -> Self {
let minimal_period = Duration::from_millis(1);
EventualLeaderElector {
ctx: ComponentContext::uninitialised(),
omega_port: ProvidedPort::uninitialised(),
bootstrap_server,
processes: Vec::new().into_boxed_slice(),
candidates: HashSet::new(),
period: minimal_period,
delta: minimal_period,
timer_handle: None,
leader: None,
}
}
fn select_leader(&mut self) -> Option<ActorPath> {
let mut candidates: Vec<ActorPath> = self.candidates.drain().collect();
candidates.sort_unstable();
candidates.reverse(); // pick smallest instead of largest
candidates.pop()
}
fn handle_timeout(&mut self, timeout_id: ScheduledTimer) -> Handled {
match self.timer_handle.take() {
Some(timeout) if timeout == timeout_id => {
let new_leader = self.select_leader();
if new_leader != self.leader {
self.period += self.delta;
self.leader = new_leader;
if let Some(ref leader) = self.leader {
self.omega_port.trigger(Trust(leader.clone()));
}
self.cancel_timer(timeout);
let new_timer = self.schedule_periodic(
self.period,
self.period,
EventualLeaderElector::handle_timeout,
);
self.timer_handle = Some(new_timer);
} else {
// just put it back
self.timer_handle = Some(timeout);
}
self.send_heartbeats()
}
Some(_) => Handled::Ok, // just ignore outdated timeouts
None => {
warn!(self.log(), "Got unexpected timeout: {:?}", timeout_id);
Handled::Ok
} // can happen during restart or teardown
}
}
fn send_heartbeats(&self) -> Handled {
self.processes.iter().for_each(|process| {
process.tell((Heartbeat, Serde), self);
});
Handled::Ok
}
}
impl ComponentLifecycle for EventualLeaderElector {
fn on_start(&mut self) -> Handled {
self.bootstrap_server.tell((CheckIn, &CHECK_IN_SER), self);
self.period = self.ctx.config()["omega"]["initial-period"]
.as_duration()
.expect("initial period");
self.delta = self.ctx.config()["omega"]["delta"]
.as_duration()
.expect("delta");
let timeout = self.schedule_periodic(
self.period,
self.period,
EventualLeaderElector::handle_timeout,
);
self.timer_handle = Some(timeout);
Handled::Ok
}
fn on_stop(&mut self) -> Handled {
if let Some(timeout) = self.timer_handle.take() {
self.cancel_timer(timeout);
}
Handled::Ok
}
fn on_kill(&mut self) -> Handled {
self.on_stop()
}
}
// Doesn't have any requests
ignore_requests!(EventualLeaderDetection, EventualLeaderElector);
impl Actor for EventualLeaderElector {
type Message = Never;
fn receive_local(&mut self, _msg: Self::Message) -> Handled {
unreachable!();
}
fn receive_network(&mut self, msg: NetMessage) -> Handled {
let sender = msg.sender;
match_deser! {
(msg.data) {
msg(_heartbeat): Heartbeat [using Serde] => {
self.candidates.insert(sender);
},
msg(update): UpdateProcesses => {
let UpdateProcesses(processes) = update;
info!(
self.log(),
"Received new process set with {} processes",
processes.len()
);
self.processes = processes.into_boxed_slice();
},
}
}
Handled::Ok
}
}
pub fn main() {
let args: Vec<String> = std::env::args().collect();
match args.len() {
2 => {
let bootstrap_port: u16 = args[1].parse().expect("port number");
let bootstrap_socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), bootstrap_port);
let system = run_server(bootstrap_socket);
system.await_termination(); // gotta quit it from command line
}
3 => {
let bootstrap_port: u16 = args[1].parse().expect("port number");
let bootstrap_socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), bootstrap_port);
let client_port: u16 = args[2].parse().expect("port number");
let client_socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), client_port);
let system = run_client(bootstrap_socket, client_socket);
system.await_termination(); // gotta quit it from command line
}
x => panic!("Expected either 1 argument (the port for the bootstrap server to bind on) or 2 arguments (bootstrap server and client port), but got {} instead!", x - 1),
}
}
const BOOTSTRAP_PATH: &str = "bootstrap";
pub fn run_server(socket: SocketAddr) -> KompactSystem {
let mut cfg = KompactConfig::default();
cfg.load_config_file("./application.conf");
cfg.system_components(DeadletterBox::new, NetworkConfig::new(socket).build());
let system = cfg.build().expect("KompactSystem");
let (bootstrap, bootstrap_registration) = system.create_and_register(BootstrapServer::new);
let bootstrap_service_registration = system.register_by_alias(&bootstrap, BOOTSTRAP_PATH);
let _bootstrap_unique = bootstrap_registration
.wait_expect(Duration::from_millis(1000), "bootstrap never registered");
let bootstrap_service = bootstrap_service_registration
.wait_expect(Duration::from_millis(1000), "bootstrap never registered");
system.start(&bootstrap);
let printer = system.create(TrustPrinter::new);
let (detector, registration) =
system.create_and_register(|| EventualLeaderElector::new(bootstrap_service));
biconnect_components::<EventualLeaderDetection, _, _>(&detector, &printer).expect("connection");
let _path = registration.wait_expect(Duration::from_millis(1000), "detector never registered");
system.start(&printer);
system.start(&detector);
system
}
pub fn run_client(bootstrap_socket: SocketAddr, client_socket: SocketAddr) -> KompactSystem {
let mut cfg = KompactConfig::default();
cfg.load_config_file("./application.conf");
cfg.system_components(
DeadletterBox::new,
NetworkConfig::new(client_socket).build(),
);
let system = cfg.build().expect("KompactSystem");
let bootstrap_service: ActorPath = NamedPath::with_socket(
Transport::Tcp,
bootstrap_socket,
vec![BOOTSTRAP_PATH.into()],
)
.into();
let printer = system.create(TrustPrinter::new);
let (detector, registration) =
system.create_and_register(|| EventualLeaderElector::new(bootstrap_service));
biconnect_components::<EventualLeaderDetection, _, _>(&detector, &printer).expect("connection");
let _path = registration.wait_expect(Duration::from_millis(1000), "detector never registered");
system.start(&printer);
system.start(&detector);
system
}
#[cfg(test)]
mod tests {
use super::*;
const SERVER_SOCKET: &str = "127.0.0.1:12345";
const CLIENT_SOCKET: &str = "127.0.0.1:0";
#[test]
fn test_bootstrapping_serialisation() {
let server_socket: SocketAddr = SERVER_SOCKET.parse().unwrap();
let server_system = run_server(server_socket);
let client_socket: SocketAddr = CLIENT_SOCKET.parse().unwrap();
let mut clients_systems: Vec<KompactSystem> = (0..3)
.map(|_i| run_client(server_socket, client_socket))
.collect();
// let them settle
std::thread::sleep(Duration::from_millis(1000));
// shut down systems one by one
for sys in clients_systems.drain(..) {
std::thread::sleep(Duration::from_millis(1000));
sys.shutdown().expect("shutdown");
}
std::thread::sleep(Duration::from_millis(1000));
server_system.shutdown().expect("shutdown");
}
}
Serialisable
Since the previous example was somewhat trivial, we will do a slightly trickier one for the Serialisable example. We will make the UpdateProcesses type both Serialisable and Deserialiser<UpdateProcesses>. This type contains a vector of ActorPath instances, which we must handle correctly. We will reuse the Serialisable and Deserialiser<ActorPath> implementations that are already provided for the ActorPath type.
#![allow(clippy::unused_unit)]
use kompact::{prelude::*, serde_serialisers::*};
use kompact_examples::trusting::*;
use std::{
collections::HashSet,
convert::TryInto,
net::{IpAddr, Ipv4Addr, SocketAddr},
time::Duration,
};
struct ZstSerialiser<T>(T)
where
T: Send + Sync + Default + Copy + SerialisationId;
impl<T> Serialiser<T> for &ZstSerialiser<T>
where
T: Send + Sync + Default + Copy + SerialisationId,
{
fn ser_id(&self) -> SerId {
T::SER_ID
}
fn size_hint(&self) -> Option<usize> {
Some(0)
}
fn serialise(&self, _v: &T, _buf: &mut dyn BufMut) -> Result<(), SerError> {
Ok(())
}
}
impl<T> Deserialiser<T> for ZstSerialiser<T>
where
T: Send + Sync + Default + Copy + SerialisationId,
{
const SER_ID: SerId = T::SER_ID;
fn deserialise(_buf: &mut dyn Buf) -> Result<T, SerError> {
Ok(T::default())
}
}
#[derive(Debug, Clone, Copy, Default)]
struct CheckIn;
impl SerialisationId for CheckIn {
const SER_ID: SerId = 2345;
}
static CHECK_IN_SER: ZstSerialiser<CheckIn> = ZstSerialiser(CheckIn);
#[derive(Debug, Clone)]
struct UpdateProcesses(Vec<ActorPath>);
impl Serialisable for UpdateProcesses {
fn ser_id(&self) -> SerId {
Self::SER_ID
}
fn size_hint(&self) -> Option<usize> {
let procs_size = self.0.len() * 23; // 23 bytes is the size of a unique actor path
Some(8 + procs_size)
}
fn serialise(&self, buf: &mut dyn BufMut) -> Result<(), SerError> {
let len = self.0.len() as u64;
buf.put_u64(len);
for path in self.0.iter() {
path.serialise(buf)?;
}
Ok(())
}
fn local(self: Box<Self>) -> Result<Box<dyn Any + Send>, Box<dyn Serialisable>> {
Ok(self)
}
}
impl Deserialiser<UpdateProcesses> for UpdateProcesses {
const SER_ID: SerId = 3456;
fn deserialise(buf: &mut dyn Buf) -> Result<UpdateProcesses, SerError> {
let len_u64 = buf.get_u64();
let len: usize = len_u64.try_into().map_err(SerError::from_debug)?;
let mut data: Vec<ActorPath> = Vec::with_capacity(len);
for _i in 0..len {
let path = ActorPath::deserialise(buf)?;
data.push(path);
}
Ok(UpdateProcesses(data))
}
}
#[derive(ComponentDefinition)]
struct BootstrapServer {
ctx: ComponentContext<Self>,
processes: HashSet<ActorPath>,
}
impl BootstrapServer {
fn new() -> Self {
BootstrapServer {
ctx: ComponentContext::uninitialised(),
processes: HashSet::new(),
}
}
fn broadcast_processess(&self) -> Handled {
let procs: Vec<ActorPath> = self.processes.iter().cloned().collect();
let msg = UpdateProcesses(procs);
self.processes.iter().for_each(|process| {
process
.tell_serialised(msg.clone(), self)
.unwrap_or_else(|e| warn!(self.log(), "Error during serialisation: {}", e));
});
Handled::Ok
}
}
ignore_lifecycle!(BootstrapServer);
impl NetworkActor for BootstrapServer {
type Deserialiser = ZstSerialiser<CheckIn>;
type Message = CheckIn;
fn receive(&mut self, source: Option<ActorPath>, _msg: Self::Message) -> Handled {
if let Some(process) = source {
if self.processes.insert(process) {
self.broadcast_processess()
} else {
Handled::Ok
}
} else {
Handled::Ok
}
}
}
#[derive(ComponentDefinition)]
struct EventualLeaderElector {
ctx: ComponentContext<Self>,
omega_port: ProvidedPort<EventualLeaderDetection>,
bootstrap_server: ActorPath,
processes: Box<[ActorPath]>,
candidates: HashSet<ActorPath>,
period: Duration,
delta: Duration,
timer_handle: Option<ScheduledTimer>,
leader: Option<ActorPath>,
}
impl EventualLeaderElector {
fn new(bootstrap_server: ActorPath) -> Self {
let minimal_period = Duration::from_millis(1);
EventualLeaderElector {
ctx: ComponentContext::uninitialised(),
omega_port: ProvidedPort::uninitialised(),
bootstrap_server,
processes: Vec::new().into_boxed_slice(),
candidates: HashSet::new(),
period: minimal_period,
delta: minimal_period,
timer_handle: None,
leader: None,
}
}
fn select_leader(&mut self) -> Option<ActorPath> {
let mut candidates: Vec<ActorPath> = self.candidates.drain().collect();
candidates.sort_unstable();
candidates.reverse(); // pick smallest instead of largest
candidates.pop()
}
fn handle_timeout(&mut self, timeout_id: ScheduledTimer) -> Handled {
match self.timer_handle.take() {
Some(timeout) if timeout == timeout_id => {
let new_leader = self.select_leader();
if new_leader != self.leader {
self.period += self.delta;
self.leader = new_leader;
if let Some(ref leader) = self.leader {
self.omega_port.trigger(Trust(leader.clone()));
}
self.cancel_timer(timeout);
let new_timer = self.schedule_periodic(
self.period,
self.period,
EventualLeaderElector::handle_timeout,
);
self.timer_handle = Some(new_timer);
} else {
// just put it back
self.timer_handle = Some(timeout);
}
self.send_heartbeats()
}
Some(_) => Handled::Ok, // just ignore outdated timeouts
None => {
warn!(self.log(), "Got unexpected timeout: {:?}", timeout_id);
Handled::Ok
} // can happen during restart or teardown
}
}
fn send_heartbeats(&self) -> Handled {
self.processes.iter().for_each(|process| {
process.tell((Heartbeat, Serde), self);
});
Handled::Ok
}
}
impl ComponentLifecycle for EventualLeaderElector {
fn on_start(&mut self) -> Handled {
self.bootstrap_server.tell((CheckIn, &CHECK_IN_SER), self);
self.period = self.ctx.config()["omega"]["initial-period"]
.as_duration()
.expect("initial period");
self.delta = self.ctx.config()["omega"]["delta"]
.as_duration()
.expect("delta");
let timeout = self.schedule_periodic(
self.period,
self.period,
EventualLeaderElector::handle_timeout,
);
self.timer_handle = Some(timeout);
Handled::Ok
}
fn on_stop(&mut self) -> Handled {
if let Some(timeout) = self.timer_handle.take() {
self.cancel_timer(timeout);
}
Handled::Ok
}
fn on_kill(&mut self) -> Handled {
self.on_stop()
}
}
// Doesn't have any requests
ignore_requests!(EventualLeaderDetection, EventualLeaderElector);
impl Actor for EventualLeaderElector {
type Message = Never;
fn receive_local(&mut self, _msg: Self::Message) -> Handled {
unreachable!();
}
fn receive_network(&mut self, msg: NetMessage) -> Handled {
let sender = msg.sender;
match_deser! {
(msg.data) {
msg(_heartbeat): Heartbeat [using Serde] => {
self.candidates.insert(sender);
},
msg(update): UpdateProcesses => {
let UpdateProcesses(processes) = update;
info!(
self.log(),
"Received new process set with {} processes",
processes.len()
);
self.processes = processes.into_boxed_slice();
},
}
}
Handled::Ok
}
}
pub fn main() {
let args: Vec<String> = std::env::args().collect();
match args.len() {
2 => {
let bootstrap_port: u16 = args[1].parse().expect("port number");
let bootstrap_socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), bootstrap_port);
let system = run_server(bootstrap_socket);
system.await_termination(); // gotta quit it from command line
}
3 => {
let bootstrap_port: u16 = args[1].parse().expect("port number");
let bootstrap_socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), bootstrap_port);
let client_port: u16 = args[2].parse().expect("port number");
let client_socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), client_port);
let system = run_client(bootstrap_socket, client_socket);
system.await_termination(); // gotta quit it from command line
}
x => panic!("Expected either 1 argument (the port for the bootstrap server to bind on) or 2 arguments (bootstrap server and client port), but got {} instead!", x - 1),
}
}
const BOOTSTRAP_PATH: &str = "bootstrap";
pub fn run_server(socket: SocketAddr) -> KompactSystem {
let mut cfg = KompactConfig::default();
cfg.load_config_file("./application.conf");
cfg.system_components(DeadletterBox::new, NetworkConfig::new(socket).build());
let system = cfg.build().expect("KompactSystem");
let (bootstrap, bootstrap_registration) = system.create_and_register(BootstrapServer::new);
let bootstrap_service_registration = system.register_by_alias(&bootstrap, BOOTSTRAP_PATH);
let _bootstrap_unique = bootstrap_registration
.wait_expect(Duration::from_millis(1000), "bootstrap never registered");
let bootstrap_service = bootstrap_service_registration
.wait_expect(Duration::from_millis(1000), "bootstrap never registered");
system.start(&bootstrap);
let printer = system.create(TrustPrinter::new);
let (detector, registration) =
system.create_and_register(|| EventualLeaderElector::new(bootstrap_service));
biconnect_components::<EventualLeaderDetection, _, _>(&detector, &printer).expect("connection");
let _path = registration.wait_expect(Duration::from_millis(1000), "detector never registered");
system.start(&printer);
system.start(&detector);
system
}
pub fn run_client(bootstrap_socket: SocketAddr, client_socket: SocketAddr) -> KompactSystem {
let mut cfg = KompactConfig::default();
cfg.load_config_file("./application.conf");
cfg.system_components(
DeadletterBox::new,
NetworkConfig::new(client_socket).build(),
);
let system = cfg.build().expect("KompactSystem");
let bootstrap_service: ActorPath = NamedPath::with_socket(
Transport::Tcp,
bootstrap_socket,
vec![BOOTSTRAP_PATH.into()],
)
.into();
let printer = system.create(TrustPrinter::new);
let (detector, registration) =
system.create_and_register(|| EventualLeaderElector::new(bootstrap_service));
biconnect_components::<EventualLeaderDetection, _, _>(&detector, &printer).expect("connection");
let _path = registration.wait_expect(Duration::from_millis(1000), "detector never registered");
system.start(&printer);
system.start(&detector);
system
}
#[cfg(test)]
mod tests {
use super::*;
const SERVER_SOCKET: &str = "127.0.0.1:12345";
const CLIENT_SOCKET: &str = "127.0.0.1:0";
#[test]
fn test_bootstrapping_serialisation() {
let server_socket: SocketAddr = SERVER_SOCKET.parse().unwrap();
let server_system = run_server(server_socket);
let client_socket: SocketAddr = CLIENT_SOCKET.parse().unwrap();
let mut clients_systems: Vec<KompactSystem> = (0..3)
.map(|_i| run_client(server_socket, client_socket))
.collect();
// let them settle
std::thread::sleep(Duration::from_millis(1000));
// shut down systems one by one
for sys in clients_systems.drain(..) {
std::thread::sleep(Duration::from_millis(1000));
sys.shutdown().expect("shutdown");
}
std::thread::sleep(Duration::from_millis(1000));
server_system.shutdown().expect("shutdown");
}
}
It would be easy to just iterate through the vector during serialisation and write one path at a time using its own serialise(...) implementation. But during deserialisation we need to know how many paths to take out of the buffer. We could simply keep taking until the buffer refuses us, but that kind of approach often makes it difficult to detect bugs in one's serialiser implementations. We will instead write the length of the vector before we serialise the actor paths, and during deserialisation we will read it first and allocate a vector of the appropriate size. If we were concerned about the space the length field wastes, we could use a more compact integer encoding, as Protocol Buffers does, for example. For now we don't care so much and simply write a full u64. Those extra 8 bytes make little difference compared to the size of a bunch of actor paths.
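To make the "better integer encoding" remark concrete, the following is a minimal sketch of the LEB128-style varint encoding Protocol Buffers uses, written in plain Rust against byte slices rather than Kompact's Buf/BufMut traits; it is illustrative only and not part of Kompact's API:

```rust
// Minimal LEB128-style varint encoding, as used by Protocol Buffers.
// Illustrative sketch only; Kompact's serialisers use Buf/BufMut instead.

/// Encode a u64 into 1-10 bytes, 7 payload bits per byte.
fn encode_varint(mut value: u64, buf: &mut Vec<u8>) {
    loop {
        let byte = (value & 0x7f) as u8;
        value >>= 7;
        if value == 0 {
            buf.push(byte); // high bit clear: last byte
            return;
        }
        buf.push(byte | 0x80); // high bit set: more bytes follow
    }
}

/// Decode a varint, returning the value and the number of bytes consumed.
fn decode_varint(buf: &[u8]) -> Option<(u64, usize)> {
    let mut value = 0u64;
    for (i, &byte) in buf.iter().enumerate().take(10) {
        value |= u64::from(byte & 0x7f) << (7 * i);
        if byte & 0x80 == 0 {
            return Some((value, i + 1));
        }
    }
    None // truncated or over-long input
}

fn main() {
    let mut buf = Vec::new();
    encode_varint(3, &mut buf); // a small length fits in one byte
    encode_varint(300, &mut buf); // 300 needs two bytes
    assert_eq!(buf.len(), 3);
    let (first, used) = decode_varint(&buf).unwrap();
    assert_eq!((first, used), (3, 1));
    let (second, _) = decode_varint(&buf[used..]).unwrap();
    assert_eq!(second, 300);
    println!("ok");
}
```

For the small process sets in this example, a length of this form would usually cost a single byte instead of eight, at the price of slightly more complex serialiser code.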
We don't really have a good formula for size_hint(...) here. It is basically 8 plus the sum of the size hints for each actor path. In this case, we know we are pretty much only going to send unique actor paths in this set, so we can assume each one is 23 bytes long. If that assumption turns out to be wrong in practice, it will simply cause some additional allocations during serialisation. In general, a developer has to trade off the time spent calculating accurate size hints against the time spent on potential reallocations. We could also simply return a large number such as 1024 and accept that we may often waste much of the allocated space. Application requirements (read: benchmarking) will determine which is the best choice in a particular scenario.
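The trade-off above is easy to see in plain Rust, independently of Kompact: an underestimated reservation is never an error, it just forces the growing buffer to reallocate. This sketch counts those regrowths:

```rust
// Demonstrates the cost model behind size hints: a too-small initial
// reservation is not an error, it just forces the buffer to regrow.
fn fill(initial_capacity: usize, bytes_to_write: usize) -> (Vec<u8>, usize) {
    let mut buf = Vec::with_capacity(initial_capacity);
    let mut regrowths = 0;
    for i in 0..bytes_to_write {
        if buf.len() == buf.capacity() {
            regrowths += 1; // the next push must reallocate
        }
        buf.push(i as u8);
    }
    (buf, regrowths)
}

fn main() {
    // An accurate hint: one allocation up front, no regrowth.
    let (_, regrowths) = fill(1024, 1024);
    assert_eq!(regrowths, 0);

    // An underestimate: the data still arrives intact,
    // at the price of a few reallocations along the way.
    let (buf, regrowths) = fill(8, 1024);
    assert_eq!(buf.len(), 1024);
    assert!(regrowths > 0);
    println!("underestimating cost {} regrowths", regrowths);
}
```

A hint that is too large wastes memory instead of time, which is exactly the choice benchmarking should settle.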
Eager Serialisation
As mentioned above, using ActorPath::tell(...) may cause a stack-to-heap move of the data, as it is converted into a boxed trait object for lazy serialisation. This approach avoids expensive serialisation in the case where the ActorPath turns out to be local. However, it is not always the appropriate choice, in particular if serialisation is cheap compared to allocation, or if most actor paths are not going to be local anyway. For these cases, Kompact also allows eager serialisation. To serialise an instance eagerly, on the sending component's thread, use ActorPath::tell_serialised(...). It works essentially the same as ActorPath::tell(...), but serialises the data into a buffer pool local to the sending component before sending it off to the dispatcher. If the dispatcher then determines that the actor path was actually local, the data simply has to be deserialised again, as if it had arrived remotely. If the target is remote, however, the data can be written directly into the appropriate channel.
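The trade-off between the two strategies can be modelled in a few lines of plain Rust. This is a toy model, not Kompact's actual dispatcher or buffer types; it only illustrates why lazy serialisation wins for local targets while eager serialisation pays a bytes round-trip there:

```rust
use std::any::Any;

// A toy model of the two strategies; Kompact's real dispatcher and
// buffer types are more involved, this only illustrates the trade-off.
enum Outgoing {
    Lazy(Box<dyn Any>), // tell(...): heap-box now, serialise later if remote
    Eager(Vec<u8>),     // tell_serialised(...): serialise now on the sender
}

fn serialise(msg: &u64) -> Vec<u8> {
    msg.to_be_bytes().to_vec()
}

fn deserialise(bytes: &[u8]) -> u64 {
    let mut arr = [0u8; 8];
    arr.copy_from_slice(bytes);
    u64::from_be_bytes(arr)
}

/// What the dispatcher does once it learns that the target is local.
fn deliver_locally(out: Outgoing) -> u64 {
    match out {
        // Lazy wins for local targets: no serialisation ever happens.
        Outgoing::Lazy(boxed) => *boxed.downcast::<u64>().unwrap(),
        // Eager pays a round-trip through bytes for local targets.
        Outgoing::Eager(bytes) => deserialise(&bytes),
    }
}

fn main() {
    let lazy = Outgoing::Lazy(Box::new(42u64));
    let eager = Outgoing::Eager(serialise(&42u64));
    assert_eq!(deliver_locally(lazy), 42);
    assert_eq!(deliver_locally(eager), 42);
    println!("both strategies deliver the same message");
}
```

For a remote target the situation is reversed: the eager variant already holds the bytes to write to the channel, while the lazy variant must serialise on the dispatcher's thread first.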
Example
To show a simple use of this approach, we use eager serialisation in the BootstrapServer::broadcast_processess function:
fn broadcast_processess(&self) -> Handled {
let procs: Vec<ActorPath> = self.processes.iter().cloned().collect();
let msg = UpdateProcesses(procs);
self.processes.iter().for_each(|process| {
process
.tell_serialised(msg.clone(), self)
.unwrap_or_else(|e| warn!(self.log(), "Error during serialisation: {}", e));
});
Handled::Ok
}
As you can see above, another feature of eager serialisation is that you can (and must) deal with serialisation errors, which you have no control over with lazy serialisation. In particular, memory pressure may prevent your local buffer pool from allocating a buffer large enough to fit your data at the time of serialisation. In this case you will get a SerError::BufferError and must decide how to handle it. You could either retry at a later time, or switch to lazy serialisation and hope the network's buffers still have capacity (as they likely have priority over component-local buffer pools).
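The retry-or-fall-back decision described above can be sketched with stand-in functions. This is a hedged model only: the stand-ins below are hypothetical and do not share signatures with the real ActorPath::tell_serialised(...) and ActorPath::tell(...) calls they represent:

```rust
// A sketch of the fallback strategy described above, using stand-in
// functions; the real calls would be ActorPath::tell_serialised(...)
// and ActorPath::tell(...), whose signatures differ from these.

#[derive(Debug, PartialEq)]
enum SendPath {
    Eager,
    LazyFallback,
}

/// Pretend eager send that fails when the local pool cannot provide
/// a buffer of `needed` bytes (standing in for SerError::BufferError).
fn try_eager(needed: usize, pool_free: usize) -> Result<(), &'static str> {
    if needed <= pool_free {
        Ok(())
    } else {
        Err("BufferError: insufficient buffer space")
    }
}

/// On a buffer error, fall back to lazy serialisation and hope the
/// network layer's own buffers still have capacity.
fn send_with_fallback(needed: usize, pool_free: usize) -> SendPath {
    match try_eager(needed, pool_free) {
        Ok(()) => SendPath::Eager,
        Err(_) => SendPath::LazyFallback,
    }
}

fn main() {
    assert_eq!(send_with_fallback(100, 1024), SendPath::Eager);
    assert_eq!(send_with_fallback(4096, 1024), SendPath::LazyFallback);
    println!("fallback logic behaves as described");
}
```

Whether to fall back or retry later depends on the message: falling back preserves ordering at the cost of a heap allocation, while retrying delays delivery.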
Network Buffers
Kompact uses a BufferPool system to serialise network messages. This section briefly describes the BufferPools and how they can be configured with different parameters.
Before we begin describing the network buffers, we remind the reader that there are two different methods for sending messages over the network in Kompact:
- Lazy serialisation: dst.tell(msg: M, from: S);
- Eager serialisation: dst.tell_serialised(msg: M, from: &self);
With lazy serialisation, the Actor moves the data to the heap and transfers it unserialised to the NetworkDispatcher, which later serialises the message into its (the NetworkDispatcher's) own buffers.
Eager serialisation serialises the data immediately into the Actor's buffers and then transfers ownership of the serialised data to the NetworkDispatcher.
Lazy serialisation may fail for two reasons: a serialisation error, or no available buffers. Both failures are invisible to the actor that initiated the send, and both lead to the message being lost.
How the Buffer Pools work
Buffer Pool locations
In a Kompact system where many actors use eager serialisation, there will be many BufferPool instances. If the actors in the system only use lazy serialisation, there will be a single pool, owned by the NetworkThread, for serialising and receiving data.
BufferPool, BufferChunk, and ChunkLease
Each BufferPool (pool) consists of more than one BufferChunk (chunk). A chunk is the concrete memory area that data is serialised into. Many messages may be serialised into a single chunk, and discrete slices of a chunk (i.e. individual messages) can be extracted and sent to other threads/actors through the ChunkLease (lease) smart pointer. When a chunk runs out of space, it is locked and returned to the pool. If and only if all outstanding leases created from a chunk have been dropped may the chunk be unlocked and reused, or deallocated.
When a pool is created it will pre-allocate a configurable number of chunks and will attempt to reuse those for as long as possible; only when it needs to will it allocate more chunks, up to a configurable maximum number of chunks.
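The reuse rule can be modelled with plain reference counting: a lease keeps its chunk alive, and the chunk may only be recycled once every lease is gone. This is a toy model using Arc, not kompact's actual BufferChunk/ChunkLease types:

```rust
use std::sync::Arc;

// Toy model: the Arc's strong count stands in for the number of
// outstanding ChunkLeases pointing into a BufferChunk.
struct Chunk {
    data: Arc<Vec<u8>>,
}

impl Chunk {
    fn new(size: usize) -> Self {
        Chunk { data: Arc::new(vec![0u8; size]) }
    }

    // Hand out a lease on the chunk's memory.
    fn lease(&self) -> Arc<Vec<u8>> {
        Arc::clone(&self.data)
    }

    // The pool may unlock and reuse the chunk iff no lease is left.
    fn can_reuse(&self) -> bool {
        Arc::strong_count(&self.data) == 1
    }
}
```
Dropping the last lease is what makes the chunk eligible for reuse, exactly as described for the real pool above.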
BufferPool interface
Actors access their pool through the EncodeBuffer wrapper, which maintains a single active chunk at a time and automatically swaps the active buffer with the local BufferPool when necessary.
The method tell_serialised(msg, &self) automatically uses the EncodeBuffer interface, so that users of Kompact do not need to use the pool's interfaces directly (which is why the method requires a self reference).
BufferPool initialization
Actors initialize their local buffers automatically on the first invocation of tell_serialised(...). If an Actor never invokes the method, it will not allocate any buffers.
An Actor may also explicitly initialize its local BufferPool without sending a message, by calling self.ctx.borrow().init_buffers(None, None);.
BufferConfig
Parameters
There are four configurable parameters in the BufferConfig:
- chunk_size: the size (in bytes) of the BufferChunks. Default value is 128KB.
- initial_chunk_count: how many BufferChunks the BufferPool will pre-allocate. Default value is 2.
- max_chunk_count: the maximum number of BufferChunks the BufferPool may have allocated simultaneously. Default value is 1000.
- encode_buf_min_free_space: when an Actor begins serialising a message, the EncodeBuffer compares how much space (in bytes) is left in the active chunk against this parameter; if there is less free space, the active chunk is replaced with a new one from the pool before the serialisation continues. Default value is 64 bytes.
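These parameters bound the memory a single pool may consume: the worst case is max_chunk_count × chunk_size, with every permitted chunk allocated and none yet reclaimed. A small sketch of that arithmetic (assuming "128KB" means 128 KiB; the function name is illustrative):

```rust
/// Worst-case memory footprint (in bytes) of one pool: every
/// permitted chunk allocated and none yet reclaimed.
fn worst_case_pool_bytes(chunk_size: usize, max_chunk_count: usize) -> usize {
    chunk_size * max_chunk_count
}
```
With the defaults this gives 1000 × 131072 bytes = 131072000 bytes, roughly 125 MiB per pool, which is worth keeping in mind for systems with many eagerly-serialising actors.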
Configuring the Buffers
Individual Actor Configuration
If no BufferConfig is specified, Kompact uses the default settings for all BufferPools. Actors may be configured with individual BufferConfigs through the init_buffers(Some(config), None) method. It is important that this call is made before any call to tell_serialised(...). For example, the on_start() function of the ComponentLifecycle may be used to ensure this, as in the following example:
impl ComponentLifecycle for CustomBufferConfigActor {
fn on_start(&mut self) -> Handled {
let mut buffer_config = BufferConfig::default();
buffer_config.encode_buf_min_free_space(128);
buffer_config.max_chunk_count(5);
buffer_config.initial_chunk_count(4);
buffer_config.chunk_size(256*1024);
self.ctx.borrow().init_buffers(Some(buffer_config), None);
Handled::Ok
}
...
}
Configuring All Actors
If a programmer wishes for all actors to use the same BufferConfig
configuration, a Hocon string can be inserted into the KompactConfig
or loaded from a Hocon-file (see configuration chapter on loading configurations), for example:
let mut cfg = KompactConfig::new();
cfg.load_config_str(
r#"{
buffer_config {
chunk_size: "256KB",
initial_chunk_count: 3,
max_chunk_count: 4,
encode_min_remaining: "20B",
}
}"#,
);
...
let system = cfg.build().expect("KompactSystem");
If a BufferConfig is loaded into the system's KompactConfig, then all actors will use that configuration instead of the default BufferConfig; however, individual actors may still override it by using the init_buffers(...) method.
Configuring the NetworkDispatcher and NetworkThread
The NetworkDispatcher and NetworkThread are configured separately from the Actors and use their buffers for lazy serialisation and for receiving data from the network. To configure their buffers, the NetworkConfig may be created using the method ::with_buffer_config(...), as in the example below:
let mut cfg = KompactConfig::new();
let mut network_buffer_config = BufferConfig::default();
network_buffer_config.chunk_size(512);
network_buffer_config.initial_chunk_count(2);
network_buffer_config.max_chunk_count(3);
network_buffer_config.encode_buf_min_free_space(10);
cfg.system_components(DeadletterBox::new, {
NetworkConfig::with_buffer_config(
"127.0.0.1:0".parse().expect("Address should work"),
network_buffer_config,
)
.build()
});
let system = cfg.build().expect("KompactSystem");
BufferConfig Validation
BufferConfig implements the method validate(), which panics if the set of parameters is invalid. It is invoked whenever a BufferPool is created from the given configuration. The validation checks that the following conditions hold:
- chunk_size > encode_buf_min_free_space
- chunk_size > 127
- max_chunk_count >= initial_chunk_count
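The three rules can be expressed as a single predicate. The struct below merely mirrors the parameter names from this chapter for illustration; it is not kompact's BufferConfig type:

```rust
// Mirror of the four BufferConfig parameters, for illustration only.
struct ConfigSketch {
    chunk_size: usize,
    initial_chunk_count: usize,
    max_chunk_count: usize,
    encode_buf_min_free_space: usize,
}

impl ConfigSketch {
    // The same three conditions that `validate()` checks.
    fn is_valid(&self) -> bool {
        self.chunk_size > self.encode_buf_min_free_space
            && self.chunk_size > 127
            && self.max_chunk_count >= self.initial_chunk_count
    }
}
```
Note that the default values (128 KB chunks, 2 initial chunks, 1000 max chunks, 64 bytes minimum free space) satisfy all three conditions.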
The method init_buffers(...)
takes two Option
arguments, of which the second argument has not been covered. The second argument allows users of Kompact to specify a CustomAllocator
: a poorly tested, experimental feature which is left undocumented for the time being.
Network Status Port
The NetworkDispatcher provides a NetworkStatusPort which any Component may use to subscribe to information about the network, or to make requests to the network layer.
Using the Network Status Port
To subscribe to the port, a component must implement Require for NetworkStatusPort. The system must be set up with a NetworkConfig to enable networking. When the component is instantiated, it must be explicitly connected to the NetworkStatusPort. KompactSystem exposes the convenience method connect_network_status_port<C>(&self, required: &mut RequiredPort<NetworkStatusPort>) to subscribe a component to the port, and it may be used as in the example below.
# use kompact::prelude::*;
# use kompact::net::net_test_helpers::NetworkStatusCounter;
let mut cfg = KompactConfig::new();
cfg.system_components(DeadletterBox::new, {
let net_config = NetworkConfig::new("127.0.0.1:0".parse().expect(""));
net_config.build()
});
let system = cfg.build().expect("KompactSystem");
let status_counter = system.create(NetworkStatusCounter::new);
status_counter.on_definition(|c| {
    system.connect_network_status_port(&mut c.network_status_port);
});
Network Status Indications
NetworkStatus events are the Indications sent by the dispatcher to the subscribed components. The event is an enum with the following variants:
- ConnectionEstablished(SystemPath): indicates that a connection has been established to the remote system.
- ConnectionLost(SystemPath): indicates that a connection to the remote system has been lost. The system will automatically try to recover the connection for a configurable number of retries. The end of the automatic retries is signalled by a ConnectionDropped message.
- ConnectionDropped(SystemPath): indicates that a connection has been dropped, no more automatic retries to re-establish the connection will be attempted, and all queued messages have been dropped.
- ConnectionClosed(SystemPath): indicates that a connection has been gracefully closed; no automatic retries will be attempted.
- SoftConnectionLimitReached: indicates that the SoftConnectionLimit has been reached; the NetworkThread will gracefully close the least-recently-used connection, and will continue to (gracefully) evict the LRU connection whenever new connection attempts (incoming or outgoing) are made.
- HardConnectionLimitReached: indicates that the HardConnectionLimit has been reached. New connection attempts will be discarded immediately until the number of connections is lower.
- CriticalNetworkFailure: the NetworkThread has panicked and will be restarted; any number of incoming and outgoing messages may have been lost.
- BlockedSystem(SystemPath): indicates that a system has been blocked.
- BlockedIp(IpAddr): indicates that an IpAddr has been blocked.
- BlockedIpNet(IpNet): indicates that an IpNet has been blocked.
- AllowedSystem(SystemPath): indicates that a system has been unblocked after previously being blocked.
- AllowedIp(IpAddr): indicates that an IpAddr has been unblocked after previously being blocked.
- AllowedIpNet(IpNet): indicates that an IpNet has been unblocked after previously being blocked.
The networking layer distinguishes between gracefully closed connections and lost connections. A lost connection triggers reconnection attempts a configurable number of times before the connection is completely dropped. Outgoing messages for a lost connection are retained until the connection is dropped.
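For a subscriber, the practical difference between these variants is what it may assume about queued messages. A simplified decision function (the enum mirrors only the connection-related variants, with their SystemPath payloads omitted, and the function name is illustrative):

```rust
// Connection-related NetworkStatus variants, payloads omitted.
#[derive(Debug)]
enum Status {
    ConnectionEstablished,
    ConnectionLost,    // retries still in progress, messages retained
    ConnectionDropped, // retries exhausted, queued messages discarded
    ConnectionClosed,  // graceful close, no retries
}

// A subscriber only needs to re-send in-flight state once queued
// messages have actually been discarded, i.e. on a drop; after a
// mere loss, retained messages are still delivered if the
// automatic recovery succeeds.
fn must_resend_state(status: &Status) -> bool {
    matches!(status, Status::ConnectionDropped)
}
```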
Network Status Requests
The NetworkDispatcher may respond to NetworkStatusRequests sent by Components onto the channel. The request is an enum with the following variants:
- DisconnectSystem(SystemPath): request that the connection to the given system be closed gracefully. The NetworkDispatcher will immediately start a graceful shutdown of the channel if it is currently active.
- ConnectSystem(SystemPath): request that a connection be (re-)established to the given system. Sending this message is the only way a connection may be re-established between systems previously disconnected by a DisconnectSystem request.
- BlockSystem(SystemPath): request that a SystemPath be blocked from this system. An established connection will be dropped, and future attempts to establish a connection by that SystemPath will be dropped.
- BlockIp(IpAddr): request that an IpAddr be blocked. Established connections which become blocked will be dropped, and future attempts to establish a connection from that IpAddr will be dropped.
- BlockIpNet(IpNet): request that an IpNet be blocked. Established connections which become blocked will be dropped, and future attempts to establish connections to the IpNet will be dropped.
- AllowSystem(SystemPath): this acts as an allow-list for SystemPaths. Allowed SystemPaths always take precedence over blocked IpAddrs or IpNets. This is the only way to undo a previously blocked SystemPath.
- AllowIp(IpAddr): request that an IpAddr be unblocked after previously being blocked.
- AllowIpNet(IpNet): request that an IpNet be unblocked after previously being blocked.
Blocking and Allowing
A component that requires NetworkStatusPort can block a specific IP address, network, or SystemPath. By triggering the request BlockIp(IpAddr) or BlockIpNet(IpNet) on the NetworkStatusPort, all connections to the IpAddr (or IpNet) will be dropped, and future attempts to establish a connection will be ignored.
The BlockIp(IpAddr) / AllowIp(IpAddr) / BlockIpNet(IpNet) / AllowIpNet(IpNet) requests are applied in the NetworkThread in the order they are received and produce a single block-list. For example, allowing the IpAddr 10.0.0.1 and then blocking the IpNet 10.0.0.0/24 means that the IpAddr will effectively become blocked. However, applying the same two operations in the reverse order means that the IpAddr will be allowed.
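This order dependence amounts to a last-writer-wins fold over the operations, which can be sketched as follows (string prefixes stand in for proper IpNet containment checks; the type and function names are illustrative):

```rust
// Order-sensitive block-list sketch: each operation covers either a
// single address or a whole prefix ("net"), and for any given
// address the last covering operation wins.
enum Op {
    BlockNet(&'static str), // e.g. "10.0.0." for 10.0.0.0/24
    AllowIp(&'static str),
}

fn is_blocked(addr: &str, ops: &[Op]) -> bool {
    let mut blocked = false;
    for op in ops {
        match op {
            Op::BlockNet(prefix) if addr.starts_with(prefix) => blocked = true,
            Op::AllowIp(ip) if addr == *ip => blocked = false,
            _ => {}
        }
    }
    blocked
}
```
Applying "allow 10.0.0.1, then block 10.0.0.0/24" leaves the address blocked, while the reverse order leaves it allowed, matching the example above.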
The AllowSystem(SystemPath) / BlockSystem(SystemPath) requests are somewhat different, as AllowSystem() maintains a separate allow-list which takes precedence over the IpAddr/IpNet block-list. So if the user first calls AllowSystem(10.0.0.1:8080) and then calls BlockIpNet(10.0.0.0/24), the first AllowSystem will take precedence, and connections to the system with the canonical address (listening ip:port) 10.0.0.1:8080 will remain unaffected.
To undo a previously allowed system, or to block a specific SystemPath, one can use BlockSystem(SystemPath).
When the network thread has successfully blocked or unblocked an IpAddr / IpNet / SystemPath, a BlockedIp / BlockedIpNet / BlockedSystem or AllowedIp / AllowedIpNet / AllowedSystem indication will be triggered on the NetworkStatusPort.
Async/Await Interaction
In addition to providing its own asynchronous APIs as described in the previous sections, Kompact also allows components to interact with Rust’s async/await features in a variety of manners. In particular, Kompact provides three different semantics for this interaction:
- A component can “block” on a future, suspending all other processing until the result of the future is available.
- A component can run a number of futures concurrently with other messages and events, allowing each future safe mutable access to its internal state whenever it is polled.
- A component or Kompact system can spawn futures to run on its executor pool.
The third variant is unremarkable and works like any other futures executor. It is invoked via KompactSystem::spawn(...) or via ComponentDefinition::spawn_off(...). Variants 1 and 2, however, provide novel interactions between an asynchronous API and an actor/component system, so we will describe them in more detail using an example below.
Example
In order to show off the interaction between Kompact components and asynchronous calls, we use the asynchronous DNS resolution API provided by the async-std-resolver crate to build a DNS lookup component. In order to tell the component what to look up, we will read domain names from stdin, send them via ask(...) to the component, and wait for the result to come in, which we then print out. In fact, we will allow multiple concurrent queries to be specified as a comma-separated list, to show off concurrent future interaction in Kompact components.
Messages
The messages we need are very simple: we pass a String representing a single domain name as the request, and we return an already preformatted string with the resolved IPs as the response.
use async_std_resolver::{config, resolver, AsyncStdResolver};
use dialoguer::Input;
use kompact::prelude::*;
use trust_dns_proto::{rr::record_type::RecordType, xfer::dns_request::DnsRequestOptions};

#[derive(Debug)]
struct DnsRequest(String);
#[derive(Debug)]
struct DnsResponse(String);
State
The component’s state is almost as simple: we require the usual component context and an instance of the asynchronous DNS resolver. Since creation of that instance is performed asynchronously by the async-std-resolver library, we won’t have the instance available during component creation, and thus use an Option indicating whether our component has been properly initialised or not.
#[derive(ComponentDefinition)]
struct DnsComponent {
    ctx: ComponentContext<Self>,
    resolver: Option<AsyncStdResolver>,
}
impl DnsComponent {
    pub fn new() -> Self {
        DnsComponent {
            ctx: ComponentContext::uninitialised(),
            resolver: None,
        }
    }
}
Setup
When we create a resolver instance via async_std_resolver::resolver, we actually get back a future that we need to wait for. But our DNS component can’t perform any lookups until this future has completed. Normally, we would have to manually queue up all requests received during that period and replay them once the future completed. Instead, we can “block” on the provided future, causing the component itself to enter a blocked lifecycle state, during which it handles no messages or events. Only when the future’s result is available will the component enter the active state and process other events and messages as normal again.
In order to enter the blocked state, we must return a special variant of the Handled enum, which is obtained from the Handled::block_on(...) method. This method takes the self reference to the component and an asynchronous closure, that is, a closure that produces a future when invoked. This closure is given a single parameter by the Kompact API, which is an access guard object for a mutable component reference; in other words, a special owned struct that can be mutably dereferenced to the current component definition type. This guard object ensures safe mutable access to the current component instance whenever the resulting future is polled, but prevents holding on to actual references over await calls (which would be illegal). It is very important that this guard object is never sent to another thread from within the future. The async closure cannot directly close over the component’s self reference, as the correct lifetime for it cannot be guaranteed. Only references obtained from the special guard object are safe in between await calls.
Having said all that, in our case the async closure very simply awaits the result of the resolver creation and then stores it locally, after which the component unblocks.
impl ComponentLifecycle for DnsComponent {
    fn on_start(&mut self) -> Handled {
        debug!(self.log(), "Starting...");
        Handled::block_on(self, move |mut async_self| async move {
            let resolver = resolver(
                config::ResolverConfig::default(),
                config::ResolverOpts::default(),
            )
            .await
            .expect("failed to connect resolver");
            async_self.resolver = Some(resolver);
            debug!(async_self.log(), "Started!");
        })
    }
    fn on_stop(&mut self) -> Handled {
        drop(self.resolver.take());
        Handled::Ok
    }
    fn on_kill(&mut self) -> Handled {
        self.on_stop()
    }
}
Note: The complicated-looking move |async_self| async move {...} syntax is currently only necessary on stable Rust. On nightly, the much easier async move |async_self| {...} syntax is already available.
Queries
To handle queries we must call lookup(...) on the resolver, which returns a future of a DNS lookup result, which we must await before replying to the actual request. As we want to handle multiple such outstanding lookups in parallel, we can’t simply block on this future as we did before. Instead we want to spawn the future to run locally on the component whenever it is polled, via ComponentDefinition::spawn_local(...). In this way, we have the same advantages as during blocking, but we can handle multiple outstanding requests in parallel. Technically, except for some logging, we do not really need access to the component’s state in this particular case, but we will use it anyway to showcase the API.
Since the result of a DNS query can consist of multiple IP addresses, we construct a single string by formatting them, together with the domain, into an enumerated list. We then return that string as a reply to the original request.
impl Actor for DnsComponent {
    type Message = Ask<DnsRequest, DnsResponse>;
    fn receive_local(&mut self, msg: Self::Message) -> Handled {
        debug!(self.log(), "Got request for domain: {}", msg.request().0);
        if let Some(ref resolver) = self.resolver {
            let query_result_future = resolver.lookup(
                msg.request().0.clone(),
                RecordType::A,
                DnsRequestOptions::default(),
            );
            self.spawn_local(move |async_self| async move {
                let query_result = query_result_future.await.expect("dns query result");
                debug!(
                    async_self.log(),
                    "Got reply for domain: {}",
                    msg.request().0
                );
                let mut results: Vec<String> = Vec::new();
                for (index, ip) in query_result.iter().enumerate() {
                    results.push(format!("{}. {:?}", index, ip));
                }
                let result_string = format!("{}:\n {}", msg.request().0, results.join("\n "));
                msg.reply(DnsResponse(result_string)).expect("reply");
                Handled::Ok
            });
            Handled::Ok
        } else {
            panic!("Component should have been initialised first!")
        }
    }
    fn receive_network(&mut self, _msg: NetMessage) -> Handled {
        unimplemented!("ignore networking");
    }
}
Running
In our main function we want to set up the component, and then read from the command line over and over until the user enters "stop" to end the loop. Each line we read that is not "stop" we simply assume to be a comma-separated list of domain names. We split them apart, remove unnecessary spaces, and send them one by one to the DnsComponent via ask(...). Instead of waiting for each future immediately, we store the response futures until all requests have been sent, and only then do we wait for each of them in order. We could also have waited for them in the order they are replied to; it doesn’t really matter in this case. Only when the last of them has completed do we read input again.
fn main() {
    let system = KompactConfig::default().build().expect("system");
    let dns_comp = system.create(DnsComponent::new);
    let dns_comp_ref = dns_comp.actor_ref().hold().expect("live");
    system.start_notify(&dns_comp).wait();
    println!("System is ready, enter your queries.");
    loop {
        let command = Input::<String>::new().with_prompt(">").interact();
        match command {
            Ok(s) => match s.as_ref() {
                "stop" => break,
                _ => {
                    let mut outstanding = Vec::new();
                    for domain in s.split(',') {
                        let domain = domain.trim();
                        info!(system.logger(), "Sending request for {}", domain);
                        let query_f = dns_comp_ref.ask(DnsRequest(domain.to_string()));
                        outstanding.push(query_f);
                    }
                    for query_f in outstanding {
                        let result = query_f.wait();
                        info!(system.logger(), "Got:\n {}\n", result.0);
                    }
                }
            },
            Err(e) => error!(system.logger(), "Error with input: {}", e),
        }
    }
    system.kill_notify(dns_comp).wait();
    system.shutdown().expect("shutdown");
}
Note: If you have checked out the examples folder and are trying to run from there, you need to specify the concrete binary with:
cargo run --bin dns_resolver
Project Info
While Kompact is primarily being developed at the KTH Royal Institute of Technology and at RISE Research Institutes of Sweden in Stockholm, Sweden, we do wish to thank all contributors.
Releases
Kompact releases are hosted on crates.io.
API Documentation
Kompact API docs are hosted on docs.rs.
Sources & Issues
The sources for Kompact can be found on Github.
All issues and requests related to Kompact should be posted there.
Bleeding Edge
This tutorial is built off the master
branch on github and thus tends to be a bit ahead of what is available in a release.
If you would like to try out new features before they are released, you can add the following to your Cargo.toml
:
kompact = { git = "https://github.com/kompics/kompact", branch = "master" }
Documentation
If you need the API docs for the latest master run the following at an appropriate location (e.g., outside another local git repository):
git clone https://github.com/kompics/kompact
cd kompact/core/
cargo doc --open --no-deps