Building a framework – The early Akka.NET history

In this post, I will try to cover some of the early history of Akka.NET and how and why things turned out the way they did.
Akka.NET of course has some parallel histories, as there are many contributors on the project.
But this post is written from my own point of view, covering my reasons for getting involved.

The butterfly effect

Back in 2005, I attended an architecture workshop initiated by Jimmy Nilsson, hosted in Lillehammer, Norway.
One of the attendees was Einar Landre; he worked for Statoil at the time, and he talked about how they used asynchronous systems and how you could build eventually consistent systems using message passing.
I was totally sold on the concepts, and as soon as I got back from the workshop, I introduced them at my work and built my first asynchronous message passing application, which is actually still in use today, ten years later.
This had a huge impact on me, and it changed how I came to reason about systems and integrations, and why I, almost a decade later, thought it was a good idea to port Akka.

I also met Mats Helander, who had just started developing an Object Relational mapper called NPersist.
NPersist was based on code generation, so I showed him the framework for aspect oriented programming I was building at the time, and explained how it could get rid of all the code generation and make NPersist persistence ignorant via POCOs.
Mats and I started working on these two tools together, and we packaged them under an umbrella project called Puzzle Framework.
Back then, our competitor NHibernate was in alpha stage, and feature-wise we were way ahead of them.

But as time passed, it turned out that NHibernate would become the winner, not because it was better, but because it attracted a lot more people thanks to its well-known sister project, Hibernate, on the JVM.
Hibernate had a lot of learning material: books, videos and tutorials.
So having the same framework on .NET of course meant that you could re-use existing knowledge and learn from the vast set of resources.

Eventually Mats and I dropped the development of NPersist. By this time, NHibernate was already the de facto standard and Linq to SQL had just been released, so there was simply no reason for us to keep the project alive any more.

The most important thing I learned in this process was that adoption will always outweigh features; that is, documentation, ease of use and familiarity are worth more than shiny features if no one knows how to use them.

Laying the foundation

Now fast forward to 2013.
I was doing a consultancy gig for a Swedish agency. The project involved a fair deal of concurrency: multiple systems integrating with each other, all touching the same data, possibly at the same time.
During this project, I got more and more frustrated with the lack of concurrency tools for .NET, so I started reading up on the topic and eventually stumbled upon the actor model and Akka on the JVM.

As is often the case when I find an interesting programming topic, I had to try to implement some of the concepts myself, as this is my way of learning.
I did some weekend hacking, first using F# with pattern matching, mailbox processors and all the goodness that exists there.
I played around with some proof of concept implementations of the core concepts of Akka, as a learning experience and with the intent to make something I might be able to use in my everyday work.

However, I knew that if I was ever going to have a chance to use any of this in my client projects, I would have to switch over to C#, and the same was true to a large extent for attracting contributors, simply because C# has a much larger market share than F#.

I also remembered the lesson learned from NPersist vs NHibernate: there were already a handful of small hobby hacks and abandoned actor frameworks on .NET, but I knew that if I contributed to one of those, or rolled my own, the result would still be something new, unproven and untrusted, and it would be extremely hard to get any adoption for such an effort.

Porting Akka

A few weeks passed and eventually I had something that worked pretty well: quite a few of the core Akka-Actor features, some rudimentary Akka-Remote support such as remote deployment, and a fairly complete HOCON configuration parser were now in place.
The code was published on Github and the project was named “Pigeon” in a lame attempt to play on carrier pigeons for message passing.
(The name Pigeon can still be seen in the Akka.NET source code, as the main configuration file is still called “Pigeon.conf”)

The networking layer was a problem; I didn’t have much experience writing low level networking code, so the first early attempts at Akka-Remote used SignalR for communication, which was later replaced with a very naive socket implementation.

First class support for F#

Even if I decided to go for C# as the language of implementation, I still wanted to involve the F# community.
F# has a truly awesome opensource community around it, and I had seen that there was a genuine interest in the actor model over at the F# camp.
So I sent out a few requests on the F# forums, looking for someone who could help me build an idiomatic F# API on top of the C# code.

The Co-Pilot

One day in February (2014), I got an email from a guy named Aaron.

This is the actual letter:

Hello Roger!

My name is Aaron Stannard – I’m the Founder of MarkedUp Analytics, a .NET startup in Los Angeles. We build analytics and marketing automation tools for developers who author native applications for Microsoft platforms, including native Windows.
I began my own port of Akka to C# beginning in early December, and took a break right around Christmas. I just got back to it this week and discovered Pigeon when I was researching some details about the TPL Dataflow! I wish you had started this project a few weeks earlier ;)
My implementation of Akka is right around the same stage / maturity as yours, but Pigeon offers much better performance (3.5-5x), is more simply designed than mine, and you’ve already made a lot of headway on features that I haven’t even started on like Remoting and Configuration.
Therefore, I would like to stop working on my own implementation of Akka and support Pigeon instead. I think that would be a much better use of my time than trying to invent it all on my own.
I’m an experienced .NET OSS contributor – I currently maintain FluentCassandra (popular C# Cassandra driver) and have a bunch of projects of my own that I’ve open-sourced.

I plan on using Pigeon in at least two of our services, both of which operate under high loads.

Please let me know how I can help!
Aaron Stannard • Founder • MarkedUp

“I plan on using Pigeon in at least two of our services, both of which operate under high loads.”

Say WAAT!?

It turned out that Aaron had created his own networking lib called Helios, which was exactly what Akka-Remote needed.
Aaron joined the effort and started working on the akka-remote bits while I focused mostly on akka-actor and akka-testkit.
We had some nice progress going, and we contacted Jonas Bonér of Typesafe to ask if we could use the name Akka, as we aimed to be a pure port, and we got an OK to do so.

Lift off

Now the project started to gain some real attention.
Håkan Canberger joined the team and Jérémie Chassaing contributed the first seed of the F# API.

At the same time, my youngest son Theo was born, 10 weeks too early and 1195 grams small, so I spent the next two months full time in the hospital, managing pull requests and issues on my phone.

This turned out to be a good thing for the project. Up until that point, I had seen the project as “mine”, which is not a good mindset to have when trying to run a community project.

Meanwhile, we gained more users and contributors, and Aaron and Håkan were busy pushing new features.
All of a sudden, we had people like the F# language inventor Don Syme retweeting our tweets.

Bartosz joined the team and set out to complete the F# API.
This resulted in even more attention from the F# community, and Don Syme even did a code review of one of the F# API pull requests.

From that point on, the project has been pretty much self-sustaining, with new contributors stepping up and contributing entire modules or integrations.

A lot more has of course happened since then, which may be the subject of another post, but I hope this post gives some insight into why Akka.NET came to be and why some of the early design choices were made.

With that being said, I’m sure the other developers have some interesting stories to share about why they got involved and what led them down this route.



Akka.NET + Azure: Azure ServiceBus integration

I know that there is some confusion out there about how Akka.NET relates to products like NServiceBus and Azure ServiceBus. I think Akka.NET co-founder Aaron Stannard said it best:

they’re very complementary; Akka.NET makes a great consumer or producer for NServiceBus

Another closely related question that comes up from time to time is how to integrate Akka.NET actors with service buses.

How can we pull messages from a service bus and pass those to a number of worker actors w/o message loss?

One approach we can use to solve this builds on the fact that actors in Akka.NET support the Ask operator.
We can pass a message to an actor and expect a response, and this response will be delivered in the form of a Task.

As the response is a task, we can pipe it into a continuation and, depending on whether the response represents a processing success or failure from the worker actor, decide what we want to do with the service bus message.

In this case, we might want to Ack the service bus message, telling the service bus that we are done with this message and it can be removed from the queue.

If the response was a failure, we simply ignore it and continue processing other messages.
As we haven’t acked the message to the service bus, the service bus will try to re-deliver the message to our client and we get the chance to try again some time later.

A simple implementation of this approach using Azure Service Bus could look something like this:

using System;
using System.Linq;
using System.Threading.Tasks;
using Akka.Actor;
using Akka.Routing;
using Microsoft.ServiceBus.Messaging;

namespace ConsoleApplication13
{
    //define your worker actor
    public class MyBusinessActor : ReceiveActor
    {
        public MyBusinessActor()
        {
            //here is where you should receive your business messages
            //apply domain logic, store to DB etc.
            Receive<string>(str =>
            {
                Console.WriteLine("{0} Processed {1}", Self.Path, str);

                //reply to the sender that everything went well
                //in this example, we pass back the message we received in a built in `Success` message
                //you can send back a Status.Failure in case of exceptions if you desire to
                //or just let it fail by timeout as we do in this example
                Sender.Tell(new Status.Success(str));
            });
        }
    }

    internal class Program
    {
        private static void Main(string[] args)
        {
            using (var system = ActorSystem.Create("mysys"))
            {
                //spin up our workers
                //this should be done via config, but here we use a
                //hardcoded setup for simplicity

                //Do note that the workers can be spread across multiple
                //servers using Akka.Remote or Akka.Cluster
                var businessActor = system.ActorOf(Props.Create<MyBusinessActor>()
                    .WithRouter(new ConsistentHashingPool(10)));

                //start the message processor
                ProcessMessages(businessActor);

                //wait for user to end the application
                Console.ReadLine();
            }
        }

        private static async void ProcessMessages(IActorRef myBusinessActor)
        {
            //set up an Azure SB subscription client
            //(or use a Queue client, or whatever client your specific MQ supports)
            var subscriptionClient = SubscriptionClient.Create("service1", "service1");

            while (true)
            {
                //fetch a batch of messages
                var batch = await subscriptionClient.ReceiveBatchAsync(100, TimeSpan.FromSeconds(1));

                //transform the messages into a list of tasks
                //the tasks will either be successful and ack the MQ message
                //or they will timeout and do nothing
                var tasks = (
                    from res in batch
                    let importantMessage = res.GetBody<string>()
                    //use the message itself as the hash key for the consistent hashing router
                    let ask = myBusinessActor
                        .Ask<Status.Success>(new ConsistentHashableEnvelope(importantMessage, importantMessage),
                            TimeSpan.FromSeconds(1))
                    let done = ask.ContinueWith(t =>
                    {
                        if (t.IsCanceled || t.IsFaulted)
                        {
                            Console.WriteLine("Failed to ack {0}", importantMessage);
                        }
                        else
                        {
                            //ack the service bus message so it is removed from the queue
                            res.Complete();
                            Console.WriteLine("Completed {0}", importantMessage);
                        }
                    })
                    select done).ToList();

                //wait for all messages to either succeed or timeout
                await Task.WhenAll(tasks);
                Console.WriteLine("All messages handled (acked or failed)");
                //continue with the next batch
            }
        }

        //dummy method only used to prefill the msgqueue with data for this example
        private static void CreateMessages()
        {
            var client = TopicClient.Create("service1");

            for (var i = 0; i < 100; i++)
            {
                client.SendAsync(new BrokeredMessage("hello" + i)
                {
                    MessageId = Guid.NewGuid().ToString()
                });
            }
        }
    }
}

But do note that when applying this pattern, we now go from Akka.NET’s default “At most once” delivery to “At least once”.


Because if we fail to ack the message back to the service bus, we will eventually receive the same message again at a later time.

It could be that our worker actor has processed the message correctly and stored it in some persistent store, but the ack back to the client might have failed due to network problems, a timeout or something similar.

Thus, the service bus message will not be removed from the queue, and the client will receive it again as soon as whatever locking mechanism is in place frees the message.
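
One common way to cope with at-least-once delivery is to make the processing itself idempotent. Below is a minimal sketch of that idea, not part of the example above; the ImportantMessage type and the in-memory HashSet of handled MessageIds are assumptions made up for illustration, and a real system would track processed ids in a persistent store. It assumes the same usings as the example above plus System.Collections.Generic.

//minimal idempotency sketch; ImportantMessage and the in-memory HashSet
//are illustrative assumptions, not part of the example above
public class IdempotentBusinessActor : ReceiveActor
{
    private readonly HashSet<string> _processedIds = new HashSet<string>();

    public IdempotentBusinessActor()
    {
        Receive<ImportantMessage>(msg =>
        {
            if (_processedIds.Add(msg.MessageId))
            {
                //first time we see this id, apply the business logic
                Console.WriteLine("Processed {0}", msg.MessageId);
            }
            //reply in both cases so the service bus message gets acked
            Sender.Tell(new Status.Success(msg.MessageId));
        });
    }
}

//hypothetical message type carrying the service bus MessageId
public class ImportantMessage
{
    public readonly string MessageId;
    public readonly string Body;

    public ImportantMessage(string messageId, string body)
    {
        MessageId = messageId;
        Body = body;
    }
}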

One extremely nice feature in Akka.NET is the cluster support. Cluster nodes can be added to or removed from a live application, so we can easily spread our load over multiple worker actors on remote nodes.

Completely w/o writing any special code for this, we just need to configure our actor system to be part of an Akka.NET cluster.
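
To give a rough idea of what that involves, here is a minimal configuration sketch based on the general shape of Akka.NET’s cluster HOCON; the system name, host name, port and seed node address are made-up example values, and the exact keys may vary between Akka.NET versions, so treat this as an assumption rather than the definitive setup.

//rough sketch only; host, port and system name are example values,
//and exact keys may differ between Akka.NET versions
var clusterConfig = ConfigurationFactory.ParseString(@"
    akka {
        actor.provider = ""Akka.Cluster.ClusterActorRefProvider, Akka.Cluster""
        remote {
            helios.tcp {
                hostname = ""127.0.0.1""
                port = 8081
            }
        }
        cluster {
            # well known nodes that new members join through
            seed-nodes = [""akka.tcp://mysys@127.0.0.1:8081""]
        }
    }");

var system = ActorSystem.Create("mysys", clusterConfig);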



Learning Azure, Day 2 | Servicebus

This is a continuation of my completely random learning experiences while trying to learn the Azure platform.

Future messages

When you send a message on the Azure Servicebus, you have the option to set a ScheduledEnqueueTimeUtc property.
This value decides when the message will become visible to receivers; it is kind of like sending a message into the future.

So when is this useful?

Let’s say we want to send a payment reminder 20 days after an order was placed, but only if no payments have arrived.
We could solve this using future messages.

Consider the following flow of messages

Receive PlaceOrder message -> Send Reminder (set ScheduledEnqueueTimeUtc to 20 days in the future)

..20 days pass

Receive Reminder message -> Check if the order has been paid; if not, send the reminder to the buyer via snail mail or email

This way, you don’t have to build additional scheduling logic into your system or add polling tables to your database; it is handled by the Servicebus itself.
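
As a concrete illustration, here is a minimal sketch of sending such a future message with the Azure Service Bus client; the "orders" topic name, the message body and the 20 day delay are just example values.

//minimal sketch; "orders" and the reminder body are example values only
var client = TopicClient.Create("orders");

var reminder = new BrokeredMessage("Reminder for order 42")
{
    MessageId = Guid.NewGuid().ToString(),
    //the message stays invisible to receivers until this point in time
    ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddDays(20)
};

client.Send(reminder);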

This looks super useful; I have to look into how it scales and whether messages can be kept for years without side effects.

That’s all for now


Learning Azure, Day 1 | Servicebus

I’ve decided to bite the bullet and finally dig into Azure.

My first learning experience was to toy around with the Servicebus.
Azure Servicebus is a message queue pretty much like the old MSMQ, but with a few more nifty features.
e.g. messages can have a time-to-live duration, and you can choose between At most once or At least once delivery (ReceiveMode.ReceiveAndDelete vs ReceiveMode.PeekLock).

The first gotcha was that the sample I downloaded from MS recreated the queue I intended to communicate with, which in turn causes your credentials for the queue to disappear. Don’t fall for that :-)

It also turns out that the built in batch operations are way more performant than single operations.
So if possible, go for batching if you need high throughput.

Another thing that was expected, though maybe not to this extent, was that using ReceiveAndDelete gives a lot better throughput than using PeekLock.
Using PeekLock, you have to Ack back to the queue yourself to notify the queue that you have dealt with the message.

This was using a single threaded client, so maybe the difference is not as big when processing asynchronously, I don’t know yet, but in the very naive example I’m using, there was a huge difference.
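
For comparison, here is a minimal sketch of what the PeekLock variant looks like, assuming the same "queue1" queue used in the full example below; the receive timeout is just an example value.

//PeekLock sketch; assumes the same "queue1" queue as the full example below
var peekLockClient = QueueClient.Create("queue1", ReceiveMode.PeekLock);

var lockedMessage = peekLockClient.Receive(TimeSpan.FromSeconds(5));
if (lockedMessage != null)
{
    Console.WriteLine("Handled {0}", lockedMessage.GetBody<string>());
    //ack back to the queue; without this the message will be re-delivered
    //once the peek lock expires
    lockedMessage.Complete();
}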

And for those who are just looking for some example code, here it is:

using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;
using Microsoft.ServiceBus.Messaging;

namespace ConsoleApplication5
{
    class Program
    {
        private static QueueClient _queueClient;

        static void Main(string[] args)
        {
            Console.WriteLine("Press any key to start sending messages ...");
            Console.ReadKey();
            SendMessages();

            Console.WriteLine("Press any key to start receiving messages that you just sent ...");
            Console.ReadKey();
            ReceiveMessages();

            Console.WriteLine("\nEnd of scenario, press any key to exit.");
            Console.ReadKey();
        }

        private static void SendMessages()
        {
            // connect to the queue named "queue1"
            // use At most once delivery
            _queueClient = QueueClient.Create("queue1", ReceiveMode.ReceiveAndDelete);

            var messageList = new List<BrokeredMessage>();

            for (int i = 0; i < 1000; i++)
            {
                messageList.Add(new BrokeredMessage("Some message")
                {
                    MessageId = i.ToString(CultureInfo.InvariantCulture)
                });
            }

            Console.WriteLine("\nSending messages to Queue...");

            // send the 1000 messages to the queue as a single batch
            _queueClient.SendBatch(messageList);
        }

        private static void ReceiveMessages()
        {
            Console.WriteLine("\nReceiving message from Queue...");

            while (true)
            {
                //receive a batch of messages from the Queue
                var messages = _queueClient.ReceiveBatch(100, TimeSpan.FromSeconds(5));

                foreach (var message in messages)
                {
                    if (message != null)
                    {
                        Console.WriteLine("Message received: Id = {0}, Body = {1}",
                            message.MessageId, message.GetBody<string>());
                    }
                }
            }
        }
    }
}

Actor based distributed transactions

One question that often shows up when talking about the Actor Model, is how to deal with distributed transactions.

In .NET there is the concept of MSDTC, Microsoft Distributed Transaction Coordinator, that can be used to solve this problem when working with things like SQL Server etc.

The MS Research project Orleans (MS Azure Actor framework) also supports distributed transactions.
See section 3.8 of http://research.microsoft.com/pubs/153347/socc125-print.pdf for details.
[Edit] The transactions described in that paper were revoked; the Orleans team is currently working on a new implementation.

The problem with distributed transactions is that they are expensive, very expensive, and they do not scale very well.

We as programmers are also trained to think of transactions as some sort of binary, instant event: it either succeeds or it fails, the time span is very short, and during this time span you freeze and lock everything that is involved in it.

In the real world however, transactions are more fuzzy, they don’t necessarily succeed or fail in a binary fashion, and they are far from instant.

And just to avoid confusion here, let’s think of this as technical transactions vs. business transactions. In the end, they both ensure a degree of consistency, even though the concepts are different.

In the actor world, we can approach this the same way that the Saga pattern works.
See @Kellabyte’s excellent post on this topic: http://kellabyte.com/2012/05/30/clarifying-the-saga-pattern/

In Kelly’s post, she talks about failures in a technical context, but the failure could very well be a failure to comply with an agreement too; there would be no distinction between technical and business failures.

Let’s say that you purchase something on credit in a store, you make up a payment plan that stretches over X months.
During this time, you agree to pay Y amount of money at the end of each month for example.
If you do so, everything is fine, if you don’t, the store will send you a reminder, and if you still don’t pay, you will get fined.
This is a transaction that stretches over a very long time, and it can partially succeed.

There are three message flows to model here: the happy path, a partial failure with a compensating action, and a complete failure with a compensating action.


This is how you could model business transactions in a distributed system, it is asynchronous and it scales extremely well.

When two or more parties begin a transaction, all parties have to agree to some sort of contract before the transaction starts. This is your message flow, much like a protocol, that defines what happens if you violate the contract: a scheme of messages and actions that describes exactly how your (business) transaction is supposed to be resolved, and which of the involved parties needs to ensure that a specific part of this agreement is handled.

This will make your transactional flow very business oriented, it goes very well with the concept of domain driven design. The transactional flow is actually just a process of domain events.
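
As a rough sketch of what such a contract could look like in code, here are some hypothetical immutable message types for the credit purchase example above; the type and property names are made up purely for illustration.

//hypothetical message types for the credit purchase example;
//names are made up for illustration only
public class PaymentPlanStarted
{
    public readonly string OrderId;
    public readonly decimal MonthlyAmount;
    public readonly int Months;

    public PaymentPlanStarted(string orderId, decimal monthlyAmount, int months)
    {
        OrderId = orderId;
        MonthlyAmount = monthlyAmount;
        Months = months;
    }
}

public class PaymentReceived
{
    public readonly string OrderId;
    public readonly decimal Amount;

    public PaymentReceived(string orderId, decimal amount)
    {
        OrderId = orderId;
        Amount = amount;
    }
}

//compensating actions for partial or complete failure
public class SendReminder
{
    public readonly string OrderId;
    public SendReminder(string orderId) { OrderId = orderId; }
}

public class IssueFine
{
    public readonly string OrderId;
    public IssueFine(string orderId) { OrderId = orderId; }
}

An actor owning the payment plan would then react to these messages and, per the agreed contract, decide whether to do nothing, send a reminder or issue a fine.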


Akka.NET – Concurrency control

Time to break the silence!

A lot of things have happened since I last wrote.
I’ve got a new job at nethouse.se as a developer and mentor.

Akka.NET has been making some crazy progress over the last few months.
When I last wrote, we were only two developers; now we are about 10 core developers.
The project also has a fresh new site here: akkadotnet.github.com.
We are at version 0.7 right now, but pushing hard for a 1.0 release as soon as possible.

But now, let’s get back on topic.

In this mini tutorial, I will show how to deal with concurrency using Akka.NET.

Let’s say we need to model a bank account.
That is a classic concurrency problem.
If we would use OOP, we might start with something like this:

public class BankAccount
{
    private decimal _balance;

    public void Withdraw(decimal amount)
    {
        if (_balance < amount)
            throw new ArgumentException(
                "amount may not be larger than account balance");

        _balance -= amount;

        //... more code
    }
    //... more code
}

That seems fair, right?
This will work fine in a single threaded environment where only one thread is accessing the above code.
But what happens when two or more competing threads are calling the same code?

    public void Withdraw(decimal amount)
    {
        if (_balance < amount) //<-

That if-statement might be running in parallel on two or more threads, and at that very point in time the balance is still unchanged, so all threads get past the guard that is supposed to prevent a negative balance.

So in order to deal with this we need to introduce locks.
Maybe something like this:

    private readonly object _lock = new object();
    private decimal _balance;

    public void Withdraw(decimal amount)
    {
        lock (_lock) //<-
        {
            if (_balance < amount)
                throw new ArgumentException(
                    "amount may not be larger than account balance");

            _balance -= amount;
        }
    }

This prevents multiple threads from accessing and modifying the state inside the Withdraw method at the same time.
So all is fine and dandy, right?

Not so much... Locks are bad for scaling; threads will end up waiting for resources to be freed up.
And in bad cases, your software might spend more time waiting for resources than it does actually running business code.
It will also make your code harder to read and reason about; do you really want threading primitives in your business code?

Here is where the Actor Model and Akka.NET comes into the picture.
The Actor Model makes actors behave “as if” they are single threaded.
Actors have a concurrency constraint that prevents them from processing more than one message at any given time.
This still applies if there are multiple producers passing messages to the actor.

So let’s model the same problem using an Akka.NET actor:

//immutable message for withdraw:
public class Withdraw
{
    public readonly decimal Amount;
    public Withdraw(decimal amount)
    {
        Amount = amount;
    }
}

//the account actor
public class BankAccount : TypedActor, IHandle<Withdraw>
{
    private decimal _balance;

    public void Handle(Withdraw withdraw)
    {
        if (_balance < withdraw.Amount)
        {
            //you should use real message types here
            Sender.Tell("fail");
            return;
        }

        _balance -= withdraw.Amount;
        //and here too
        Sender.Tell("success");
    }
}

So what do we have here?
We have a message class that represents the Withdraw message; the actor model relies on async message passing.
The BankAccount actor is then told to handle any message of type Withdraw by subtracting the amount from the balance.

If the amount is too large, the actor will reply to its sender, telling it that the operation failed because the requested amount was too large.

In the example code, I use strings as the response indicating the status of the operation; you probably want to use some real message types for this purpose, but to keep the example small, strings will do fine.

How do we use this code then?

ActorSystem system = ActorSystem.Create("mysystem");
var account = system.ActorOf<BankAccount>();

var result = await account.Ask(new Withdraw(100));
//result is now "success" or "fail"

That’s about it; we now have a completely lock-free implementation of a bank account.

Feel free to ask any question :-)


Deploying actors with Akka.NET

We have now ported both the code- and configuration-based deployment features of Akka.
This means that you can now use Akka.NET to deploy actors and routers on remote nodes either via code or configuration.

For those new to Akka, what does this mean?

Let’s say that we are building a simple local actor system.
It might have one actor that deals with user input and another actor that does some sort of work.

It could look something like this:

var system = ActorSystem.Create("mysystem");
var worker = system.ActorOf<WorkerActor>("worker");
var userInput = system.ActorOf<UserInput>("userInput");
while (true)
{
    var input = Console.ReadLine();
    userInput.Tell(input);
}

Ok, maybe a bit cheap on the user experience there, but let’s keep the sample small...
Depending on input, the user input actor might pass messages to the worker and order it to perform some sort of work.

Now, let’s say it turns out that our worker can’t handle the load; since actors (act as if they) are single threaded, we might want to add additional workers.
Instead of letting the user input actor know how many workers we have, we can introduce the concept of “Routers”.

Router actors act as a facade on top of other actors, this means that the router can delegate incoming messages to a pool or group of underlying actors.

var system = ActorSystem.Create("mysystem");
var worker1 = system.ActorOf<Worker>("worker1");
var worker2 = system.ActorOf<Worker>("worker2");
var worker3 = system.ActorOf<Worker>("worker3");

var worker = system.ActorOf(Props.Empty.WithRouter(new RoundRobinGroup(new[] { worker1, worker2, worker3 })));

var userInput = system.ActorOf<UserInput>("userInput");
while (true)
{
    var input = Console.ReadLine();
    userInput.Tell(input);
}

Here we have introduced three worker actors and one “round robin group” router.
A round robin group router is a router that will use an array of workers, and for each message it receives, it will delegate that message to one of the workers.

We do not need to change any of the other code, as long as the user input actor can find the worker router, we are good to go.

If we want to accomplish the same thing using a config instead, we can do something like this:

var config = ConfigurationFactory.ParseString(@"
    akka.actor.deployment {
        /worker {
            router = round-robin-pool
            # pool routers are not yet implemented
            # you have to use the group routers with an array of workers still
            nr-of-instances = 5
        }
    }");

var system = ActorSystem.Create("mysystem", config);
var worker = system.ActorOf<Worker>("worker");
var userInput = system.ActorOf<UserInput>("userInput");
while (true)
{
    var input = Console.ReadLine();
    userInput.Tell(input);
}

The worker router is no longer configured via code but rather in a soft config, which could be placed in an external file if you want.
This means that we can scale up and utilize more CPU of our computer just by changing our configuration.
But what if this is still not enough?
We might need to scale out also, and introduce more machines.
This can be done using “remote deployment”, like this:

var config = ConfigurationFactory.ParseString(@"
    akka.actor.deployment {
        /worker {
            router = round-robin-pool
            nr-of-instances = 5
            remote = ""akka.tcp://otherSystem@someMachine:8080""
        }
    }
    # ....more config to set up Akka.Remote
");

var system = ActorSystem.Create("mysystem", config);
var worker = system.ActorOf<Worker>("worker");
var userInput = system.ActorOf<UserInput>("userInput");
while (true)
{
    var input = Console.ReadLine();
    userInput.Tell(input);
}

Using this configuration, we can now tell Akka.NET to deploy the worker router on a different machine.
The settings will be read from the config, packed into a remoting message and sent to the remote system that we want to create our worker router on.
(This of course means that Akka.NET must be running on the remote machine and listening on the port specified in the config.)

So by just adding configuration, we can now scale up and out from a single machine to a remote server, or even a cloud provider e.g. Azure.

That’s all for now :)

For more info, see: https://github.com/rogeralsing/Pigeon