A Case for Event Sourcing in Browser JS Apps


Note: Everything in this blog post is purely theoretical, treat it as a thought experiment. I haven’t tried this yet.

I’ve been thinking about use cases for Event Sourcing (ES). It’s most often associated with backend applications where you need strong audit logs, but I’m starting to wonder if it might be a good fit for some JavaScript single-page applications (SPAs) as well. There are folks doing ES in Node but I haven’t seen anyone try this in the browser yet, so I’ll try to outline a few reasons it might be worthwhile.

This article assumes you’re already familiar with ES. Still, just to be clear, I’m suggesting a move away from JS persistence like this:

// JS
var product = new Product({
    'name': 'Foo',
    'price': 90
});
product.set('price', 60);
product.add('tag', 'awesome');

POST /product
Host: example.com
Content-Type: application/json

{
    "name": "Foo",
    "price": 60,
    "tags": [ "awesome" ]
}

to something more like:

// JS
var product = Product.import('Foo', 90);
product.discount(60);
product.addTag('awesome');

eventStream.append(product.id(), product.getPendingEvents());

POST /events
Host: example.com
Content-Type: application/json

[
    { "event": "ProductImported", "name": "Foo", "price": 90, "id": "some-generated-uuid" },
    { "event": "ProductDiscounted", "price": 60, "id": "some-generated-uuid" },
    { "event": "TagAdded", "tag": "awesome", "id": "some-generated-uuid" }
]

In the first example, we’re using a fairly standard getter/setter, ActiveRecord-style model. In the latter, events are generated inside the JS entities (yes, in the browser!), loaded into an event stream and then flushed to the server in one go.
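To make the second example concrete, here’s a minimal sketch of what such an event-recording entity might look like. Everything here is an assumption built around the names used above (`Product.import`, `getPendingEvents`); it’s an illustration, not a real library.

```javascript
// Toy ID generator for the sketch; a real app would use a proper UUID.
function uuid() {
    return 'id-' + Math.random().toString(36).slice(2, 10);
}

function Product(id) {
    this._id = id;
    this._pending = [];
}

Product.import = function (name, price) {
    var product = new Product(uuid());
    product._record({ event: 'ProductImported', name: name, price: price });
    return product;
};

Product.prototype._record = function (event) {
    event.id = this._id;
    this._apply(event);         // update local state
    this._pending.push(event);  // queue for the server
};

Product.prototype._apply = function (event) {
    if (event.event === 'ProductImported') {
        this.name = event.name;
        this.price = event.price;
    } else if (event.event === 'ProductDiscounted') {
        this.price = event.price;
    }
};

Product.prototype.discount = function (price) {
    this._record({ event: 'ProductDiscounted', price: price });
};

Product.prototype.id = function () {
    return this._id;
};

Product.prototype.getPendingEvents = function () {
    var events = this._pending;
    this._pending = [];
    return events;
};
```

The key move is that every mutation goes through `_record`, so the pending event list is always a faithful log of what happened since the last flush.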

Okay, so that’s the example. Why do this?

To start, when you build an MVC-ish JS app, you often end up duplicating some code on both the server and client, particularly in the model layer.

We’ve all done the dance: You need a Product model in the JS code but to save it in the database, you also need a Product model in your PHP/Ruby/Java/etc. Then when you need to add a new field, you have to update the Javascript, the PHP, the database, etc. The smell of lasagna permeates the room.

On the other hand, if we used ES, the server wouldn’t receive full blown entities. It would only receive serialized events. If the SPA is the only interested party, the server can just pass them to the persistence layer and the entire process stops there. The JS model would be authoritative and the server would be much simpler since most event stores are just serializing the events.
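As a sketch of how thin that server could be, here’s a hypothetical single-endpoint handler (the `eventStore` interface is assumed, not a real library): it validates the envelope and appends the serialized events without ever rebuilding an entity.

```javascript
// Hypothetical server-side handler for POST /events.
// It never hydrates entities; it just forwards serialized events.
function makeEventsHandler(eventStore) {
    return function handleEvents(streamId, events) {
        if (!streamId || !Array.isArray(events)) {
            return { status: 400, body: 'expected a stream id and an array of events' };
        }
        // Append the serialized events as-is; the JS model is authoritative.
        eventStore.append(streamId, events);
        return { status: 204 };
    };
}
```

In a real app you’d wire this into whatever HTTP framework you use, but the whole job is a validation check plus one append call.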

That does bring us to a downside: we won’t need the entities but we will need Event classes, and there are going to be a lot more of those than there were entities.

That said, the events are dead simple and actually useful. This is the code you want to write. You might even be able to reduce some of this with a clever implementation, especially on the JavaScript side where anonymous objects would work fine for events.

Besides, any extra work is offset by doing away with a big bunch of useless code: the REST API. Don’t get me wrong, I love REST. I love hypermedia. Many of the issues this article describes would be best solved with a really well designed RMM Level 3 API. Unfortunately, most JS libraries encourage CRUD-style APIs, which can be a poor fit and a huge maintenance burden. If you don’t need or want to design a good API for multiple consumers, then I’d argue don’t even try: a single RPC endpoint is easier to refactor than a pile of near identical controllers and anemic models.

There are several other benefits to the server:

  • Security inspections become much simpler. If you’re dealing with a CRUD API, you need to derive and approve user changes from the data structure. With domain events, the user behavior is explicit, so security checks could be as simple as matching the Event class name to an allowed list per role (or voter or whatever you prefer).
  • The fine-grained behavior also makes triggering other server-only side effects a cinch. It’s already an event!
  • Debug logs are a classic ES benefit and doubly so when chasing errors through a complex GUI.
  • You can have one major transaction for several operations. If you were writing to several API resources, it would be nigh impossible to rollback all changes.
  • Good Domain Events are probably reusable.
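The first point above can be sketched in a few lines. The role and event names here are invented for illustration; the idea is just that authorization becomes a lookup on the event name rather than a diff of data structures.

```javascript
// Hypothetical allowed-events list per role.
var ALLOWED_EVENTS = {
    customer: ['TagAdded'],
    manager: ['ProductImported', 'ProductDiscounted', 'TagAdded']
};

// A batch is authorized only if every event name is on the role's list.
function authorize(role, events) {
    var allowed = ALLOWED_EVENTS[role] || [];
    return events.every(function (e) {
        return allowed.indexOf(e.event) !== -1;
    });
}
```

Compare that to inspecting a CRUD payload and trying to infer which fields changed and whether this user was allowed to change them.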

I think there are benefits for the JS as well:

  • ES often brings better surrounding architecture as well, like command dispatching and strong domain models. This can only be good for your JS, which is frequently neglected when designing.
  • Many JS UIs are already evented, listening for changes on individual fields in the model. Unfortunately, listening for changes on a single field might not be high-level enough to express what the update should be, turning the UI management code into a mess. Instead, we could publish the domain events not just to the server but to our own UI, leading to more concise updates.
  • Events open the door to some cool JS features which might normally be hard to implement:
    • Saving everything in one request, both for performance and to avoid sequence issues.
    • Saving events in local storage in case the user loses connection.
    • Maybe even an undo or rewind feature. Event Streams should be immutable but you could potentially do this with only your unsaved actions, provided your models support forward and reverse mutators.
    • Replicating changes to other users like in a game or collaborative app.
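A couple of the points above (publishing events to our own UI, and buffering in local storage) can share one small piece of plumbing. This is a sketch under assumptions: `storage` is anything with `getItem`/`setItem` (e.g. `window.localStorage`), injected so the example stays testable outside a browser.

```javascript
// Hypothetical client-side event stream: publishes each domain event to
// UI subscribers, and buffers it in storage until the server confirms.
function EventStream(storage) {
    this._storage = storage;
    this._subscribers = [];
}

EventStream.prototype.subscribe = function (fn) {
    this._subscribers.push(fn);
};

EventStream.prototype.append = function (streamId, events) {
    var self = this;
    events.forEach(function (event) {
        // The UI reacts to domain events, not to individual field changes.
        self._subscribers.forEach(function (fn) { fn(streamId, event); });
    });
    // Buffer in storage so a lost connection doesn't lose work.
    var key = 'pending:' + streamId;
    var pending = JSON.parse(this._storage.getItem(key) || '[]');
    this._storage.setItem(key, JSON.stringify(pending.concat(events)));
};

EventStream.prototype.flushed = function (streamId) {
    // Call this after the server acknowledges the batch.
    this._storage.setItem('pending:' + streamId, '[]');
};
```

On reconnect, anything still sitting under a `pending:` key simply gets re-sent.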

This might sound great but as they say, “no plan survives contact with the enemy.” For example, there’s still some duplication between the client and server. The JS will certainly be more lines of code than a CRUD/ActiveRecord style. The ES rehydration process (reapplying the entire event stream to the individual models) may take more CPU on load. And how would you resolve two event streams that differ significantly, say from two users?

To counter these points though: the duplicated Event code is straightforward and easier to manage than multiple CRUD controllers. It’s more lines of code but it’s simpler code. ES rehydration is often offset by snapshotting, which may work especially well if you need several items at once for loading: you can maintain CQRS ViewModels to grab all of your interface’s data in tight blobs. As for merging differing streams, I’m not sure how this differs greatly from a standard ES scenario so the usual solutions apply, say optimistic locking with a version number.
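For the merging point, optimistic locking is simple enough to sketch: the client sends the stream version it last saw, and the store rejects the batch if the stream has moved on, forcing a reload-and-retry. This is a purely illustrative in-memory store, not a real event store API.

```javascript
// Hypothetical in-memory event store with optimistic locking.
function InMemoryEventStore() {
    this._streams = {};
}

InMemoryEventStore.prototype.append = function (streamId, expectedVersion, events) {
    var stream = this._streams[streamId] || [];
    if (stream.length !== expectedVersion) {
        // Someone else appended first: reject so the client can rebase.
        return { ok: false, currentVersion: stream.length };
    }
    this._streams[streamId] = stream.concat(events);
    return { ok: true, currentVersion: stream.length + events.length };
};
```

Two users editing the same stream then conflict exactly once, at append time, instead of silently overwriting each other’s fields.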

That said, there’s little defense against the additional complexity argument. To make this worthwhile, you’d definitely need a reasonably large JavaScript app. However, as JavaScript and user expectations continue to evolve, this might not be as rare as you’d think. I’ve already worked on at least one or two projects in my career where I’d consider this a valid or improved approach.

The largest JS project I’ve worked on dispatched Commands, rather than Events, to a single endpoint. This was a marked improvement and had many of the same advantages (transactions, a debug log, batching) but it also came with a lot of duplicated code and made you sometimes wonder which was the authoritative model: JS or PHP? You could put all of the logic server-side but you may get a laggy interface for your trouble.

Still, this is theoretical for me, so if you know anyone who’s tried this approach, please let me know. I wouldn’t recommend it for most projects but if I could do some of them over again, there’s a good chance I’d give this a shot.

Many thanks to Warnar Boekkooi and Shawn McCool for proofreading this article.