In the fourth quarter of last year the engineering team focused on infrastructure problems. Unfortunately, customers will never get to see most of the improvements from that quarter, except in the form of a much more stable system. One improvement they will never see by design is the session replication we implemented across our three production JVMs. Yep, until late last year we had no notion of sessions outside a given server. This meant users were logged out if one of the JVMs crashed: annoying if they were viewing a page, awful if they were in the middle of editing something.

To implement replication we initially considered off-the-shelf solutions from Oracle and Terracotta. During the evaluation we ran into problems because we use Jetty (most solutions were designed for Tomcat) and because we had many non-serializable objects in the session. Both solutions worked on the premise of pulling your session wholesale from the Servlet container and serializing it across the wire to the other JVMs. Once we established they wouldn't work for us, we started a spike to put together our own. We knew we didn't want to write our own backing store, so after another round of evaluation we settled on Cassandra, primarily for its superior write speeds and straightforward schema design.

With our backing store in hand we began development in earnest, but soon encountered two major problems. The first was that we needed a way to persist sessions to Cassandra and retrieve them again. Unsurprisingly, we hit the same wall the off-the-shelf solutions had: we couldn't save the user sessions directly because they contained all sorts of non-serializable objects. Our solution was to iterate through all of the objects in the session, discard the non-serializable ones, and persist the rest to Cassandra. Ah, but how to get them back into the session? It turns out that the Jetty implementation of the Session object has no public constructor taking a session ID as a parameter (only a protected one), so we extended their Session implementation with a public constructor that delegated to the protected one. In the event of a server crash, we pull the session's objects out of Cassandra and re-populate a new session on the JVM the client landed on. (A small, related aside: the version of Jetty we were on did not really support universally unique session IDs either, so there was a chance two sessions on different JVMs could share the same session ID even though they belonged to different accounts. The fix was to upgrade Jetty several minor versions to get longer, effectively unique IDs.)
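To make the filtering step concrete, here is a rough sketch of what it looks like. SessionPersister and SessionStore are hypothetical names, and the actual Cassandra write is abstracted behind the store interface; only the Serializable check reflects what we describe above.

```java
import java.io.Serializable;
import java.util.Enumeration;
import java.util.HashMap;
import java.util.Map;
import javax.servlet.http.HttpSession;

public class SessionPersister {
    private final SessionStore store; // assumed wrapper around a Cassandra client

    public SessionPersister(SessionStore store) {
        this.store = store;
    }

    public void persist(HttpSession session) {
        Map<String, Serializable> snapshot = new HashMap<>();
        Enumeration<String> names = session.getAttributeNames();
        while (names.hasMoreElements()) {
            String name = names.nextElement();
            Object value = session.getAttribute(name);
            // Discard anything that can't be serialized across the wire.
            if (value instanceof Serializable) {
                snapshot.put(name, (Serializable) value);
            }
        }
        store.save(session.getId(), snapshot);
    }
}

// Hypothetical interface over the Cassandra backing store.
interface SessionStore {
    void save(String sessionId, Map<String, Serializable> attributes);
}
```

And the constructor workaround, in pattern form. JettySession here is a stand-in, since the real Jetty class name and constructor signature depend on the version you're running:

```java
// Stand-in for Jetty's session class: only a protected constructor
// accepts a known session ID.
class JettySession {
    protected JettySession(String sessionId) { /* ... */ }
}

// Public constructor that simply delegates to the protected one, so a
// session with a known ID can be rebuilt from the Cassandra snapshot.
class ReplicatedSession extends JettySession {
    public ReplicatedSession(String sessionId) {
        super(sessionId);
    }
}
```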

With our solution in hand, we started our load testing and discovered a more subtle problem. After a JVM crashes and its clients are moved onto a new node, they tend to fire off multiple requests very close together (for different parts of the page). These requests can come in faster than we can re-populate the session, causing the second, third, fourth… requests to start processing without a complete session. This problem took quite some time to diagnose because it manifested in strange ways, typically as null pointers or missing references when something looked for an object in the session that wasn't there. Confounding the diagnosis were the intentionally absent non-serializable objects we had dropped when saving the session to Cassandra. We realized we needed a mutex around session creation, so that all the other requests would wait until the session was created. But the obvious implementation of keying a mutex on the session ID string results in an unbounded number of mutexes with poor options for cleaning them up. Instead, we allocated a fixed pool of 10,000 mutexes and selected one by taking the hash of the session ID string modulo 10,000. This bounds the number of mutexes at the expense of occasionally having two clients hash to the same mutex. Given the fairly low latency of creating a new session, we thought this was a good compromise.
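For the curious, here is a minimal sketch of that striped-lock scheme. The class and method names are invented for illustration; the only part taken from above is the hash-modulo-10,000 selection.

```java
import java.util.concurrent.locks.ReentrantLock;

// A fixed pool of locks guarding session re-creation. Two requests for the
// same session ID always land on the same lock; two different IDs may
// occasionally share one, which is the accepted trade-off.
public class SessionCreationLocks {
    private static final int NUM_LOCKS = 10_000;
    private final ReentrantLock[] locks = new ReentrantLock[NUM_LOCKS];

    public SessionCreationLocks() {
        for (int i = 0; i < NUM_LOCKS; i++) {
            locks[i] = new ReentrantLock();
        }
    }

    public ReentrantLock lockFor(String sessionId) {
        // floorMod keeps the index non-negative even if hashCode() is negative.
        return locks[Math.floorMod(sessionId.hashCode(), NUM_LOCKS)];
    }

    // Example use: the first request to arrive rebuilds the session while
    // later requests for the same ID block until it is complete.
    public void withSessionLock(String sessionId, Runnable rebuildIfMissing) {
        ReentrantLock lock = lockFor(sessionId);
        lock.lock();
        try {
            rebuildIfMissing.run();
        } finally {
            lock.unlock();
        }
    }
}
```

Because the pool size is fixed, memory stays bounded no matter how many sessions come and go, and there is never any cleanup to do; the worst case is a little harmless contention between unrelated sessions that happen to share a lock.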

Since session replication went live earlier this year, it has prevented downtime during several crashes caused by JVM bugs, as well as one or two of our own. We'd like to think this has led to far fewer frustrated customers, none of whom ever knew the difference.
