For longer than I’ve been any good at software engineering, I’ve been talking about software engineering with my buddy Justin. I’m not claiming I’m actually any good at software engineering now, but he and I have been talking about it for quite some time. He’s always harassing me about CQRS, so I figured I’d try to shut him up and post about some things we’re doing.

In my last post, I mentioned we needed to ditch the ORM for a particular scenario. It turns out that even though we had readable code asking a clear question, we weren’t really asking the question we needed to. Walking a tree of projects and finding the ones open and accessible to a particular user makes sense if you are trying to find the projects in scope. The problem is that all we actually do with those projects is pull off their ids to filter by the set of ids in scope.

The process of reading the projects and performing the scope calculation is what was causing the high load mentioned in my previous post. The scope information is sometimes asked for and calculated multiple times in a single request. While the queries for projects were typically fulfilled by the second-level cache, the cache is thread-safe (read: slow) and caused some contention. Additionally, the intersection records that dictate whether a user has access to a project typically needed to be built, which uses more memory. The intersection records for any given user are used less frequently than the projects themselves and were therefore less likely to be fulfilled by the cache.

Because calculating the projects in scope can be expensive for our larger customers, the first thing we thought about was caching the results of the calculation for some period. Once we started looking at the methods involved, it became apparent that we were doing way more work than we really needed to. Just about everywhere we were asking for the in-scope projects, we did something like this block of code:

  Iterable<Project> projects = getProjectsOpenAndAccessibleTo(currentUser());
  Iterable<Long> projectOids = transform(projects, toOids());

Side note: the calls to getProjectsOpenAndAccessibleTo, currentUser, transform, and toOids are all static imports. While I think we overuse statics in our codebase, I have to declare my love for static imports. I’m a firm believer that Java is not a pleasant language to program in (I’m not going to list alternatives for fear of flamewar), but static imports can create some amazingly readable code (for Java). Also, transform is a method on Google Guava’s Iterables class. The other three are Rally-defined.
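For illustration, toOids() might be little more than a static factory returning a reusable Project-to-oid transform. This is a hypothetical sketch — Project and getOid() are stand-ins for the real Rally domain object, and it uses java.util.function.Function plus streams instead of Guava’s Function and Iterables.transform so it is self-contained:

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class OidTransforms {
    // Hypothetical stand-in for the Rally domain object.
    static class Project {
        private final long oid;
        Project(long oid) { this.oid = oid; }
        long getOid() { return oid; }
    }

    // What a statically imported toOids() might look like:
    // a reusable Project -> Long transform.
    static Function<Project, Long> toOids() {
        return Project::getOid;
    }

    public static void main(String[] args) {
        List<Project> projects = List.of(new Project(7L), new Project(42L));
        // The moral equivalent of transform(projects, toOids()).
        List<Long> projectOids =
                projects.stream().map(toOids()).collect(Collectors.toList());
        System.out.println(projectOids); // prints [7, 42]
    }
}
```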

Once we realized we only needed the oids, we decided it was a good time to ditch the ORM and query directly for the information we needed. Cooking up the queries was the easy part – we used some straight SQL and Spring JDBC templates for that. Oracle SQL has a nifty clause called CONNECT BY that makes dealing with hierarchical information a snap.
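To give a flavor of the approach — the table and column names below are made up, not Rally’s actual schema — a CONNECT BY query walks a project tree from a root down through its children in a single statement. The Java here just assembles the SQL; with Spring you would hand the string to a JdbcTemplate:

```java
public class ProjectScopeQuery {
    // A sketch of the hierarchical SQL that CONNECT BY enables.
    // Table and column names are hypothetical.
    static String scopeDownSql() {
        return "SELECT p.object_id "
             + "FROM project p "
             + "WHERE p.state = 'OPEN' "
             + "START WITH p.object_id = ? "                  // the root project
             + "CONNECT BY PRIOR p.object_id = p.parent_id";  // walk the children
    }

    public static void main(String[] args) {
        // With Spring JDBC this would be something like:
        // jdbcTemplate.queryForList(scopeDownSql(), Long.class, projectOid);
        System.out.println(scopeDownSql());
    }
}
```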

There are actually a few slight variations on how we calculate which projects are in scope, so we introduced an abstraction called OidsInScope based on the Guava Supplier interface. The pseudocode for all of the algorithms looks something like this (the actual code is tl;dr for this post). The key points are that we pass the object a mechanism for querying, and that equality is determined based on the combination of userId, projectOid, scopeUp, and scopeDown.

  public class ProjectOidsInScope implements OidsInScope {
    public ProjectOidsInScope(queryMechanism, userId, projectOid, scopeUp, scopeDown) {
      // captured in instance variables
    }

    // get is the method we are relying on from the Supplier
    public Iterable<Long> get() {
      // query for the oids of projects in scope based on
      // userId, projectOid, scopeUp, scopeDown
    }

    public boolean equals(Object o) {
      // compare using userId, projectOid, scopeUp, scopeDown
    }

    public int hashCode() {
      // combine userId, projectOid, scopeUp, scopeDown
    }
  }
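Because the OidsInScope instance doubles as the cache key, its hashCode must be consistent with equals or map lookups will silently miss. A minimal self-contained sketch of that key identity — field names follow the pseudocode, and the query mechanism is deliberately left out of equality, since two scopes asking the same question should hit the same cache entry:

```java
import java.util.Objects;

public class ScopeKey {
    private final long userId;
    private final long projectOid;
    private final boolean scopeUp;
    private final boolean scopeDown;

    ScopeKey(long userId, long projectOid, boolean scopeUp, boolean scopeDown) {
        this.userId = userId;
        this.projectOid = projectOid;
        this.scopeUp = scopeUp;
        this.scopeDown = scopeDown;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof ScopeKey)) return false;
        ScopeKey that = (ScopeKey) o;
        return userId == that.userId
                && projectOid == that.projectOid
                && scopeUp == that.scopeUp
                && scopeDown == that.scopeDown;
    }

    @Override
    public int hashCode() {
        // Consistent with equals: same four fields.
        return Objects.hash(userId, projectOid, scopeUp, scopeDown);
    }
}
```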

Google Guava once again turned what could have been a ton of caching code into very little. Using Guava’s Supplier as a common base for the abstraction made the code that builds the backing store for our cache look like this:

  new MapMaker()
      .initialCapacity(10000)
      .expireAfterWrite(numberOfSeconds, SECONDS)
      .concurrencyLevel(16)
      .makeComputingMap(new Function<OidsInScope, ProjectScope>() {
        @Override
        public ProjectScope apply(OidsInScope key) {
          return new ProjectScope(memoize(key));
        }
      })

A computing map is a Guava class that knows how to create or find values if you ask for a key that doesn’t exist. In this case, we calculate the value using an anonymous inner class that memoizes the result of the OidsInScope’s get method. The cache stores the memoized value for the specified time, at which point it expires and needs to be recalculated. We can tune the expiration time (represented by the numberOfSeconds parameter) at run time. We end up with the OidsInScope itself as the key in the map, using the equality rules described above.
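The two moving parts — compute-on-miss and memoization — can be sketched without Guava using ConcurrentHashMap.computeIfAbsent. This is a rough stand-in, not our code: it lacks MapMaker’s timed expiration and concurrency tuning, the keys are Strings rather than OidsInScope instances, and the counter exists only to show that the expensive computation runs once per key:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class ComputingMapSketch {
    static final AtomicInteger computations = new AtomicInteger();

    // Memoize: wrap a supplier so its expensive get() runs at most once.
    static <T> Supplier<T> memoize(Supplier<T> delegate) {
        return new Supplier<T>() {
            private volatile T value;
            public synchronized T get() {
                if (value == null) {
                    value = delegate.get();
                }
                return value;
            }
        };
    }

    public static void main(String[] args) {
        ConcurrentMap<String, Supplier<String>> cache = new ConcurrentHashMap<>();

        // First lookup: the mapping function builds a memoized supplier.
        Supplier<String> scope = cache.computeIfAbsent("user1:proj7",
                key -> memoize(() -> {
                    computations.incrementAndGet(); // stands in for the scope query
                    return "oids-for-" + key;
                }));
        scope.get();
        scope.get(); // memoized: no recomputation

        // Second lookup for the same key returns the cached supplier.
        cache.computeIfAbsent("user1:proj7", key -> memoize(() -> {
            computations.incrementAndGet();
            return "oids-for-" + key;
        })).get();

        System.out.println(computations.get()); // prints 1
    }
}
```

In the real thing, expiration does the invalidation for us: after numberOfSeconds the entry drops out and the next lookup recomputes.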

This is all coordinated by a factory object that builds the OidsInScope passed to the cache for memoization. The factory gathers the current user, the project, and the scoping flags, creates the OidsInScope with a mechanism for issuing the pertinent queries, and calls the scope cache like this:

  return scopeCache.get(new ProjectOidsInScope(queryMechanism, currentUserId, projectOid, scopeUp, scopeDown));

The end result of breaking the reliance on our transactional model for querying was a big reduction in memory consumption and in database utilization. For the page that was causing high load when used by a big customer, we saw a profound drop in the amount of memory allocated for each request. This chart is pulled from our internal Splunk instance, and it shows the amount of heap allocated for that customer on that page over a one-month period. We rolled out the scope caching code on September 10th. Check out the flat line.

[Chart: Heap Allocated for Customer X on Page Y]
