Dealing with support requests without getting overwhelmed

You might have read that starting a Software as a Service business takes five years.

The first couple of years are spent building the product and getting your first few customers. You iterate on their feedback. You work really hard to get the word out.

And customers churn.

If you're lucky, you'll bring in a few more customers than you lose. Most hit an equilibrium where the number of customers stays relatively constant. So you continue working hard to get the word out.

Things start to change around year three (from both what I've read and from friends and people I know who've been through it). The number of customers starts to pick up, as not only do you have the general functionality you need, but also your reputation has grown to the point where people start coming to you. (Tip - blog about what you're doing - that way you'll have three years of stories that people might stumble across when searching).

The beauty of SaaS businesses is that the cost of goods sold is minimal. If you were building furniture for a living and received an order for one chair, you'd have to buy materials (and labour) to build it. An order for a thousand chairs means buying a thousand times as many materials (though probably a little less than a thousand times the labour). Whereas, for a SaaS business, whether you get one customer or a thousand, the increase in materials and labour costs is tiny. That's because, for a SaaS, most costs are overheads, not costs of goods sold. You rent servers and, barring a bit of scaling, that cost remains fixed. You have developers and designers and sales people and, barring a bit of scaling, that cost remains fixed too.

But there is a hidden cost with SaaS businesses.

The hidden cost of SaaS

This hidden cost doesn't scale linearly, the way the cost of chair materials does. But it does increase rapidly as your number of customers goes up.

And that's the cost of support.

You may think you've built the simplest, easiest-to-use product in the world. But if it does anything non-trivial (and even if it does something trivial), and even if you have the most comprehensive documentation, video library and knowledge base imaginable, you will receive support requests. And customers will expect you to deal with those requests immediately, thoroughly and in a friendly manner.

Which takes good people.

Meaning your cost of labour scales with the number of customers.

What most support requests look like for me

Most of the systems I've built (including Collabor8Online) tend to have direct customers (the people who are paying us - or in my freelance projects, my client) and they, in turn, have their own customers.

Collabor8Online mainly deals with the construction industry. Our clients are often contractors who then use the software to collaborate (good name eh?) with the other companies they work alongside on a project - the architects, the sub-contractors, the authorities. So there are two tiers of users - those at our client, who have chosen to buy the software and have undergone training for it, and those at their clients, who have been told to use the software and have no affinity to it.

This means that a lot of incoming support requests are "I've found a bug, it's not working correctly". The support team (and often myself) then dig through the system, figure out what it's doing - often by ploughing through the logs - and, more often than not, we report back to them:

"It is working correctly - setting X was switched to Y which is why it's behaving in that way".

As each client is configured differently, this is a process that we can't really shortcut. At least, not in the traditional way.

The solution

The key to this is that the clients don't fully understand what the system is doing. To them, it's a black box - they ask it for something, the black box whirrs and grinds, and an answer pops out the other end.

But as so many requests effectively boil down to "why is it doing this?" - the best way round this is to give the client greater insight into what's actually going on.

Ruby on Rails logs every web request that comes in. You can easily add your own log entries using the Rails.logger object, and a lesson I've learnt from these support requests is that you should be logging a lot, as you'll spend a lot of time looking at it.
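Rails.logger is a standard Ruby Logger, so the same pattern can be sketched in plain Ruby. A minimal example, assuming a hypothetical `process_upload` routine (the method name and log messages are mine, not from the app): log the inputs, the decision points and the outcome, because you'll be reading these lines later.

```ruby
require "logger"

# In a Rails app you'd call Rails.logger directly; here we build an
# equivalent stdlib Logger so the sketch runs outside Rails.
logger = Logger.new($stdout)
logger.level = Logger::INFO

# Hypothetical piece of work, instrumented generously.
def process_upload(filename, user, logger:)
  logger.info "Upload requested: file=#{filename} user=#{user}"
  raise ArgumentError, "no filename given" if filename.to_s.empty?

  # ... store the file ...
  logger.info "Upload stored: file=#{filename}"
  :stored
rescue ArgumentError => e
  logger.error "Upload failed: #{e.message}"
  raise
end

process_upload("drawing-rev-b.pdf", "alice", logger: logger)
```

The habit that pays off is logging the *why* (which user, which file, which branch was taken), not just the fact that a request arrived.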

But the Rails log isn't for end users. They don't care which controller was invoked when - the URLs are meaningless to them and, if it's an API-based app, or an SPA, they won't even recognise them.

Instead, I now produce an "application log" - the things that have happened in the system, in a format that makes sense to the users.

With Collabor8, this is a core feature. Imagine you're on a £50m construction project and the wrong thing got built because the sub-contractor was working to an old version of the drawings. In Collabor8, not only is the new revision logged, but also the notifications sent out and received - so the client can prove that the correct version was uploaded and the sub-contractor was told about it.

But even if it's not core to the product, it can help clients diagnose their own issues - or save your support team time (so they don't have to ask the dev team to look through the Rails logs).

How to implement an application log

In Collabor8Online, I implemented this application log through ActiveRecord callbacks - after_create, after_save, after_destroy. Or, more accurately, after_commit on: :create/:update/:destroy, as transactions can cause problems when using Sidekiq (Sidekiq often starts a background task before the transaction has completed, so the background task cannot read the data it needs to do its work).

We have a (huge) activities table that gets written to (via a background task, so it doesn't slow the UI down) every time a significant change is made to one of the models. Then the user interface groups the various activities into types, so the client can see "all uploads in this folder" or "all notifications sent regarding this document".
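The flow can be mocked up in plain Ruby (the `Document` model, the `document.uploaded` event type and the job name in the comments are all hypothetical - this is a sketch of the shape, not the real Collabor8 code):

```ruby
require "time"

# Stand-in for the (huge) activities table.
ACTIVITY_LOG = []

# In the real app this body lives in a background job enqueued from an
# `after_commit on: :create` callback, so the UI never waits for the write.
def record_activity(event_type, subject, actor)
  ACTIVITY_LOG << {
    event: event_type,          # e.g. "document.uploaded" - grouped by the UI
    subject: subject,           # which record the event relates to
    actor: actor,               # who did it
    at: Time.now.utc.iso8601    # when it happened
  }
end

class Document
  attr_reader :name

  def initialize(name, uploaded_by:)
    @name = name
    # In Rails, roughly:
    #   after_commit(on: :create) { RecordActivityJob.perform_later(self) }
    record_activity("document.uploaded", name, uploaded_by)
  end
end
```

Because every entry carries an event type, a subject and an actor, the UI can filter the one big table down to "all uploads in this folder" or "all notifications sent regarding this document".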

This does work well, but as the software grows in complexity, I'm beginning to think it was the wrong approach. Callbacks are convenient, but it becomes hard to trace what's going on - especially if one callback triggers a background task that triggers another callback that triggers a background task that ... - you get the picture.

So for Standard Procedure (my upcoming side-project), I'm trying a different approach.

I've defined a model, called a Command. By itself, a command doesn't do much - it maintains a list of related objects and has stub methods for authorisation (who has permission to do this?) and for doing the work (a perform method). It also has a collection of variable fields (using Rails serialised fields - more on this another week) so I can store arbitrary data against each command without requiring a single table with lots and lots of fields.

I then define subclasses, using Rails' single-table inheritance. For example, List::AddCard is invoked by the List#add_card method. If you remember, when I was talking about "Design by Database Seed", lots of the method calls had a "user" parameter. This is passed in to the Command subclass, which checks that the user has permission to do this, calls the perform method to actually do the work, then marks itself as completed.
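The shape of the pattern can be sketched in plain Ruby (the real version is an ActiveRecord model using single-table inheritance; here `Command`, `List` and the permission rule are simplified stand-ins of my own):

```ruby
# Base command: holds the user, the status, and the authorise/perform stubs.
class Command
  attr_reader :user, :status, :error

  def initialize(user:)
    @user = user
    @status = :pending
  end

  def call
    raise "#{user} is not permitted to do this" unless authorised?
    result = perform
    @status = :completed
    result
  rescue StandardError => e
    @status = :failed
    @error = e
    raise
  end

  private

  def authorised?
    raise NotImplementedError
  end

  def perform
    raise NotImplementedError
  end
end

class List
  attr_reader :owner, :cards

  def initialize(owner:)
    @owner = owner
    @cards = []
  end

  # The model method simply builds and runs the command.
  def add_card(user:, title:)
    AddCard.new(user: user, list: self, title: title).call
  end

  class AddCard < Command
    def initialize(user:, list:, title:)
      super(user: user)
      @list = list
      @title = title
    end

    private

    # Toy permission rule - the real app would consult roles/permissions.
    def authorised?
      @user == @list.owner
    end

    def perform
      @list.cards << @title
      @title
    end
  end
end
```

Callers never touch perform directly; they go through call, so the permission check and the status bookkeeping can't be skipped.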

There's more to it than just this, of course. Commands log their start and completion times - or any errors that occurred. They can be nested within a hierarchy (Command A invokes Command B which invokes Command C) or run as background tasks. If a command invokes a number of background task commands, it can wait until they have all completed - giving a hacked-together version of async/await. I will be releasing this Command framework as open source in the near future, when the initial version has stabilised a bit.

There are a few advantages to this.

Firstly, each individual action is easy to test. I write a test for List::AddCard which is very self-contained, and includes dependency injection as a by-product of the design (meaning the test is easily isolated from other, non-related changes in the app).

Secondly, as a Command is a model, the same code is easily used both from within models (as List#add_card) and from a controller. I can display a form at GET /lists/123/add_cards/new, process it at POST /lists/123/add_cards, and display progress updates at GET /lists/123/add_cards/456 (using TurboFrames and TurboStreams) - and I get all the ActiveRecord form-handling goodness, such as validations, while still knowing that permissions and so on are taken care of. Even better, any APIs use exactly the same, well-tested routines as the human interface.
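The routing for this is ordinary nested resources. A sketch of what that could look like in config/routes.rb, assuming resource names matching the example URLs (the controllers themselves aren't shown):

```ruby
# config/routes.rb - hedged sketch, not the actual Standard Procedure routes.
Rails.application.routes.draw do
  resources :lists, only: [] do
    # GET  /lists/123/add_cards/new  - display the form
    # POST /lists/123/add_cards      - create and run the command
    # GET  /lists/123/add_cards/456  - show progress (TurboFrames/Streams)
    resources :add_cards, only: [:new, :create, :show]
  end
end
```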

Thirdly, as the Command is stored in the database, I can easily show a list of them, with their parent or sub-commands, in the user interface. And that list can be filtered to show stuff relating to any other particular model, in the context of any other commands it happened to be part of.

"Why is that field set to X, when I was expecting it to be Y?"
"Well, looking at the command log, I can see it was updated here, which happened when User A performed Action B."
"It's not supposed to do that."
"Hmm, well, looking at the definition of Action B, I can see that you edited it last week, and that edit includes setting the field to Y."
"Oh … so that does make sense then … I'd better change it back."
Rahoul Baruah

Rubyist since 1.8.6. Freelancer since 2007, dedicated to building incredible, low-cost, bespoke software for tiny businesses. Also CTO at Collabor8Online.
Leeds, England