Is ActiveJob the best way to call external APIs – and how do I show the results in the UI?

So you’re fresh off a Node.js project and you’re all excited about how Node returns information back to the client immediately. Node isn’t all bad – that new-fangled ES6 is pretty good, although the tooling still confuses me in places. And then you head back to Rubyland and start working on a Rails project.

You need to make an API call and then show the results to the user in the browser. The call goes to a high-latency external service that you don’t control. It may return quickly. It may return slowly. It may not return at all. This is standard procedure in Node. What’s the equivalent in Rails?

Firstly, the reason Node works the way it does is its architecture. From the off it was designed to be “evented” – that is, the current thread slices itself up across multiple tasks and flow is managed through callbacks. Ruby has its own evented system – EventMachine – so why not just use that? The key issue is that evented systems rely on you structuring your code so that you never place any long-running operations in the main thread. In Node, this is easy – every library you use is designed to work in an evented way. But in Ruby, most gems are designed for traditional, serial-style code. So you can inadvertently block the main thread when making a library call, which will bring your server grinding to a halt.

Which means a Node-style approach won’t work.

Instead, we turn to ActiveJob. Read on to find out how we do it…


When making the API call, we use a background task to do the actual work. This means that no matter how long it takes, it won’t tie up the web-server. When the job kicks off, we persist a model object, giving it a state of “in progress”. When the job is completed, we update the model, giving it a state of “completed” (and storing the results somewhere within it). And if the call fails, we update the state to “error”.
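Here’s a minimal sketch of what that job might look like – the ApiCall model, its fields and the ExternalApi client are names I’ve made up for illustration, not anything Rails gives you:

```ruby
# app/jobs/external_api_job.rb
# Sketch of a job that wraps a slow external call and records its state.
class ExternalApiJob < ActiveJob::Base
  queue_as :default

  def perform(api_call_id)
    api_call = ApiCall.find(api_call_id)
    api_call.update(status: "in progress")

    # The slow, high-latency call to the service we don't control.
    results = ExternalApi.fetch_results

    api_call.update(status: "completed", results: results)
  rescue StandardError => e
    api_call.update(status: "error", error_message: e.message)
  end
end
```

The controller that kicks the work off just persists the record, enqueues the job with `ExternalApiJob.perform_later(api_call.id)` and returns immediately.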

Up till now, I would have used Redis to define this new model, instead of creating an extra model (and hence table) in my main database. It just needs to be a plain Ruby class that knows how to connect to the correct Redis database and has some sort of unique ID that it can use to store and retrieve its state. (Want to know more about how to do this? Drop me an email and let me know.)
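For what it’s worth, the kind of class I’d write looks something like this – a sketch, assuming the redis gem and JSON serialisation, with illustrative names rather than anything standard:

```ruby
require "redis"
require "json"
require "securerandom"

# A plain Ruby "model" that keeps its state in Redis instead of the
# main database. All of these names are my own invention.
class ApiCall
  REDIS = Redis.new # defaults to localhost:6379; pass url: for anything else

  attr_reader :id

  def self.find(id)
    new(id)
  end

  def initialize(id = SecureRandom.uuid)
    @id = id
  end

  def update(attributes)
    REDIS.set(key, JSON.generate(attributes))
  end

  def state
    raw = REDIS.get(key)
    raw && JSON.parse(raw)
  end

  private

  def key
    "api_call:#{id}"
  end
end
```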

Then, I would add a “get job status” action to one of my controllers – pass it the unique ID, it pulls the state out of Redis and returns it to the caller. Finally, we use a bit of AJAX and a timer in the browser to poll what state the call is in – and when it’s completed, grab those results and render them on the page. Polling is generally regarded as bad, and it probably is if you’re at Facebook’s scale – but you’re not. You can alleviate any pressure on your server by using proper HTTP caching, and it’s guaranteed to work in pretty much any browser you come across.
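The status action itself is only a few lines. Again, this is a sketch – the controller name, route and JSON shape are mine, and the browser-side timer is left out:

```ruby
# config/routes.rb would need something like:
#   get "api_calls/:id", to: "api_calls#show"
class ApiCallsController < ApplicationController
  def show
    api_call = ApiCall.find(params[:id])
    state = api_call.state || { "status" => "in progress" }

    # Terminal states never change, so let the browser and any proxies
    # cache them; repeated polls stop hitting the app once the job is done.
    expires_in 1.hour, public: true if %w[completed error].include?(state["status"])

    render json: { id: api_call.id }.merge(state)
  end
end
```

The browser just hits this URL on a timer, stops polling when it sees “completed” or “error”, and renders the results.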

However, as of Rails 5, you can also use ActionCable. This is a Rails implementation of web-sockets – in effect the browser opens a persistent connection to your server, and then your server sends a message to the client when the job completes. No polling at all! And even better, there’s no need to create the intermediate Redis model; ActionCable uses Redis internally to trigger the state change when you tell it the job is complete.
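Roughly, it looks like this – the channel name and payload are my own, and bear in mind ActionCable’s API was still settling down at the time of writing:

```ruby
# app/channels/api_call_channel.rb
# The browser subscribes to this channel with the job's unique ID.
class ApiCallChannel < ApplicationCable::Channel
  def subscribed
    stream_from "api_call_#{params[:id]}"
  end
end

# Then, at the end of the background job, push the results straight out
# to any subscribed browsers instead of waiting to be polled:
ActionCable.server.broadcast("api_call_#{api_call_id}",
                             status: "completed", results: results)
```

On the client, the channel’s JavaScript receives that broadcast and drops the results into the page.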

I’ll admit I’ve not used it in anger yet, and there are some measurements to be done to see which approach is more efficient: holding a persistent connection to the server (which means you cannot re-distribute that particular client to a different load-balanced server if it becomes busy), versus polling at regular intervals (which puts load on the server even if nothing has changed).

But, once Rails 5 is officially released (and at the time of writing, it’s in production on Basecamp 3, even though the actual point-zero release isn’t out yet), there will be a standard way to handle high-latency external calls, built directly into Rails. So the Node people can stop laughing at us.
