How to halve your support costs
When you freelance, or work in a small company, support is a big cost. It’s a big cost in big companies but they can afford to hire people to deal with it. In tiny companies, or when you’re on your own, support often falls to the developers because they’re the ones who know what’s going on.
But, in my experience, a huge chunk of all support requests follow a similar pattern.
- “It’s not working right”
- “What’s happening?”
- “Look at X, it should be Y”
- “OK, I’ll take a look”
- (spends an hour or two trying to recreate the issue and diving into the logs to see what’s been going on)
- “Actually, it’s working exactly as it’s supposed to - there’s this rule we added three months ago that means that X is supposed to be that way”
- “Oh yeah, I totally forgot about that”
- “Yeah, so did I”
I’ve got multiple clients and they don’t really care about the intricate details and edge cases of the systems they’ve specified. They just want it to work and, if it does something unexpected, they want to know why.
Which is why all the systems I design nowadays have an audit trail.
Logging is not enough
Any software system should have some form of logging. There’s so much complexity, so many possible unexpected interactions, that we need those logs to see what’s been happening.
But the logs are necessarily dense. They capture all sorts of information that may or may not be relevant and they take time to wade through and decipher.
In the past, I’ve used the likes of New Relic and AppSignal. And I’m currently trying to get my head around OpenTelemetry - just so I can get a better handle on what’s going on inside my systems (especially the larger ones).
But, while those tools are useful for finding issues, the type and depth of data stored in them (even if you can front them with a nice dashboard) makes them unsuitable for most clients to use.
Reduce your support costs
If you can present the client with a simple audit trail, that shows events in a format that makes sense to them, they will quickly turn to that as their first port of call. And they’ll only come to you for support when they get stuck.
They’re not interested in response times or IP addresses. They don’t want to know what a trace or a span is. They just want something like:
- George added a new Order “ABC-123” to the “Orders” Folder
  - The “New Order” Automation was triggered
    - Order “ABC-123” was added to the “Order Processing” Workflow with a Status of “Order Received”
    - A Notification was sent to Rebecca with the subject line “Order ABC-123 requires processing”
- Rebecca selected “Prepare for Dispatch” in the “Order Processing” workflow and completed the “Order Dispatch Details” form
  - Order “ABC-123” was moved to Status “Order Ready for Dispatch”
    - An email was sent to customer@exampl3.com with the subject line “Your order is ready to ship”
      - The email was bounced due to an invalid email address
Now, if someone gets in touch complaining about a lack of email notifications regarding their order, our client can take a look at the audit trail and pinpoint exactly where the problem lies - they typed their address in wrong. Incidentally, probably a quarter of the support requests I get are to do with tracking down missing emails. It’s a royal pain in the arse.
What are we going to build?
Firstly, this audit trail only lists things that make sense to the client. There may be 100 database writes, numerous background jobs, all sorts of intercommunication between various APIs going on here. But the client does not want to know about those things. They only want to know “what happened to that Order?”
Secondly, presenting the trail as a hierarchy makes it clear why things are happening. Rebecca received a notification because the “New Order” automation was triggered. The “New Order” automation was triggered because George posted the new Order.
So, we want something that takes meaningful “business actions” and writes them to some sort of log. And it nests those actions within each other, so a direct causal link can be drawn between them.
Now, think back to how we implemented our “adding a new client organisation” specification. In order to make this work, we have a number of “features”, with names such as “create organisation”, “link folder” and “grant access”. These are all terms that our client will understand[1]. So if each of our feature classes could record what it was doing, we’ve got the basis of an audit trail right there. And if, when a feature is active, we know the context in which it is running, we can take those recordings and arrange them hierarchically.
We already have the beginnings of this.
Each of our features broadcasts a series of events - `create_organisation_started`, `create_organisation_completed` or maybe `create_organisation_failed`. So if a `create_organisation_started` event is received, followed by `create_user_group_started`, `create_user_group_completed` and `create_organisation_completed`, it’s probably safe to assume that the user group was created as a result of the organisation being created.
OK, that might be a bit simplistic, as we’re assuming a single thread of execution, with no parallelism and no background tasks. But you can see how we’ve got the building blocks already in place.
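To make that concrete, here’s a toy sketch - emphatically not the real implementation - showing how a flat, single-threaded stream of those events could be folded into a tree using a stack:

```ruby
# A toy sketch: fold a flat, single-threaded stream of feature events into
# a tree by keeping a stack of "open" activities.
Node = Struct.new(:name, :children)

def build_tree event_names
  root = Node.new("(root)", [])
  stack = [root]
  event_names.each do |event_name|
    if event_name.end_with?("_started")
      node = Node.new(event_name.delete_suffix("_started"), [])
      stack.last.children << node
      stack.push node
    else
      # a _completed or _failed event closes the current activity
      stack.pop
    end
  end
  root
end

tree = build_tree %w[
  create_organisation_started
  create_user_group_started
  create_user_group_completed
  create_organisation_completed
]
tree.children.first.name                 # => "create_organisation"
tree.children.first.children.first.name  # => "create_user_group"
```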
Show me the code
First thing is, while implementing my “feature” classes, I noticed that they all followed a very similar pattern.
In each of them, I’ve used a `Dry::Struct` with attribute declarations to provide a bit of type-safety[2]. Then there’s a `call` method that checks permissions, then broadcasts a `this_feature_started` event. It does some work, broadcasts a `this_feature_completed` event and then returns the result of the work. Or, if it failed, it broadcasts a `this_feature_failed` event with the exception as part of the broadcast.
There’s no guarantee that all our features will follow this pattern, but I’ve got at least four that work that way. So let’s get rid of the duplication and clean this up a bit.
As ever, let’s start with a spec, so we can design our perfect interface. I’m going to add it in to `spec/core/standard_procedure/feature_spec.rb`, implying this abstract class is going to be called `StandardProcedure::Feature`. That sounds reasonable, doesn’t it?[3]
Testing stuff that doesn’t exist
Now one problem with writing specs for abstract or base classes is that their actual functionality doesn’t exist.
The way round this is to create a fake, concrete class inside your test[4]. You know what the thing as a whole should do, and you know which bits you’ve added in to your fake class, so everything left over must belong to your abstract class.
So, what does it need to do?
- We can rely on `Dry::Struct` to check our parameter types, so no need to test that
- We need to make sure the given user has permission to perform this action
- We broadcast a `started` event
- We do some work
- We broadcast a `completed` event
- Or we broadcast a `failed` event if an exception is raised
Let’s break our specification into clauses:
```ruby
RSpec.describe StandardProcedure::Feature do
  context "authorisation" do
    it "raises a StandardProcedure::Unauthorised error if a user is not supplied"
    it "raises a StandardProcedure::Unauthorised error if the user fails the authorisation check"
    it "does not raise an error if the user passes the authorisation check"
  end

  context "starting" do
    it "broadcasts a 'feature_started' event before the work is done"
    it "includes the parameters in the `feature_started` event"
  end

  context "working" do
    it "performs the work defined by the implementor"
    it "returns the result of the work"
  end

  context "completing" do
    it "broadcasts a 'feature_completed' event after the work is done"
    it "includes the parameters, plus the result, in the `feature_completed` event"
  end

  context "failing" do
    it "broadcasts a 'feature_failed' event if the work fails"
    it "includes the parameters, plus the exception, in the `feature_failed` event"
  end
end
```
I think that’s a pretty good definition of what we’re after. At least for now.
A restructure
As an aside, I started running in to problems with the `StandardProcedure` class I had designed. Rails likes things done a certain way and I was going against the grain. Which is a bad idea. Plus, my implementation used a ton of class methods, so as my specs ran, there were relics of previous test runs still trapped within that single instance.

So I reworked it so every feature requires an actual instance of the `StandardProcedure` class. Each spec creates a brand new instance and prepares it with just the dependencies it requires. This is a better design - the specs document each feature's dependencies and there's less scope for the unexpected. In the main application, I altered the initialiser so it creates and prepares a `StandardProcedure` instance and then stores it in `Rails.application.config.standard_procedure`. My base controller then has a protected method - `app` - that returns that single instance. Each test has its own isolated instance, but the Rails application has one global instance.
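In outline, that wiring looks something like this (a sketch - the `Api` namespace and the preparation steps are placeholders):

```ruby
# config/initializers/standard_procedure.rb - create and prepare the single,
# global instance (what gets registered into it is covered later on)
Rails.application.config.standard_procedure = StandardProcedure.new

# The base controller (the Api namespace is illustrative)
module Api
  class BaseController < ApplicationController
    protected

    # The application-wide StandardProcedure instance
    def app
      Rails.application.config.standard_procedure
    end
  end
end
```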
Finally, I moved all the non-Rails specific classes into the `lib` folder. This keeps Rails happy and also makes it explicit which stuff is "logical standard procedure" and which is "implementation standard procedure".
Writing the spec
I won't go through all the clauses, but I'll run through how I did the authorisation part.
Firstly, we need to set up this `StandardProcedure` instance.
```ruby
before do
  @standard_procedure = StandardProcedure.new
end
```
And fill out our clauses.
context "authorisation" do
it "raises a StandardProcedure::Unauthorised error if a user is not supplied" do
@name = "Alice"
@implementation = StandardProcedure::Feature::FakeImplementation.new app: @standard_procedure, user: nil, name: @name
expect { @implementation.call }.to raise_error StandardProcedure::Unauthorised
end
it "raises a StandardProcedure::Unauthorised error if the user fails the authorisation check" do
@user = double "user", can?: false
@implementation = StandardProcedure::Feature::FakeUnauthorisedImplementation.new app: @standard_procedure, user: @user
expect { @implementation.call }.to raise_error StandardProcedure::Unauthorised
end
it "does not raise an error if the user passes the authorisation check" do
@user = double "user", can?: true
@name = "Alice"
@implementation = StandardProcedure::Feature::FakeImplementation.new app: @standard_procedure, user: @user, name: @name
expect { @implementation.call }.to_not raise_error
end
end
As our feature class is abstract, we're going to use some "actual" implementations to test things are working as we expect. These look like this:
```ruby
# standard:disable Lint/ConstantDefinitionInBlock
class StandardProcedure::Feature::FakeImplementation < StandardProcedure::Feature
  attribute :name, Types::Strict::String

  def authorised?
    true
  end
end

class StandardProcedure::Feature::FakeUnauthorisedImplementation < StandardProcedure::Feature
  def authorised?
    false
  end
end
# standard:enable Lint/ConstantDefinitionInBlock
```
The `FakeImplementation` is always `authorised?` whereas the `FakeUnauthorisedImplementation` is never `authorised?`.

In order to make this work, I then start implementing the `Feature` class. As you can see, I need to add something in to the `call` method that checks `authorised?` to see if we're allowed to proceed or not. Again, instead of running through every step, I'll just show you how `Feature` ended up looking.
require "dry/struct"
require "dry/inflector"
require_relative "../standard_procedure/errors"
class StandardProcedure::Feature < Dry::Struct
module Types
include Dry::Types()
end
attribute :app, Types.Interface(:publish, :register)
attribute :user, Types::Interface(:can?).optional
def authorised?
false
end
def perform
end
def call
authorise!
started
perform.tap do |result|
completed_with result
end
rescue => error
failed_with error
raise error
end
def feature_name
self.class.feature_name
end
def authorise!
unauthorised! if user.nil? || !authorised?
end
def started
app.publish event_name(:started), **to_hash
end
def completed_with result, activity: nil
app.publish event_name(:completed), **to_hash.merge(result: result)
end
def failed_with error, activity: nil
app.publish event_name(:failed), **to_hash.merge(error: error)
end
def unauthorised!
raise StandardProcedure::Unauthorised.new "#{user} cannot execute #{self.class.name}"
end
class << self
def register_in app
app.register feature_name, self
%w[started completed failed].each do |event|
app.register_event event_name(event)
end
end
def feature_name
inflector.underscore name.to_s
end
def event_name event
"#{feature_name}.#{event}"
end
private
def inflector
@@inflector ||= Dry::Inflector.new
end
end
private
def event_name event
self.class.event_name event
end
end
Previously, every feature implemented the `call` method. Now our `Feature` base class implements `call` and our subclasses just need to implement `authorised?`, to check the current user's permissions, and `perform`, to actually do the work. Also note that we pass our `StandardProcedure` instance in to each feature as the `app` parameter, and we use `Dry::Inflector`[5] to calculate a default name for our class.
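If you've not used dry-inflector before, `underscore` does what you'd expect, with namespaces becoming path segments - which is where feature names like the one in the full-stack spec later on come from:

```ruby
require "dry/inflector"

inflector = Dry::Inflector.new
inflector.underscore "Organisations::CreateOrganisation"
# => "organisations/create_organisation"
```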
We can now go back and simplify our actual features. For example, our `CreateOrganisation` class becomes:
```ruby
require_relative "../standard_procedure/feature"

module Organisations
  class CreateOrganisation < StandardProcedure::Feature
    attribute :name, Types::Strict::String

    def authorised?
      user&.can? :create, :organisation
    end

    def perform
      app["organisations"].create! name: name
    end
  end
end
```
Much, much simpler.
And now that's in place, we can return to the audit trail.
The auditor
As we've now got a base `Feature` class, we can hook our auditing into that class[6]. There is a helper method to access our "auditor" object and we hook in to the `call`, `started` (renamed `record_start_of_activity`), `completed_with` and `failed_with` methods.
```ruby
def call
  authorise!
  activity = record_start_of_activity
  perform.tap do |result|
    completed_with result, activity: activity
  end
rescue StandardProcedure::Unauthorised => error
  auditor&.record_authorisation_failure feature_name, to_hash
  raise error
rescue => error
  failed_with error, activity: activity
  raise error
end

def authorise!
  unauthorised! if user.nil? || !authorised?
end

def record_start_of_activity
  app.publish event_name(:started), **to_hash
  auditor&.start_activity feature_name, to_hash
end

def completed_with result, activity: nil
  app.publish event_name(:completed), **to_hash.merge(result: result)
  auditor&.record_completion_for activity, to_hash.merge(result: result)
end

def failed_with error, activity: nil
  app.publish event_name(:failed), **to_hash.merge(error: error)
  auditor&.record_failure_for activity, to_hash.merge(error: error)
end
```
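The `auditor` helper itself isn't shown above. My assumption is that it's a simple lookup in the locator, which also keeps auditing optional (hence all the `&.` safe-navigation calls):

```ruby
private

# Assumed shape of the helper: fetch whatever has been registered in the
# locator under "auditor" (nil if this deployment doesn't audit)
def auditor
  app["auditor"]
end
```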
Now we need to implement the Auditor. And we also need somewhere to store these "activities".
In the Rails side of the application, we create a new `Activity` model. The migration looks like this:
```ruby
class CreateActivities < ActiveRecord::Migration[7.1]
  def change
    create_table :activities do |t|
      t.string :ancestry, null: false, index: true
      t.belongs_to :user, null: true, foreign_key: true
      t.integer :status, default: 0, null: false
      t.string :feature_name, null: false, index: true
      t.text :parameters
      t.timestamps
    end
    create_table :loggables do |t|
      t.belongs_to :activity, null: false, foreign_key: true
      t.string :key, null: false
      t.belongs_to :value, polymorphic: true, null: false, index: true
    end
  end
end
```
Activities use the `ancestry` gem to deal with the hierarchy. Each activity is associated with a user and has a status (implemented as an `enum`). We also store the feature name and any parameters (using Rails' database serialization). And we have an associated table and model, `Loggable`, which links an activity to any other model through a polymorphic association. This is so we can easily find all activities related to a model. For example, we might want to see "everything that has happened to this folder" - so we can use `Loggable.where(value: @folder)` or `Activity.includes(:loggables).where(loggables: { value: @folder })` and still get the benefit of our database indexes.
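And because `ancestry` gives us tree scopes and associations for free, walking the trail stays simple. A couple of illustrative queries (the feature name matches the full-stack spec later on, and I'm assuming `Loggable` declares `belongs_to :activity`, which the migration implies):

```ruby
# Top-level activities - the things users did directly
root = Activity.roots.find_by feature_name: "organisations/create_organisation"

# Everything that happened as a consequence of that action
root.children

# Everything that has ever happened to this folder
Loggable.where(value: @folder).includes(:activity).map(&:activity)
```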
The auditor itself requires a `start_activity` method, which returns some object that is then used in the subsequent calls to `record_completion_for` and `record_failure_for`. Plus there's a `record_authorisation_failure` method too.
At first I was going to create an `Auditor` class. But then I thought I could short-cut this - by adding these four methods as class methods on the ActiveRecord `Activity` class and then registering `Activity` in the locator with the name `auditor`.
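So registering the auditor becomes a one-liner in the initialiser, using the locator's `register` method from earlier:

```ruby
# The ActiveRecord Activity class doubles as the auditor
app.register "auditor", Activity
```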
This means `Activity` ends up looking like this:
```ruby
class Activity < ApplicationRecord
  has_ancestry
  belongs_to :user, optional: true
  has_many :loggables, dependent: :destroy
  enum status: {pending: 0, in_progress: 1, completed: 100, failed: -1}
  validates :feature_name, presence: true
  serialize :parameters, coder: JSON, default: {}

  class << self
    def record_authorisation_failure feature_name, data
      user = data.delete(:user)
      create(user: user, status: "failed", feature_name: feature_name, parameters: parameters_from(data)).tap do |activity|
        attach_loggables_from data, activity: activity
      end
    end

    def start_activity feature_name, data
      user = data.delete(:user)
      create(user: user, status: "in_progress", feature_name: feature_name, parameters: parameters_from(data)).tap do |activity|
        attach_loggables_from data, activity: activity
      end
    end

    def record_completion_for activity, data
      activity.update! status: "completed", parameters: parameters_from(data)
      attach_loggables_from data, activity: activity
    end

    def record_failure_for activity, data
      activity.update! status: "failed", parameters: parameters_from(data)
      attach_loggables_from data, activity: activity
    end

    private

    def parameters_from data
      data.select { |key, value| !value.is_a? IsLoggable }.transform_values(&:to_s)
    end

    def attach_loggables_from data, activity:
      loggables_from(data).each do |key, value|
        activity.loggables.where(key: key).first_or_create!(value: value)
      end
      activity
    end

    def loggables_from data
      data.select { |key, value| value.is_a? IsLoggable }
    end
  end
end
```
There's a slight complication when writing each activity record - the auditor methods each take a `data` parameter, but the `Activity` class needs to split that `data` into ActiveRecord models and "pure" data. The former are stored as `Loggable`s, the latter in the `parameters` attribute of the `Activity`.
Give that a second to sink in.
From the point of view of the auditor and the feature classes, we have a load of data of varying types and formats. But from the point of view of the `Activity` and the Rails implementation, we have a set of ActiveRecord models and non-ActiveRecord models. The non-Rails side of the application doesn't care about this - it just passes around these objects and does what it wants with them. It's only when it comes to storing this data that the actual implementation matters.
Our application logic is independent of the underlying storage engine.
The final piece
Now when we run our full stack specification, we still get a failure. When looking at the audit trail, we fetch the root activities that we have permission to see. Then we search for our `create_organisation` feature, then we fetch its child activities. In effect, we're exploring the tree of activities triggered by the original `create_organisation` action.
step "I look at the audit trail" do
get "/api/activities?scope=roots", headers: @auth_headers
expect(last_response).to be_successful
end
step "I should see a log of how access to this folder was granted" do
data = JSON.parse(last_response.body)
activity = data.find { |d| d["featureName"] == "organisations/create_organisation" }
expect(activity).to be_present
@activity_id = activity["id"]
expect(activity["userId"]).to eq @me.id
expect(activity["name"]).to eq "TinyCo"
result = activity["items"].find { |d| d["key"] == "result" }
expect(result["id"]).to eq @tinyco_id
expect(result["name"]).to eq "TinyCo"
get "/api/activities/#{@activity_id}/children", headers: @auth_headers
expect(last_response).to be_successful
data = JSON.parse(last_response.body)
activity = data.find { |d| d["featureName"] == "authorisation/create_user_group" }
expect(activity).to be_present
expect(activity["user_id"]).to eq @me.id
expect(activity["name"]).to eq "TinyCo staff"
result = activity["items"].find { |d| d["key"] == "result" }
expect(result["id"]).to eq @user_group_id
expect(result["name"]).to eq "TinyCo staff"
activity = data.find { |d| d["featureName"] == "file_system/link_folder" }
expect(activity).to be_present
expect(activity["user_id"]).to eq @me.id
result = activity["items"].find { |d| d["key"] == "source_folder" }
expect(result["value"]["id"]).to eq @policies_folder.id
expect(result["value"]["name"]).to eq @policies_folder.name
result = activity["items"].find { |d| d["key"] == "user_group" }
expect(result["value"]["id"]).to eq @user_group_id
expect(result["value"]["name"]).to eq "TinyCo staff"
result = activity["items"].find { |d| d["key"] == "result" }
expect(result["value"]["id"]).to eq @linked_folder_id
expect(result["value"]["name"]).to eq @policies_folder.name
activity = data.find { |d| d["featureName"] == "authorisation/grant_access" }
expect(activity).to be_present
expect(activity["user_id"]).to eq @me.id
expect(activity["actions"]).to eq []
result = activity["items"].find { |d| d["key"] == "user_group" }
expect(result["value"]["id"]).to eq @user_group_id
expect(result["value"]["name"]).to eq "TinyCo staff"
result = activity["items"].find { |d| d["key"] == "resource" }
expect(result["value"]["id"]).to eq @linked_folder_id
expect(result["value"]["name"]).to eq @policies_folder.name
result = activity["items"].find { |d| d["key"] == "result" }
expect(result["value"]["id"]).to eq @permission_id
end
That's quite a lot of code and it's pretty impenetrable but, for now at least, I'm going to keep it all in one big block rather than trying to split it into pieces. This is because it's a simple linear flow - x follows y follows z - which, in theory at least, means there's less mental overhead when reading it and trying to figure out how it's supposed to work.
The spec fails a few lines after the `get "/api/activities/#{@activity_id}/children", headers: @auth_headers` line. If I insert a `puts Activity.all.inspect` statement in there[7], we see that all the activities are recorded as expected, but they are not organised into a hierarchy.
So how do we get that to work?
Context
When `CreateOrganisation` is `call`ed, it is called by the `Api::OrganisationsController`. In other words, it's directly triggered by a user action (in this case, via the API). But the subsequent actions, which are a consequence of the `CreateOrganisation` call, occur within the context of that original call.
So, could we have a `context` object that is optionally passed into each feature?
The feature doesn't actually care what this object contains, it just passes it to the auditor and any event handlers. The event handlers can use that to initialise any subsequent features within this context.
As for what the context actually is - it's something that's used by the auditor to decide which `Activity` will be parent to the next `Activity`. The context is an `Activity`[8].
We already have the entry point for this. When we added auditor support to the base `Feature` class, we added a `record_start_of_activity` method, which returns an activity that is created and updated by the auditor. The name `record_start_of_activity` is pretty awkward and didn't really fit with the rest of the flow. Now it makes much more sense. If we add an optional `context` parameter to each `Feature` and rename `record_start_of_activity` to `create_context` (or something like that), it seems to fit a bit better.
In fact, I'd like it to work something like this:
```ruby
attribute :context, Types::Any.optional

def call
  authorise!
  within_new_context do
    started
    perform.tap do |result|
      completed_with result
    end
  rescue => error
    failed_with error
    raise error
  end
rescue StandardProcedure::Unauthorised => error
  authorisation_failed_with error
  raise error
end
```
`within_new_context` already knows the current context, as we will set it as an input parameter to the feature. It can create a "child" context, store it in an instance variable, and then the `started`, `completed_with` and `failed_with` methods can access this nested context and supply it to any subscribers.
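Here's a sketch of how that might hang together - everything beyond the names in the spec above (in particular `create_context` and the instance variable) is my assumption:

```ruby
# A sketch, not the final implementation: ask the auditor for a child
# context, nested beneath the one we were given, and keep it around for
# the other methods to use
def within_new_context &block
  @current_context = auditor&.create_context feature_name, to_hash, parent: context
  block.call
end

def completed_with result
  # Subscribers receive the nested context, so any features they trigger
  # can be started "inside" this one
  app.publish event_name(:completed), **to_hash.merge(result: result, context: @current_context)
  auditor&.record_completion_for @current_context, to_hash.merge(result: result)
end
```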
I updated the spec, updated the `Feature` implementation to match, and it passed. Then I tried the full stack spec again and it failed.
It turns out I'd got my implementation parameters totally wrong. So I reworked the `Activity` class methods into ones that actually do what's required, then went back and updated the feature spec to match the new parameters. And eventually updated the feature itself to get the spec to pass.
Result!
And that's it. Now the individual specs for each feature pass. And our "adding a new organisation", full-stack, specification that drives the system through a JSON API also passes.
We've just completed our first feature in this new application.
But even more, we've implemented an "architecture" that separates the logic of our application from the implementation details without being heavy-weight[9].
How it works
When the Rails application boots, we configure our `StandardProcedure` object, filling it with the various classes and objects that will do the work. In my case, I will probably end up having different initialisers for each of my clients[10]. The configuration will know which features that client needs, so it will register those features into the locator. The configuration also knows how the database (and eventually the user-interface) works, so it will register the relevant ActiveRecord models and event-subscribers in the locator.
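So a per-client initialiser might end up looking something like this (a sketch, with illustrative names, using the `register`/`register_in` API from earlier):

```ruby
# config/initializers/standard_procedure.rb - a per-client sketch
Rails.application.config.standard_procedure = StandardProcedure.new.tap do |app|
  # The features this particular client needs
  Organisations::CreateOrganisation.register_in app
  # The implementation classes those features will look up
  app.register "organisations", Organisation
  app.register "auditor", Activity
end
```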
Then, as web requests are received[11], they travel through the Rails stack until they reach a controller.
The controller creates an instance of the relevant `Feature`, passing it an instance of the `StandardProcedure` locator/event-stream, and giving it a `context` of `nil`[12].
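In code, the controller end of this might look like the sketch below - the action and parameter handling are illustrative, not the real controller:

```ruby
module Api
  class OrganisationsController < BaseController
    def create
      result = Organisations::CreateOrganisation.new(
        app: app,            # the global StandardProcedure instance
        user: current_user,  # checked by authorised?
        context: nil,        # a top-level user action has no parent context
        name: params.require(:name)
      ).call
      render json: result
    end
  end
end
```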
The feature does its stuff, using the locator to access objects it needs to read and write the data. Those objects, in this example so far, are ActiveRecord models, but they don't have to be. Because the feature is isolated from the actual classes, we have the power to change our minds in the future with minimal consequences.
Plus, because the feature is broadcasting events, we can trigger our automations without the feature ever having to be aware of them. For example, if we have a real-time user-interface, we could easily hook up a Rails channel that subscribes to various events and re-broadcasts them, over a web-socket, to clients that are interested. Or `POST` data to a web-hook subscriber. All this stuff is set up by the initialiser at boot time rather than being hard-coded into the features themselves.
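For example, given that we're sitting on `Dry::Events` (see footnote 6), a boot-time subscription might look roughly like this - the channel and the payload handling are illustrative:

```ruby
# In the initialiser: re-broadcast an event over ActionCable.
# ActivityChannel is hypothetical; subscribe comes from Dry::Events.
app.subscribe "organisations/create_organisation.completed" do |event|
  ActivityChannel.broadcast_to event[:user], feature: "create_organisation", name: event[:name]
end
```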
Eventually, control returns from the feature back to the controller and the controller can choose how it returns a response to the client, just like in a standard Rails application.
The good
- Our "application" (the thing that our clients are paying us to build) is separated from our "implementation" (the particular database, user-interface toolkit - even Rails itself[13]). If the code is in
app
then it's Rails specific. If the code is inlib/standard_procedure_app
then it's "core application functionality". - This means that we have options. At the start of the project, we know very little about what's down the road. Maybe it will be unsuccessful so we don't want to invest a ton in external services. Maybe it will scale beyond our wildest dreams, so we swap out Sqlite for Postgres and later one of those "big data" storage engines when we hit a billion users. It won't be a trivial task but the separation means this change is both a do-able and predictable task. And how often can you say that a task is predictable?
- We have hierarchical auditing of user actions, almost for free. Currently, these are stored in an
activities
table, but again, if we hit scale, we can easily swap that out for something like DynamoDB or some expensive Enterprise Logging Framework. - We have a simple way of authorising user actions - each
Feature
has anauthorised?
method that we override - and that ensures no-one bypasses the system's permissions. - The system is built around a core of a "locator" and an "events stream". Meaning individual features remain decoupled from each other - we can make changes over here and be confident we're not breaking something over there.
The bad
- I'm not sure about the `auditor` interface. Or rather, I'm not sure about the dependency between a `Feature` and the `Auditor`. The `Feature` tells the `Auditor` to start and finish its auditing tasks, but there's part of me that feels like the dependency should be the other way round. I suspect the `Auditor` should listen to events from the `Feature` and choose how it's going to audit them, with the `Feature` being oblivious to the fact that it's being tracked.
- However, to do this with the hierarchical audit trail, the `Auditor` would need to maintain some sort of state. Alice has started to create an organisation and, as a by-product of that, they have triggered a series of automations that run in the context of that original action. Currently, the `Feature` asks the `Auditor` for the context and passes that through to all subsequent actions. If the dependency was reversed, the `Auditor` would have to know that those subsequent actions, triggered in Alice's name, belong underneath the `CreateOrganisation` feature. Which might get messy if Alice is doing two separate tasks at once (maybe one in a browser whilst also doing stuff on their phone).
- The feature specification doesn't say anything about this - it's just a feeling I have about the design[14]. If it turns out that's a better way to do things, remember, we have options. We can swap out the implementation of the `Auditor`, update the `Feature` base class to match, and it should have no bearing on the rest of the system.
- Likewise, I'm still not sure about my file-system implementation. I have a base `Node` model with subclasses like `Folder` and `Organisation`. Again, this is not strictly demanded by the feature specification, but it seemed like the right way to implement the database[15].
- Again, if it turns out I'm wrong, we have options. Altering the database when the system has been live for a while is a risky business - because we have to protect against data loss or data mangling. But, certainly at the moment, before anyone's using the system in anger, I can just rip out the `nodes` and replace them with something else.
The ugly
- I think the configuration is a bit messy. I'm storing the `StandardProcedure` locator in `Rails.application.config.standard_procedure` after setting things up in an initialiser. This doesn't feel great to me, but I think it is what it is because of how Rails works.
- A potential failure in the system is if an object, registered in the locator, does not implement the methods that the rest of the system expects. In the likes of Java, you would define an interface and ensure that the implementors use that interface. But that adds in another layer of dependency management, with more symbols and names to learn. In Ruby we bypass that, making our code simpler. However, we already have a very simple check on types - our features use `Dry::Struct` and `Dry::Types` with a `Types.Interface(:method_1, :method_2)` definition. This ensures that the objects provided have certain methods. We can reuse this mechanism in the locator, but I've not thought through how I'm going to do it yet.
- My `ResourcesController`[16] bypasses all this feature-level stuff, relying on `CanCanCan` to ensure it doesn't read data it's not supposed to. I think that should probably be the topic of a future article.
- In fact, there's nothing to stop a controller from bypassing the features and just reading and writing ActiveRecord models directly. Partly this is how Ruby works (very little is private or inaccessible), partly it's because we're putting all the implementation details into a single Rails application. In theory we could split the application into multiple gems to hide things but, because the initialiser needs access to the models, the controller could still access them directly. Plus, at least while the application is small, it would be overkill and over-engineering.
Conclusion
So there you have it. A way of building a Rails application that keeps logic separate from implementation without incurring a huge overhead.
Now I've got a ton more features to write, plus we're going to need to hook some kind of user-interface to this code. So that's what we're going to be looking at next.
1. Even if it’s only a vague understanding ↩︎
2. Not too much. If we wanted our computers to dictate how we express ourselves, we’d be writing in Java ↩︎
3. Maybe it should be `AbstractFeature` or `BaseFeature` but I like to avoid prefixes if I can ↩︎
4. If you were paying attention, you’ll notice I did that last time. The spec for the `HasNodes` module defined an ActiveRecord model (called, unsurprisingly, `HasNodes::FakeModel`) and dynamically created a database table for this fake model while testing the specification. ↩︎
5. This is in the non-Rails specific part of the code, so I don't want to use the Rails inflector. But having multiple inflectors is not good. Ideally, the inflector would be another service that's registered into the locator, so a Rails application uses the Rails inflector everywhere but we're not tied directly into it. ↩︎
6. This is wrong - we're now adding a dependency between every feature and the audit trail. We'll revisit it later, but the reason is that I'm using `Dry::Events` and it doesn't allow us to hook into every event. But, in the spirit of getting something that we can ship and that works according to the original spec, we'll stick with this imperfect design for now. ↩︎
7. When writing tests I rarely, if ever, need to use a debugger. I can insert a `puts` statement into my spec or the actual code, figure out what's going on, correct it and then remove the `puts` statement. ↩︎
8. For this implementation, anyway. If we were storing our data elsewhere then that particular model might not fit. But ultimately it's only the auditor that cares what's inside the context, and the auditor's workings are implementation-specific. ↩︎
9. The reason Rails became popular, all those years ago, is that it did away with the in-depth busywork that "proper software engineering architectures" required you to go through. I hope you'll agree that, while there are a few more steps here than the "Rails Way", it's not particularly heavy and it doesn't really ask you to do any work that you wouldn't already be doing - it just puts that work in a different place. ↩︎
10. I've already decided that they should not be multi-tenanted - each client will get their own copy of the application running on its own server. ↩︎
11. In this example, it's a JSON API call, but the same applies if it were an HTML page request. ↩︎
12. As this is a top-level user-action, rather than something that is happening as a consequence of another action. ↩︎
13. We could extract the `lib/standard_procedure_app` folder, create a gem from it, and then reuse it in a Roda or Hanami system, build a command line interface using Thor, or even build a desktop application with some sort of Ruby-native widget bridge. ↩︎
14. This might be "software engineering" but, for me, designs always come down to feelings. "That feels right" or "that feels wrong" prompt me to investigate and revise how things are done. I guess I'm pretty unusual in that. ↩︎
15. Remember, just because we're "storage independent" doesn't mean "the database doesn't matter". It absolutely matters, so we should take a lot of care in designing it, but it only matters to this particular implementation. ↩︎
16. Which I've not really talked about yet. ↩︎