Easy to read, Easy to write
I got my first "professional" software development job in 1998. I put professional in quotes because it was a bit of a mess - a tiny place run by one guy who knew just enough to knock a database together, and who hired me, not long out of university, having never seen a SQL database before.
One of the things I very quickly learnt is that there's no such thing as a small job when building systems for other businesses. Because, even if it's an hour's work, you need to test it, ship it and then - and this is the most important part - maintain it. An hour's work may spend ten years in production.
This led to my other conclusion - code is easy to write but hard to read.
It's why I fell in love with Ruby and Rails when I first met them - here is code that reads like English, with lots of high-level abstractions and DSLs making it simpler to understand what's going on. Plus Ruby has the culture of test-driven development[1], so regressions (bugs caused by faulty maintenance) are catered for.
But all that has now changed.
LLMs are pretty good at writing code. Nowadays, they are also good at reading code.
Lots of the stuff I used to sweat over - putting in abstractions, DRYing my code, splitting the user interface into reusable components - it's nowhere near as important as it used to be. Don't get me wrong, it's still important - but, previously, I'd look at a function or class and think "oh, that's pretty complicated, I better break it up into pieces". Now I don't need to do that. Because the LLM makes that decision and I rarely need to read the code.
My current workflow involves writing high-level specifications and telling the LLM to figure out the best way to implement them. I just need to make sure that the steps it has written for testing the outputs match what the user will be expecting[2]. The specification tells the LLM what it needs to build; it does its own research on the code it needs to add, modify or delete (the "read the code" phase) and then gets to work on making the specification pass (the "write the code" phase). We then run the entire test suite to ensure there are no regressions.
As long as the outputs are, given a known set of inputs[3], what we expect, we can ship it.
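To make that concrete, here's a minimal sketch of a specification pinned down as an executable test, using Minitest (which ships with Ruby). The Invoice class, its fields and its tax rate are all hypothetical - the point is that the spec fixes the outputs for a known set of inputs, and any implementation (mine or the LLM's) has to make it pass.

```ruby
require "minitest/autorun"

# Hypothetical implementation - the part the LLM would write or modify.
class Invoice
  TAX_RATE = 0.2

  def initialize(line_items)
    @line_items = line_items # array of { price:, quantity: } hashes
  end

  def subtotal
    @line_items.sum { |item| item[:price] * item[:quantity] }
  end

  def total
    (subtotal * (1 + TAX_RATE)).round(2)
  end
end

# The specification: known inputs, expected outputs.
class InvoiceSpec < Minitest::Test
  def test_total_includes_tax_on_all_line_items
    invoice = Invoice.new([
      { price: 10.0, quantity: 2 },
      { price: 5.0,  quantity: 1 },
    ])
    assert_equal 25.0, invoice.subtotal
    assert_equal 30.0, invoice.total
  end
end
```

As long as this suite passes, I don't much care what shape the code inside `Invoice` takes.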
Once shipped, I have the LLM pull statistics and logs from the server while the system is in use, and it monitors for slow queries, 500 errors and other issues. It informs me of the problems, we decide on the best remedy for them (LLMs are also fantastic at analysing SQL queries) and we ship another update (again, using the specifications to show that there are no regressions). Because database performance on your local machine, with a test dataset, is never anything like what happens in production, with lots of rows and years of accumulated bad data.
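As a sketch of that monitoring step, here's the kind of log triage I'd hand to the LLM. The log lines and the 500ms threshold are made up, and a real Rails log has more shapes than this regex handles, but it shows the idea: flag server errors and slow requests, then dig into the queries behind them.

```ruby
SLOW_THRESHOLD_MS = 500

# Hypothetical Rails-style log lines, standing in for what we pull from the server.
LOG_LINES = [
  "Completed 200 OK in 32ms (Views: 12.1ms | ActiveRecord: 8.2ms)",
  "Completed 500 Internal Server Error in 87ms",
  "Completed 200 OK in 1240ms (Views: 40.0ms | ActiveRecord: 1100.5ms)",
]

# Return a list of the lines worth investigating: any 5xx response,
# plus any request slower than the threshold.
def problems(lines, slow_ms: SLOW_THRESHOLD_MS)
  lines.filter_map do |line|
    status, duration = line.match(/Completed (\d{3}) .*? in (\d+)ms/)&.captures
    next unless status

    if status.start_with?("5")
      "server error: #{line}"
    elsif duration.to_i > slow_ms
      "slow request (#{duration}ms): #{line}"
    end
  end
end

problems(LOG_LINES).each { |p| puts p }
```

In practice the LLM does this triage itself against the real logs; the value is in the next step, where it reads the offending queries and suggests an index or a rewrite.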
I don't need to spend hours crafting code into a particular shape (which, admittedly, I do miss - I'll write about that another time) - I can just ship features to our users quickly and get immediate feedback on them once they're live (Honeycomb's "I test in production" approach).
Rails was the first time I'd seen a framework automatically create a test database with support for fixtures, built right in ↩︎
Where outputs are "changes to the data", "emails and other notifications" and "user interface layouts that present that information in a way that makes sense to the user" ↩︎
And, of course, I have to ensure that the edge cases are written into the specification ↩︎