More flexible is not always better
At work, I wrote a framework for writing applications. Because I wanted this framework to work for any program, I designed it to be as flexible and as general as possible.
Other people at my company have written less general frameworks. They put restrictions on the type of programs that you’re allowed to write within their framework. This is all a little abstract, so here’s an example:
One common restriction is that your component structure must form a DAG. By DAG, I mean a “directed acyclic graph”. The key word here is acyclic. If component A knows about component B, then B can’t know about A. There is a sort of one-way directionality to the arrangement of the components.
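To make that concrete, here’s a rough sketch (in Python, with invented names; not any real framework’s API) of how a framework could enforce the DAG restriction: components declare which components they know about, and the framework refuses to wire up an application whose graph contains a cycle.

```python
# Hypothetical sketch: `edges` maps each component to the components it
# knows about. A depth-first walk rejects any cycle.

class CycleError(Exception):
    pass

def check_acyclic(edges: dict[str, list[str]]) -> None:
    visiting: set[str] = set()  # components on the current path
    done: set[str] = set()      # components fully explored

    def visit(node: str) -> None:
        if node in done:
            return
        if node in visiting:
            raise CycleError(f"components form a cycle through {node!r}")
        visiting.add(node)
        for dep in edges.get(node, []):
            visit(dep)
        visiting.remove(node)
        done.add(node)

    for node in edges:
        visit(node)

check_acyclic({"A": ["B"], "B": []})     # fine: A knows about B, one-way
check_acyclic({"A": ["B"], "B": ["A"]})  # raises CycleError
```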
Oh yeah, what even is a framework? Obviously they come in all shapes and sizes, but I think a pretty common theme is that they have some conception of a “component” and that an application is built by composing the components together in some way. Again – very abstract, but useful as a way to talk about frameworks.
Generally speaking, components talk to each other. They pass data between one another. This leads me to another example of a framework which puts a rather severe restriction on the applications that can be built within it. The framework wants to control the flow of data between components, and therefore provides a single mechanism for writing data. The particular mechanism isn’t very interesting; it’s effectively a function you can call to write a single “atom” of data to downstream listeners.
This has big implications. For example, it means that data flow is push-based and not pull-based. You can’t “ask for the next piece of data”; you just get it whenever it’s available, process it, and then maybe push data downstream. Since the data flow is not pull-based, it rules out interactions like a component saying “something changed and now I’d like data for stock B in addition to what you’re already sending me”.
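Here’s a minimal sketch of that shape (invented names, not the real framework): a component is just a handler that gets atoms pushed to it, and its only output channel is the `emit` function it was given.

```python
from typing import Callable

Atom = dict  # stand-in for one “atom” of data

def make_doubler(emit: Callable[[Atom], None]) -> Callable[[Atom], None]:
    # The component never asks for data; it reacts when data arrives,
    # and `emit` is its only way to push results downstream.
    def on_data(atom: Atom) -> None:
        emit({"symbol": atom["symbol"], "price": atom["price"] * 2})
    return on_data

received: list[Atom] = []
doubler = make_doubler(received.append)
doubler({"symbol": "A", "price": 10})
assert received == [{"symbol": "A", "price": 20}]
# Note what’s missing: no next_atom() to call, and no way to tell
# upstream “also send me stock B from now on”.
```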
My framework doesn’t constrain applications like that. It’s super flexible. I thought that was a good thing. But there are two major costs for that level of generality:
It’s hard to explain
People ask, “So what can components of your framework do?” The answer is anything. That’s cool, I guess, but it certainly doesn’t help me understand.
The more concrete and specific a thing is, the easier it is to understand (usually). The more abstract and general a thing is, the harder it is to understand.
It can’t do much
What does the framework actually do? The answer is not much. It’s more of a way to structure your program than a thing that actually provides functionality to you at runtime.
Why? Because it kind of can’t do much. By being completely agnostic to, for example, how components communicate, it can’t publish metrics on how much data is flowing between which components. It definitely can’t run two components in different processes, because it has no idea how to ferry data from one component to another.
One way to think about this is that every restriction a framework puts on the application is giving the framework information about how the application will (or will not) behave. Sometimes, that information is really useful and enables the framework to do non-trivial work for you.
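As a sketch of what that information buys you (again, invented names): if every write goes through the framework’s own `emit`, then per-edge metrics fall out of a two-line wrapper, and the same choke point is where you’d serialize atoms to run components in separate processes.

```python
from collections import Counter
from typing import Any, Callable

class Framework:
    def __init__(self) -> None:
        self.listeners: dict[str, list[tuple[str, Callable[[Any], None]]]] = {}
        self.counts: Counter = Counter()  # (src, dst) -> atoms delivered

    def connect(self, src: str, dst: str, handler: Callable[[Any], None]) -> None:
        self.listeners.setdefault(src, []).append((dst, handler))

    def emit(self, src: str, atom: Any) -> None:
        # Every write flows through here, so the framework can observe it.
        for dst, handler in self.listeners.get(src, []):
            self.counts[(src, dst)] += 1  # the metric, essentially for free
            handler(atom)

fw = Framework()
fw.connect("prices", "doubler", lambda atom: None)
fw.emit("prices", {"price": 10})
assert fw.counts[("prices", "doubler")] == 1
```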
The value comes from defining what you can’t do
Even in my own, extremely general framework, the most important part is what you aren’t allowed to do. Certain components aren’t allowed to do IO. By adding this restriction, we can create applications that are much more testable, and can even be simulated using historical data.
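A sketch of why that pays off (the component here is invented): since the component does no IO, a test can replay historical data through it directly, with no network or clock in sight.

```python
def moving_average(window: int):
    # A pure component: internal state plus a step function, no IO.
    buf: list[float] = []
    def step(price: float) -> float:
        buf.append(price)
        del buf[:-window]
        return sum(buf) / len(buf)
    return step

def test_replay_historical_prices():
    step = moving_average(window=2)
    historical = [10.0, 20.0, 40.0]  # e.g. prices recorded yesterday
    assert [step(p) for p in historical] == [10.0, 15.0, 30.0]
```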
But what about the fact that every restriction limits the type of programs that work within that framework? Don’t you want your framework to be as widely applicable as possible?
Totally. Like almost everything else in life, it’s a trade-off. There’s a spectrum from broadly-applicable-but-only-a-little-helpful to narrowly-applicable-but-extremely-helpful.
Analogy to programming languages
The idea that the restrictions are where the value comes from reminds me of another domain: programming languages.
There are languages out there that are extremely flexible. They let you do basically anything. You want to add a number to a string? Sure! You want to subtract a number from a string? No problem!
'10' + 3; // '103'
'10' - 3; // 7
I don’t like these languages. Don’t get me wrong, I’m not here to hate on python or javascript. I use them and they’re incredibly useful. But given the choice, I’ll take a strongly typed compiled programming language any day of the week (especially for large programs). Why? Because it stops me from doing crazy things like trying to do math on strings.
Python is a nice example because you can write it with or without types. Adding type annotations to a python program is very clearly a restriction on the set of programs you can run. You write a program in python. You can run it! You add type annotations to that program and typecheck it. Maybe you can still run it? Or maybe it doesn’t typecheck. The “value” comes from stopping you from running programs that don’t typecheck. No new functionality magically comes from typechecking. It’s just a way to stop you from doing things that are probably (but not necessarily!) wrong.
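Concretely (a toy example; the checker here would be something like mypy): the untyped version happily runs until the bad call blows up at runtime, while the annotated version gets rejected before you run anything at all.

```python
def exclaim(s, n):
    return s + "!" * n

exclaim("wow", 3)  # 'wow!!!'
exclaim(10, 3)     # also a valid python program... until it raises TypeError

def exclaim_typed(s: str, n: int) -> str:
    return s + "!" * n

exclaim_typed(10, 3)  # a typechecker rejects this call before it ever runs
```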
I recently listened to a podcast about programming where they discussed this trade-off between broadly-applicable-but-only-a-little-helpful and narrowly-applicable-but-extremely-helpful in a different context. Here’s a quote (by Yaron Minsky):
But, like, in some sense the scale of optimizations are very different. Like, if you come up with a way of making your compiler faster that, like, takes most user programs and makes them 20% faster, that’s an enormous win. Like, that’s a shockingly good outcome. Whereas, if you give people good performance engineering tools, the idea that they can think hard about a particular program and make it five, or 10, or 100 times faster is like, in some sense, totally normal.