A good way to see that architecture questions are multi-layered. And yes, in enterprise contexts they go all the way up to the company strategy level.
A bit of an advertisement toward the end. That said, the constraints it evaluates are completely valid. You don't want to fit your whole code base into the "cloud function" model; only a few workloads will make sense there.
Indeed, in most cases you don't need the extra complexity. Also interesting: even if the application has to scale rapidly, you still have quite some time to plan the transition to something else. That makes Postgres a sane default choice.
Some food for thought about the use of bounded contexts in Domain Driven Design.
Maybe it's time to stop obsessing about scale and distributed architectures? Hardware has improved quite a bit in the right places, especially storage.
Yes, an external cache is definitely faster. That said, does your application need the extra complexity? Is in-database caching really the bottleneck? If not, the question of an external cache remains open.
I don't think it always unfolds exactly like this, but there's some truth to it. Most projects go through a "let's rewrite it in X" phase; it's rarely the best outcome.
Databases do improve and provide more cache-like features, but such caches are still needed for the time being.
A short, to-the-point reminder on how to properly manage a "layer cake" architecture.
A good reminder of this important but imperfect guide to software design. There is some ambiguity about what "simplest" actually means. Still, it helps to keep in mind that simple is rarely easy to find.
Ever wondered how Windows 3 was architected? This is an interesting read. It was really complex, though; you can tell it sat in the middle of several transitions.
Indeed, if you can guarantee your materialized views are always up to date, you might be able to get rid of some caching... and thus avoid some complexity.
Indeed, let's not fall for the marketing. It's better to write less code when that's enough to solve the actual problems.
A good debunking of a claim we sometimes see. Of course, the tests need to be designed, and you need good architecture blueprints to follow; otherwise you'll be in trouble... TDD or not.
Observability is indeed not necessarily easy to fit into a code base. Here is a potential approach to making it easier. I wouldn't use it on a project where we're only logging, but once you add metrics to the mix, this kind of probe can be worthwhile.
This is a good list of guidelines for producing a service that is less of a pain to test locally, deploy, and operate. Of course, don't take everything at face value (not all of it has aged well), but it's a good source of inspiration.
Or why you should know why you're picking a new tech stack... or not. Don't just blindly follow fashion.
When you realize TDD is about units of behavior, then you can see what can be iterative in your process and what can't. In other words, what is dictated by the problem domain is iterative; what is dictated by the system architecture is not. Luckily, the latter is often related to the user experience you're aiming for.
Or why the microservice cargo cult, which has been going on for a while now, infuriates me. It totally ignores the complexity it brings.
Good insight into why Dropbox rewrote its sync engine for desktop clients.