Interesting primer on the intricacies of database migrations. It can get complex fairly quickly.
Maybe you don't need to pull in even more dependencies. Think of the operational costs and the complexity.
Definitely this; the message often comes across lacking nuance. TDD can help you towards good design, but it doesn't ensure you'll end up with one.
It is indeed a good idea to hide dependencies behind interfaces when it makes sense.
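To make that concrete, here is a minimal TypeScript sketch of my own (the names `Mailer`, `SendgridMailer` and `notifySignup` are hypothetical, not from the article): the application owns a narrow interface, and only the composition root knows about the concrete vendor client.

```typescript
// The small interface the rest of the codebase depends on.
interface Mailer {
  send(to: string, subject: string, body: string): Promise<void>;
}

// Adapter hiding a hypothetical third-party SDK behind that interface.
class SendgridMailer implements Mailer {
  constructor(private apiKey: string) {}
  async send(to: string, subject: string, body: string): Promise<void> {
    // The vendor-specific call would live here; the rest of the app never sees it.
    console.log(`[mail:${this.apiKey.slice(0, 4)}…] ${to} <- ${subject}: ${body}`);
  }
}

// Domain code only knows the interface, so swapping vendors (or faking in tests) stays local.
async function notifySignup(mailer: Mailer, email: string): Promise<void> {
  await mailer.send(email, "Welcome!", "Thanks for signing up.");
}

// Composition root: the one place that picks the concrete implementation.
notifySignup(new SendgridMailer("sk-example"), "user@example.com");
```

The point is that the interface belongs to the application, not to the dependency, so replacing the vendor touches one adapter rather than the whole codebase.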
The claim is huge. The story doesn't quite say how much is really about Elixir and how much comes from the revised architecture. That being said, going for something like Elixir definitely has an impact on the architecture... could it be that it pushes for better patterns?
Another partial quote that led to misunderstanding. One should indeed think about performance early on.
Good reminder of the benefits of having a model of your architecture and keeping it up to date. It's something teams too often forget, I think. Interesting to see C4 getting some traction; I think it strikes a good balance.
Good thinking about abstraction levels on top of a platform. It's very much focused on the Web platform but applies more generally. Good food for thought on the libraries vs framework debate, why escape hatches matter and why you want a layered architecture.
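As an illustration of the escape-hatch point, here is a rough sketch of my own (the `ApiClient` class is hypothetical, not the article's code): a small layer on top of the platform's `fetch` that covers the common case while still exposing the layer below when the abstraction isn't enough.

```typescript
// Hypothetical sketch: a thin HTTP layer on top of the platform's fetch.
class ApiClient {
  constructor(private baseUrl: string) {}

  // High-level convenience for the 90% case.
  async getJson<T>(path: string): Promise<T> {
    const res = await fetch(`${this.baseUrl}${path}`);
    if (!res.ok) throw new Error(`GET ${path} failed: ${res.status}`);
    return (await res.json()) as T;
  }

  // Escape hatch: drop down to the platform primitive for anything the layer
  // doesn't cover (streaming, unusual methods, custom headers, ...).
  rawFetch(path: string, init?: RequestInit): Promise<Response> {
    return fetch(`${this.baseUrl}${path}`, init);
  }
}

// Usage: the convenient layer for most calls, the raw layer for the exceptions.
// const api = new ApiClient("https://example.com");
// const user = await api.getJson<{ name: string }>("/user/42");
// const csv = await api.rawFetch("/export", { headers: { Accept: "text/csv" } });
```

That's the layering in miniature: each level adds convenience without sealing off the one underneath.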
Excellent points. Don't be fooled by alluring architecture changes. Always keep the complexity in check and favor tuning what's already there, or changing your usage patterns, to get the performance you need.
Interesting look at module systems and what they entail. It's funny to see that most languages do things slightly differently in this area.
I think this is the right way to look at the problem space. The analysis lays out the pros and cons that matter when picking a frontend framework.
Nice little article about Conway's Law. Shows nicely all the ramifications it has.
I always felt uneasy around this "law" as well. This is a good deconstruction of it, and it proposes proper alternatives. It's all about dependencies, really.
Good piece on how to reduce uncertainty before something is built and ready to be in front of users. It starts with prototyping but goes all the way to feature flags and deployment.
Very interesting case full of lessons. Of course, increasing the overall complexity of the system can lead to such hard-to-find issues. It's also a tale of how seemingly innocuous settings can interact in unexpected ways. I also like the lessons learned pointing to the fact that you can and should debug even the systems you use through abstractions; diving into the code is almost always a good thing (even if in this particular case it wasn't strictly necessary in the end). And last but not least, it shows the tension between mastery and automation... the more you automate, the less you master the system, and at the same time that automation is necessary for building resilience into the system.
Nothing really new but well written. This highlights fairly well the importance of decomposing projects, of laying down at least the broad strokes of the architecture, and of letting automated tests drive progress. It's nice to see it all put together.
Interesting way to list all the data stores of your system and map them. Has the advantage of being very lean and simple to apply.
A good counterpoint to "choose boring tech", and one I tend to agree with. Sometimes you need to look into unusual tech, and that's fine. You just have to do it rarely and responsibly. The context matters.
Definitely this. It's an interesting talk: most things shouldn't be shiny. It's not about stagnating, of course, but you should think more than twice before adding a new technology to your stack. Mastery is when you know everything that's wrong with a piece of tech; before that, keep in mind the amount of unknown unknowns and the cost of operating something new.
Even the giants are slowly moving back from microservices. DHH has a very cruel way of pointing it out, but it's true nonetheless. Let's hope people realize the mistake it was in terms of complexity.