Or why it's hard to truly evaluate performance in complex systems: we tend to only test the optimistic case.
A bit of an unusual view of cohesion. The approach is interesting, though.
At some point the complexity is high enough that handcrafted tests alone are no longer enough to discover bugs, and you need additional tools.
Nice little article. It's a good reminder that aiming for the lowest Big-O complexity is often not what you want in terms of real-world performance. Always keep the context in mind, and when in doubt, measure.
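The point about measuring instead of trusting asymptotics can be sketched in a few lines. This is my own toy illustration, not from the article: for a tiny collection, an O(n) linear scan can be competitive with an O(1) hash lookup because of constant factors, and the only honest verdict comes from timing it on your machine.

```python
# Toy benchmark: membership test on a tiny collection.
# The sizes, and which approach wins, are machine-dependent --
# that is exactly why you measure instead of reasoning from Big-O alone.
import timeit

small = list(range(8))
small_set = set(small)
needle = 7

linear = timeit.timeit(lambda: needle in small, number=1_000_000)
hashed = timeit.timeit(lambda: needle in small_set, number=1_000_000)

print(f"linear scan over 8 items: {linear:.3f}s")
print(f"hash-set lookup:          {hashed:.3f}s")
# Deliberately no "expected winner" here: run it and see.
```

With a few million items instead of eight, the hash set wins by orders of magnitude; the asymptotics only dominate once n is large enough.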
Personal backups don't have to be fancy... And probably shouldn't.
A good reminder of why you often don't want to follow an architecture pattern to the letter. Patterns should be treated as guidelines, and depending on your technical context you should weigh their costs properly. Here is an example with the Ports and Adapters pattern in the context of an ASP.NET application.
Or why analogies with physical work don't work...
Nice exploration of the microcode patching flaw which was disclosed recently. It gives a glimpse of the high level of complexity the x86 family brings to the table.
Translation and localisation are complex topics too often overlooked by developers. This is a modest list of widespread misconceptions. Once you get into the details, it gets complex fairly quickly.
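One classic misconception of this kind, shown here as my own small example (not taken from the list itself): "one character is one code point". Unicode allows the same visible character to be encoded in more than one way, so string length and equality checks break unless you normalize first.

```python
# The same visible character "é" in two Unicode encodings:
# precomposed (one code point) vs decomposed (base letter + combining accent).
import unicodedata

composed = "\u00e9"                                   # 'é' as a single code point
decomposed = unicodedata.normalize("NFD", composed)   # 'e' + U+0301 combining accent

print(len(composed), len(decomposed))   # 1 2 -- same "character", different lengths
print(composed == decomposed)           # False -- naive comparison fails

# Normalizing both sides to the same form restores sane equality.
assert unicodedata.normalize("NFC", decomposed) == composed
```

This is why text handling in localised software should normalize at the boundaries rather than assume anything about raw code points.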
Nice little paper I overlooked. I agree with it obviously. More tests are not a free pass to let complexity go wild. Architecture and design concerns are still very important even if you TDD properly.
Very interesting discussion weighing the main differences and disagreements between A Philosophy of Software Design and Clean Code. I read and own both books and those differences were crystal clear, so it's nice to see the authors debate them. I'm a bit disappointed by the section about TDD though, I think it could have been more conclusive. It gives me food for thought about my TDD teaching and confirms some of the messages I'm trying to push to reduce confusion.
Indeed there is a tension between both approaches in package ecosystems.
This is a question worth asking... We try to reuse, but maybe we overdo it? Some ecosystems certainly lead to hundreds of dependencies even for small features.
Nice musing on how a type system can be a way to tame complexity or at least isolate it explicitly in one place.
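A minimal sketch of that idea, with names of my own invention (the `Email` class and its rule are illustrative, not from the article): validation complexity lives in exactly one place, the type's constructor, and every function that accepts the type can rely on the invariant instead of re-checking raw strings everywhere.

```python
# "Isolate complexity in one place": an Email value can only exist
# if it passed validation, so the rest of the code never re-validates.
from dataclasses import dataclass

@dataclass(frozen=True)
class Email:
    value: str

    def __post_init__(self):
        # Deliberately simplistic rule -- the point is where it lives,
        # not how thorough it is.
        if "@" not in self.value:
            raise ValueError(f"not an email address: {self.value!r}")

def send_welcome(to: Email) -> str:
    # No defensive checks needed here: the type carries the guarantee.
    return f"welcome mail queued for {to.value}"

print(send_welcome(Email("ada@example.com")))

try:
    Email("not-an-address")
except ValueError as err:
    print("rejected at the boundary:", err)
```

In a statically typed language the compiler enforces this everywhere; in Python a checker like mypy plays that role, but even at runtime the complexity stays fenced into the constructor.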
It is becoming clear that there are more and more reasons to return to simpler approaches for web frontends.
The Web standards are indeed too complex. That severely limits the possibility of new browser engines ever challenging the incumbents. I agree there's a deeper lesson here about the scale of our technologies.
Or why you shouldn't insert an abstraction just for the sake of it. Also clearly not all abstractions are created equal.
This highlights quite well the limits of the models used in LLMs.
There is some good advice in this piece. If you want to maintain something for a long time, some aspects need to be thought out from the beginning.
It tries hard not to be a "get off my lawn" post. It clearly points out some disconnects though, and they're real. I guess that's to be expected given the breadth of our industry. There are so many abstractions piled onto each other that it's difficult to explore them all.