Another rebuttal of Clean Code. Most of it makes sense, as long as it's not overdone. There's the usual confusion around the term "unit tests" though, so take that section with a pinch of salt.
A nice list of the techniques used to render shadows in games.
Indeed a good way to reason about tests and the value they bring.
Another example of why pytest is such a nice test runner. I really miss it on projects that don't use it.
This is an important trait for a developer to have. If you're content with things working without knowing why and how they work, you're in for a world of pain later.
This could be a game changer for collaborative editing. Clearly a good competitor to CRDTs; it should make it easier to build such features without a central server.
That sounds like a very interesting tool to simulate and test potential data loss scenarios. This is generally a bit difficult to do; this tool should make it easier.
An interesting reason which would explain Selenium's flakiness. It's simply harder to write tests with race conditions using Playwright.
This can definitely come in handy. I can see myself using it to test behaviors in the past or the future on a real application. It should also help with writing automated tests in some cases.
Three good pieces of advice on writing automated tests. Necessary but not sufficient though.
Starts from a flawed analogy to raise real thinking and questions about TDD.
Interesting tool for diffing database tables. Should come in handy for tests.
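The core idea behind table diffing can be sketched with the standard library alone. This is a toy illustration, not the tool from the link: it uses sqlite3 and SQL's EXCEPT operator to find rows present in one table but not the other (the `diff_tables` helper and the table names are made up for the example).

```python
import sqlite3

def diff_tables(conn, table_a, table_b):
    """Return (rows only in table_a, rows only in table_b)."""
    only_in_a = conn.execute(
        f"SELECT * FROM {table_a} EXCEPT SELECT * FROM {table_b}"
    ).fetchall()
    only_in_b = conn.execute(
        f"SELECT * FROM {table_b} EXCEPT SELECT * FROM {table_a}"
    ).fetchall()
    return only_in_a, only_in_b

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE expected (id INTEGER, name TEXT);
    CREATE TABLE actual   (id INTEGER, name TEXT);
    INSERT INTO expected VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO actual   VALUES (1, 'alice'), (2, 'bobby');
""")
missing, extra = diff_tables(conn, "expected", "actual")
# missing == [(2, 'bob')], extra == [(2, 'bobby')]
```

A real diffing tool adds hashing, batching and keyed comparison on top of this, but the EXCEPT-in-both-directions trick is a handy baseline for test assertions.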
Ever wondered where fuzz testing comes from? This is an important bit of history.
This is a technique which is definitely underestimated. There are plenty of libraries out there that let you use it.
Good advice indeed. Having asserts use appropriate matchers can go a long way toward understanding what went wrong.
Maybe a bit dry, but gives a good idea of how a fuzz testing harness works. And also how it can be tweaked.
Interesting use of database templates and memory disks to greatly speed up test executions.
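The "template database" part of the idea can be mimicked in miniature with stdlib sqlite3: pay the schema and seed cost once, then clone the result for each test instead of re-running migrations. This is my own sketch of the pattern (the real article's setup, e.g. Postgres `CREATE DATABASE ... TEMPLATE`, is more involved).

```python
import sqlite3

# Build the expensive schema/seed data once, in a "template" database.
template = sqlite3.connect(":memory:")
template.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO users (name) VALUES ('alice'), ('bob');
""")

def fresh_db():
    """Clone the template into a new in-memory database for one test."""
    db = sqlite3.connect(":memory:")
    template.backup(db)  # fast page-level copy instead of re-running setup
    return db

db1 = fresh_db()
db1.execute("DELETE FROM users")  # a test may trash its own copy...
db2 = fresh_db()                  # ...the next test still gets a clean one
```

Each test gets isolation without paying the full setup cost, which is where the big speedups come from.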
This is indeed too often overlooked. Producing a test list and picking the tests in the right order is definitely a crucial skill to practice TDD. It goes hand in hand with software design skills.
Indeed, don't mindlessly add tests. I find it interesting that the doubts raised in this piece are once again resolved with an old quote from Kent Beck. What he was proposing was fine; over time, people clearly became unreasonable.
This is a good explanation of why you should limit your use of mocks. It also highlights some of the alternatives.
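One of the usual alternatives is a hand-written fake: an in-memory implementation that honors the real interface's invariants, instead of a mock that only records calls. A minimal sketch (the repository interface and names here are hypothetical, for illustration only):

```python
class InMemoryUserRepo:
    """Hand-written fake: same interface as the real repo, minus the database."""

    def __init__(self):
        self._users = {}

    def add(self, user_id, name):
        if user_id in self._users:
            # Enforces a real invariant; a bare mock would silently accept this.
            raise ValueError("duplicate id")
        self._users[user_id] = name

    def get(self, user_id):
        return self._users[user_id]

def register(repo, user_id, name):
    """Code under test: depends only on the repository interface."""
    repo.add(user_id, name.strip().lower())
    return repo.get(user_id)

repo = InMemoryUserRepo()
result = register(repo, 1, "  Alice ")
# result == "alice"
```

Because the fake behaves like the real thing, the test exercises actual behavior rather than asserting on how collaborators were called.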
Very interesting tools for testing and verifying concurrent code.
Nice talk; it debunks very well quite a few of the fallacies around people practicing TDD wrongly. I never realized that the root cause of those fallacies was the misuse of the term "unit tests" instead of "developer tests". I knew it was the wrong term, but this is the first time I realize how profound the effects were.
Interesting story about unit tests from someone who thought they were a waste of time... until they helped uncover a widespread bug. It also happened in an embedded context, which comes with its own challenges.
This is apparently a much needed clarification. Let's get back to basics.
Interesting approach to reducing the number of undocumented features that exist simply because we forgot to update the documentation. Shows a few inspiring tricks to get there.
What are the outcomes of TDD? Do you want them? If yes, is the context compatible?
There will always be some design and some testing. The intensity of both activities needs to be properly managed over time though.
This can come in handy for automated tests which need to be run from within a shell.
Definitely this; the message often comes across lacking nuance. TDD can help you towards good design, but it doesn't ensure you'll have a good design.
A little article which serves as a good introduction to pytest fixtures. They are indeed very useful, I think.
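For the flavor of it, a minimal fixture example (my own, not from the article): pytest injects the return value of a fixture into any test that names it as an argument. The `config` fixture and its contents are made up for illustration.

```python
import pytest

def make_config():
    # Plain factory kept separate so the setup logic is usable outside pytest too.
    return {"db_url": "sqlite://:memory:", "retries": 3}

@pytest.fixture
def config():
    # pytest calls this and passes the result to any test with a `config` argument.
    return make_config()

def test_retries_default(config):
    assert config["retries"] == 3
```

Fixtures compose (a fixture can request another fixture) and can handle teardown with `yield`, which is where they really start to shine.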
This is about behavior, not structure, indeed. Put the focus in the right place, otherwise your tests will quickly become expensive to update.
Calls everything mocks a bit too liberally, when the term test double would have done the job. Still, it stresses fairly well the importance of staying as close to reality as possible and the tradeoffs we have to make.
Where does this style of tests shine? A few elements to consider.
In praise of property-based testing. It definitely complements the tests you write by hand well.
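The gist of the technique fits in a few lines of stdlib Python: generate random inputs and check that a property holds for all of them. This toy sketch (a round-trip property over a hand-rolled run-length encoding) skips everything real libraries like Hypothesis add, notably shrinking of counterexamples.

```python
import random

def check(property_fn, generator, runs=200, seed=0):
    """Run a property over many generated inputs; return a counterexample or None."""
    rng = random.Random(seed)
    for _ in range(runs):
        value = generator(rng)
        if not property_fn(value):
            return value  # real libraries would now shrink this to a minimal case
    return None

def rle_encode(xs):
    out = []
    for x in xs:
        if out and out[-1][0] == x:
            out[-1][1] += 1
        else:
            out.append([x, 1])
    return out

def rle_decode(pairs):
    return [x for x, n in pairs for _ in range(n)]

# Property: decoding an encoded list always gives the original list back.
counterexample = check(
    lambda xs: rle_decode(rle_encode(xs)) == xs,
    lambda rng: [rng.randint(0, 3) for _ in range(rng.randint(0, 10))],
)
# counterexample is None when the property holds for all generated inputs
```

Hand-written examples pin down known cases; the property sweeps the input space for the cases you didn't think of.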
Especially important in the context of tests indeed. As soon as you depend on some form of elapsed time, you get into flaky-test territory.
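The usual cure is to inject the clock instead of reading it directly, so tests can advance time deterministically rather than sleeping. A small sketch of the pattern (the `RateLimiter` class is invented for the example):

```python
import time

class RateLimiter:
    """Allows one call per `interval` seconds; the clock is injected."""

    def __init__(self, interval, clock=time.monotonic):
        self.interval = interval
        self.clock = clock
        self._last = None

    def allow(self):
        now = self.clock()
        if self._last is None or now - self._last >= self.interval:
            self._last = now
            return True
        return False

# In tests, substitute a fake clock instead of calling time.sleep().
fake_now = [100.0]
limiter = RateLimiter(interval=60, clock=lambda: fake_now[0])
first = limiter.allow()   # allowed: first call
second = limiter.allow()  # denied: no time has "passed"
fake_now[0] += 61
third = limiter.allow()   # allowed again, without sleeping
```

No sleeps, no wall-clock dependency, no flakiness: the test controls exactly how much time elapses.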
Kind of sad to see asserts misused so much in the Python community. Still, that's a good lesson for everyone: when using an assert, expect that it won't be executed in production.
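Concretely: running Python with `-O` sets `__debug__` to `False` and strips every `assert` statement, so an assert must never be the only thing standing between you and bad data. A minimal illustration (the `withdraw` function is made up):

```python
def withdraw(balance, amount):
    # Debugging aid only: this line vanishes entirely under `python -O`.
    assert amount >= 0, "amount must be non-negative"

    # Real validation must be an explicit check that survives -O.
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount
```

The rule of thumb: asserts document programmer assumptions; exceptions enforce runtime contracts.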
Interesting way to reason about tests and classify them. I think it can be more useful than the strict "unit" vs "integration" dichotomy that is way too widespread.
Nothing really new but well written. This highlights fairly well the importance of decomposing projects, having at least the broad strokes of the architecture laid down and how automated tests help drive the progress. It's nice to see it all put together.
Looks like a really neat tool for testing low level and kernel dependent details in a reproducible way.
Neat, short and simple post highlighting the important traits of TDD.
Often overlooked in test cases. Still, it's not that complicated to set up.
Looks like a nice Faker alternative for Java projects. Turns out I was looking for something like that.
Interesting new compression format around the corner. Might turn out useful in some cases. I could definitely have used it last year for a test harness with very large reference data (so no, not gaming).
Looks like an interesting tool to deal with dependencies in some tests.
This is what we should strive for with our tests. I like how he keeps it flexible though; again, it's likely a trade-off, so you can't have all the properties fully all the time. Still, you need to know what you give up, how much of it, and why.
Good piece. I like how it frames the debate, asking the right questions about where the assumptions on how testing is done might lie.
Looks like an interesting tool when you need to diff databases. Definitely something I could see myself using for large pinning tests.
Indeed, I've encountered that same idea in some people. I'm not sure where it comes from; it feels like something more reasonable was read and then extrapolated (like the "one test per line" I sometimes hear about). It's a bad idea indeed: it's fine to have several assertions, and it's probably often required to avoid a complexity explosion in your tests. This of course doesn't mean your test should become unfocused.
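What "several assertions, one focus" looks like in practice (a made-up example): the assertions all describe the same behavior, so the test stays readable while pinning it down from several angles.

```python
def slugify(title):
    """Turn a title into a URL slug (toy implementation for the example)."""
    return "-".join(title.lower().split())

def test_slugify_basic():
    # One focused behavior, several assertions that together describe it.
    slug = slugify("  Hello World Again ")
    assert slug == "hello-world-again"  # the exact expected output...
    assert " " not in slug              # ...plus the invariants behind it
    assert slug == slug.lower()

test_slugify_basic()
```

The smell to avoid is a test asserting on several unrelated behaviors, not a test with more than one `assert`.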
Interesting bug in SQLite. In particular look for the conclusion regarding tests and coverage. It's something I often have to remind people of.
Lots of very good points. I'm a proponent of TDD, but I strongly agree with this: if it's painful and not fun, find a way to do it differently. There's likely a tool or some API you're missing. Manage your effort.