Very Rust-focused, but still an interesting debate. It gives a good overview of the different types of lock behavior on failure. It strongly advocates the poisoning approach, which is indeed an interesting one (coming with its own tradeoffs, of course).
Indeed, if you're dealing with multithreading you should not reach for mutexes by default. Consider higher-level primitives and patterns first.
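As one example of such a higher-level primitive, a channel lets threads communicate by message passing instead of sharing state behind a lock. A minimal sketch using `std::sync::mpsc` (the worker count is arbitrary):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    // Each worker sends its result; no shared mutable state, no mutex.
    for id in 0..4u32 {
        let tx = tx.clone();
        thread::spawn(move || {
            tx.send(id * id).expect("receiver alive");
        });
    }
    drop(tx); // close the channel so the receiving iterator terminates

    let total: u32 = rx.iter().sum();
    println!("sum of squares: {}", total); // 0 + 1 + 4 + 9 = 14
}
```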
Does a nice job explaining how scheduling can be investigated from outside the kernel. A good introduction to the topic.
Early days, but this looks like interesting tooling for inspecting and debugging programs that use Rust channels.
A few ideas to dig deeper into for better multi-threaded throughput.
Its use cases are indeed limited; it's a success for network IO. For everything else, free threading might be the path forward once it stabilizes.
This can indeed be useful for exploring concurrency issues, though it requires some work.
This is an interesting and deeply buried optimization in the GNU C++ STL implementation. I didn't expect anything like this.
This is a nice summary of the semantics of Rust's Send and Sync traits.
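A quick way to see those semantics in action: `Arc<T>` is both Send and Sync (for suitable `T`), while `Rc<T>` is neither, because its reference count is not atomic. The trait-bound helper functions below are my own illustration, not from the linked article:

```rust
use std::rc::Rc;
use std::sync::Arc;

// These helpers compile only for types carrying the given marker trait.
fn assert_send<T: Send>(_: &T) {}
fn assert_sync<T: Sync>(_: &T) {}

fn main() {
    let shared = Arc::new(42);
    assert_send(&shared); // Arc<i32>: Send — can be moved to another thread
    assert_sync(&shared); // Arc<i32>: Sync — &Arc<i32> can be shared across threads

    let local = Rc::new(42);
    // assert_send(&local); // does not compile: Rc's refcount is not atomic
    let _ = local;
    println!("Arc is Send + Sync; Rc is neither");
}
```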
Atomics are a genuinely hard topic, so this article is welcome. It does a good job explaining what memory orderings actually do, and it helps debunk some common misconceptions.
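For instance, the classic message-passing pattern: a Release store on a flag synchronizes-with an Acquire load of it, so any write made before the store is guaranteed visible after the load. A minimal sketch:

```rust
use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
use std::thread;

static DATA: AtomicU32 = AtomicU32::new(0);
static READY: AtomicBool = AtomicBool::new(false);

fn main() {
    let producer = thread::spawn(|| {
        DATA.store(42, Ordering::Relaxed);    // the data write...
        READY.store(true, Ordering::Release); // ...published by the Release store
    });

    // The Acquire load synchronizes-with the Release store: once we observe
    // READY == true, the earlier write to DATA is guaranteed visible too.
    while !READY.load(Ordering::Acquire) {
        std::hint::spin_loop();
    }
    assert_eq!(DATA.load(Ordering::Relaxed), 42);
    println!("data = {}", DATA.load(Ordering::Relaxed));

    producer.join().unwrap();
}
```

With both loads and stores Relaxed, the assertion would be allowed to fail; the Release/Acquire pair is what creates the happens-before edge.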
A good example of how you can get bitten by the CPU's cache-coherency protocol.
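The usual culprit is false sharing: two counters that land on the same cache line ping-pong between cores even though no data is logically shared. A common mitigation, sketched here assuming 64-byte lines (typical on x86), is to pad each hot field onto its own line:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Force each counter onto its own cache line so writes by one core
// don't invalidate the line holding the other core's counter.
#[repr(align(64))]
struct PaddedCounter(AtomicU64);

struct Stats {
    hits: PaddedCounter,
    misses: PaddedCounter,
}

fn main() {
    let stats = Stats {
        hits: PaddedCounter(AtomicU64::new(0)),
        misses: PaddedCounter(AtomicU64::new(0)),
    };

    std::thread::scope(|s| {
        s.spawn(|| {
            for _ in 0..1_000 {
                stats.hits.0.fetch_add(1, Ordering::Relaxed);
            }
        });
        s.spawn(|| {
            for _ in 0..1_000 {
                stats.misses.0.fetch_add(1, Ordering::Relaxed);
            }
        });
    });

    // Each padded counter occupies at least a full 64-byte line.
    assert!(std::mem::size_of::<PaddedCounter>() >= 64);
    println!(
        "hits = {}, misses = {}",
        stats.hits.0.load(Ordering::Relaxed),
        stats.misses.0.load(Ordering::Relaxed)
    );
}
```

The result is correct either way; padding only changes how often the coherency protocol forces the line to bounce between cores.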
Nice deep dive into a bug lurking inside a lock implementation.
Or why I'm still on the fence about async/await. It's rarely the panacea we pretend it is.
A bit dated perhaps, and yet most of the lessons here are still valid. If performance and parallelism matter, you had better keep an eye on how the cache is used.
Nice approach, especially useful if you need to split work to distribute it to threads.
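The idea can be sketched with scoped threads and disjoint chunks: each thread gets an exclusive slice of the data, so no locking is needed (the chunk size here is arbitrary):

```rust
use std::thread;

fn main() {
    let mut data: Vec<u64> = (1..=16).collect();

    // chunks_mut hands out non-overlapping mutable slices, so each thread
    // can write to its own part of the vector without synchronization.
    thread::scope(|s| {
        for chunk in data.chunks_mut(4) {
            s.spawn(move || {
                for x in chunk.iter_mut() {
                    *x *= *x; // square in place
                }
            });
        }
    });

    let total: u64 = data.iter().sum();
    println!("sum of squares 1..=16 = {}", total); // 1496
}
```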
Not a huge fan of the writing style. Still, this gives a good idea of what you have to deal with when building lock-free data structures. It's illustrated with Rust here, but it isn't Rust specific.
Nice trick for highly performance-sensitive data structures. By making data CPU-local instead of thread-local, you get a mechanism that is especially cache friendly.
Interesting trick to check at runtime that you always acquire mutexes in the same order.
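One way to implement such a check, sketched below (the `OrderedMutex` wrapper and its level numbering are my own illustration, not the linked article's code): give every mutex a level, track the highest level held per thread in a thread-local, and assert that locks are always taken in increasing level order.

```rust
use std::cell::Cell;
use std::sync::{Mutex, MutexGuard};

thread_local! {
    // Highest lock level currently held by this thread (0 = none held).
    static CURRENT_LEVEL: Cell<u32> = Cell::new(0);
}

struct OrderedMutex<T> {
    level: u32,
    inner: Mutex<T>,
}

impl<T> OrderedMutex<T> {
    fn new(level: u32, value: T) -> Self {
        assert!(level > 0);
        Self { level, inner: Mutex::new(value) }
    }

    fn lock(&self) -> MutexGuard<'_, T> {
        CURRENT_LEVEL.with(|cur| {
            // Taking a lower (or equal) level while holding a higher one is
            // exactly the pattern that can deadlock against another thread.
            assert!(
                self.level > cur.get(),
                "lock order violation: level {} acquired after level {}",
                self.level,
                cur.get()
            );
            cur.set(self.level);
        });
        self.inner.lock().unwrap()
    }
}

fn main() {
    let a = OrderedMutex::new(1, ());
    let b = OrderedMutex::new(2, ());
    let _ga = a.lock();
    let _gb = b.lock(); // ok: 2 > 1; locking `a` after `b` would panic
    println!("locks acquired in order");
}
```

A real version would wrap the returned guard so that dropping it restores the previous level; this stripped-down sketch only catches the first out-of-order acquisition.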
A good look back at parallelisation and multithreading as a means of optimisation. This is definitely a hard problem, though it has become a bit easier with recent languages like Rust.
This is indeed a nice pattern for obtaining a value, and it brings neat advantages.