Unsurprisingly, hiring scams are becoming more elaborate. Keep it in mind for your upcoming interviews.
There's some truth to this. It's easier to market yourself as a specialist than as a generalist... That doesn't make specialising easy, though.
This is clearly needed. This should increase the maturity of the security practice around Fediverse software.
It's indeed a vicious circle. It also seems easy to fall into this particular trap; I see it in many places.
Good approach to detect problems early and manage the risks they carry.
Sure, you get good memory safety with Rust. It's important and welcome, but it's just the beginning of the story.
A friendly reminder that JavaScript is an endless pit of surprising behaviors. Watch out!
The "asleep at the wheel" effect is real with such tools. The consequences can be dire in quite a few fields. Here is a good illustration with OSINT.
Hear, hear! It sucks all the air out of the conversation and obliterates imagination. As if we couldn't do better.
Interesting trick to check at runtime that you always acquire mutexes in the same order.
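The article's exact mechanism may differ, but the general idea can be sketched in Rust: give every lock a numeric level, keep a thread-local stack of the levels currently held, and panic the moment a lock is acquired out of order. All the names here (`OrderedMutex`, the levels) are my own invention for illustration:

```rust
use std::cell::RefCell;
use std::ops::{Deref, DerefMut};
use std::sync::{Mutex, MutexGuard};

// Thread-local stack of the lock levels this thread currently holds.
thread_local! {
    static HELD: RefCell<Vec<u32>> = RefCell::new(Vec::new());
}

pub struct OrderedMutex<T> {
    level: u32,
    inner: Mutex<T>,
}

pub struct OrderedGuard<'a, T> {
    guard: MutexGuard<'a, T>,
}

impl<T> OrderedMutex<T> {
    pub fn new(level: u32, value: T) -> Self {
        Self { level, inner: Mutex::new(value) }
    }

    pub fn lock(&self) -> OrderedGuard<'_, T> {
        HELD.with(|held| {
            let mut held = held.borrow_mut();
            // The runtime check: acquiring a lock whose level is not
            // strictly greater than the last one held is an ordering bug.
            if let Some(&last) = held.last() {
                assert!(
                    self.level > last,
                    "lock order violation: level {} acquired while holding level {}",
                    self.level, last
                );
            }
            held.push(self.level);
        });
        OrderedGuard { guard: self.inner.lock().unwrap() }
    }
}

impl<T> Drop for OrderedGuard<'_, T> {
    fn drop(&mut self) {
        // Assumes guards are released in LIFO order; a fuller version
        // would record which level belongs to which guard.
        HELD.with(|held| { held.borrow_mut().pop(); });
    }
}

impl<T> Deref for OrderedGuard<'_, T> {
    type Target = T;
    fn deref(&self) -> &T { &self.guard }
}

impl<T> DerefMut for OrderedGuard<'_, T> {
    fn deref_mut(&mut self) -> &mut T { &mut self.guard }
}
```

The nice property is that the check runs on every acquisition, so any execution that violates the order fails loudly instead of deadlocking rarely.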
Even if you use LLMs, make sure your workflows don't depend on them; friction can indeed have value. And if you're a junior, you should probably use them sparingly: build your skills and knowledge first, otherwise you'll forever remain a beginner, and that will bite you hard.
Nice feature, but even more interesting than the feature itself is its explanation of static initializers in Rust. They're clearly not a settled area of the language, in part because of how low-level static initialization is.
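For context on why this area is tricky, here is a minimal sketch: a `static` in Rust must be built from a const expression, so anything needing runtime work (like allocating a map) has to go through a lazy wrapper such as `std::sync::OnceLock`. The config keys below are made up for illustration, not taken from the article:

```rust
use std::collections::HashMap;
use std::sync::OnceLock;

// A `static` must be initialized by a const expression, so a map that
// needs runtime allocation goes through a lazy, thread-safe wrapper.
// The keys here are hypothetical, purely for the example.
static CONFIG: OnceLock<HashMap<&'static str, u32>> = OnceLock::new();

fn config() -> &'static HashMap<&'static str, u32> {
    // Initialized at most once, on first access, from any thread.
    CONFIG.get_or_init(|| {
        let mut m = HashMap::new();
        m.insert("retries", 3);
        m.insert("timeout_ms", 500);
        m
    })
}
```

`OnceLock` (stable since Rust 1.70) is the standard-library answer; before it, crates like `lazy_static` and `once_cell` filled the same gap, which is part of why the area still feels unsettled.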
Interesting JS library for animation on the Web. It's nice that it seems really small.
Rust itself might bring interesting properties in terms of safety. As soon as it needs to interact with other languages, though, the chances of undefined behavior increase drastically. This definitely pushes toward using more dynamic analysis tools to catch those cases.
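To illustrate the kind of invariant the compiler can no longer check for you, here is a minimal FFI sketch calling C's `strlen`. The safety of the call rests entirely on a NUL-termination guarantee we uphold manually; dynamic tools like Miri or AddressSanitizer exist precisely to catch the cases where such invariants silently break:

```rust
use std::ffi::CString;
use std::os::raw::c_char;

// Declaration of a C function from libc; rustc has no way to verify
// this signature or the pointer contract behind it.
extern "C" {
    fn strlen(s: *const c_char) -> usize;
}

// Safe wrapper: CString guarantees NUL termination, the invariant
// strlen relies on. Passing a raw pointer into a plain &str instead
// would compile just fine and be undefined behavior.
fn c_string_len(s: &str) -> usize {
    let c = CString::new(s).expect("no interior NUL byte");
    unsafe { strlen(c.as_ptr()) }
}
```

Nothing at compile time distinguishes this correct call from a subtly broken one, which is exactly why dynamic analysis matters at language boundaries.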
We just can't leave the topic of how the big model makers build their training corpora unaddressed. This is both an ethics and an economics problem. The creators of the content used to train such large models should be compensated in some way.
Between this, the crawlers they use, and the ecological footprint of the data centers, there are so many negative externalities to those systems that lawmakers should have seized on the topic a while ago. The paradox is that if nothing is done, the model makers' reckless behavior will end up hurting them as well.
Unsurprisingly, Wikimedia is also badly impacted by the LLM crawlers... That puts access to curated knowledge at risk if the trend continues.
This is a very smart way to create pure CSS placeholders.
Could be interesting if it gets standardized. Maybe forges other than Gerrit will start leveraging the concept; that would greatly improve the review experience on them.
I somehow recognise myself in this piece. Not completely, though; I disagree with some of the points... but we share some baggage, so I recognise a fellow traveller.
A good look back at parallelisation and multithreading as a means to optimise. This is definitely a hard problem, and it has indeed become a bit easier with recent languages like Rust.
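As a tiny illustration of why Rust helps here: scoped threads let you split borrowed data across threads, and any data race is rejected at compile time rather than discovered in production. `parallel_sum` is my own toy example, not from the article:

```rust
use std::thread;

// Scoped threads (stable since Rust 1.63) can borrow from the enclosing
// stack frame; the borrow checker rejects any data race at compile time.
fn parallel_sum(data: &[u64]) -> u64 {
    let (left, right) = data.split_at(data.len() / 2);
    thread::scope(|s| {
        // Each thread gets a disjoint borrow of the slice.
        let l = s.spawn(|| left.iter().sum::<u64>());
        let r = s.spawn(|| right.iter().sum::<u64>());
        l.join().unwrap() + r.join().unwrap()
    })
}
```

Writing the same thing safely in C or pre-C++11 was exactly the hard part the retrospective describes.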