Indeed, innovation is far from a linear process. It's actually messy: breakthroughs happen, and we describe them as linear only after the fact.
Some areas of our industry are more prone to the "fashion of the day" madness than others. Still, there's indeed some potential decay in what we learn; what matters is finding and focusing on what will last.
An old one and a bit all over the place. Still, plenty of interesting advice and insights.
If you're not recklessly accumulating technical debt, this is an interesting way to frame the conversation around it.
There's some truth in this piece. We never quite managed to build a real semantic web because knowledge engineering is actually hard... and we publish mostly unstructured or badly structured data. LLMs are thus used as a brute-force attempt at layering some temporary and partial structure on top of otherwise unstructured data. They're not really up to the task of course, but they give us a glimpse into what could have been.
Very nice article on Wikipedia's success. Or why being boring, with the ultimate process pettiness, became the crucial part of the formula. This community really developed a fascinating culture which so far resists mounting political pressure... But will the editors' morale hold?
Looks like the writing is on the wall indeed... The paradox is that an important training corpus for LLMs will disappear if it dies for good. Will we see output quality drop? Or an ossification of knowledge instead?
Quite a lot of good advice in here. I like being around people who proactively communicate, mind the quality of their communication, and look for new things to work on. Who wouldn't?
Or why it's important to deeply understand what you do and what you use. Cranking out features and throwing code at the wall until it sticks will never lead to good engineering. Even if it's abstractions all the way down, they're there for convenience; don't treat them as black boxes.
Interestingly, this article draws a parallel with organizations too. Isn't having very siloed teams the same as treating abstractions as black boxes?
Quite some food for thought here.
Even if you use LLMs, make sure you don't depend on them in your workflows. Friction can indeed have value. Also, if you're a junior you should probably use them sparingly and build your skills and knowledge first... otherwise you'll forever be a beginner, and that will bite you hard.
Unsurprisingly, Wikimedia is also badly impacted by LLM crawlers... That puts access to curated knowledge at risk if the trend continues.
Some powerful bullies want to make the life of editors impossible. Looks like the foundation has the right tools in store to protect those contributors.
Very good background information on the latest attempt at discrediting Wikipedia.
Very nice piece. This is indeed mostly about building organizational knowledge. If someone leaves a project, that person had better not have been working alone, so that there is some continuity... lost knowledge is very hard to piece back together.
OK, this is a nice parable. I admit I enjoyed it.
Indeed, we'll have to relearn "internet hygiene", which is changing quickly now that we've prematurely unleashed LLM content on the open web.
I very much agree with this. The relationship between developers and their frameworks is rarely healthy. I think the author misses an important piece of advice though: read the code of your frameworks. When stuck, invest some time stepping into the framework with a debugger. Developers too often treat frameworks as black boxes.
Very interesting research. It looks like we're slowly moving away from the "language and thinking are intertwined" hypothesis. This is probably the last straw for Chomsky's theory of language. It served us well, but neuroscience suggests it's time to leave it behind.
Now the impact seems clear, and it's mostly bad news. This reduces the production of public knowledge, so everyone loses. Ironically, it also means less public knowledge available to train new models. At some point their only avenue to fine-tune their models will be user profiling, which will be private... I have a hard time seeing how we won't end up stuck with another surveillance apparatus providing access to models running on outdated knowledge. This will lock so many behaviors and decisions in place.
Very good point. You might not remember the content, but if it impacted the way you think it did its job.