One more example showing that LLMs should be used for NLP tasks, not knowledge-related tasks. The model makers consume so much data indiscriminately that they can't easily comb through everything to remove the poisoned information.
Clearly the author is angry, and he has every right to be. By closing platforms and fighting against tinkering, the big tech companies try to kill off the power user and hacker cultures. By letting this happen, we all lose as a society.
Definitely requires more preparation work than brainstorming. That said, it's a nice alternative, and maybe easier to get right.
I think I prefer friction as well. It's not about choosing discomfort all the time, but there's clearly a threshold not to cross. If things get too convenient, there's a point where we get disconnected from the human condition. I prefer a fuller, imperfect life.
Indeed, innovation is far from a linear process. It's actually messy: the breakthroughs have often already happened, and we only describe them after the fact.
Some areas of our industry are more prone to the "fashion of the day" madness than others. Still, there's indeed some potential decay in what we learn; what matters is finding and focusing on what will last.
An old one, and a bit all over the place. Still, plenty of interesting advice and insights.
If you're not recklessly accumulating technical debt, this is an interesting way to frame the conversation around it.
There's some truth in this piece. We never quite managed to really have a semantic web because knowledge engineering is actually hard... and we publish mostly unstructured or badly structured data. LLMs are thus used as a brute force attempt at layering some temporary and partial structure on top of otherwise unstructured data. They're not really up to the task of course but it gives us a glimpse into what could have been.
Very nice article on Wikipedia's success, or why being boring and the ultimate process pettiness became the crucial part of the formula. This community really developed a fascinating culture which so far resists mounting political pressure... But will the editors' morale hold?
Looks like the writing is on the wall indeed... The paradox is that an important training corpus for LLMs will disappear if it dies for good. Will we see output quality drop? Or an ossification of knowledge instead?
Quite a lot of good advice in here. I like being around people who proactively communicate, mind the quality of their communication, and look for new things to work on. Who wouldn't?
Or why it's important to deeply understand what you do and what you use. Cranking out features and throwing code at the wall until it sticks will never lead to good engineering. Even if it's abstractions all the way down, they're there for convenience; don't treat them as black boxes.
Interestingly this article draws a parallel with organizations too. Isn't having very siloed teams the same as treating abstractions as black boxes?
Quite some food for thought here.
Even if you use LLMs, make sure you don't depend on them in your workflows. Friction can indeed have value. Also, if you're a junior, you should probably seldom use them; build your skill and knowledge first... otherwise you'll forever remain a beginner, and that will bite you hard.
Unsurprisingly, Wikimedia is also badly impacted by the LLM crawlers... That puts access to curated knowledge at risk if the trend continues.
Some powerful bullies want to make the life of editors impossible. Looks like the foundation has the right tools in store to protect those contributors.
Very good background information on the latest attempt at discrediting Wikipedia.
Very nice piece. This is indeed mostly about building organizational knowledge. If someone leaves a project, that person had better not have been the only one ensuring continuity... lost knowledge is very hard to piece back together.
OK, this is a nice parable. I admit I enjoyed it.
Indeed, we'll have to relearn "internet hygiene"; it's changing quickly now that LLM content has been prematurely unleashed on the open web.