If there's one area where people should steer clear of LLMs, it's definitely when they want to learn a topic. That's one more study showing that the knowledge you retain from LLM briefs is shallower. The friction and the struggle to get to the information are a feature: our brain needs them to remember properly.
The findings in this paper are chilling... especially considering what fragile people are doing with those chatbots.
Looks like Mozilla is doing everything it can to alienate the current Firefox user base and to push users toward its forks.
I was actually wondering when this would happen. It was just a matter of time; I would have expected this move a couple of months ago.
I'm happy to see I'm actually very much aligned with one of the "Attention Is All You Need" co-authors. The current industry trend of "just scale the transformer architecture" is indeed stifling innovation and actual research. That said, I find it ironic that he talks about freedom to explore... that is exactly what public labs used to provide, but we decided to drastically cut their funding and replace them with competition between startups. It's no surprise we end up with very myopic views on problems.
ETH Zurich keeps making progress on its model. It's exciting and nice to see an ethical offering develop in that space. It shows that when there is political will, such a model can be treated as proper infrastructure.
There's some truth in this piece. We never quite managed to build a real semantic web, because knowledge engineering is genuinely hard... and we publish mostly unstructured or badly structured data. LLMs are thus used as a brute-force attempt at layering some temporary and partial structure on top of otherwise unstructured data. They're not really up to the task, of course, but they give us a glimpse into what could have been.
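To make that last point concrete, here is a minimal sketch of what this brute-force layering usually looks like. The `complete()` helper is hypothetical, a stand-in for whatever LLM client you actually use; the point is that the "structure" only exists on demand and has to be validated after the fact, unlike a proper semantic-web annotation:

```python
import json

def complete(prompt: str) -> str:
    # Hypothetical LLM call; wire up whatever client you actually use.
    raise NotImplementedError("plug in your LLM client here")

def extract_triples(text: str) -> list[tuple[str, str, str]]:
    # Ask the model to impose subject/predicate/object structure on free text.
    prompt = (
        "Extract (subject, predicate, object) triples from the text below. "
        "Answer with a JSON array of 3-element arrays, nothing else.\n\n"
        + text
    )
    raw = complete(prompt)
    # The output is only partially reliable, so filter before trusting it.
    candidates = json.loads(raw)
    return [tuple(t) for t in candidates
            if isinstance(t, list) and len(t) == 3
            and all(isinstance(x, str) for x in t)]
```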
Apparently, in the age of people generating their tests with LLMs, there is a bias toward mockist tests being produced. It's a good time for a reminder of why you likely don't want those in most cases, and why you should limit mocks in favor of fakes and checks on system state instead (see the sketch below).
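As a refresher, here is a minimal sketch of the difference, assuming a hypothetical `register_user()` function and store interface. The mockist test is coupled to the exact calls being made, while the fake lets us check the resulting state:

```python
import unittest
from unittest.mock import Mock

# Hypothetical code under test: registers a user in a store.
def register_user(store, name):
    if store.get(name) is None:
        store.put(name, {"name": name})

# A fake: a real, if simplistic, in-memory implementation of the store.
class FakeStore:
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def put(self, key, value):
        self._data[key] = value

class RegisterUserTest(unittest.TestCase):
    def test_mockist_style(self):
        # Mockist: asserts on interactions, coupling the test to call details.
        store = Mock()
        store.get.return_value = None
        register_user(store, "alice")
        store.put.assert_called_once_with("alice", {"name": "alice"})

    def test_state_based_with_fake(self):
        # State-based: checks the observable outcome, not how it was reached.
        store = FakeStore()
        register_user(store, "alice")
        self.assertEqual(store.get("alice"), {"name": "alice"})

if __name__ == "__main__":
    unittest.main()
```

Note how the second test keeps passing if `register_user()` is refactored to batch its writes, while the first one breaks even though the behavior is unchanged.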
Interesting stuff about the mathematics behind how embedding spaces work in LLMs.
OK, this is an interesting way for the Darwin Awards to branch out. Some of the 2025 nominees are indeed funny. Now I wonder which ones will win the award!
We can expect more misleading papers to be published by the big LLM providers. Don't fall into the trap; wait for properly peer-reviewed papers from academia. Unsurprisingly, the results aren't as good there.
Running interesting models locally keeps getting more accessible.
Interesting comparison. Indeed, would a clock like this be useful?
And one more... it's clearly driven by an architecture pattern used by all vendors. They need to get their act together and change this.
Good reminder that professional translators aren't gone... on the contrary. There are so many things in language that a machine simply can't handle.
I recognize myself quite a bit in this opinion piece. It does a good job going through most of the ethical and practical reasons why you don't need LLMs to develop software, and why you likely don't want to use them.
OK, this is a serious and long paper. It shows quite well how over-reliance on ChatGPT while learning some topics impacts people. It's measurable both in their behavior and through EEG. Of course, it would take more such studies with larger groups. Still, those early signs are concerning.
A nice follow-up which acts as a TL;DR for the previous piece, which was indeed fairly long.
Yep, there's no logic engine buried deep in those chatbots. Thinking otherwise is placing faith in some twisted view of emergence...
An excellent piece which explains well why the current "debate" is rotten to the core. There's no good way to engage with those tools without reinforcing some biases. Once the hype cycle is over, we'll have a chance at proper research on the impacts... unfortunately that research isn't happening now, when it's badly needed.