Such a nice business model... not. There's really a lack of regulation in this space.
They produced Apertus, and now this for inference. There's really interesting work coming out of EPFL lately. It all helps toward more ethical and frugal production (and use) of LLMs. Those efforts are definitely welcome.
There is definitely something tragic at play here. As we're inundated with fake content, people are trying to find ways to tell whether something is fake or not. In doing so, we deny the humanity of some people because of their colonial past.
Very good distinction between creating and making. That might explain the difference between people who love their craft and those who want to automate it away. The latter want instant gratification and thus can't stand the process of making things.
Those AI scrapers are really out of control... the lengths one has to go to just to host something now.
Long but excellent opinion piece about everything that is wrong with the current AI mania.
The trend keeps being the same... And when newer models get trained on FOSS code which has degraded in quality due to the use of the previous generation of models, things are going to get "interesting".
IDEs that allow spawning actions in the user's environment are still a big security risk.
This is getting more and more accessible. It's also one of the uses that actually makes sense for LLMs.
If there's one area where people should steer clear of LLMs, it's definitely when they want to learn a topic. That's one more study showing that the knowledge you retain from LLM briefs is shallower. The friction and the struggle to get to the information is a feature; our brain needs it to remember properly.
That's an interesting approach. Early days on this one, and it clearly requires further work, but it seems like the proper path for math-related problems.
The findings in this paper are chilling... especially considering what fragile people are doing with those chat bots.
Looks like Mozilla is doing everything it can to alienate the current Firefox user base and push users toward the forks.
Clearly needs further exploration. I'd like to see it submitted to a peer-reviewed journal, but maybe that will come. Still, it's nice to see people going for new approaches. It's a breath of fresh air. I like it when there is actual research rather than hype. Hopefully the days of "scale it up and magic will happen" are numbered.
I was actually wondering when this would happen. It was just a matter of time; I would have expected this move a couple of months ago.
The title is a bit misleading in a way (and I almost didn't click through because of it). That said, it is an interesting essay dealing with the topics of intelligence, problem solving, etc. I'm not sure I agree with everything in it, but it's still good food for thought.
Indeed, if we weaken the learning loop by using coding assistants, then we might feel like we're going faster while we're actually building up a maintenance cliff. We need to keep an understanding of the system we build.
I'm happy to see I'm actually very much aligned with one of the "Attention Is All You Need" co-authors. The current industry trend of "just scale the transformer architecture" is indeed stifling innovation and actual research. That said, I find it ironic that he talks about the freedom to explore... well, that's what public labs used to be about, but we decided to drastically reduce their funding and replace that with competition between startups. It's no surprise we end up with very myopic views on problems.
ETH Zurich keeps making progress on its model. It's exciting and nice to see an ethical offering develop in that space. It shows that when there is political will, this kind of work can be treated as proper infrastructure.
There's some truth in this piece. We never quite managed to really have a semantic web because knowledge engineering is actually hard... and we publish mostly unstructured or badly structured data. LLMs are thus used as a brute-force attempt at layering some temporary and partial structure on top of otherwise unstructured data. They're not really up to the task of course, but they give us a glimpse into what could have been.