Probably one of the most important talks of 39C3. It's a powerful call to action for the European Union to wake up and do the right thing to ensure digital sovereignty for itself, and for everyone else in the world. The timing is definitely right, given the unexpected allies to be found along the way. It would be a way to turn the currently grim geopolitical landscape into a set of positive opportunities.
Long but interesting piece. There's indeed a lot to say about our relationship to tools in general and to generative AI in particular. It's disheartening how it has made obvious that collaborative initiatives are on the decline. In any case, ambivalence abounds in this text... for sure, we can't trust the self-appointed stewards of the latest wave of such tools. The parallel with Spirited Away at the end of the article is very well chosen in my opinion. The context in which technologies are born and applied matters so much.
I think Rich Hickey hit the nail on the head.
Add to this how generative AI is used in entirely the wrong contexts... and I feel like I could have written this piece myself. I definitely agree with all of it.
This is really a big problem that those companies have created for Free Software communities. Thanks to the lack of regulation, they're going around distributing copyright-removal machines and profiting from them. They should have been barred from ingesting copyleft material in the first place.
Such a nice business model... not. There's really a lack of regulation in this space.
They produced Apertus, and now this for inference. There's really interesting work coming out of EPFL lately. It all helps toward a more ethical and frugal production (and use) of LLMs. These efforts are definitely welcome.
There is definitely something tragic at play here. As we're inundated with fake content, people are trying to find ways to detect whether it's fake or not. And in doing so, we deny the humanity of some people because of their colonial past.
Very good distinction between creating and making. It might explain the difference between people who love their craft and those who want to automate it away. The latter want instant gratification and thus can't stand the process of making things.
Those AI scrapers are really out of control... the lengths one has to go to just to host something these days.
I think I prefer friction as well. It's not about choosing discomfort all the time, but there's clearly a threshold not to cross. When things get too convenient, there's a point where we indeed get disconnected from the human condition. I prefer a fuller, imperfect life.
Long but excellent opinion piece about everything which is wrong with the current AI-mania.
If there's one area where people should steer clear of LLMs, it's definitely when they want to learn a topic. This is one more study showing that the knowledge you retain from LLM briefs is shallower. The friction and the struggle to get to the information is a feature; our brain needs it to remember properly.
The findings in this paper are chilling... especially considering what fragile people are doing with those chatbots.
Looks like Mozilla is doing everything it can to alienate the current Firefox user base and to push forward its forks.
I was actually wondering when this would happen. It was just a matter of time; I would have expected this move a couple of months ago.
I'm happy to see I'm actually very much aligned with one of the "Attention Is All You Need" co-authors. The current industry trend of "just scale the transformer architecture" is indeed stifling innovation and actual research. That said, I find it ironic that he talks about the freedom to explore... that is what public labs used to be about, but we decided to drastically cut their funding and replace it with competition between startups. It's no surprise we have very myopic views on problems.
ETH Zurich keeps making progress on its model. It's exciting and nice to see an ethical offering develop in that space. It shows that, when there is political will, this can be treated as proper infrastructure.
There's some truth in this piece. We never quite managed to have a real semantic web because knowledge engineering is actually hard... and we mostly publish unstructured or badly structured data. LLMs are thus used as a brute-force attempt at layering some temporary, partial structure on top of otherwise unstructured data. They're not really up to the task, of course, but they give us a glimpse of what could have been.
Apparently, in the age of people using LLMs to write their tests, there's a bias toward producing mockist tests. It's a good time to be reminded why you likely don't want those in most cases, and to limit the use of mocks by reaching for fakes and checking system state instead.
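The fake-plus-state-verification approach can be sketched with a toy example (all names here are hypothetical, not from the linked article): instead of asserting that a `save` method was called with certain arguments, the test drives the system through an in-memory fake and then checks the resulting state.

```python
class FakeUserRepo:
    """In-memory fake standing in for a real database-backed repository."""

    def __init__(self):
        self._users = {}

    def save(self, user_id, name):
        self._users[user_id] = name

    def get(self, user_id):
        return self._users.get(user_id)


def register_user(repo, user_id, name):
    """System under test: registers a user unless the id is taken."""
    if repo.get(user_id) is None:
        repo.save(user_id, name)
        return True
    return False


# State-based test: we assert on what the system *left behind*,
# not on which calls happened in which order.
repo = FakeUserRepo()
assert register_user(repo, 42, "Ada") is True
assert repo.get(42) == "Ada"
# Registering the same id again must not overwrite existing state.
assert register_user(repo, 42, "Bob") is False
assert repo.get(42) == "Ada"
```

Because the fake has real (if simplified) behavior, the test survives refactorings that change *how* `register_user` talks to the repository, as long as the observable outcome stays the same; a mockist test verifying the exact call sequence would break.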