Does a good job listing the main myths the marketing around generative AI is built on. Don't fall for the marketing: exercise critical thinking and rely on the real properties of these systems.
An excellent essay about generative AI and art. It goes deep into the topic and explains very well why you can hardly make art with these tools: it is just too remote from how they work. I also particularly like the distinction between skill and intelligence. Indeed, with this technology we can build highly skilled but not intelligent systems.
Interesting musing. The predictability in tone indeed doesn't make for very funny content. As a side effect, this might help people remember that Markov chains are a thing, and a much less expensive one.
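As a reminder of just how cheap that alternative is, here is a minimal sketch of a word-level Markov chain text generator (illustrative toy code, not from any article linked here):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed right after it."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=10, seed=None):
    """Random-walk the chain to produce a short text."""
    rng = random.Random(seed)
    word = start
    output = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break  # dead end: no word ever followed this one
        word = rng.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the dog sat on the rug"
chain = build_chain(corpus)
print(generate(chain, "the", length=8, seed=0))
```

That's the whole model: a dictionary of follower lists, trained in one pass over the corpus, no GPU required.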
Exciting new type of neural network. There are limits to using them at large scale for now. Still, they have very interesting properties, like interpretability. They also tend to match the performance of traditional neural networks at a smaller size.
If you're wondering what people do with chatbots, there are some clues here.
This ought to be easier; this should help a bit.
Interesting finding. It looks like the general public doesn't place much trust in products advertising AI.
More discussion about model collapse. The provenance of data will become a crucial factor in our ability to train further models.
Still not perfect, but that's an interesting development.
Content creators are clearly annoyed at the lack of consent. The more technical ones are trying to take matters into their own hands.
Or examples of the collapse of a shared reality. This has nothing to do with "social" media anymore. Very nice investigation in any case.
I'm rarely on the side of a Goldman Sachs... Still, this paper seems spot on. The equation between the costs (financial and ecological) of generative AI and the value we get out of it isn't balanced at all. And since the field is stuck on improving mostly through model scale and data volume, it is doomed to plateau in its current form.
Those brand-new models keep failing at surprisingly simple tasks.
A new era of spam is upon us... this is going to be annoying to filter out.
This arms race should be stopped... It is becoming an ecological disaster, with so much wasted energy.
Makes a strong case for why LLMs are better described as "bullshit machines". In any case, this is a good introduction to bullshit as a philosophical concept. I guess with our current relationship to truth, these products are well suited to their context...
Further clues that transformer models can't learn logic from data.
Interesting paper showing a promising path to reducing the memory footprint and workload of transformer models. This is much more interesting than the race to gigantic sizes.
This ignores the energy consumption aspect. That said, it is spot on regarding the social and economic aspects of these transformer models: they have to be open and self-hostable.
OK, this is a rant about the state of the market and people drinking the Kool-Aid. A bit long, but I found it funny and well deserved at times.