I mostly agree with this piece. There's still plenty of room for optimization, so we might see a temporary drop in the energy consumption of those systems. That said, in the longer term, reducing energy consumption is indeed the main lever for improving the performance of those systems. It can only get us so far, though, so new techniques will be needed. This is why my position is that we'll come back to symbolic approaches at some point; the clear challenge is interfacing both worlds.
Excellent satire, it summarizes the situation quite well.
Maybe at some point the big providers will get the message and their scrapers will finally respect robots.txt? Let's hope so.
I guess this was just a matter of time; the obsession with "just make it bigger" was making most players myopic. Now this obviously collides with geopolitics, since this time it's a Chinese company that's ahead.
Yet another attempt at protecting content from AI scrapers. This one takes a very different approach.
There was a time when scraping bots were well behaved... Now apparently we have to add software to actively defend against AI scrapers.
Good perspective on how the generative AI space evolved in 2024. There's good news and more concerning news in there. We'll see what 2025 brings.
The results are unsurprising. They definitely confirm what we expected: the models are good at predicting the past, but not so great at solving problems.
This highlights quite well the limits of the models used in LLMs.
Looks like we're still a long way from mathematical accuracy with the current generation of models. There has been progress, of course.
OK, this is a nice parable. I admit I enjoyed it.
A good, balanced post on the topic. Maybe we'll finally see a resurgence of real research innovation, not just stupid scaling at all costs. Reliability will remain the important factor, of course, and it's still hard to crack.
Kind of unsurprising, right? I mean, LinkedIn is clearly a distorted version of reality where people write like corporate drones most of the time. It was only a matter of time until robot-generated content became prevalent there; it's just harder to spot since even humans don't behave genuinely there.
Indeed, we'll have to relearn "internet hygiene"; it's changing quickly now that we've prematurely unleashed LLM content on the open web.
Excellent post showing all the nuances of AI skepticism. Can you find which category you're in? I definitely match several of them.
Let's hope security teams don't get saturated with low quality security reports like this...
Another lawsuit making progress against OpenAI and their shady practices.
More shady practices to try to save themselves. Let's hope it won't work.
The water problem is obviously hard to ignore. This piece does a good job illustrating how large the impact is.
Good reminder that models shouldn't be used as a service, except maybe for prototyping. This has felt obvious to me since the beginning of this hype cycle... but here we are: people are still falling into the trap today.