Looks like the monopolists are vexed and are looking for arguments to discredit the competition... of all the arguments, this one is likely the most ridiculous given their own behavior.
I mostly agree with this piece. There's still plenty of room for optimization, so we might see a temporary drop in the energy consumption of those systems. That said, in the longer term, increasing energy consumption is indeed the main lever for improving their performance. That can only get us so far, so new techniques will be needed. Hence my position that we'll come back to symbolic approaches at some point; there's a clear challenge in interfacing both worlds.
Excellent satire, it summarizes the situation quite well.
Maybe at some point the big providers will get the message and their scrapers will finally respect robots.txt? Let's hope so.
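For reference, here is the kind of opt-out a well-behaved scraper would honor: a minimal robots.txt sketch using a few publicly documented AI crawler tokens (GPTBot, CCBot, Google-Extended); treat the exact list as illustrative rather than exhaustive.

    # Opt AI training crawlers out of the whole site.
    # GPTBot (OpenAI), CCBot (Common Crawl) and Google-Extended
    # are documented tokens; this list is illustrative, not exhaustive.
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    # Everyone else can keep crawling normally.
    User-agent: *
    Disallow:

Of course, robots.txt is purely advisory, which is exactly the problem: it only works if the crawlers choose to respect it.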
I guess this was just a matter of time; the obsession with "just make it bigger" was making most players myopic. Now this obviously collides with geopolitics, since this time it's a Chinese company that's ahead.
Yet another attempt at protecting content from AI scrapers. This one takes a very different approach.
There was a time when scraping bots were well behaved... Now apparently we have to add software to actively defend against AI scrapers.
Nice reminder that the capabilities robotics requires are clearly much harder to develop through machine learning than language is.
Good perspective on how the generative AI space evolved in 2024. There's good news in there, and more concerning news too. We'll see what 2025 brings.
The results are unsurprising; they confirm what we expected. The models are good at predicting the past, but not so great at solving problems.
This highlights the limits of the models used in LLMs quite well.
Looks like we're still a long way from mathematical accuracy with the current generation of models. They've made progress, of course.
OK, this is a nice parable. I admit I enjoyed it.
A good, balanced post on the topic. Maybe we'll finally see a resurgence of real research innovation, and not just stupid scaling at all costs. Reliability will remain the important factor, of course, and that one is still hard to crack.
It looks like analog chips for neural network workloads are on the verge of finally becoming reality. They would reduce consumption by an order of magnitude, and hopefully more later on. These are very early days for this new attempt; let's see if it lives up to its promise.
Kind of unsurprising, right? I mean, LinkedIn is clearly a distorted version of reality where people write like corporate drones most of the time. It was only a matter of time before robot-generated content became prevalent there; it's just harder to spot, since even humans aren't behaving genuinely there.
Indeed, we'll have to relearn "internet hygiene". The open web is changing quickly now that we've prematurely unleashed LLM content on it.
Excellent post showing all the nuances of AI skepticism. Can you find which category you fall into? I definitely match several of them.
Looks like a nice model for producing 3D assets. It should speed up artists' work on background elements a bit, though I guess manual adjustments will still be needed in the end.
Let's hope security teams don't get saturated with low-quality security reports like this...