The security implications of using LLMs are real. The high complexity and low explainability of such models open the door to hiding attacks in plain sight.
Of course this would be less of a problem if the explainability of such models were better. Since it isn't, they can spew very subtle propaganda. This is bound to become even more of a political power tool.
This clearly points to UX challenges around LLM use. For some tasks the user's critical thinking must be fostered, otherwise bad decisions will ensue.
Again it's definitely not useful for everyone... it might even be dangerous for learning.
Be wary of the unproven claim that using LLMs necessarily leads to productivity gains. The impact might even be negative.
When you put the marketing claims aside, the limitations of those models become obvious. This matters: only by finding the root cause of those limitations do we stand a chance of fixing them.
This will definitely push even more conservatism toward the existing platforms. More articles mean more training data... The underdogs will suffer as a result.
Looks like the monopolists are vexed and are looking for arguments to discredit the competition... Of all the arguments, this one is probably the most ridiculous given their own behavior.
I mostly agree with this piece. There's still lots of room for optimization, so we might see a temporary drop in the energy consumption of those systems. That said, in the longer term energy consumption is indeed the main lever for improving their performance. It can only get us so far, though, so new techniques will be needed. Hence my position that we'll come back to symbolic approaches at some point; there's a clear challenge in interfacing both worlds.
Excellent satire, it summarizes the situation quite well.
Maybe at some point the big providers will get the message and their scrapers will finally respect robots.txt? Let's hope so.
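For reference, this is the kind of minimal robots.txt sites resort to; the user-agent names below are the ones the major vendors publish for their crawlers (whether the bots honor them is exactly the open question):

```
# Ask the main AI crawlers to stay away from the whole site.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```

Of course robots.txt is purely advisory, which is the whole problem this link is about.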
I guess this was just a matter of time; the "just make it bigger" obsession was making most players myopic. Now this obviously collides with geopolitics, since this time it's a Chinese company that is ahead.
Yet another attempt at protecting content from AI scrapers, and a very different approach this time.
There was a time when scraping bots were well behaved... Now apparently we have to add software to actively defend against AI scrapers.
Good perspective on how the generative AI space evolved in 2024. There is good news and more concerning news in there. We'll see what 2025 brings.
The results are unsurprising and confirm what we expected: the models are good at predicting the past, but not so great at solving problems.
This highlights quite well the limits of the models used in LLMs.
Looks like we're still a long way from mathematical accuracy with the current generation of models. Progress has been made, of course.
OK, this is a nice parable. I admit I enjoyed it.
A good, balanced post on the topic. Maybe we'll finally see a resurgence of real research innovation and not just mindless scaling at all costs. Reliability will remain the important factor of course, and it is still hard to crack.