As LLM assistants get more and more embedded in the development process, it gets harder to ensure they behave safely. Quite a few interesting attack vectors in that one.
This is a good rant, I liked it. Lots of very good points in there of course. Again: the area where these tools are useful is very narrow. It also nails down the consequences of a profession going all in with them.
Those hosted models really exhibit weird biases... Controlling the context is really key.
That's a good overview of the energy demand, although it doesn't account for all the resources needed of course. Like most articles and studies on the topic, it's also fairly inaccurate because of the opacity of the major providers in that space. The only thing we know is that the numbers here are likely conservative and that the real impact is higher. Mass use of those models for inference is already becoming a problem, and it's bound to get worse.
Somehow this is not surprising... There's an area where it works OK, but I think we don't have the right UX to exploit it safely and productively yet. The right practices still need to be found, and all the hype and crazy announcements don't help.
If the funding dries up... we'll have another AI winter on our hands indeed.
Or how the current obsession with neural networks is poisoning scientific fields. There was already a reproducibility crisis going on, and it looks like it's been getting worse. The incentives are clearly wrong, and it shows.
You can't be in the backseat when using those tools. You might feel productive cranking out code, but they can't do the essential tasks for you (most notably actual problem solving or architecture thinking), and the quality would clearly suffer.
Looks like the writing is on the wall indeed... The paradox is that an important training corpus for LLMs will disappear if it dies for good. Will we see output quality drop? Or an ossification of knowledge instead?
A short collection of links (some already seen elsewhere in this review) which together draw a stark picture of the LLM industry.
Unsurprisingly it works OK for finding syntax errors you made or for low-stakes mechanical work you need to repeat. The leash has to be very short.
Such contributions still don't exist. Or their quality is so abysmal that they waste everyone's time. Don't fall for the marketing speak.
This is, I think, the most worrying consequence of the current hype. What happens when you get a whole generation which didn't learn anything related to their culture? Is Idiocracy the next step? How close are we?
At least it will have made the problems with how our education system evolved over the past twenty years (if not more) very obvious.
It looks like desperate times for the venture capitalists behind the AI bubble...
Looks like the protocol landscape for writing LLM-based agents will turn into a mess.
Even with conservative estimates, some uses are very much energy-hungry... Especially when they support a surveillance apparatus. Plenty of reasons not to go there.
When hype meets groupthink, you can quickly create a broken culture in your organization... We're seeing more examples lately.
I like this piece. In most cases it's the thinking that matters, not the writing. So even if your language is subpar, it's better to write it yourself... unless it wasn't worth the effort in the first place.
Looks like the promised productivity gains are still mostly hypothetical. Except on specific limited tasks of course, but that doesn't cover a whole job. Also, when there is a gain, it's apparently not the workers who benefit from it.
This also carries privacy concerns indeed, even for local models. It all depends on how it's integrated into the system.