Are we surprised they'll keep processing personal information as much as possible? Not really, no...
Nice little satire; we could easily imagine some CEOs writing this kind of memo.
Interesting research; it gives a few hints at building tools to bring more transparency to the ideologies pushed by models. They're not unbiased, that much we know, so characterising the biases is important.
Looks like it's getting there as a useful aid for auditing code, especially for finding security vulnerabilities.
Or why CAPTCHAs might become a thing of the past. I guess they'll live a bit longer as they become more and more privacy invasive.
It definitely has a point. The code output isn't really what matters. It is necessary in the end, but without the whole process it's worthless and doesn't empower anyone... It embodies many risks instead. My favourite quote in this article is this:
"We are teaching people that they are not worth to have decent, well-made things."
As LLM assistants get more and more embedded in the development process, it gets harder to ensure they behave safely. Quite a few interesting attack vectors in that one.
This is a good rant, I liked it. Lots of very good points in there, of course. Again: the area where these tools are useful is very narrow. It also nails down the consequences of a profession going all in on them.
Those hosted models really exhibit weird biases... Controlling the context is really key.
That's a good overview of the energy demand, though it doesn't account for all the resources needed, of course. Like most articles and studies on the topic, it's also quite inaccurate because of the opacity of the major providers in that space. The only thing we know is that the numbers here are likely conservative and the real impact higher. Mass use of those models' inference is already becoming a problem, and it's bound to get worse.
Somehow not surprising... There's an area where it works OK. That said, I think we don't have the right UX yet to exploit it safely and productively. The right practices still need to be found, and all the hype and crazy announcements don't help.
If the funding dries up... we'll have another AI winter on our hands indeed.
Or how the current obsession with neural networks is poisoning scientific fields. There was already a reproducibility crisis going on, and it looks like it's been getting worse. The incentives are clearly wrong, and it shows.
You can't be in the backseat when using those tools. Otherwise you might feel productive cranking out code, but they can't do the essential tasks for you (most notably actual problem solving or architectural thinking). The quality would clearly suffer.
Looks like the writing is on the wall indeed... The paradox is that an important training corpus for LLMs will disappear if it dies for good. Will we see output quality drop? Or an ossification of knowledge instead?
A short collection of links (some already seen elsewhere in this review) that together draw a stark picture of the LLM industry.
Unsurprisingly, it works OK for finding syntax errors you made or for low-stakes mechanical work you need to repeat. The leash has to be very short.
Such contributions still don't exist. Or their quality is so abysmal that they waste everyone's time. Don't fall for the marketing speak.
This is, I think, the most worrying consequence of the current hype. What happens when you get a whole generation that didn't learn anything related to its culture? Is Idiocracy the next step? How close are we?
At least it will have made very obvious the problems with how our education system has evolved over the past twenty years (if not more).
It looks like desperate times for the venture capitalists behind the AI bubble...