You can't be in the backseat when using those tools. Otherwise you might feel productive cranking out code, but they can't do the essential tasks for you (most notably actual problem solving or architecture thinking). The quality would clearly suffer.
Looks like the writing is on the wall indeed... The paradox is that an important training corpus for LLMs will disappear if it dies for good. Will we see output quality drop? Or an ossification of knowledge instead?
A short collection of links (some already seen elsewhere in the review) which together draw a stark picture of the LLM industry.
Unsurprisingly it works OK when it's about finding syntax errors you made or about low stakes mechanical work you need to repeat. The leash has to be very short.
Such contributions still don't exist. Or their quality is so abyssal that they waste everyone's time. Don't fall for the marketing speak.
This is, I think, the most worrying consequence of the current hype. What happens when you get a whole generation which didn't learn anything related to their culture? Is Idiocracy the next step? How close are we?
At least it will have made very obvious the problems with how our education system has evolved over the past twenty years (if not more).
It looks like desperate times for the venture capitalists behind the AI bubble...
Looks like the protocol landscape for writing LLM-based agents will turn into a mess.
Even with conservative estimates, some uses are very much energy-hungry... Especially when they support a surveillance apparatus. Many reasons not to go there.
When hype meets groupthink, you can quickly create a broken culture in your organization... We're seeing more examples lately.
I like this piece. In most cases it's the thinking which is important, not the writing. So even if your language is subpar, it's better to write yourself... except when it's not worth the effort in the first place.
Looks like the productivity gain promises are still mostly hypothetical. Except on specific limited tasks of course, but that doesn't cover a whole job. Also, when there is a gain, it's apparently not the workers who benefit from it.
This also carries privacy concerns indeed, even for local models. It all depends on how it's integrated into the system.
This is an important position paper in my opinion. The whole obsession with the ill-defined "AGI" is overshadowing important problems in the research field which have yet to be overcome.
This move is really unsurprising... It's bound to become another channel for advertisements to try to cover the costs of running it all.
Of course it also helps against DDoS attacks... which tells you something about the state of AI scrapers, I guess.
Interesting point of view... what makes a tool really?
Clearly, to really benefit from LLMs, there's quite some thinking about UX design to be done. It can't be only chatbots all the way.
I will admit it... I laughed. And that's just one business risk among many.
This is a question which I have been pondering for a while... what will be left when the generative AI bubble bursts? And indeed it won't be the models, as they won't age well. The conclusion of this article sent a chill down my spine. It's indeed likely that what remains will be infrastructure for a bigger surveillance apparatus.