I don't think I'm ready to give up just yet... Still, I recognise myself so much in this piece it feels like I could have written it (alas I don't write as well).
A long but important report, in my opinion. Reading the executive summary is a must: it gives a good overview of the AI industrial complex and the type of society it's leading us into. The report also lays out a political agenda to put us on a better path.
This is a funny way to point out people jumping on LLMs for tasks where they don't make sense.
Interesting little experiment. It's clearly making progress on smaller tasks, though the output is still a bit random and often duplicates code.
Interesting research to determine how models relate to each other. This becomes especially important as the use of synthetic data increases.
It indeed feels bad when there are so many problems in the example of LLM-based completion you put on the front page of your website...
Nice piece. In an age where we're drowning in low-quality content, those who make something with care will shine. They need to be supported.
Just another hype cycle... The developer profession being in danger is greatly exaggerated.
If you expected another outcome on the average developer job from the LLM craze... you likely didn't pay attention enough.
Not only do the tools have ethical issues, but the producers just pretend they'll "solve it later". A bunch of empty promises.
LLMs are indeed not neutral. There's a bunch of ethical concerns over which you have no control when you use them.
Are we surprised they'll keep processing personal information as much as possible? Not really, no...
Nice little satire; we could easily imagine some CEOs writing this kind of memo.
Interesting research; it gives a few hints for building tools that bring more transparency to the ideologies pushed by models. They're not unbiased, that much we know, so characterising the biases is important.
Looks like it's getting there as a good aid for auditing code, especially for finding security vulnerabilities.
It definitely has a point. The code output isn't really what matters. It's necessary in the end, but without the whole process it's worthless and doesn't empower anyone... It embodies many risks instead. I think my preferred quote in this article is this:
"We are teaching people that they are not worth to have decent, well-made things."
This is a good rant, I liked it. Lots of very good points in there, of course. Again: the area where these tools are useful is very narrow. It also nails down the consequences of a profession going all in with those tools.
Those hosted models really exhibit weird biases... The control of the context is really key.
That's a good overview of the energy demand, though it doesn't account for all the resources needed. And like most articles and studies on the topic, it's quite inaccurate because of the opacity of the major providers in that space. The only thing we know is that the numbers here are likely conservative and the real impact higher. Mass use of inference with those models is already becoming a problem, and it's bound to get worse.
If the funding dries up... we'll have another AI winter on our hands indeed.