I don't think I'm ready to give up just yet... Still, I recognise myself so much in this piece it feels like I could have written it (alas I don't write as well).
A long but important report in my opinion. Reading the executive summary is a must. It gives a good overview of the AI industrial complex and the type of society it's leading us into. The report also lays out a political agenda to put us on a better path.
This is a funny way to point out people jumping on LLMs for tasks where they don't make sense.
Interesting little experiment. It's clearly making progress on smaller tasks. The output is still a bit random and often duplicates code though.
We already had reproducibility issues in science. With models that can produce hundreds of "novel" results in a single paper, how can we keep up with checking that all the produced data is correct? This is a real challenge.
Interesting research to determine how models relate to each other. This becomes especially important as the use of synthetic data increases.
It indeed feels bad when there are so many problems in the example of LLM-based completion you put on the front page of your website...
Nice piece. In an age where we're drowning in low-quality content, those who make something with care will shine. They need to be supported.
Just another hype cycle... The developer profession being in danger is greatly exaggerated.
An honest attempt at "vibe coding"... but once again the conclusion is "when it grows to non-trivial size, I'm glad my experience allowed me to finish the thing myself".
If you expected another outcome on the average developer job from the LLM craze... you likely didn't pay attention enough.
Another example of attack vectors emerging with adding more and more LLM agents in the development process.
Not only do the tools have ethical issues, but the producers just pretend they'll solve them later. A bunch of empty promises.
LLMs are indeed not neutral. There are a bunch of ethical concerns over which you have no control when you use them.
Are we surprised they'll keep processing personal information as much as possible? Not really, no...
Nice little satire, we could easily imagine some CEOs writing this kind of memo.
Interesting research, it gives a few hints for building tools to bring more transparency to the ideologies pushed by models. They're not unbiased, that much we know; characterising the biases is thus important.
Looks like it's getting there as a good help for auditing code, especially to find security vulnerabilities.
Or why CAPTCHAs might become a thing of the past. I guess they'll live a bit longer by becoming more and more privacy invasive.
It definitely has a point. The code output isn't really what matters. It is necessary in the end, but without the whole process it's worthless and doesn't empower anyone... It embodies many risks instead. My favourite quote in this article is this one:
"We are teaching people that they are not worth to have decent, well-made things."