More signs of students being harmed in their learning... they get better grades when using gen AI but their skills drop.
Looks like a good approach to integrating LLMs into your development workflow. That is, if they ever become something you can trust.
OK, this is definitely concerning for tools built around so-called coding agents. The trust model is really not appropriate at this stage, and that opens the door to a wide range of attacks.
If there was still any doubt that the arguments coming from the big model providers were lies... Yes, you can train large models on a corpus whose licenses you actually respect. And with the diminishing returns in performance of the newer model families, the performance they got from the model trained on that corpus is not bad at all.
A personal experience which led to not using ChatGPT anymore. It kind of validates other papers on cognitive decline; the added value is in how it makes the issue more personal and concrete.
Somehow I missed this paper last year. Interesting review of studies on the use of gen AI chat systems in learning and research environments. The number of ethical issues is non-negligible, as one would expect. It also confirms the negative impact of using those tools on cognitive abilities. More concerning is the creation of a subtle vicious circle, as highlighted by this quote: "regular utilization of dialogue systems is linked to a decline in cognitive abilities, a diminished capacity for information retention, and an increased reliance on these systems for information".
I don't think I'm ready to give up just yet... Still, I recognise myself so much in this piece it feels like I could have written it (alas I don't write as well).
A long but important report in my opinion. Reading the executive summary is a must. It gives a good overview of the AI industrial complex and the type of society it's leading us into. The report also lays out a political agenda to put us on a better path.
This is a funny way to point out people jumping on LLMs for tasks where they don't make sense.
Interesting little experiment. It's clearly making progress for smaller tasks. The output is still a bit random and often duplicates code though.
We already had reproducibility issues in science. With models like these, which make it possible to produce hundreds of "novel" results in a single paper, how can we properly keep up with checking that all the produced data is correct? This is a real challenge.
Interesting research to determine how models relate to each other. This becomes especially important as the use of synthetic data increases.
It does feel bad when there are so many problems in the example of LLM-based completion you put on the front page of your website...
Nice piece. In an age where we're drowning in low-quality content, those who make something with care will shine. They need to be supported.
Just another hype cycle... Reports of the developer profession being in danger are greatly exaggerated.
An honest attempt at "vibe coding"... but once again the conclusion is "when it grows to non-trivial size, I'm glad my experience allowed me to finish the thing myself".
If you expected a different outcome for the average developer job from the LLM craze... you likely weren't paying enough attention.
Another example of the attack vectors that emerge as more and more LLM agents are added to the development process.
Not only do the tools have ethical issues, but the producers just pretend "we'll solve it later". A bunch of empty promises.
LLMs are indeed not neutral. There's a whole set of ethical concerns you have no control over when you use them.