64 private links
Running interesting models locally is becoming more and more accessible.
ETH Zurich is spearheading an effort for more ethical and cleaner open models. That's good research; looking forward to the results.
Interesting comparison, indeed would a clock like this be useful?
And one more... it's clearly driven by an architecture pattern used by all vendors. They need to get their acts together to change this.
There's always been disinformation in times of war. The difference now is the scale and speed at which fake images can be produced.
Good reminder that professional translators aren't gone... on the contrary. There are so many things in languages that you can't handle with a machine.
I recognize myself quite a bit in this opinion piece. It does a good job going through most of the ethical and practical reasons why you don't need LLMs to develop and why you likely don't want to.
OK, this is a serious and long paper. It shows quite well how over-reliance on ChatGPT during the learning phase of some topics impacts people. It's measurable both in their behavior and through EEG. Of course, it'd require more such studies with larger groups. Still, those early signs are concerning.
Interesting research. Down the line it could help better fine-tune models and sidestep some of the attention system's limitations. Of course it comes with its own downsides; more research is necessary.
A nice follow-up which acts as a TL;DR for the previous piece, which was fairly long indeed.
Yep, there's no logic engine buried deep in those chatbots. Thinking otherwise is placing faith in some twisted view about emergence...
An excellent piece which explains well why the current "debate" is rotten to the core. There's no good way to engage with those tools without reinforcing some biases. Once the hype cycle is over we have a chance at proper research on the impacts... unfortunately it's not happening now when it's badly needed.
Or how the workflows are badly designed and we're forcing ourselves to adapt to them.
More signs of students being harmed in their learning... they get better grades when using gen AI, but their skills drop.
Looks like a good approach to integrating LLMs into your development workflow, in case that ever becomes something trustworthy.
OK, this is definitely concerning for the use of so-called coding agents. The trust model is really not appropriate at this stage, which opens the door to a wide range of attacks.
If there was still any doubt that the arguments coming from the big model providers were lies... Yes, you can train large models on a corpus of training data whose licenses you respect. Given the diminishing returns in performance of the newer model families, the performance they got from the model trained on that corpus is not bad at all.
A personal experience which led to not using ChatGPT anymore. It kind of validates other papers on cognitive decline; the added value is in how it makes things more personal and concrete.
Somehow I missed this paper last year. Interesting review of studies on the use of gen AI chat systems in learning and research environments. The amount of ethical issues is non-negligible, as one would expect. It also confirms the negative impact of using those tools on cognitive abilities. More concerning is the creation of a subtle vicious circle, as highlighted by this quote: "regular utilization of dialogue systems is linked to a decline in abilities of cognitive abilities, a diminished capacity for information retention, and an increased reliance on these systems for information".
I don't think I'm ready to give up just yet... Still, I recognise myself so much in this piece it feels like I could have written it (alas I don't write as well).