Decades on, and our industry still hasn't improved its track record. But there are real consequences for users. Some more ethics would be welcome in our profession.
This is an old one, but I think that even without DVDs in the mix the core of the stories is still valid.
ETH Zurich spearheading an effort for more ethical and cleaner open models. That's good research, looking forward to the results.
I recognize myself quite a bit in this opinion piece. It does a good job going through most of the ethical and practical reasons why you don't need LLMs to develop software, and why you likely don't want to use them.
If there was still any doubt that the arguments coming from the big model providers were lies... Yes, you can train large models on a corpus of training data whose licenses you respect. With the diminishing returns in performance of the newer families of models, the performance they got from the model trained on that corpus is not bad at all.
I don't think I'm ready to give up just yet... Still, I recognise myself so much in this piece it feels like I could have written it (alas I don't write as well).
Not only do the tools have ethical issues, but their producers just pretend "we'll solve it later". A bunch of empty promises.
LLMs are indeed not neutral. There's a bunch of ethical concerns over which you have no control when you use them.
This opinion piece is getting old... and yet, it doesn't feel like our profession has made much progress on those questions.
We just can't leave unaddressed the topic of how the big model makers are building their training corpora. This is both an ethics and an economics problem. The creators of the content used to train such large models should be compensated in some way.
Between this, the crawlers they use, and the ecological footprint of the data centers, there are so many negative externalities to those systems that lawmakers should have seized on the topic a while ago. The paradox is that if nothing is done about it, the reckless behavior of the model makers will end up hurting them as well.
I like this paper, it's well balanced. The conclusion says it all: if you're not actively working on reducing the harms, then you might be doing something unethical. It's not just a toy to play with; you have to think about the impacts and actively reduce them.
Good article about the ethical implications of using AI in systems. I like the distinction between assistive and automated uses. It's not perfect, as it underestimates the "asleep at the wheel" effect, but this is a good starting point.
Makes a strong case for why LLMs are better described as "bullshit machines". In any case, this is a good introduction to bullshit as a philosophical concept. I guess with our current relationship to truth, these are products well suited to their context...
Very interesting move. I wish them well!
Chatbots can be useful in some cases... but definitely not when people expect to connect with other humans.
Well, maybe our profession will make a leap forward. If, instead of drinking the generative AI Kool-Aid, we really get a whole cohort of programmers better at critical skills (ethical issues, being skeptical of their tools, testing, software design and debugging), it'll clearly be some progress. Let's hope we don't fall into the obvious pitfalls.
A good exploration of the Fediverse to Bluesky bridging debate from the angle of consent and the GDPR. It's complicated, and that shouldn't come as a surprise.
As an industry, we definitely should think more often about the consequences of our actions. The incentives are indeed pushing us to go faster without much critical thinking.
How does it feel to just want to put something creative out there without being exploited? Very touching comic on the topic.
Unsurprising move: they claim it's for the betterment of mankind, but in practice it's mostly about capturing as much market share as possible.