Let's not forget the ethical implications of these tools. Too often people set them aside out of either an "oooh, shiny toys" reaction or a fear of being left behind. Both lead to very unethical situations.
Interesting ideas on how to approach teaching at the university. It gives a few clues on how to deal with chatbots during exams; it can be improved, but it's definitely a good start.
I'm not sure the legal case is completely lost, even though the chances are slim. The arguments here are worth mulling over, though. There's really an ethical factor to consider.
I agree with this so much. It's another one of those pieces I feel I could have written. I have a hard time imagining I could use the current crop of "inference as a service" offerings while they carry so many ethical issues.
There is a real question about the training data used for the coding assistant models. It has raised ethical concerns from the start; now it shows up with a different symptom.
Indeed, I wish our profession had a strong and binding code of ethics like doctors or lawyers do. That wouldn't prevent all problems, but it would tame some of the issues of our time.
Add to this how generative AI is used in totally wrong contexts... and I feel like I could have written this piece. I definitely agree with all of it.
For decades now, our industry has failed to improve its track record. And there are real consequences for users. More ethics would be welcome in our profession.
This is an old one, but I think that even without DVDs in the mix, the core of the stories is still valid.
ETH Zurich is spearheading an effort toward more ethical and cleaner open models. That's good research; looking forward to the results.
I recognize myself quite a bit in this opinion piece. It does a good job going through most of the ethical and practical reasons why you don't need LLMs to develop software, and why you likely don't want to use them.
If there was still any doubt that the arguments coming from the big model providers were lies... Yes, you can train large models on a corpus whose licenses you respect. Given the diminishing returns in performance of the newer model families, the performance they got from the model trained on that corpus is not bad at all.
I don't think I'm ready to give up just yet... Still, I recognise myself so much in this piece that it feels like I could have written it (alas, I don't write as well).
Not only do the tools have ethical issues, but the producers just pretend they'll "solve it later". A bunch of empty promises.
LLMs are indeed not neutral. There's a bunch of ethical concerns you have no control over when you use them.
This opinion piece is getting old... and yet it doesn't feel like our profession has made much progress on those questions.
We just can't leave unaddressed the question of how the big model makers build their training corpora. This is both an ethics and an economics problem. The creators of the content used to train such large models should be compensated in some way.
Between this, the crawlers they use, and the ecological footprint of the data centers, these systems carry so many negative externalities that lawmakers should have seized on the topic a while ago. The paradox is that if nothing is done about it, the reckless behavior of the model makers will end up hurting them as well.
I like this paper; it's well balanced. The conclusion says it all: if you're not actively working on reducing the harms, then you might be doing something unethical. It's not just a toy to play with; you have to think about the impacts and actively reduce them.
Good article about the ethical implications of using AI in systems. I like the distinction between assistive and automated uses. It's not perfect, as it underestimates the "asleep at the wheel" effect, but it's a good starting point.
Makes a strong case for why LLMs are better described as "bullshit machines". In any case, this is a good introduction to bullshit as a philosophical concept. I guess, given our current relationship to truth, these are products well suited to their context...