Clearly Free Software projects will have to find a way to deal with LLM-generated contributions. A very large percentage of them leads to subtle quality issues. This is also very taxing on the reviewers, and you don't want to burn them out.
Indeed, let's not forget the ethical implications of those tools. Too often people brush them aside out of an "oooh shiny toys" or an "I don't want to be left behind" reaction. Both lead to a very unethical situation.
Interesting analysis. It gives a balanced view on the possible scenarios around the AI hype.
Very in-depth review of the mess of a Matrix home server vibe coded at Cloudflare... all the way to the blog post announcing it. Unsurprisingly this didn't go well and they had to cover their tracks several times. The response from the Matrix Foundation is a bit underwhelming; it's one thing to be welcoming, it's another to turn a blind eye to such obvious failures. This doesn't reflect well on either Cloudflare or the Matrix Foundation I'm afraid.
Interesting point. As we see the collapse of public forums due to the use of AI chatbots, we're in fact witnessing a large enclosure movement. And it'll reinforce itself as the vendors train on the chat sessions. What used to be public will be hidden.
Interesting ideas on how to approach teaching at the university. It gives a few clues on how to deal with chatbots during exams; it can be improved, but it's definitely a good start.
Sounds like a very interesting model (pun intended). It's really nice to pack that much performance in a smaller neural network.
I'm not sure the legal case is completely lost, even though the chances are slim. The arguments here are worth mulling over; there's really an ethical factor to consider.
I agree with this so much. It's another one of those pieces I feel I could have written. I have a hard time imagining I could use the current crop of "inference as a service" offerings while they carry so many ethical issues.
Is this really to improve your work? Or to make you dependent? In the end it might be the user who loses.
There is a real question about the training data used for the coding assistant models. It's been a problem from the start, raising ethical concerns; now it shows up with a different symptom.
This looks like an interesting way to frame problems. It can give an idea of how likely they are to be successfully tackled with LLMs. It also shows that architecture and complexity greatly matter.
Probably one of the most important talks of 39C3. It's a powerful call to action for the European Union to wake up and do the right thing to ensure digital sovereignty for itself and everyone else in the world. The time is definitely right due to the unexpected allies to be found along the way. It'd be a way to turn the currently bad geopolitical landscape into a bunch of positive opportunities.
Long but interesting piece. There's indeed a lot to say about our relationship to tools in general and generative AI in particular. It's disheartening how it made it obvious that collaborative initiatives are diminishing. In any case, ambivalence abounds in this text... for sure we can't trust the self-appointed stewards of the latest wave of such tools. The parallel with Spirited Away at the end of the article is very well chosen in my opinion. The context in which technologies are born and applied matters so much.
I think Rich Hickey hit that nail on the head.
Very comprehensive resource to make your own recommender model.
Add to this how generative AI is used in the totally wrong context... and then I feel like I could have written this piece. I definitely agree with all that.
Interesting research. Can it give insights on the pervasive views of the time?
This is really a big problem that those companies created for Free Software communities. Due to the lack of regulation they're going around distributing copyright removal machines and profiting from them. They should have been barred from ingesting copyleft material in the first place.