Clearly Free Software projects will have to find a way to deal with LLM generated contributions. A very large percentage of them leads to subtle quality issues. This is also very taxing on the reviewers, and you don't want to burn them out.
Let's indeed not forget the ethical implications of those tools. Too often people put them aside simply out of an "oooh shiny toys" or an "I don't want to be left behind" reaction. Both lead to a very unethical situation.
Interesting analysis. It gives a balanced view of the possible scenarios around the AI hype.
Very in-depth review of the mess of a Matrix home server vibe coded at Cloudflare... all the way to the blog post announcing it. Unsurprisingly this didn't go well, and they had to cover their tracks several times. The response from the Matrix Foundation is a bit underwhelming; it's one thing to be welcoming, it's another to turn a blind eye to such obvious failures. This doesn't reflect well on either Cloudflare or the Matrix Foundation I'm afraid.
Interesting point. As we see the collapse of public forums due to the usage of AI chatbots, we're in fact witnessing a large enclosure movement. And it'll reinforce itself as the vendors train on the chat sessions. What used to be in public will be hidden.
I'm not sure the legal case is completely lost even though chances are slim. The arguments here are worth mulling over though. There's really an ethical factor to consider.
I agree with this so much. It's another one of those I feel I could have written. I have a hard time thinking I could use the current crop of "inference as a service" while they carry so many ethical issues.
Is this really to improve your work? Or to make you dependent? In the end it might be the user who loses.
There is a real question about the training data used for coding assistant models. It's been a problem from the start, raising ethical concerns; now it shows up as a different symptom.
This looks like an interesting way to frame problems. It gives an idea of how likely they are to be successfully tackled with LLMs. It also shows that architecture and complexity matter greatly.
Probably one of the most important talks of 39C3. It's a powerful call to action for the European Union to wake up and do the right thing to ensure digital sovereignty for itself and everyone else in the world. The time is definitely right due to the unexpected allies to be found along the way. It'd be a way to turn the currently bad geopolitical landscape into a bunch of positive opportunities.
I think Rich Hickey hit that nail on the head.
This is really a big problem that those companies created for Free Software communities. Due to the lack of regulation, they're going around distributing copyright removal machines and profiting from them. They should have been barred from ingesting copyleft material in the first place.
Very good distinction between creating and making. It might explain the difference between people who love their craft and those who want to automate it away. The latter want instant gratification and thus can't stand the process of making things.
Long but excellent opinion piece about everything which is wrong with the current AI-mania.
The trend keeps being the same... And when newer models are trained on FOSS code which degraded in quality due to the use of the previous generation of models, things are going to get "interesting".
IDEs which allow spawning actions in the user's environment are still a big security risk.
Indeed, if we weaken the learning loop by using coding assistants, then we might feel like we're going faster while we're in fact building up the maintenance cliff. We need to keep an understanding of the system.
Apparently in the age of people using LLMs for their tests, there is a bias toward producing mockist tests. It's a good time to remind ourselves why you likely don't want them in most cases: limit the use of mocks, and consider fakes and checking system state instead, as in the sketch below.
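As a minimal sketch of that difference (the register_user function and FakeStore class below are hypothetical, not taken from the linked article): the mockist test verifies which calls were made, coupling the test to the implementation, while the fake lets the test assert on the resulting state and thus survives internal refactoring.

```python
import unittest
from unittest.mock import Mock

# Hypothetical code under test: registers a user through a storage backend.
def register_user(store, name):
    if not store.exists(name):
        store.add(name)

# Mockist style: asserts on the interaction (which method was called, with
# what argument), so any refactoring of register_user's internals breaks it.
class MockistTest(unittest.TestCase):
    def test_register_calls_add(self):
        store = Mock()
        store.exists.return_value = False
        register_user(store, "alice")
        store.add.assert_called_once_with("alice")

# Fake: a tiny in-memory implementation of the same interface.
class FakeStore:
    def __init__(self):
        self.users = set()

    def exists(self, name):
        return name in self.users

    def add(self, name):
        self.users.add(name)

# State checking: only asserts on the observable outcome, not on the calls
# that produced it.
class StateTest(unittest.TestCase):
    def test_register_adds_user(self):
        store = FakeStore()
        register_user(store, "alice")
        self.assertIn("alice", store.users)

if __name__ == "__main__":
    unittest.main()
```

Both tests pass today, but only the fake-based one keeps passing if register_user is rewritten to, say, batch its writes.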