The trend keeps being the same... And when newer models are trained on FOSS code that degraded in quality due to the use of the previous generation of models, things are going to get "interesting".
IDEs that allow spawning actions in the user's environment are still a big security risk.
Indeed, if we weaken the learning loop by using coding assistants, then we might feel we're going faster while actually building up toward a maintenance cliff. We need to keep an understanding of the system.
Apparently, in the age of people using LLMs for their tests, there is a bias toward mockist tests being produced. It's a good time for a reminder of why you likely don't want them in most cases: limit the use of mocks, and consider fakes and checking system state instead.
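Not from the linked article, but to make the distinction concrete: a minimal sketch with hypothetical names (`Registration`, `UserStore`), contrasting a mockist test that verifies the interaction with a fake that lets you check the resulting state.

```python
from unittest.mock import Mock


class Registration:
    """Registers users through a storage backend (illustrative example)."""

    def __init__(self, store):
        self.store = store

    def register(self, name):
        if not name:
            raise ValueError("empty name")
        self.store.save(name)


def test_register_mockist():
    # Mockist style: asserts on the exact call made, coupling the test
    # to the implementation; refactoring register() can break it even
    # when behaviour is unchanged.
    store = Mock()
    Registration(store).register("alice")
    store.save.assert_called_once_with("alice")


class FakeStore:
    """A tiny in-memory fake standing in for the real storage."""

    def __init__(self):
        self.saved = []

    def save(self, name):
        self.saved.append(name)


def test_register_with_fake():
    # Fake style: check the system state that results, not the
    # conversation between objects.
    store = FakeStore()
    Registration(store).register("alice")
    assert store.saved == ["alice"]
```

The fake-based test survives internal refactoring as long as the observable outcome (the user ends up stored) stays the same, which is the point of preferring state checks.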
Honestly, it took much longer than I expected. Now you know that GitHub has really become a conduit for Microsoft's AI initiatives.
And one more... it's clearly driven by an architecture pattern used by all vendors. They need to get their acts together to change this.
I recognize myself quite a bit in this opinion piece. It does a good job going through most of the ethical and practical reasons why you don't need LLMs to develop and why you likely don't want to.
A nice follow-up that acts as a TL;DR for the previous piece, which was fairly long indeed.
An excellent piece which explains well why the current "debate" is rotten to the core. There's no good way to engage with those tools without reinforcing some biases. Once the hype cycle is over we have a chance at proper research on the impacts... unfortunately it's not happening now when it's badly needed.
Or how the workflows are badly designed and we're forcing ourselves to adapt to them.
Looks like a good approach to integrate LLMs into your development workflow. If it ever becomes something trustworthy, that is.
OK, this is definitely concerning for the use of tools with so-called coding agents. The trust model is really not appropriate at this stage, and that opens the door to a wide range of attacks.
I don't think I'm ready to give up just yet... Still, I recognise myself so much in this piece it feels like I could have written it (alas I don't write as well).
Interesting little experiment. It's clearly making progress for smaller tasks. The output is still a bit random and often duplicates code though.
It indeed feels bad when there are so many problems in the example of LLM-based completion you put on the front page of your website...
Just another hype cycle... The developer profession being in danger is greatly exaggerated.
An honest attempt at "vibe coding"... but once again the conclusion is "when it grows to non-trivial size, I'm glad my experience allowed me to finish the thing myself".
If you expected another outcome of the LLM craze on the average developer job... you likely didn't pay enough attention.
Another example of the attack vectors that emerge as more and more LLM agents are added to the development process.
Looks like it's getting there as a good help for auditing code, especially to find security vulnerabilities.