74 private links
People really need to be careful about the short term productivity boost... If it kills maintainability in the process you're trading that short term productivity for a crashing long term productivity.
This is definitely a problem. It's doomed to influence how tech are chosen on software projects.
The security implications of using LLMs are real. With the high complexity and low explainability of such models it opens the door to hiding attacks in plain sight.
This is an interesting way to frame the problem. We can't rely too much on LLMs for computer science problems without loosing important skills and hindering learning. This is to be kept in mind.
Again it's definitely not useful for everyone... it might even be dangerous for learning.
Be wary of the unproven claims that using LLMs necessarily leads to productivity gains. The impacts might be negative.
This will definitely push even more conservatism around the existing platforms. More articles mean more training data... The underdogs will then suffer.
The results are unsurprising. They definitely confirm what we expected. The models are good at predicting the past, they're not so great at solving problems.
Using the right metaphors will definitely help with the conversation in our industry around AI. This proposal is an interesting one.
How shocking! This was all hype? Not surprised since we've seen the referenced papers before, but put all together it makes things really clear.
Or why we shouldn't trust marketing survey... they definitely confuse perception and actual results. Worse they do it on purpose.
Unsurprisingly the productivity gains announced for coding assistants have been greatly exaggerated. There might be cases of strong gains but it's still unclear in which niches this is going to happen.
The creative ways to exfiltrate data from chat systems built with LLMs...
Definitely this. In a world where LLM would actually be accurate and would never spit outright crappy code, programmers would still be needed. It'd mean spending less time writing but more time investigating and debugging the produced code.
Interesting data point. This is a very specialized experience but the fact that those systems are kind of random and slow clearly play a good part in limiting the productivity you could get from them.
Well, maybe our profession will make a leap forward. If instead of drinking the generative AI cool aid, if we really get a whole cohort of programmers better at critical skills (ethical issues, being skeptical of their tools, testing, software design and debugging) it'll clearly be some progress. Let's hope we don't fall in the obvious pitfalls.
It is an interesting essay. It leans on the side of "assistants are useful for simple coding tasks" and it's a bit more critical when it's about writing. The stance is original I find, yes it can help with some writing tasks, but if you look at the writing tasks you can expedite this way... if you wish to expedite them isn't it a sign that they were providing little value in the first place? Is the solution the assistant or changing the way you work? Indeed this might hide some busy work otherwise.
Wondering how one can design a coding assistant? Here is an in depth explanation of the choices made by one of the solutions out there. There's quite some processing before and after actually running the inference with the LLM.
All the good reasons why productivity increases with code assistants are massively overestimated. To be used why not, but with a light touch.
You should be mindful of the dependencies you add. Even more so when the name of the dependency has been proposed by a coding assistant.