Well, maybe our profession will make a leap forward. If, instead of drinking the generative AI Kool-Aid, we actually get a whole cohort of programmers who are better at the critical skills (ethical issues, being skeptical of their tools, testing, software design, and debugging), that will clearly be progress. Let's hope we don't fall into the obvious pitfalls.
It is an interesting essay. It leans toward "assistants are useful for simple coding tasks" and is a bit more critical when it comes to writing. I find the stance original: yes, an assistant can help with some writing tasks, but look at the writing tasks you can expedite this way... if you wish to expedite them, isn't that a sign they were providing little value in the first place? Is the solution the assistant, or changing the way you work? Indeed, this might be hiding some busywork.
Wondering how one can design a coding assistant? Here is an in-depth explanation of the choices made by one of the solutions out there. There's quite a bit of processing both before and after actually running inference with the LLM.
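To make the "processing before and after inference" concrete, here is a minimal sketch of what such a pipeline can look like. This is not the actual tool's implementation; the function names, the context-as-comments trick, and the blank-line truncation heuristic are all illustrative assumptions.

```python
# Hypothetical sketch of a coding assistant's pre/post-processing around the LLM call.

def build_prompt(prefix: str, suffix: str, neighbor_snippets, max_chars: int = 2000) -> str:
    """Pre-processing: assemble a prompt from snippets of neighboring files
    (injected as comments) followed by the code before the cursor."""
    header = "".join(
        f"# Context from {path}:\n# {snippet}\n" for path, snippet in neighbor_snippets
    )
    prompt = header + prefix
    # Keep only the most recent context if the prompt exceeds the budget.
    return prompt[-max_chars:]

def postprocess(completion: str, suffix: str) -> str:
    """Post-processing: trim the raw completion before showing it to the user."""
    # Heuristic: stop at the first blank line to suggest a single block.
    block = completion.split("\n\n", 1)[0]
    # Drop the suggestion if it merely repeats the code already after the cursor.
    if suffix.strip().startswith(block.strip()):
        return ""
    return block
```

The interesting part, as the article shows, is that most of the engineering lives in these two steps rather than in the model call itself.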
All the good reasons why the productivity increases attributed to code assistants are massively overestimated. Use them if you like, but with a light touch.
You should be mindful of the dependencies you add. Even more so when the name of the dependency has been proposed by a coding assistant.
The price hike on generative AI services will come sooner or later. That much is clear.
Definitely this. It might ultimately change the abstraction levels at which we code... but the skills will still be needed. Natural language is too ambiguous for the task.
Faster with less effort doesn't seem to lead to quality code overall.
Basically, the wording allows them to feed your code into whatever system they want... even from private repositories.
Are we surprised? Not at all... this is an ethical problem and a legal risk. Hopefully the alternatives will know better.
Interesting reverse engineering of Copilot's client side, to get a better idea of which information it actually feeds to the model. A couple of clever tricks are involved in preparing the prompt. There is obviously some telemetry as well, again with interesting heuristics to figure out whether the user kept the suggestion or not.
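To give a flavor of that kind of "did the user keep it?" heuristic: one plausible approach is to probe the buffer at increasing delays after insertion and do a fuzzy presence check, since the user may have edited the tail of the suggestion. This is a made-up sketch inspired by the general idea; the names, the prefix-ratio check, and the probe schedule are all my assumptions, not the article's findings.

```python
# Hypothetical sketch of an acceptance-tracking telemetry heuristic (all names made up).

def suggestion_still_present(document_text: str, suggestion: str, min_ratio: float = 0.5) -> bool:
    """Fuzzy check: does a large-enough prefix of the suggestion survive in the buffer?
    An exact match is too strict once the user starts editing the tail."""
    if suggestion in document_text:
        return True
    keep = int(len(suggestion) * min_ratio)
    return keep > 0 and suggestion[:keep] in document_text

def acceptance_status(checks):
    """checks: list of (delay_seconds, still_present) probes taken after insertion.
    Returns the longest delay at which the suggestion was still in the buffer,
    or None if it was never observed as kept."""
    kept = [delay for delay, present in checks if present]
    return max(kept) if kept else None
```

The point of probing at several delays rather than once is to distinguish "accepted then immediately deleted" from "accepted and actually kept", which is exactly the kind of subtlety such heuristics have to handle.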
Alright, this is going to be interesting. Pass me the popcorn. It's definitely a welcome move in any case.
Indeed, this is going to be "interesting" in educational settings... I guess that will at least push toward richer assignments.