Interesting questions and state of the art around model "unlearning". This has become important due to the opacity of the data sets used to train some models. It will also matter in any case for managing models over time.
Nice article. It's a good reminder that the benchmarks used to evaluate generative AI systems have many caveats.
An interesting essay. It leans toward "assistants are useful for simple coding tasks" and is more critical when it comes to writing. I find the stance original: yes, they can help with some writing tasks, but if you look at the writing tasks you can expedite this way... if you're willing to expedite them, isn't that a sign they were providing little value in the first place? Is the solution the assistant, or changing the way you work? Such tasks might just be busy work in disguise.
Interesting take on why people see more in LLM-based systems than there really is. The parallels with the tricks of psychics and mentalists are well thought out.
This is how it should be done: everything needed to reproduce the results is included. That kind of openness is necessary to gain insight into how such models work internally.
All the good reasons why productivity gains from code assistants are massively overestimated. Use them if you like, but with a light touch.
An AI-supercharged scam. I guess we'll see more of those.
You should be mindful of the dependencies you add, even more so when the dependency name was proposed by a coding assistant; a quick check against the package registry, as sketched below, goes a long way.
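As a minimal illustration of that kind of check, here is a sketch (assuming Python and the PyPI JSON API; the misspelled package name at the end is made up) that verifies a proposed name actually corresponds to a published project before you install it:

```python
# Minimal sketch: verify that a dependency name actually exists on PyPI
# before adding it, e.g. when a coding assistant suggested it.
# Standard library only; other registries (npm, crates.io...) have
# similar endpoints.
import urllib.request
import urllib.error

def exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a published PyPI project."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # A 404 means no such project: possibly a hallucinated name.
        return False

if __name__ == "__main__":
    print(exists_on_pypi("requests"))        # True: well-known project
    print(exists_on_pypi("requets-helper"))  # hypothetical typo-like name
```

Of course existence alone isn't enough (typosquatters publish real packages too), but it filters out outright hallucinated names before they reach your build.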
Excellent work improving Llama execution speed on CPU. It probably uses all the tricks of the trade for accelerating this kind of compute kernel; one classic example is sketched below.
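To give an idea of what such tricks look like, here is a sketch of loop tiling (cache blocking), a standard technique in fast matmul kernels. I don't know whether the article uses exactly this; real kernels do it in C with SIMD, and this pure-Python version (arbitrary sizes, arbitrary tile size) only illustrates the access pattern, not real performance:

```python
# Sketch of loop tiling for matrix multiply: process small blocks so the
# working set stays in cache instead of streaming whole rows/columns.

def matmul_tiled(a, b, n, tile=32):
    """Multiply two n x n matrices (lists of lists of floats), tile by tile."""
    c = [[0.0] * n for _ in range(n)]
    for i0 in range(0, n, tile):
        for k0 in range(0, n, tile):
            for j0 in range(0, n, tile):
                # All accesses below touch only one tile of a, b and c.
                for i in range(i0, min(i0 + tile, n)):
                    for k in range(k0, min(k0 + tile, n)):
                        aik = a[i][k]  # hoist the loop-invariant load
                        for j in range(j0, min(j0 + tile, n)):
                            c[i][j] += aik * b[k][j]
    return c
```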
Interesting study on the impact generative AI can have on people's performance in business settings. There are a few nuggets in there. In particular, for anything related to problem solving, people do worse with generative AI tools than without, and even worse when they have been trained on the tools (probably due to overconfidence). Where it does seem to help is in more creativity-related tasks... at the individual level; at the collective level, creativity decreases due to homogenization. Definitely things to keep in mind.
Very interesting piece. The chances that it is another bubble are high. It's currently surviving on a lot of wishful thinking and hypotheticals. This really feels like borrowed time... I wonder what useful bits will remain once it all collapses. Coding assistants are very likely to survive. Clearly there could be interesting uses in a more sober approach.
The price hike on generative AI services will happen sooner or later. That much is clear.
Definitely this. It might ultimately impact the abstraction levels accessible to us for coding... but the skills will still be needed. Natural language is too ambiguous for the task.
This is one of the main problems with using those generative models as currently provided. It's time for legislators to step up; we can't let a couple of players hoard energy and water for themselves.
Might be an interesting trick to reduce the computation and energy costs of large language models. Let's see if it gets replicated and generalized; this is a single short paper, not peer reviewed anywhere as far as I can tell.
Interesting paper attempting to prove that hallucinations are unavoidable in those models. It is well balanced though, and explains why it's not necessarily a bad thing in theory. In my opinion, the problem is the marketing talk around those models making grand claims or denying the phenomenon.
Interesting paper evaluating a Life Cycle Assessment (LCA) method to estimate the power consumption and environmental impact of generative AI services. This is illustrated on a single service; hopefully we'll see more such assessments.
Very nice progress on this type of architecture. It's definitely needed, in part because it lowers inference costs quite a lot. It's also nice to see it released under the Apache 2 license with a documented training set.
This is an interesting move; we'll see if this certification gets any traction.
The tone pointing the finger at "open models" is wrong, but the research is interesting. It proves models can be poisoned (open or not), so traceability and secure supply chains will become very important when using large language models.