Interesting take on why people see more in LLM-based systems than there really is. The parallels with psychics' and mentalists' tricks are well thought out.
This is how it should be done: everything needed to reproduce the results is provided, which is necessary to gain insight into how such models work internally.
All the good reasons why the productivity gains from code assistants are massively overestimated. Use them if you like, but with a light touch.
An AI-supercharged scam. I guess we'll see more of those.
You should be mindful of the dependencies you add, even more so when the name of the dependency was proposed by a coding assistant.
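As a minimal sketch of the kind of cheap sanity check I mean (assuming a Python project; the suspicious package name below is made up for illustration), you can at least confirm a suggested name actually exists on PyPI before adding it:

```python
# Minimal sketch: verify that an assistant-suggested package actually exists on PyPI
# before adding it as a dependency. Package names below are illustrative only.
import json
import urllib.error
import urllib.request


def check_on_pypi(name: str) -> bool:
    """Return True if the package has at least one release published on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url) as response:
            data = json.load(response)
    except urllib.error.HTTPError:
        # A 404 is a strong hint the name was hallucinated (or is squatting bait).
        return False
    releases = data.get("releases", {})
    print(f"{name}: {len(releases)} release(s), latest version {data['info']['version']}")
    return bool(releases)


if __name__ == "__main__":
    # "requests" is a well-known real package; the second name stands in for
    # whatever a coding assistant just suggested and deserves a closer look.
    for candidate in ("requests", "some-assistant-suggested-package"):
        print(candidate, "->", check_on_pypi(candidate))
```

Of course this only tells you the name exists; checking the repository, the maintainers and the download history is still on you.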
Excellent work on improving Llama execution speed on CPU. It probably uses all the tricks of the trade to accelerate this kind of compute kernel.
Interesting study on the impact generative AI can have on people's performance in business settings. There are a few nuggets in there. In particular, for anything related to problem solving, people do worse with generative AI tools than without, and even worse when they've been trained (probably due to overconfidence). Where it seems to help is with more creativity-related tasks... at the individual level, that is; at the collective level, creativity decreases due to homogenization. Definitely things to keep in mind.
Very interesting piece. The chances that this is another bubble are high. It's currently surviving on a lot of wishful thinking and hypotheticals. This really feels like borrowed time... I wonder what useful pieces will remain once it all collapses. Coding assistants are very likely to survive; clearly there could be interesting uses in a more sober approach.
The price hike on generative AI services will happen sooner or later. That much is clear.
Definitely this. It might ultimately impact the abstraction levels accessible to us for coding... but the skills will still be needed. Natural language is too ambiguous for the task.
This is one of the main problems with using those generative models as currently provided. It's time for legislators to step up; we can't let a couple of players hoard energy and water for themselves.
Might be an interesting trick to reduce the computation and energy costs of large language models. Let's see if it gets replicated and generalized; this is a single short paper, not peer reviewed anywhere as far as I can tell.
Interesting paper attempting to prove that hallucinations are unavoidable in those models. It is well balanced though, and explains why it's not necessarily a bad thing in theory. In my opinion, the problem is the marketing talk around those models making grand claims or denying the phenomenon.
Interesting paper evaluating a Life Cycle Assessment (LCA) method to estimate the power consumption and environmental impact of generative AI services. It is illustrated on a single service; hopefully we'll see more such assessments.
Very nice progress on this type of architecture. It's definitely needed, in part because it lowers inference costs quite a lot. It's also nice to see it released under the Apache 2 license and with its training set documented.
This is an interesting move; we'll see if this certification gets any traction.
The tone pointing at "open models" is wrong, but the research is interesting. It still proves that models can be poisoned (open or not), so traceability and secure supply chains will become very important when using large language models.
Unsurprising move: they claim it's for the betterment of mankind, but in practice it's mostly about capturing as much market share as possible.
When bug bounty programs meet LLM hallucinations... developer time is wasted.
It was only a matter of time until we'd see such lawsuits appear. We'll see where this one goes.