Using the right metaphors would definitely help the conversation around AI in our industry. This proposal is an interesting one.
More signs of the current bubble being about to burst?
Now this is an interesting paper. Neurosymbolic approaches are finally starting to go somewhere. This is definitely helped by the NLP abilities of LLMs (which should be used only for that). The natural-language-to-Prolog idea makes sense; now it needs to be more reliable. I'd be curious to know how often the multiple-try path is exercised (the paper doesn't quite focus on that). More research is required, obviously.
Now the impact seems clear, and this is mostly bad news. This reduces the production of public knowledge, so everyone loses. Ironically, it also means less public knowledge available to train new models. At some point their only avenue to fine-tune their models will be user profiling, which will be private... I have a hard time seeing how we won't end up stuck with yet another surveillance apparatus providing access to models running on outdated knowledge. This will lock so many behaviors and decisions in place.
Finally a path forward for logic programming? An opportunity to evolve beyond Prolog and its variants? Good food for thought.
Of course I recommend reading the actual research paper. This article is a good summary of the consequences though. LLMs definitely can't be trusted with formal reasoning, including basic maths. This is a flaw in the way they are built; the path forward is likely merging symbolic and sub-symbolic approaches.
Indeed, we should stop listening to such people, who are basically pushing fantasies in order to raise more money.
OK, this paper piqued my curiosity. The limitations of the experiments make me wonder whether some threshold effects aren't being ignored. Still, this is a good indication that the question is worth pursuing further.
How shocking! This was all hype? Not surprised, since we've seen the referenced papers before, but put together like this, it makes things really clear.
The arms race is still ongoing at a furious pace. Still wondering how messy it will be when this bubble bursts.
Doxxing will get easier and easier. Con men are likely paying attention.
Good article about the ethical implications of using AI in systems. I like the distinction between assistive and automated. It's not perfect, as it underestimates the "asleep at the wheel" effect, but this is a good starting point.
If you run the numbers, we actually can't afford this kind of generative AI arms race. It's completely unsustainable, both for training and during use...
This is a short article summarizing a research paper at the surface level. It is clearly the last nail in the coffin for the grand marketing claims around generative AI. Of course, I recommend reading the actual research paper (link at the end), but if you prefer the very short form, here it is. It's clearly time to go back to the initial goals of the AI field: understanding cognition. The latest industrial trends too often confuse the map with the territory.
Or why we shouldn't trust marketing surveys... they definitely confuse perception with actual results. Worse, they do it on purpose.
Unsurprisingly, the productivity gains announced for coding assistants have been greatly exaggerated. There might be cases of strong gains, but it's still unclear in which niches they will materialize.
Maybe extrapolating a bit more than it should. Still, this points to worrying uses of AI-generated images.
I definitely agree with this. I'm sick of the grand claims around what is essentially a parlor trick. Could we tone down the marketing enough that we can properly think about making useful products again?
They're trying a comeback... of course they added layers of security to pretend it's all solved and shiny. They totally ignore the social implications, or whether something like this even needs to be done at all. At least one can remove it... for now...
People are putting LLM-related features out there too hastily for my taste. At the very least, they should keep the security and safety implications in mind.