Kind of unsurprising, right? I mean, LinkedIn is clearly a deformed version of reality where people write like corporate drones most of the time. It was only a matter of time until robot-generated content became prevalent there; it's just harder to spot since even humans don't behave genuinely there.
Indeed, we'll have to relearn "internet hygiene"; the web is changing quickly now that we've prematurely unleashed LLM content on it.
Excellent post showing all the nuances of AI skepticism. Can you find which category you fall into? I definitely match several of them.
Let's hope security teams don't get saturated with low-quality security reports like this...
Another lawsuit making progress against OpenAI and their shady practices.
More shady practices as they try to save themselves. Let's hope it doesn't work.
The water problem is obviously hard to ignore. This piece does a good job illustrating how large the impact is.
Good reminder that models shouldn't be used as a service, except maybe for prototyping. This has felt obvious to me since the beginning of this hype cycle... but here we are, with people still falling into the trap today.
More signs of the generative AI companies hitting a plateau...
It shouldn't be a big deal, but it is. Having such a training corpus openly available is one of the big missing pieces for building models.
This is an interesting and balanced view. Also nice to see that local inference is really getting closer. This is mostly a UI problem now.
This is what you get when you build bots that spew text based on statistics, without a proper knowledge base behind them.
More marketing announcement than real research paper. Still, it's nice to see smaller models being optimized to run on mobile devices. This will get interesting when it's all local-first and coupled with symbolic approaches.
This is still an important step with LLMs. Just because the models are huge doesn't mean tokenizers have disappeared, or that you don't need to clean up your data.
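To make that concrete, here's a toy sketch (pure Python; `clean` and `toy_tokenize` are illustrative stand-ins, not any real model's pipeline) of why cleanup still changes what the tokenizer, and thus the model, actually sees:

```python
# Toy illustration: noisy scraped text vs. cleaned text produces
# different token streams, even before any model is involved.
import re
import unicodedata

def clean(text: str) -> str:
    """Minimal cleanup: normalize unicode and collapse whitespace."""
    text = unicodedata.normalize("NFKC", text)
    return re.sub(r"\s+", " ", text).strip()

def toy_tokenize(text: str) -> list[str]:
    """Crude word/punctuation split standing in for a real subword tokenizer."""
    return re.findall(r"\w+|[^\w\s]", text.lower())

raw = "Hello\u00a0\u00a0WORLD\u2026  \n\n(from   a scraped page)"
print(toy_tokenize(raw))         # tokens from the raw, noisy text
print(toy_tokenize(clean(raw)))  # tokens after cleanup
```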
Using the right metaphors will definitely help with the conversation in our industry around AI. This proposal is an interesting one.
More signs that the current bubble is about to burst?
Now this is an interesting paper. Neurosymbolic approaches are starting to go somewhere. This is definitely helped by the NLP abilities of LLMs (which should be used only for that). The natural-language-to-Prolog idea makes sense; now it needs to be more reliable. I'd be curious to know how many times the multiple-try path is exercised (the paper doesn't quite focus on that). More research is required, obviously.
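For illustration, here is a minimal sketch of how such a multiple-try loop could be wired up. `llm_translate` and `prolog_proves` are hypothetical stand-ins (a real system would call a model and a Prolog engine such as SWI), not the paper's actual code:

```python
# Sketch: the LLM only translates NL -> Prolog; a symbolic engine does the
# reasoning; we retry the translation when the program doesn't check out.
import random

def llm_translate(question: str, attempt: int) -> str:
    """Stand-in for an LLM call producing a candidate Prolog program."""
    candidates = [
        "human(socrates). mortal(X) :- human(X). ?- mortal(socrates).",
        "mortal(X) : human(X).",  # malformed on purpose: would fail the check
    ]
    return random.choice(candidates)

def prolog_proves(program: str) -> bool:
    """Stand-in for parsing/running the program in a real Prolog engine."""
    return ":-" in program and "?-" in program  # crude well-formedness check

def solve(question: str, max_tries: int = 5) -> str | None:
    for attempt in range(max_tries):
        program = llm_translate(question, attempt)
        if prolog_proves(program):
            return program  # the engine, not the LLM, established the answer
    return None  # how often we land here is exactly the metric I'd want

print(solve("Is Socrates mortal, given that all humans are mortal?"))
```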
Now the impact seems clear, and it is mostly bad news. This reduces the production of public knowledge, so everyone loses. Ironically, it also means less public knowledge available to train new models. At some point their only avenue to fine-tune their models will be user profiling, which will be private... I have a hard time seeing how we won't end up stuck with another surveillance apparatus providing access to models running on outdated knowledge. This will lock so many behaviors and decisions in place.
Of course I recommend reading the actual research paper. This article is a good summary of the consequences though. LLMs definitely can't be trusted with formal reasoning, including basic maths. This is a flaw in the way they are built; the path forward is likely merging symbolic and sub-symbolic approaches.
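To show what that merging can look like in miniature: route the arithmetic to an exact symbolic evaluator instead of trusting the model's next-token guesses. The evaluator below is my own illustration, not the paper's method:

```python
# Sketch: the model proposes an expression; the symbolic side computes it
# exactly, via the Python AST rather than eval().
import ast
import operator

OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
}

def eval_arithmetic(expr: str) -> float:
    """Exactly evaluate a plain arithmetic expression (+ - * / and parens)."""
    def walk(node: ast.AST) -> float:
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("not a plain arithmetic expression")
    return walk(ast.parse(expr, mode="eval").body)

print(eval_arithmetic("12 * (34 + 56) - 7"))  # 1073, deterministically
```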
Indeed, we should stop listening to such people, who are basically pushing fantasies in order to raise more money.