An unsurprising move: they claim it's for the betterment of mankind, but in practice it's mostly about capturing as much market share as possible.
When bug bounty programs meet LLM hallucinations... developer time is wasted.
It was only a matter of time until we saw such lawsuits appear. We'll see where this one goes.
When underfunded school systems preaching obedience and conformity meet something like large language models, this tips the balance enough that no proper learning can really happen anymore. Time to reform our school systems?
Very interesting paper about the energy footprint of the latest trend in generator models. The conclusion is fairly clear: we should think twice before using them.
Interesting inference engine. The design is clever, with a hybrid CPU-GPU approach to limit both the memory demand on the GPU and the amount of data transfers. The results are very interesting, especially the apparently very limited impact on accuracy, which is surprising.
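To make the hybrid idea concrete, here is a minimal toy sketch, not the linked engine's actual code: all names are hypothetical. It shows the core trade-off, keeping only a budgeted subset of layers resident on the "GPU" and offloading the rest to the "CPU", so that only the small activation vector (not the large weights) crosses the bus between devices.

```python
# Toy sketch of hybrid CPU-GPU layer placement (hypothetical names,
# not the linked project's API). Keeping placements contiguous means
# the activation crosses the CPU<->GPU bus only once per forward pass.
import numpy as np

class Layer:
    def __init__(self, dim, device):
        self.weight = np.random.randn(dim, dim).astype(np.float32)
        self.device = device  # "gpu" or "cpu" placement tag

    def forward(self, x):
        return np.maximum(self.weight @ x, 0.0)  # simple ReLU layer

def place_layers(num_layers, dim, gpu_budget):
    # Greedy placement: the first `gpu_budget` layers stay on the GPU,
    # the remaining layers are offloaded to CPU memory.
    return [Layer(dim, "gpu" if i < gpu_budget else "cpu")
            for i in range(num_layers)]

def run(layers, x):
    transfers = 0
    device = "gpu"  # input assumed to start on the GPU
    for layer in layers:
        if layer.device != device:
            transfers += 1  # activation moves across the bus
            device = layer.device
        x = layer.forward(x)
    return x, transfers

layers = place_layers(num_layers=8, dim=16, gpu_budget=4)
out, transfers = run(layers, np.ones(16, dtype=np.float32))
print(out.shape, transfers)
```

With this contiguous split, only one transfer happens per pass regardless of how many layers are offloaded; a real engine adds weight quantization and overlapped transfers on top of the same idea.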
Here we are... We're really close to crossing into territory where any fiction can pass itself off as reality. The problem is that we'll literally be drowning in such content. The social impact can't be overstated.
Interesting technique to speed up the generation of large language models.
That's a very good question. What will be left once all the hype is gone? Not all bubbles leave something behind... we can hope this one will.
When SEO and generated content meet... it isn't pretty. The amount of good content on the web has shrunk over the past decade; it looks like we're happily crossing another threshold of mediocrity.
The actual dangers of generative AI. Once the web is flooded with generated content, what will happen to knowledge representation and verifiability?
Important and interesting study showing how the new generation of models are driving energy consumption way up. As a developer, do the responsible thing and use smaller, more specific models.
The Large Language Model arms race is still going strong. Models are still mostly hidden behind APIs of course, and this likely consumes lots of energy to run. Results seem interesting, though I suspect they're overstating the "safety" built into all this. Also be wary of the demo videos: they've been reported as heavily edited and misleading...
Definitely one of the worrying aspects of reducing the human labor needed to analyze texts: surveillance is poised to increase thanks to it.
A glimpse into how those generator models can present a real copyright problem... there should be more transparency about the training data sets.
This is clearly an uphill battle. And yes, this is because it's broken by design, it should be opt-in and not opt-out.
Very interesting review, highlighting some notable strengths and weaknesses. It also gives a good idea of the different ways to evaluate such models.
LLM training was biased from the start... and now we have a feedback loop, since people post generated content online which is then used for training again. Expect idiosyncrasies to increase over time.
Now this is a very interesting use of generator models. I find this more exciting than the glorified chatbots.
Interesting experiment. It makes for a very large file but there are a few clever tricks in there.