77 private links
Looks like an interesting tool to run LLMs on your own hardware.
Maybe it's time to make so-called "reinforcement learning from human feedback" actually humane? It's not the first account along those lines in the industry.
Interesting ideas for using large language models. There is a world beyond the chatbot interface and it might bring more value to users and avoid some of the pitfalls of anthropomorphisation.
Nice piece which shows how easy it is to get such models to produce nonsense.
So close... and yet. This is clearly still in uncanny valley territory at times.
This is early research of course but still the results are interesting. Once again, we're much easier to influence than we'd like.
The copyright problem in all this is becoming more and more obvious...
Very good interview. She really points out the main issues. Quite a lot of the current debate is poisoned by simplistic extrapolations based on sci-fi. This distracts everyone from the very real and present problems.
Looks like a promising way to reduce the training cost of large language models.
And yet another set of open source models. This is really democratizing quickly.
Truly open source models are pouring in. This is showing more transparency and I hope it will lead to better uses even though some of the concerns will stay true.
Excellent opinion piece. Sure, "A.I." is a tool, but who is wielding that tool currently? Whose needs is it designed to fulfill? This is currently very much a problem. The comparison with McKinsey, although surprising, is an interesting thought.
Also I appreciate the clarification on the Luddites movement... they were not anti-technology.
Interesting experiment, even though it's still early days for this kind of research and we'd need more such evaluations. They found that it produces mostly insecure code. This is not really surprising in the end: it manipulates language but has no execution model. It can be fixed only by coupling it to some outside system.
OK, this is a pre-print, so take it with a truckload of salt. If further nice results get built on this, it could turn out interesting though. This is a much more intellectually satisfying approach than the current arms race of "let's throw bigger models at the problem". It has the potential of reducing the computational complexity of those models, which is definitely welcome in terms of energy and hardware requirements. Let's wait and see...
This was only a matter of time. It'll be interesting to see how this will unfold. It could turn into lawsuits being built up, it could also mean content producers get a cut down the line... of course, it could be both. Since FOSS code also ends up training those models, I'm even wondering if that could lead to money going back to the authors. We'll see where that goes.
Definitely this! Major FOSS projects should think twice before lending their street cred to such closed systems. They've been produced with dubious ethics and copyright practices, and since they're usable only through APIs, the induced vendor lock-in will be strong.
There's the carbon footprint but of course there's also the water consumption... and with increased droughts this will become more and more of a problem.
This is important. We need truly open generator models. This can't be left in the hands of a few with only API access, especially since they lack basic transparency.
I'm still doubtful about it, but maybe I'm wrong, so here's a counterpoint to my own opinions. Of course this takes a purely productivity standpoint, which overlooks my main concerns with how this is currently deployed and used.
Good reasons to leave indeed. Better host your projects somewhere else.