This is a hard problem to solve, and going multi-modal makes it harder in my opinion.
I agree with this: be precise about what you're using. Blanket terms won't get you anywhere near building proper systems.
Looks like it was a very interesting talk. The situation still needs to be monitored in any case; it's uncertain how those cases will be ruled.
Lays out the ethical problems with the current wave of AI systems very well. They're definitely not neutral tools and currently suffer from major issues.
Now this is a very good article highlighting the pros and cons of large language models for natural language processing tasks. They can help with some things but definitely shouldn't be relied on for longer-term systems.
Now this is actually an interesting and good use of the latest trend in large language models. You can simulate difficult conversations, and getting more practice there can help.
Are we surprised? Not at all...
The level of detail these techniques deliver now... this is very impressive.
Interesting opinion piece about GPT and LLMs. When you ignore the hype and consider the available facts, you can see that it's just another tool and unlikely to replace many people.
Now this could turn out to be interesting. To be confirmed when it gets closer to production (if it does), especially on the power consumption side; I think that will be the deciding factor in making it viable.
It'll be interesting to see where this complaint goes.
Excellent piece from an excellent artist. He really thought this through, and I think he's going in the right direction.
Very comprehensive guide to self-supervised learning (haven't read it all yet). It'll likely become good reference material.
It smells a bit like hypocrisy, doesn't it? On the one hand they claim it can make developers more productive; on the other, they think they shouldn't use it.
Oh, the bad feedback loop this introduces... this clearly poisons the well of AI training when it goes through such platforms.
Looks like an interesting tool for running LLMs on your own hardware.
Maybe it's time to make so-called "reinforcement learning from human feedback" actually humane? It's not the first account along those lines in the industry.
Interesting ideas for using large language models. There is a world beyond the chatbot interface, and it might bring more value to users while avoiding some of the pitfalls of anthropomorphisation.
Nice piece which shows how easy it is to get such models to produce nonsense.
This looks like a bad move. Clearly the fault of Western countries, though, which let things unfold ambiguously regarding copyright... and now Japan is weakening copyright for everyone.