On specialized, formalized domains like this it might lead to something interesting. That said, there's a tension with the fact that it doesn't know when it doesn't know, which could be problematic. I also wonder how it fares compared to computational models like WolframAlpha. In the end, very formal domains like this already have large knowledge bases available.
A good way to get some control back if you want to use an LLM. You can host it locally, and it's free software. Definitely a step in the right direction.
This is a hard problem to solve, and going multi-modal makes it harder in my opinion.
Looks like it was a very interesting talk. The situation still needs to be monitored in any case; it's uncertain how those cases will be ruled.
Lays out the ethical problems with the current trend of AI systems very well. They're definitely not neutral tools, and they currently suffer from major issues.
Now this is a very good article highlighting the pros and cons of large language models for natural language processing tasks. They can help with some things but definitely shouldn't be relied on for longer-term systems.
Now this is actually an interesting and good use of the latest trend in large language models. You can simulate difficult conversations, and getting more experience there can help.
Interesting opinion piece about GPT and LLMs. When you ignore the hype and consider the available facts, you can see it's just another tool, unlikely to replace many people.
Interesting analysis of the current situation around web scraping and intellectual property. It's now mostly dealt with through contract law, which makes it a terrible minefield. Lots of hypocrisy all around too, which doesn't help. GPT and the likes will probably be the next area where cases arise.
It'll be interesting to see where this complaint goes.
It smells a bit like hypocrisy, doesn't it? On one hand they claim it can make developers more productive; on the other, they think they shouldn't use it.
Looks like an interesting tool to run LLMs on your own hardware.
Maybe it's time to make so-called "reinforcement learning from human feedback" actually humane? It's not the first account along those lines in the industry.
Interesting ideas for using large language models. There is a world beyond the chatbot interface, and it might bring more value to users while avoiding some of the pitfalls of anthropomorphisation.
Nice piece which shows how easy it is to get such models to produce nonsense.
So close... and yet. This clearly still lands in the uncanny valley at times.
This is early research, of course, but the results are still interesting. Once again, we're much easier to influence than we'd like.
The copyright problem in all this is becoming more and more obvious...
Very good interview. She really points out the main issues. Quite a lot of the current debate is poisoned by simplistic extrapolations based on sci-fi, which distracts everyone from the very real and present problems.
Looks like a promising way to reduce the training cost of large language models.