So much data trapped in PDFs indeed... Unfortunately, VLMs are still not reliable enough to be unleashed without tight validation of their output.
I like this kind of research as it also says something about our own cognition. The results comparing two models and improving them are fascinating.
Are we surprised? Not really... This kind of struggle was an obvious outcome of the heavy dependencies between the two companies.
Here we go, yet another marketing stunt from OpenAI. You can also tell the pressure is rising, since all of this is still operating at a massive loss.
Sure, it makes generating content faster... but the result is indeed bland and uniform.
Friendly reminder that AI was also supposed to be a field about studying cognition... There are so many things we still don't understand that the whole "make it bigger and it'll be smart" obsession looks like it's creating missed opportunities to understand ourselves better.
This is one of the handful of uses where I'd expect LLMs to shine. It's nice to see some tooling to make it easier.
Early days, but this looks like an interesting solution for democratizing the inference of large models.
I like this paper, it's well balanced. The conclusion says it all: if you're not actively working on reducing the harms, then you might be doing something unethical. It's not just a toy to play with; you have to think about the impacts and actively reduce them.
Interesting research, looking forward to the follow-ups to see how it evolves over time. For sure, the number of issues is still way too high to build trustworthy systems around search and news.
This might be accidental, but it highlights the lack of transparency in how those models are produced. It also means we should get ready for future generations of such models to turn into very subtle propaganda machines. Even if it's accidental for now, I doubt it'll stay that way much longer.
People really need to be careful about the short-term productivity boost... If it kills maintainability in the process, you're trading that short-term gain for a long-term productivity crash.
This is definitely a problem. It's bound to influence how technologies are chosen for software projects.
The security implications of using LLMs are real. The high complexity and low explainability of such models open the door to hiding attacks in plain sight.
This is an interesting way to frame the problem. We can't rely too much on LLMs for computer science problems without losing important skills and hindering learning. This is worth keeping in mind.
Of course, it would be less of a problem if such models were more explainable. That's not the case though, which means they can spew very subtle propaganda. This is bound to become even more of a political power tool.
This clearly points to the UX challenges around LLM use. For some tasks, the user's critical thinking must be fostered, otherwise bad decisions will ensue.
Again it's definitely not useful for everyone... it might even be dangerous for learning.
Be wary of the unproven claims that using LLMs necessarily leads to productivity gains. The impacts might be negative.
When you put the marketing claims aside, the limitations of those models become obvious. This is important: only finding the root cause of those limitations gives us a chance to find a solution to them.