This carries privacy concerns too, even for local models. It all depends on how it's integrated into the system.
This is an important position paper in my opinion. The obsession with the ill-defined "AGI" is overshadowing important problems in the research field that still have to be overcome.
Sourcehut pulled the trigger on their crawler deterrent. Good move, and a good explanation of the reasons too.
Unsurprisingly, Wikimedia is also badly impacted by the LLM crawlers... That puts access to curated knowledge at risk if the trend continues.
I somewhat recognize myself in this piece. Not completely though; I disagree with some of the points... but we share enough baggage that I recognize a fellow traveler.
More details about the impact of the LLM companies acting like vandals... This is clearly widespread, generating work for everyone for nothing.
Those bots are really becoming the scourge of the Internet... Is it really necessary to DDoS every forge out there to build LLMs? And that's not even counting all the other externalities; the end of the article makes it clear: "If blasting CO2 into the air and ruining all of our freshwater and traumatizing cheap laborers and making every sysadmin you know miserable and ripping off code and books and art at scale and ruining our fucking democracy isn’t enough for you to leave this shit alone, what is?"
Interesting piece on why talking about microservices generally leads nowhere. The term is too loosely defined, and we often confuse means with ends.
Simple steps to escape the algorithmic social media circus.
Sure, it makes generating content faster... but the result is indeed bland and uniform.
Another example showing that in such ecosystems you don't really own your device. Seek alternatives!
I'm still baffled that people come up with ideas like this for their businesses... The level of cynicism it takes to build such a startup.
This is an interesting way to frame the problem. We can't rely too heavily on LLMs for computer science problems without losing important skills and hindering learning. Worth keeping in mind.
Alright, this piece is full of vitriol... and I like it. The CES has clearly become a mirror of the absurdities our industry is going through. The vision proposed by a good chunk of the companies is unappealing and lazy.
Of course, it would be less of a problem if explainability were better for such models. Since it isn't, they can spew very subtle propaganda. This is bound to become even more of a political power tool.
Maybe at some point the big providers will get the message and their scrapers will finally respect robots.txt? Let's hope so.
Good perspective on how the generative AI space evolved in 2024. There is good news and more concerning news in there. We'll see what 2025 brings.
OK, this is a nice parable. I admit I enjoyed it.
A good, balanced post on the topic. Maybe we'll finally see a resurgence of real research innovation, and not just mindless scaling at all costs. Reliability will remain the important factor, of course, and it's still hard to crack.
Excellent post showing all the nuances of AI skepticism. Can you find which category you're in? I definitely match several of them.