And yet another reverse proxy to use as a scraper deterrent... It looks like several are popping up every week lately.
Don't underestimate how much skill it takes to make even a stupid crawler...
Despite the marketing speak... it's definitely not there yet. So far, all the attempts at using LLMs for coding larger pieces end up with this kind of messy result. It does help kickstart a project, but things quickly degenerate after that.
More details about the impact of the LLM companies acting like vandals... This is clearly widespread, generating work for everyone for nothing.
Those bots are really becoming the scourge of the Internet... Is it really necessary to DDoS every forge out there to build LLMs? And that's not even counting all the other externalities; the end of the article makes it clear: "If blasting CO2 into the air and ruining all of our freshwater and traumatizing cheap laborers and making every sysadmin you know miserable and ripping off code and books and art at scale and ruining our fucking democracy isn’t enough for you to leave this shit alone, what is?"
People really need to be careful about the short-term productivity boost... If it kills maintainability in the process, you're trading that short-term productivity for collapsing long-term productivity.
This is definitely a problem. It's bound to influence how technologies are chosen on software projects.
The security implications of using LLMs are real. With the high complexity and low explainability of such models, the door is open to hiding attacks in plain sight.
This is an interesting way to frame the problem. We can't rely too much on LLMs for computer science problems without losing important skills and hindering learning. This is to be kept in mind.
Again, it's definitely not useful for everyone... it might even be dangerous for learning.
Be wary of the unproven claims that using LLMs necessarily leads to productivity gains. The impact might even be negative.
This will definitely push even more conservatism toward the existing platforms. More articles mean more training data... The underdogs will then suffer.
The results are unsurprising. They definitely confirm what we expected: the models are good at predicting the past, not so great at solving problems.
Using the right metaphors will definitely help with the conversation in our industry around AI. This proposal is an interesting one.
How shocking! This was all hype? Not surprised, since we've seen the referenced papers before, but put together they make things really clear.
Or why we shouldn't trust marketing surveys... they definitely confuse perception with actual results. Worse, they do it on purpose.
Unsurprisingly, the productivity gains announced for coding assistants have been greatly exaggerated. There might be cases of strong gains, but it's still unclear in which niches this is going to happen.
The creative ways to exfiltrate data from chat systems built with LLMs...
Definitely this. In a world where LLMs were actually accurate and never spat out crappy code, programmers would still be needed. It would just mean spending less time writing and more time investigating and debugging the produced code.
Interesting data point. This is a very specialized experience, but the fact that those systems are somewhat random and slow clearly plays a big part in limiting the productivity you can get from them.