When hype meets groupthink, you can quickly create a broken culture in your organization... We're seeing more examples lately.
I like this piece. In most cases it's the thinking that matters, not the writing. So even if your language is subpar, it's better to write it yourself... unless it's not worth the effort in the first place.
Looks like the productivity gain promises are still mostly hypothetical. Except on specific, limited tasks of course, but that doesn't cover a whole job. Also, when there is a gain, it's apparently not the workers who benefit from it.
This indeed carries privacy concerns too, even for local models. It all depends on how they're integrated into the system.
This is an important position paper in my opinion. The whole obsession with the ill-defined "AGI" is overshadowing important problems in the research field that still have to be overcome.
This move is really unsurprising... It's bound to become another channel for advertisements to try to cover the costs of running it all.
Of course it also helps against DDoS attacks... which tells you something about the state of AI scrapers I guess.
They were warned about this leak by GitGuardian weeks ago... and did nothing. For people handling such sensitive data, their security practices are preposterous.
Interesting point of view... what makes a tool, really?
A look back at the limitations of deep learning in the context of computer vision. We're better at avoiding overfitting nowadays, but the shallowness of the available data is still a problem.
The metaphors are... funny. But still, I think there's a good lesson in there. If you use generative AI tools for development purposes, don't lose sight of the struggle needed to learn and improve. Otherwise you won't be able to properly drive those tools after a while.
Clearly, to really benefit from LLMs, there's quite a bit of thinking about UX design still to be done. It can't just be chatbots all the way down.
Do they really believe their own lies now? More likely they're trying to manipulate clueless lawmakers at this point. They can't afford to let the circus end.
I will admit it... I laughed. And that's just one business risk among many.
This is a question I have been pondering for a while... what will be left when the generative AI bubble bursts. And indeed it won't be the models, as they won't age well. The conclusion of this article sent a chill down my spine: what remains is indeed likely to be infrastructure for a bigger surveillance apparatus.
Sourcehut pulled the trigger on their crawler deterrent. Good move, good explanations of the reasons too.
This matches what I see. For some tasks these can be helpful tools, but they definitely need a strong hand to steer them in the right direction, and to know when not to use them. If you're a junior, you'd better invest in the craft rather than in such tools. If you have experience, use them with care and keep the ethical conundrum in mind.
Don't confuse scenarios with predictions... Big climate improvements from AI tomorrow in exchange for lots of emissions today is just a belief. There's nothing to back up the claim that it would really happen.
I hope people using Grok enjoy their queries... Because they come with direct environmental and health consequences.
This is very interesting research. It confirms that LLMs can't be trusted on anything they say about their own inference. The example about simple maths is particularly striking: the actual inference process and what the model outputs when asked about that process are completely different.
Now for the topic dearest to my heart: it looks like there's some form of concept graph hiding in there which is reused across languages. We don't know yet whether a particular language influences that graph. I don't expect the current research to explore that question, but I'm looking forward to someone tackling it.