Or why you can't trust large language models for any fact- or knowledge-related task...
Its limits and biases are well documented. But what about the ideologies of the people behind these models? What can that tell us about their aims? Questions worth exploring, in my opinion.
Interesting work on tracing back the source material used by a generative model. This is definitely necessary as well.
A few interesting points in there. There's too much hype, though, and important points are glossed over; we'd all benefit from them being explored more actively.
Very nice summary of the architecture behind the latest generation of transformer models. Long but comprehensive, a good way to start diving into the topic.
Such generative models are getting more and more accessible. You can play with them using a few lines of Python now.
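To give an idea of what "a few lines of Python" looks like, here is a minimal sketch. It assumes the Hugging Face `transformers` library and the small `gpt2` checkpoint, both my own picks for illustration, not anything named in the linked article:

```python
# Minimal sketch: text generation in a few lines of Python.
# Assumes `transformers` and `torch` are installed
# (pip install transformers torch); `gpt2` is an arbitrary
# small checkpoint chosen here to keep the download light.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Large language models are", max_new_tokens=20)
# The pipeline returns a list of dicts; the generated text
# includes the prompt by default.
print(result[0]["generated_text"])
```

Swapping `"gpt2"` for a larger model name is all it takes to try other checkpoints, hardware permitting.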
The human labor behind AI training is still ongoing. It is clearly gruesome work, outsourced to other countries... ignoring the price for a minute, this is also a good way to hide its consequences, I guess.
Very good piece about that dangerous moment in the creation of the latest large language models. We're about to drown in misinformation; can we get out of it?
A few compelling arguments about the impact of the latest strain of generative neural networks. The consequences for the eroding trust in online content are clear. I'm less convinced by some of the longer-term predictions this piece makes, though.
There are a few reasons to worry about the latest strain of generative neural networks. One of them is the trust we can place in newly generated content. Another is indeed the impact on our culture. There has already been a trend of focusing on what sells rather than what's truly novel or avant-garde; this could well push it further. Will we drown in mediocre content?
Interesting tool for the automatic transcription and translation of videos using off-the-shelf components. Seems to work nicely.
Don't worry, so-called AI isn't going to take away your job. But do worry: this marks the end of trusting any picture or text you see in the media. Everything needs to be challenged, even more so now.
Interesting reverse-engineering job on Copilot's client side to get a better idea of which information it actually feeds to the model. A couple of funny tricks to prepare the prompt are involved. Obviously some telemetry is involved as well, again with interesting heuristics to try to figure out whether the user kept the suggestion or not.
At last, a good, balanced post about generative AI and programming. It doesn't overestimate the abilities of the latest large language models and moves away from the "I'll lose my job, developers will be replaced with AI" stance.
A few months old, but a good piece to put things in perspective after the recent craze around large language models in general and GPT in particular. Noteworthy is the "wishful mnemonics" phrase it mentions and how such wording shapes the debate. Let's have less talk about AIs and more about SALAMIs, please?
Nice article; it gives a few clues to get a grasp of how GPT-3 works.
Words of caution regarding the use of language models for producing code. This can derail fairly quickly, and earlier than you'd expect... without you noticing it.
Hard not to have at least some ChatGPT-related news this week. Plenty of impressive experiments out there... I think this one is hands down the most impressive. I'm biased, though: I like linguistics-inspired work.
Interesting food for thought. It's not necessarily easy to see it used in as many fields as the article claims, and it's maybe a bit too much on the techno-solutionist side at times. Still, it sounds like an interesting guideline and a path worth exploring.
Very early days for research on this topic, and the sample is rather small still. That said, the results are interesting: there seem to be a few biases inherent to the use of such an assistant, and there's also a clear link between the AI's agency and the quality of what gets produced. We'll see if those results hold up in larger studies.