Such generative models are getting more and more accessible; you can now play with them in just a few lines of Python.
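A minimal sketch of what those few lines can look like, assuming the Hugging Face transformers library and the small public gpt2 checkpoint (my choice of example here, not something prescribed by any of the linked pieces):

```python
# Minimal text-generation example with an off-the-shelf model.
# Assumes: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Generative models are becoming", max_new_tokens=30)
print(result[0]["generated_text"])
```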
The human labor behind AI training is still ongoing. It's clearly gruesome work, shipped off to other countries... setting the cost aside for a minute, I guess that's also a good way to hide its consequences.
A very good piece about this dangerous moment in the development of the latest large language models. We're about to drown in misinformation; can we get out of it?
A few compelling arguments about the impact of the latest strain of generative neural networks. The consequences for our eroding trust in online content are clear. I'm less convinced by some of the longer-term predictions this piece makes, though.
There are a few reasons to worry about the latest strain of generative neural networks. One is the trust we can place in newly generated content. Another is the impact on our culture: there has already been a trend of focusing on what sells rather than on what's truly novel or avant-garde, and this could well push it further. Will we drown in mediocre content?
Interesting tool for the automatic transcription and translation of videos using off-the-shelf components. It seems to work nicely.
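For a rough idea of what "off-the-shelf components" can mean here, a sketch using OpenAI's open-source whisper package; that's an assumption on my part, the linked tool may well use different pieces, and the file name is just a placeholder:

```python
# Transcribe and translate speech from a media file into English.
# Assumes: pip install openai-whisper (ffmpeg must be available on the system)
import whisper

model = whisper.load_model("base")
# task="translate" makes Whisper output English regardless of the source language
result = model.transcribe("talk.mp4", task="translate")
print(result["text"])
```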
Don't worry, so-called AI isn't going to take away your jobs. Do worry, though: this marks the end of trusting any picture or text you see in the media. Everything needs to be challenged, even more so now.
Interesting reverse engineering of Copilot's client side to get a better idea of which information it actually feeds to the model. A couple of funny tricks are involved in preparing the prompt. Obviously some telemetry is involved as well, again with interesting heuristics to figure out whether the user kept the suggestion or not.
At last, a good, balanced post about generative AI and programming. It doesn't overestimate the abilities of the latest crop of large language models, and it moves away from the "I'll lose my job, developers will be replaced by AI" stance.
A few months old, but a good piece to put things in perspective after the recent craze around large language models in general and GPT in particular. The "wishful mnemonics" phrase it mentions, and how that framing shapes the debate, is noteworthy. Let's have less talk about AIs and more about SALAMIs, please?
Nice article; it gives a few clues to help get a grasp on how GPT-3 works.
Words of caution regarding the use of language models for producing code. Things can derail fairly quickly, and earlier than you'd expect... without you noticing it.
Hard not to have at least some ChatGPT-related news this week. There are plenty of impressive experiments out there... I think this one is hands down the most impressive. I'm biased, though: I like linguistics-inspired work.
Interesting food for thought. It's not easy to see it applied in as many fields as the article claims, and it leans a bit too far toward techno-solutionism at times. Still, it sounds like an interesting guideline and a path worth exploring.
It's very early days for research on this topic, and the sample is still rather small. That said, the results are interesting: there seem to be a few biases inherent to the use of such an assistant, and there's also a clear link between the AI's agency and the quality of what gets produced. We'll see if those results hold up in larger studies.
Not much new in this article regarding Stable Diffusion. That being said, the section about ethics is spot on. This is the toughest part to grapple with in the latest batch of generative AIs. Legal traceability of the training set might end up being required (even though I doubt it will happen).
It was only a matter of time, I guess... this is sad.
I'd be lying if I said I wasn't slightly fascinated by what you can do with Stable Diffusion...
Interesting first exploration of a tiny part of the dataset. If you read closely, it shows some of the potential biases in there.
The comeback of the Dead Internet Theory? It's indeed getting more and more plausible by the minute, thanks to the newer wave of generative AI text and art.