Not much new in this article regarding Stable Diffusion. That being said, the section about ethics is spot on. This is the toughest part to grapple with regarding the latest batch of generative AIs. Legal traceability of the training set might end up being required (even though I doubt it will happen).
It was only a matter of time I guess... this is sad.
I'd be lying if I said I'm not slightly fascinated by what you can do with Stable Diffusion...
Interesting first exploration of a tiny part of the data set. If you read closely, this shows some of the potential biases in there.
The comeback of the Dead Internet Theory? Getting more probable by the minute thanks to the newest wave of generative AI text and art.
OK, now this is out, and having it in open source is potentially a big deal.
Mysterious event... That's the problem with such a centralized and homogeneous system: when something fails, it quickly fails at scale.
This sounds like it could be a game changer for some uses, including robotics and XR. I'll need to look at this more deeply.
At last we might wake up from the "deep learning alone can solve every problem" fantasy. Looking forward to seeing human interactions and symbol manipulation come back into the AI field. Finding ways to pick and mix approaches is essential; otherwise the field is bound to stagnate and lead to industrial hazards.
Makes the point very well on why general AI or good conversational bots are nowhere in sight with neural networks. It's just freaking hard to push general knowledge into those networks... There's also the limit of not having a body and not feeling pain. That is indeed still a requirement to learn things and give them meaning.
Interesting article about how badly we design AI systems, which makes them very vulnerable to the quality of the data they receive. That's in part why I'd expect knowledge representation to somehow come back into fashion: it has some potential to lead to better explainability in models.
Or why we should keep an eye on transfer learning. This is one of the promising ways to make the machine learning process more efficient. It might come with its own challenges in methodological complexity though: it'll likely be easy to do it wrong and to notice that too late.
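To make the idea concrete, here's a minimal sketch of the transfer-learning recipe (everything here is my own toy illustration, not from the linked article): keep a "pretrained" feature extractor frozen and train only a small new head on the target task, so far fewer parameters need to be fitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pretrained backbone: a frozen projection that we
# pretend was learned on a large source task. It is never updated below.
W_backbone = rng.normal(size=(2, 16))

def features(x):
    # Frozen feature extractor: only forward passes, no gradient updates.
    return np.tanh(x @ W_backbone)

# Small target-task dataset: two Gaussian blobs, labels 0 and 1.
x = np.concatenate([rng.normal(-1.0, 0.5, size=(50, 2)),
                    rng.normal(+1.0, 0.5, size=(50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

# Only the new head (a single logistic layer) is trained.
w_head = np.zeros(16)
b_head = 0.0
lr = 0.5
phi = features(x)  # computed once, since the backbone is frozen
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(phi @ w_head + b_head)))  # sigmoid
    grad = p - y  # gradient of the cross-entropy loss w.r.t. the logits
    w_head -= lr * phi.T @ grad / len(y)
    b_head -= lr * grad.mean()

accuracy = ((phi @ w_head + b_head > 0) == (y == 1)).mean()
```

The methodological pitfall hinted at above shows up even here: if the frozen features don't match the target domain, the head can't compensate, and nothing in the training loop warns you about it.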
A reminder of why machine learning is currently so power hungry. It's in fact (still) highly inefficient.
Very nice piece about one of those important pieces of simulation used on screen. Multi-agent systems for the win on that one. ;-)
Good reminder of the limits of machine learning. There's no clear path from machine learning to more general intelligence. This article doesn't account for emergence effects though. They are a possibility, but that's a long stretch from what's been exhibited so far: it's "just" statistics.
Looks like an interesting system to recognize bird sounds in the wild. I'll definitely test it.
I think this is the best analysis of GitHub Copilot so far. Clearly, using it in production today carries lots of risks. It might improve in the future, but only marginally and likely with quite some effort. Not sure it'll pass the threshold to be anything more than a fun toy.
After Nvidia's denoiser for raytraced images, here is a neural network approach from Intel to make game output photorealistic. Using the G-buffers as input is particularly clever.
Looks like an interesting engine for offline intent recognition.
Very nice tutorial that explores a good set of common biases. It also shows that it's not that simple to get rid of them.