It looks like analog chips for neural network workloads are finally on the verge of becoming reality. This would cut power consumption by an order of magnitude, and hopefully more later on. Very early days for this new attempt; let's see if it lives up to its promises.
Kind of unsurprising, right? LinkedIn is clearly a distorted version of reality where people write like corporate drones most of the time. It was only a matter of time until robot-generated content became prevalent there; it's just harder to spot, since even humans don't behave genuinely there.
Indeed, we'll have to relearn "internet hygiene"; it is changing quickly now that we've prematurely unleashed LLM content on the open web.
Excellent post showing all the nuances of AI skepticism. Can you find which category you fall into? I definitely match several of them.
Looks like a nice model for producing 3D assets. It should speed up artists' work on background elements a bit, though I guess manual adjustments will still be needed in the end.
Let's hope security teams don't get saturated with low-quality security reports like this one...
Another lawsuit making progress against OpenAI and their shady practices.
Nice vision model. It strikes an interesting balance between performance and memory consumption, and it seems feasible to run cheaply and on premises.
More shady practices to try to save themselves. Let's hope it won't work.
The water problem is obviously hard to ignore. This piece does a good job illustrating how large the impact is.
Good reminder that models shouldn't be used as a service except maybe for prototyping. This has felt obvious to me since the beginning of this hype cycle... but here we are, people are still falling into the trap today.
More signs of the generative AI companies hitting a plateau...
It shouldn't be a big deal, but it is. Having such a training corpus openly available is one of the big missing pieces for building models.
This is an interesting and balanced view. Also nice to see that local inference is really getting closer. This is mostly a UI problem now.
Like me, you find the Open Source AI Definition weak on the training data information side? You'd be right, and there's a reason for it... it's probably hiding quite a bit of open washing for the larger models. This is a good explanation of the motives and consequences.
I definitely like the approach of having vector search in the RDBMS directly. That's one less moving part, and less complexity at the application level to keep everything in sync. In this case it's a Postgres extension.
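As a rough sketch of what "one less moving part" looks like in practice, assuming a pgvector-style extension (the post doesn't name the extension; the table, column, and dimension below are illustrative, not from the post):

```sql
-- Enable the vector extension (name assumed; pgvector is one such extension)
CREATE EXTENSION IF NOT EXISTS vector;

-- Embeddings live next to the rows they describe:
-- no separate vector store to keep synchronized
CREATE TABLE documents (
    id        bigserial PRIMARY KEY,
    body      text NOT NULL,
    embedding vector(3)  -- dimension depends on your embedding model
);

-- Nearest-neighbour query by cosine distance, entirely inside Postgres
SELECT id, body
FROM documents
ORDER BY embedding <=> '[0.1, 0.2, 0.3]'::vector
LIMIT 5;
```

The appeal is that inserts, updates, and similarity queries go through the same transactional store, so the application never has to dual-write to a database and a separate vector index.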
Nice initiative from the OSI. It is timely; such a definition was surely needed. The data information part seems fairly weak though... you could surely build a system that doesn't respect the four freedoms that way.
This is what you get when you make bots spew text based on statistics without a proper knowledge base behind it.
More marketing announcement than real research paper. Still, it's nice to see smaller models being optimized to run on mobile devices. It will get interesting once it's all local-first and coupled with symbolic approaches.
This is still an important step with LLMs. Just because the models are huge doesn't mean tokenizers disappeared or that you don't need to clean up your data.