Or how the current neural network obsession is poisoning scientific fields. There was already a reproducibility crisis going on, and it looks like it's been getting worse. The incentives are clearly wrong, and it shows.
This is definitely an interesting declarative language. Looking forward to more such neurosymbolic approaches.
I mostly agree with this piece. There's still plenty of room for optimization, so we might see a temporary drop in the energy consumption of those systems. That said, in the longer term, energy consumption is indeed the main lever for improving their performance. It can only get us so far, so new techniques will be needed. That's why my position is that we'll come back to symbolic approaches at some point; there's a clear challenge in interfacing both worlds.
It looks like analog chips for neural network workloads are on the verge of finally becoming reality. This would reduce consumption by an order of magnitude, and hopefully more later on. Very early days for this new attempt; let's see if it delivers on its promises.
Of course I recommend reading the actual research paper. This article is a good summary of the consequences though. LLMs definitely can't be trusted with formal reasoning, including basic maths. This is a flaw in the way they are built; the path forward is likely merging symbolic and sub-symbolic approaches.
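To illustrate the symbolic side of that merger, here's a minimal sketch (using sympy, my choice of example, not the paper's) of what reliable formal manipulation looks like: results are exact and deterministic rather than statistical.

```python
# Exact, deterministic arithmetic and algebra via sympy: no floating-point
# drift and no token-level guessing, unlike an LLM doing maths.
from sympy import Integer, Rational, symbols, expand, simplify

# Exact arithmetic.
assert Integer(12345) * Integer(67890) == Integer(838102050)
assert Rational(1, 3) + Rational(1, 6) == Rational(1, 2)

# Exact algebra: identities hold by construction, not by pattern matching.
x, y = symbols("x y")
assert expand((x + y) ** 2) == x**2 + 2 * x * y + y**2
assert simplify((x**2 - y**2) / (x - y) - (x + y)) == 0
```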
Exciting new type of neural network. There are limits to using them at large scale for now. Still, they have very interesting properties, like interpretability. They also tend to match the performance of traditional neural networks at a smaller size.
Interesting how much extra performance you can squeeze out of the GPU by going back to how the hardware actually works.
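The linked piece is GPU-specific; as an illustration of the same principle that I can demo in a few lines, here's the CPU-side equivalent with numpy (my own example, not from the article): traverse memory the way the hardware lays it out and you go faster, fight it and you don't.

```python
# Row-major (C order) arrays store each row contiguously. Summing rows walks
# memory sequentially (cache-friendly); summing columns strides across it,
# hitting a different cache line for every element.
import time
import numpy as np

a = np.random.rand(4096, 4096)  # C order by default

t0 = time.perf_counter()
s_rows = sum(row.sum() for row in a)    # rows are contiguous
t1 = time.perf_counter()
s_cols = sum(col.sum() for col in a.T)  # columns are strided views
t2 = time.perf_counter()

print(f"contiguous: {t1 - t0:.3f}s, strided: {t2 - t1:.3f}s")
```

On a GPU the analogous concern is memory coalescing across threads, but the lesson is the same: the access pattern has to match the hardware.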
Friendly reminder that the neural networks we use are very much artificial. They're also far from working like biological ones do.
Now this could turn out to be interesting. To be confirmed when it gets closer to production (if it does), especially on the power consumption side. That will be the important factor in making this viable, I think.
Looks like a promising way to reduce the training cost of large language models.
And yet another set of open source models. This is really democratizing quickly.
Truly open source models are pouring in. This brings more transparency, and I hope it will lead to better uses, even though some of the concerns will remain.
There's the carbon footprint but of course there's also the water consumption... and with increased droughts this will become more and more of a problem.
The climate constraints are currently not compatible with the ongoing arms race around large neural network models. The training seems kinda OK, but the inference... and it's currently just rolled out as shiny gadgets. This really needs to be rethought.
Interesting strategy, it shows a fascinating blind spot in the AIs typically used for Go nowadays. It kind of hints at the fact that neural networks abstract knowledge much less than advertised.
A few compelling arguments about the impact of the latest strain of generative neural networks. The consequences for the erosion of trust in online content are clear. I'm less convinced about some of the longer term predictions this piece makes though.
Interesting tool for the automatic transcription and translation of videos using off-the-shelf components. Seems to work nicely.
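The tool's internals aren't detailed here, so as an assumption on my part, this is roughly what the off-the-shelf route looks like with openai-whisper (file name hypothetical; ffmpeg is required for video input):

```python
# Minimal sketch of off-the-shelf transcription and translation using
# openai-whisper (pip install openai-whisper). The same model handles both
# tasks; "talk.mp4" is a placeholder file name.
import whisper

model = whisper.load_model("base")

# Transcription in the original language.
transcript = model.transcribe("talk.mp4")
print(transcript["text"])

# Translation to English.
translated = model.transcribe("talk.mp4", task="translate")
print(translated["text"])
```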
At last we might wake up from the "deep learning alone can solve every problem" fantasy. Looking forward to seeing human interactions and symbol manipulation come back into the AI field. Finding ways to pick and mix approaches is essential. Otherwise the field is bound to stagnate and lead to industrial hazards.
Makes the point very well on why general AI or good conversational bots are nowhere in sight with neural networks. It's just freaking hard to push general knowledge into those networks... Also, there's the limit of not having a body and not feeling pain. Embodiment is indeed still a requirement to learn things and give them meaning.
After Nvidia's denoiser for raytraced images, here is a neural network approach from Intel to make game output photorealistic. Using the G-buffers as input is particularly clever.
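A minimal sketch of the G-buffer idea as I understand it, not Intel's actual architecture: the renderer's auxiliary buffers are stacked with the rendered frame as extra input channels, so the network gets scene structure for free instead of having to infer it from pixels alone.

```python
# Toy image-to-image network conditioned on G-buffers (my own sketch).
# The rendered frame plus albedo, normals and depth are concatenated
# along the channel dimension before going through the conv stack.
import torch
import torch.nn as nn

class GBufferEnhancer(nn.Module):
    def __init__(self):
        super().__init__()
        # 3 (RGB frame) + 3 (albedo) + 3 (normals) + 1 (depth) = 10 channels in.
        self.net = nn.Sequential(
            nn.Conv2d(10, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),  # enhanced RGB out
        )

    def forward(self, frame, albedo, normals, depth):
        x = torch.cat([frame, albedo, normals, depth], dim=1)
        return self.net(x)

# Hypothetical shapes for one 256x256 frame.
model = GBufferEnhancer()
out = model(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256),
            torch.rand(1, 3, 256, 256), torch.rand(1, 1, 256, 256))
print(out.shape)  # torch.Size([1, 3, 256, 256])
```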