63 private links
A good reminder that "push it to the GPU and it'll be faster" isn't always true. Moving a workload to the GPU usually means rethinking how it's done, not just porting it.
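A quick back-of-the-envelope check makes the point: for low-arithmetic-intensity kernels, just copying the data over PCIe can cost more than the math saves. The bandwidth and throughput figures below are illustrative assumptions, not measurements of any particular card.

```python
# Rough offload-cost estimate. All hardware numbers are illustrative
# assumptions (roughly PCIe-gen4-class bandwidth, a ~10 TFLOP/s GPU).

def transfer_time_s(bytes_moved: int, pcie_gb_s: float = 16.0) -> float:
    """Time to copy data over the bus at an assumed ~16 GB/s."""
    return bytes_moved / (pcie_gb_s * 1e9)

def compute_time_s(flops: int, gpu_gflop_s: float = 10_000.0) -> float:
    """Time to do the arithmetic at an assumed 10 TFLOP/s."""
    return flops / (gpu_gflop_s * 1e9)

# Example: element-wise add of two 100M-element float32 vectors.
n = 100_000_000
bytes_moved = 3 * n * 4   # two inputs in, one result out, 4 bytes each
flops = n                 # one add per element

t_copy = transfer_time_s(bytes_moved)
t_math = compute_time_s(flops)
print(f"copy: {t_copy * 1e3:.1f} ms, math: {t_math * 1e3:.3f} ms")
# The copy dwarfs the arithmetic: offloading this kernel alone is a loss
# unless the data already lives on the GPU or the work is fused with more.
```

Which is exactly why "rethink the workload" (keep data resident, fuse kernels, raise arithmetic intensity) matters more than the raw port.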
More studies are needed to confirm this; it's a single data point. Still, it looks like Rust could take the HPC world by storm once it gets a better GPGPU story (still early days there).
And now we have all the pieces to run CUDA code in the browser. How would you like your cryptominer? Joking aside, this opens up interesting use cases.
Looks like an interesting library for building portable GPU compute workloads; it cleverly leverages WebGPU.
It was only a matter of time before we saw a move like this. It doesn't bode well for projects like ZLUDA.
Looks like a nice list of resources for diving deeper into WebGPU.
Interesting to see WebGPU bindings for Python.
Interesting project; it could boost AMD GPU adoption for CUDA workloads.
A good reminder that even though GPUs tend to be faster, the added complexity and price might not be worth it in the end.
Nice primer on how computation works on GPUs. Goes a bit into the architecture as well. Good starting point.
Very thorough paper on GPU optimization techniques. A useful reference or starting point for digging deeper, and it should help you pick the right technique for your particular problem.
Now this starts to get interesting: a first example of plugging symbolic and sub-symbolic approaches together in the wild. It highlights some limitations of this (admittedly rough) approach; we'll see how far it can go before a finer one is needed.