4011 shaares
74 private links
Very interesting work on using sparse neural networks for faster inference. Sparsifying a large model lets it run with a smaller CPU and memory footprint than the original dense model while retaining result quality.
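To make the idea concrete, here is a minimal sketch (not the linked work's actual method) of one common sparsification approach, magnitude pruning: small weights are zeroed out, and the remaining weights are stored in a compressed sparse format so both storage and the matrix-vector product scale with the number of nonzeros.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512)).astype(np.float32)  # a dense weight matrix

# Magnitude pruning: zero out the 90% smallest-magnitude weights.
threshold = np.quantile(np.abs(W), 0.90)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0).astype(np.float32)

# Store the pruned matrix in CSR form so the zeros cost nothing.
W_sparse = sparse.csr_matrix(W_pruned)

x = rng.standard_normal(512).astype(np.float32)
dense_out = W_pruned @ x
sparse_out = W_sparse @ x  # same result, ~10x fewer multiply-adds

dense_bytes = W.nbytes
sparse_bytes = (W_sparse.data.nbytes + W_sparse.indices.nbytes
                + W_sparse.indptr.nbytes)
print(f"dense: {dense_bytes} B, sparse: {sparse_bytes} B")
```

In practice the pruned model is usually fine-tuned afterwards to recover accuracy, and real speedups depend on the runtime having kernels that exploit the sparsity pattern.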