Indeed, innovation is far from being a linear process. It's actually messy; the breakthroughs have already happened by the time we describe them, and we only make them look linear after the fact.
This is indeed one of the big issues in the computer science research community. It's also important in fields relying on simulations... which is almost all scientific fields nowadays. Peer reviewing papers is well practiced, but peer reviewing the software is another story entirely. It'd require some investment in research... but that's not where we're headed at all.
Definitely fun research. Let's not be fooled, though: it also has practical uses.
A paper showing that social media algorithms foster political polarization and societal division. Who knew?? Sarcasm aside, the real value of the paper is showing that by modifying those algorithms we could quickly have positive effects. Most of the participants didn't even notice the change in how they perceived others.
Excellent news! It is long overdue that such organisations switch to open access.
Clearly needs further exploration. I'd like to see it submitted to a peer-reviewed journal, but maybe that will come. Still, it's nice to see people trying new approaches. It's a breath of fresh air. I like it when there is actual research rather than hype. Hopefully the days of "scale it up and magic will happen" are numbered.
I had a few moments like this in my life. I definitely recommend it. I've never been more productive than when isolated in the mountains with only books, notebooks and pens.
I'm happy to see I'm actually very much aligned with one of the "Attention Is All You Need" co-authors. The current industry trend of "just scale the transformer architecture" is indeed stifling innovation and actual research. That said, I find it ironic that he talks about freedom to explore... well, this is what public labs used to be about, but we decided to drastically reduce their funding and replace it with competition between startups. It's no surprise we end up with very myopic views on problems.
Interesting stuff, very rich; I think I'll have to come back to it. It gives good clues and ideas for metrics to look at when evaluating a team's output. Some of the findings confirm hunches, which is welcome. It also shows that measuring productivity remains a messy business: there are so many factors influencing it in one way or another.
Interesting approach to gauging how accurate a profiler is, with some results from the Java ecosystem, so now you know which profiler to pick there.
We can expect more misleading papers to be published by the big LLM providers. Don't fall into the trap; wait for actual peer-reviewed papers from academia. Unsurprisingly, the results there aren't as good.
An important essay in my opinion. It is a good reminder of what the core drive of scientific research is about.
A nice little survey of what academia already had to say about TDD a few years ago. The outcome seems mostly positive.
ETH Zurich spearheading an effort for more ethical and cleaner open models. That's good research; I'm looking forward to the results.
There are indeed fields where this matters a lot. It is far from being an easy problem to solve though.
I'm not fond of the metaphor used here, which adds quite a bit of noise. Still, this article contains interesting ideas for pushing R&D initiatives forward. Definitely needed to improve any kind of organisation.
We already had reproducibility issues in science. With models that allow producing hundreds of "novel" results in a single paper, how can we keep up with checking that all the produced data is correct? This is a real challenge.
Interesting research to determine how models relate to each other. This becomes especially important as the use of synthetic data increases.
Interesting research; it gives a few hints at building tools to bring more transparency to the ideologies pushed by models. They're not unbiased, that much we know, so characterising the biases is important.
Interesting new proof on the relationship between P and PSPACE. Let's see where this leads.