New parallel-programming research may help make our computers faster
Computer chips have stopped getting faster: The regular performance improvements we’ve come to expect are now the result of chipmakers’ adding more cores, or processing units, to their chips, rather than increasing their clock speed.
In theory, doubling the number of cores doubles a chip’s processing power, but splitting computations up so that they run efficiently in parallel isn’t easy. On the other hand, say a trio of computer scientists from MIT, Israel’s Technion, and Microsoft Research, neither is it as hard as many had feared.
In a paper to be presented in May, CSAIL’s Nir Shavit and two other researchers demonstrate a new analytic technique suggesting that, in a wide range of real-world cases, so-called “lock-free” algorithms, the ones commercial developers use most frequently, actually perform at the same level as the “wait-free” algorithms that had previously been relegated to the realm of theoretical computer science.