Until recently, semiconductor manufacturers could depend on a predictable rate of improvement in their underlying implementation technologies. This progress enabled large improvements in the capacity and performance of what were essentially legacy architectures. The academic architecture community helped; few papers described or evaluated revolutionary approaches to computing. The software industry also benefited from this approach, since old programs could be run on these evolutionary systems, which provided backward compatibility at every step. This ability to improve the performance of legacy architectures has now ended, due to limits on frequency scaling, power consumption, and design complexity. Semiconductor makers will continue to put more transistors on a chip of a given size every year, since Moore's law has not yet run out, but we will have to use these transistors in new ways. This change will have profound effects on the way computers are built and on the software they will run. It will require a level of innovation and cooperation between hardware and software architects that we have not seen for many years. In this talk, I will describe the origin and nature of the barriers we now face, and speculate on some ways that we could avoid or overcome the limitations they impose.