The engineering of computer systems is distinguished by a long-standing tradition of building on quicksand. Even the most venerable and critical systems have a history of serious bugs and security vulnerabilities. Human fallibility continues to prevail.
Computer-checked mathematical proofs of software correctness have emerged as a promising method to rule out large classes of bugs. However, the appropriate notion of correctness for a computer-systems component is exceedingly difficult to specify correctly in isolation, and verifying adjacent components separately does not rule out bugs arising from their interactions. Therefore, I argue for (1) centering systems-verification efforts around interface specifications within a proof assistant, (2) proving both clients and implementations of an interface, and (3) using these results to prove an integrated-correctness theorem stated without reference to the internal interface.
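The three-part methodology can be pictured in miniature as a hedged Lean sketch (the names `CounterIface`, `natCounter`, and the toy counter spec are purely illustrative, not drawn from the case studies): an interface specification, an implementation proof against it, a client proof parameterized over it, and an integrated theorem whose statement never mentions the interface.

```lean
-- Hypothetical interface specification: a counter with an abstract state,
-- operations, and the correctness obligations clients may rely on.
structure CounterIface where
  State : Type
  init  : State
  incr  : State → State
  value : State → Nat
  spec_init : value init = 0
  spec_incr : ∀ s, value (incr s) = value s + 1

-- Implementation proof: a concrete Nat-backed counter meets the interface.
def natCounter : CounterIface :=
  { State := Nat, init := 0, incr := Nat.succ, value := id
  , spec_init := rfl
  , spec_incr := fun _ => rfl }

-- Client proof: stated against any implementation of the interface,
-- using only the interface's specification lemmas.
theorem client_twice (I : CounterIface) :
    I.value (I.incr (I.incr I.init)) = 2 := by
  rw [I.spec_incr, I.spec_incr, I.spec_init]

-- Integrated-correctness theorem: about the concrete system alone,
-- with no reference to the internal interface in its statement.
theorem integrated :
    natCounter.value (natCounter.incr (natCounter.incr natCounter.init)) = 2 :=
  client_twice natCounter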
I present a serious (several-year, several-person) exploration of what formally proven computer-systems development would look like if this practice were standard, culminating in precedent-setting case studies involving embedded implementations of networked software and elliptic-curve cryptography code. Whole-system correctness theorems, spanning from application behavior down to hardware designs, are proven by instantiating correctness proofs of compilers, translation validators, processor implementations, and mathematical theories. The assurance benefits of proof assistants, and their potential to scale modular development, motivate improving both their performance and their support for tooling-quality implementations of component-specific proof methods.