Interesting: DJB and Peter Schwabe have a demo program for verifying the correctness of finite-field arithmetic in Curve25519 code. It's almost fully automatic and only needs minimal annotations. It simply extracts the calculation you are trying to do, generates a SageMath script, and calls SageMath to cross-check the math. Not real "formal verification", but certainly better than staring at the screen with pen and paper...
Perhaps it's really possible to hack the code and use it to verify my PDP-11 assembly code. http://gfverif.cryptojedi.org/index.shtml
gfverif's way of doing quick-and-dirty verification of existing C code is somewhere between genius and madness: define a new "mockup" integer type and use C++ to overload operators like plus and minus; instead of doing calculations, they generate SageMath scripts of the equivalent operations in algebra. This way, the entire C algorithm can be automatically extracted without writing any parser or compiler... The only problem: data-dependent conditional branches are not supported, but you're not supposed to do that in crypto anyway... 🤣
Bignum addition code for PDP-11 has been formally verified by CBMC automatically in 0.2 seconds, showing its equivalence to the reference implementation. It also verified that a carry cannot occur after a specific addition, magic! Now trying to see whether it can handle multiplication... I don't expect success; the SMT solver will probably get stuck somewhere...
#Why3 looks like an interesting platform. Unfortunately, little documentation exists for WhyML as a standalone programming language. Unless you already have ML/OCaml/F# experience, all you can do is ask "why" 3 times. It comes with a handy IDE that lists all the goals and options you need to prove a program, but something is missing: for newcomers, the first goal should always be "close the IDE, go learn Standard ML for 2 months before you come back..."
GCC optimizes a 1-bit left shift on a 17-bit integer to...
addl %eax, %eax
andl $0x1FFFE, %eax
The unused top 15 bits are cleared, which makes sense. But why does it clear the 0th bit too? It's guaranteed to be 0 unless my understanding of a computer is horribly wrong. Spent 20 minutes on this question... I was overthinking it; it's probably just a meaningless compiler artifact.
@amiloradovsky I thought about it too: the 17-bit integer is simulated using a 32-bit word, so it should never carry out, and ADD ignores the carry flag anyway. Perhaps it's useful on other architectures. Still, I can't think of a computer that doesn't allow you to ignore the carry flag, other than the 6502.