Last year, the Rust unsafe code guidelines strike team was founded, and I am on it. :-) So, finally, just one year later, this post is my take on what the purpose of that team is. Warning: This post may contain opinions. You have been warned.
When are Optimizations Legal?
Currently, we have a pretty good understanding of what the intended behavior of safe Rust is. That is, there is general agreement (modulo some bugs) about the order in which operations are to be performed, and about what each individual operation does.
For unsafe Rust, this is very different. There are multiple reasons for this. One particularly nasty one is related to compiler optimizations that rustc/LLVM either already perform today, or want to perform some day in the future. Consider the following simple function:
We would like the compiler to be able to reorder these two stores without changing program behavior. After all, `x` and `y` are both mutable references, which the type system ensures are unique pointers, so they cannot possibly alias (i.e., the memory ranges they point to cannot overlap).
After this transformation, the code contains `*x = 3; *x`, which can be further optimized to `*x = 3; 3`, saving a memory access.
Compilers are able to get a lot of performance out of code by figuring out which operations are independent of each other, and then moving code around to either eliminate certain operations entirely (like the load of `x`), or make code faster to execute with clever scheduling that exploits the parallelism in modern CPU cores (this is per-core parallelism we are talking about, not the parallelism arising from having multiple cores).
Optimizations like reordering stores are based on the compiler making assumptions about the code, and then using these assumptions to justify a program transformation. In this case, the assumption is that the two stores never affect the same address. Usually, if a compiler wants to make such an assumption, it has to do some static analysis to prove that this assumption actually holds in any possible program execution. After all, if there is any execution for which the assumption does not hold, the optimization may be incorrect – it could change what the program does!
Now, it turns out that it is often really hard to obtain precise aliasing information. This could be the end of the game: No alias information, no way to verify our assumptions, no optimizations.
However, it turns out that compiler writers consider these optimizations important enough that they came up with an alternative solution: Instead of having the compiler verify such assumptions, they declared the programmer responsible.
For example, the C standard says that memory accesses have to happen with the right “effective type”: If data was stored with a `float` pointer, it must not be read with an `int` pointer. If you violate this rule, your program has undefined behavior (UB) – which is to say, the program may do anything when executed.
Now, if the compiler wants to make a transformation like reordering the two stores in our example, it can argue as follows:
1. In any particular execution of the given function, either `x` and `y` alias or they do not.
2. If they do not alias, reordering the two writes is just fine.
3. However, if they do alias, that would violate the effective type restriction, which would make the code UB – so the compiler is permitted to do anything. In particular, it is permitted to reorder the two writes.

As we have seen, in both of the possible cases, the reordering is correct; the compiler is thus free to perform the transformation.
Undefined behavior moves the burden of proving the correctness of this optimization from the compiler to the programmer.
In the example above, what the “effective type” rule really means is that every single memory read of a `float` comes with a proof obligation: The programmer has to show that the last write to this memory actually happened through a `float` pointer (barring some exceptions around union and character pointers).
Similarly, the (in)famous rule that signed integer overflow is undefined behavior means that every single arithmetic operation on signed integers comes with the proof obligation that this operation will never, ever, overflow.
The compiler performs its optimization under the assumption that the programmer actually went through the effort and convinced themselves that this is the case.
Considering that the compiler can only be so smart, this is a great way to justify optimizations that would otherwise be difficult or impossible to perform.
Unfortunately, it is often not easy to say whether a program has undefined behavior or not – after all, such an analysis being difficult is the entire reason compilers have to rely on UB to perform their optimizations.
Furthermore, while C compilers are happy to exploit the fact that a particular program has UB, they do not provide a way to test that executing a program does not trigger UB.
It also turns out that programmers’ intuition often does not match what the compiler does, which leads to miscompilations (in the eye of the programmer) and sometimes to security vulnerabilities.
As a consequence, UB has a pretty bad reputation.
(The fact that most people will not expect an innocent-looking `+` operation to come with subtle proof obligations concerning overflow probably also plays a role in this. In other words, this is also an API design problem.)
There are various sanitizers that watch a program while it is being executed and try to detect UB, but they are not able to catch all possible sources of UB. Part of the reason this is so hard is that the standard has not been written with such sanitizers in mind. This recent blog post discusses the situation in much more detail. For example, for the effective type restriction (also sometimes called “strict aliasing” or “type-based alias analysis”) we discussed above, the mitigation – the way to check or otherwise make sure your programs are not affected – is to turn off optimizations that rely on this. That is not very satisfying.
Undefined Behavior in Rust
Coming back to Rust, where are we at?
Safe Rust is free from UB, but we still have to worry about unsafe Rust.
For example, what if unsafe code crafts two aliasing mutable references (something that is prevented in safe Rust) and passes them to our `simple` function? This violates the assumptions we made when we reordered the two writes.
If we want to permit this optimization (which we do!), we have to argue why it cannot change program behavior.
It should be forbidden for unsafe Rust code to pass aliasing pointers to `simple`; doing so should result in UB.
So we have to come up with rules for when Rust code is UB.
This is what the unsafe code guidelines strike team set out to do.
We could of course just copy what C does, but I hope I convinced you that this is not a great solution. When defining UB for Rust, I hope we can do better than C. I think we should strive for programmers’ intuition agreeing with the standard and the compiler on what the rules are. If that means we have to be a little more conservative around our optimizations, that seems to be a price worth paying for more confidence in the compiled program.
I also think that tooling to detect UB is of paramount importance, and can help shape intuition and maybe even permit us to be less conservative. To this end, the specification should be written in a way that makes such tooling feasible. In fact, specifying a dynamic UB checker is a very good way to specify UB! Such a specification would describe the additional state that is needed at run-time to then check at every operation whether we are running into UB. It is with such considerations in mind that I have previously written about miri as an executable specification.
Coming up next on this channel: During my internship, I am working on such a specification. My ideas are now concrete enough that I can write down a draft, which I will share with the world to see what people think about it.
Update: Writing down has happened.
Update: Clarified “Shifting Responsibility”.