May 23, 2017 • Internship, Rust

How to Specify Program (Undefined) Behavior?

This summer, I have been given the awesome opportunity of spending three months in the Mozilla offices in Portland, working on Rust. I am extremely grateful to Mozilla for providing me with this opportunity; hopefully I can give something back by making some hands-on contributions to the Rust ecosystem.

Update: This post was originally titled “Day 1 of My Mozilla Internship, or: How to Specify Program Behavior?” but I feel that did not really reflect its content very well. /Update

Today is day 1 of my internship – at least if you start counting at 0, which of course we all do. I meant to write this yesterday, but the morning was filled with bureaucracy, and in the afternoon I got so jetlagged that I was essentially sleeping with my eyes open. (But don’t tell Aaron, my manager, or Niko, my mentor – I was of course supposed to be working. ;)

So, what concretely will I be doing for the next three months? Given my prior posts and the fact that I am on the unsafe code guidelines strike team (which I totally forgot to announce here… I’m not good at this blogging thing, am I?), it should not be surprising that my main project is going to be related to unsafe code. More specifically, we want to figure out ways to specify what unsafe code is and is not allowed to do (i.e., when its behavior is well-defined, and when it is undefined). Ultimately, the way I think this should be specified (and, luckily, my new bosses agree) is by defining, for every possible Rust (or rather, MIR) program one could write, what that program is going to do when it is executed.

Now, one could go all committee and initiate a huge process involving drafting a standard and writing a 700-page document in English prose, but experience shows that this is not a good way to go: English prose is ambiguous, and it is very easy to forget about some odd combination of corner cases which then ends up not being described by the standard. One could also go all math and write down a beautifully complicated definition that rigorously defines everything. That would be a huge amount of work, and in the end, virtually nobody would be able to read such a specification. (Which doesn’t mean I think this shouldn’t also happen; such a definition is indeed needed to perform formal proofs – but at this point, we are not yet concerned with proofs.)

We are going to do neither of these. Instead, the goal is to have an executable specification for MIR – in other words, an interpreter. To understand an interpreter, one just has to read code, so this should be much more accessible to programmers than mathematical definitions. At the same time, if we forget to cover a case, that will be blatantly obvious – in fact, most of the time, the compiler’s exhaustiveness checking will catch it. And finally, an interpreter can be used for testing: we can actually check whether some particular piece of unsafe code has undefined behavior or not. This will be very important while we are still developing the specification (to check whether our intuition for what should and should not be allowed matches the interpreter), but it will also be extremely useful later for authors of unsafe code who want to check whether they are violating the rules we are going to put in place.
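To make the exhaustiveness point concrete, here is a minimal sketch in Rust – the `UnOp` enum and `eval_unop` function below are made up for this post and are not miri’s actual code – of how the compiler forces an interpreter to handle every case:

```rust
// A made-up, miniature "interpreter" fragment (not miri's real code). If the
// operation enum ever grows a new variant, the `match` below stops being
// exhaustive and the compiler refuses to build the interpreter until the new
// case is handled -- a forgotten case in the specification cannot go unnoticed.

enum UnOp {
    Neg,
    Not,
}

fn eval_unop(op: UnOp, v: i64) -> i64 {
    match op {
        UnOp::Neg => -v, // arithmetic negation
        UnOp::Not => !v, // bitwise negation
    }
}

fn main() {
    assert_eq!(eval_unop(UnOp::Neg, 5), -5);
    assert_eq!(eval_unop(UnOp::Not, 0), -1);
}
```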

Luckily, such an interpreter already exists: miri! That’s great, because it means I do not have to write an interpreter from scratch, defining what a stack is and how to perform integer operations and whatnot. Instead, I can concentrate on the interesting questions coming up in the unsafe code guidelines: What exactly do the guarantees “mutable borrows don’t have aliases” and “the pointees of shared borrows are not mutated” mean? How should they be reflected in a semantics of MIR – in miri – such that the desired optimizations are actually legal, while at the same time the unsafe code people write has the desired behavior?
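To give a flavor of what is at stake, here is a small example of my own (not taken from miri or the guidelines): a location is mutated through a raw pointer while a shared borrow of it still exists. Whether the final read is well-defined is exactly the kind of question the semantics has to settle:

```rust
// A hand-written example of code whose status the unsafe code guidelines have
// to decide; it compiles, but whether it has undefined behavior is an open
// question of exactly the kind discussed above.

fn main() {
    let mut x = 42;
    let raw = &mut x as *mut i32; // raw pointer derived from a mutable borrow
    let shared = unsafe { &*raw }; // shared borrow of the same location
    unsafe {
        *raw = 13; // mutate the location while `shared` still exists
    }
    // Does "the pointees of shared borrows are not mutated" rule this out?
    println!("{}", *shared);
}
```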

That’s it for now. I don’t have the answers to these questions yet, but hopefully my work will help get us closer to an answer. I will keep you posted on my progress (or lack thereof), probably on a weekly or bi-weekly basis.

Update: I realized I should probably expand on the parenthetical remark about specifying MIR rather than Rust. The plan is to specify Rust by (a) specifying how Rust code translates to MIR, and (b) specifying MIR itself. This has two advantages. First of all, part (a) is actually already done and implemented in the Rust compiler! We have a complete and unambiguous definition of how Rust code is to be translated to MIR. (Currently, this definition exists only in code; ideally, one day, it will also exist in natural language as part of a specification document.) As a second advantage, MIR is much simpler than Rust: it has way fewer operations, no syntactic sugar, and less redundancy. (In fact, that’s the entire reason MIR was introduced into the compiler.) A specification of the behavior of MIR programs can thus concentrate on the parts that really matter, without getting bogged down in details like how exactly a for loop desugars into lower-level operations.
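For illustration, here is a rough, hand-simplified approximation of that desugaring – the authoritative lowering is the one implemented in the compiler; this sketch only conveys the idea that the surface-level for loop is gone long before we reach MIR:

```rust
// A rough, hand-simplified approximation of how a `for` loop is lowered to
// explicit iterator calls (not the compiler's literal output).

fn main() {
    let v = vec![1, 2, 3];

    // Surface Rust:
    let mut sum_for = 0;
    for x in &v {
        sum_for += x;
    }

    // Roughly what the compiler lowers it to:
    let mut sum_desugared = 0;
    let mut iter = IntoIterator::into_iter(&v);
    loop {
        match Iterator::next(&mut iter) {
            Some(x) => sum_desugared += x,
            None => break,
        }
    }

    assert_eq!(sum_for, sum_desugared);
}
```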

Now, this is not to say that every Rust compiler has to use MIR in its compilation pipeline. It just means that I imagine a specification of Rust consisting of two parts: the Rust-to-MIR translation, and a specification of MIR. If another compiler uses a different implementation strategy, it can still be compliant with the specification; it just has to ensure that Rust programs behave as specified. This is a common approach that can also be found, e.g., in the specification of CPU instruction sets: the specification describes the behavior of a complex instruction as a series of commands in some lower-level language. The CPU does not actually use that language as part of its implementation, but it behaves as if it did, and that’s all that matters. /Update

Comments? Drop me a mail!