Less than a month ago, I announced Stacked Borrows 2. In particular, I hoped that that version would bring us closer to proper support for two-phase borrows. Turns out I was a bit too optimistic! Last week, @Manishearth asked on Zulip why Miri rejected a certain program, and it turned out that the issue was related to two-phase borrows: in combination with interior mutability, behavior wasn’t always what we wanted it to be. So, I went back to the drawing board and tried to adjust Stacked Borrows.
In the end, I decided to give up on “proper” support for two-phase borrows for now, which I explained here. But I also made some tweaks to Stacked Borrows that affect all accesses (not just two-phase borrows), and that’s what this post is about. I am referring to this as “Stacked Borrows 2.1”.
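For readers who have not run into the feature, here is a minimal example of my own (not taken from the post) of the kind of code that two-phase borrows exist to accept: the argument expression reads `v` while the mutable borrow for `push` is already reserved but not yet active.

```rust
fn main() {
    let mut v = vec![1usize, 2, 3];
    // Two-phase borrow: the mutable borrow of `v` taken for `push` is only
    // "activated" after the argument `v.len()` has been evaluated.
    v.push(v.len());
    assert_eq!(v, [1, 2, 3, 3]);
}
```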
Recently, I have significantly updated Stacked Borrows in order to fix some issues with the handling of shared references that were uncovered in the previous version. In this post, I will describe what the new version looks like and how it differs from Stacked Borrows 1. I assume some familiarity with the prior version and will not explain everything from scratch.
Running your unsafe code test suite in Miri has just gotten even easier: Miri is now available as a rustup component.
Huge thanks to @oli-obk and @mati865 who made this happen.
Miri can run your test suite in a very slow, interpreted mode that enables it to test for undefined behavior: if an out-of-bounds array access happens, uninitialized memory gets used the wrong way, or a dangling raw pointer gets dereferenced, Miri will notice and tell you. However, Miri cannot execute all programs, and it also cannot detect all forms of misbehavior. For further information, please check out Miri’s README.
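As a small illustration (my own example, not from the post), here is a program that compiles fine and may even appear to work natively, but that Miri flags because it reads through a dangling raw pointer:

```rust
fn main() {
    let ptr = {
        let v = vec![1u8, 2, 3];
        v.as_ptr()
    }; // `v` is dropped here, so its heap buffer is freed and `ptr` dangles
    // Use-after-free: running this under Miri (e.g. `cargo miri run`) reports
    // the read below as undefined behavior.
    let first = unsafe { *ptr };
    println!("{}", first);
}
```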
Last week, I was in Berlin at the Rust All-Hands 2019. It was great! I will miss nerding out in discussions about type theory and having every question answered by just going to the person who’s the expert in that area, and asking them. In this post, I am summarizing the progress we made in my main areas of interest and the discussions I was involved in—this is obviously just a small slice of all the things that happened.
It is that time of the year again where our feedreaders are filled with people’s visions for what Rust should become in the next year. Coming up with a vision is not exactly my strong suit, but it is probably something I should learn, so here’s my episode of the “Rust 2019” blog post series.
I think in 2019, we should make a coordinated effort to improve the foundations for writing unsafe code. Of course my particular Rust bubble is mostly about unsafe code, as you will know if you have read some of my previous posts – but safety is a core value proposition of Rust, and the entire ecosystem rests on a foundation of crates that make heavy use of unsafe code, so I believe that caring about unsafe code is an important piece of Rust’s overall success.
My internship (“research assistantship”) with Mozilla ended several weeks ago, and this post is a report of the most recent tweaks I made to Miri and Stacked Borrows. Neither project is by any means “done”, of course. However, both have reached a fairly reasonable state, so I felt some kind of closing report made sense. Also, if I ever want to finish my PhD, I’ll have to seriously scale down the amount of time I work on Rust – so at least from my side, things will move more slowly from now on.
In particular, installing Miri and running your test suite in it is now just a single command away! Scroll all the way down if you are not interested in the rest.
Three months ago, I proposed Stacked Borrows as a model for defining what kinds of aliasing are allowed in Rust, and the idea of a validity invariant that has to be maintained by all code at all times. Since then I have been busy implementing both of these, and developed Stacked Borrows further in doing so. This post describes the latest version of Stacked Borrows, and reports my findings from the implementation phase: What worked, what did not, and what remains to be done. There will also be an opportunity for you to help the effort!
This post is a self-contained introduction to Stacked Borrows. Other than historical curiosity and some comparison with my earlier work on Types as Contracts, there is no reason to read the original post at this point.
When talking about the Rust type system in the context of unsafe code, the discussion often revolves around invariants: Properties that must always hold, because the language generally assumes that they do. In fact, an important part of the mission of the Unsafe Code Guidelines strike force is to deepen our understanding of what these invariants are.
However, in my view, there is also more than one invariant, matching the fact that there are (at least) two distinct parties relying on these invariants: The compiler, and (authors of) safely usable code. This came up often enough in recent discussions that I think it is worth writing it down properly once, so I can just link here in the future. :)
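To make the distinction concrete, here is a small sketch of my own (not from the post): the validity invariant of bool is something the compiler itself relies on, while the safety invariant of a library type like the hypothetical Percent below is only relied upon by safe users of its API.

```rust
// Validity invariant (relied upon by the compiler): a `bool` is always 0x00 or 0x01.
// The commented-out transmute would violate it and is undefined behavior:
//     let b: bool = unsafe { std::mem::transmute(3u8) };

// Safety invariant (relied upon by safe code): `Percent` promises a value of at most 100.
// Violating it does not immediately trip the compiler, but it breaks what safe
// callers of this API are allowed to assume.
pub struct Percent(u8);

impl Percent {
    pub fn new(value: u8) -> Option<Percent> {
        if value <= 100 { Some(Percent(value)) } else { None }
    }
    pub fn get(&self) -> u8 {
        self.0 // safe callers may rely on this being at most 100
    }
}

fn main() {
    let p = Percent::new(42).unwrap();
    assert!(p.get() <= 100);
}
```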
In this post, I am proposing “Stacked Borrows”: A set of rules defining which kinds of aliasing are allowed in Rust. This is intended to answer the question of which pointer may be used, when, to perform which kinds of memory accesses.
This is a long-standing open question for many unsafe code authors, and also for compiler authors who want to add more optimizations. The model I am proposing here is by far not the first attempt at giving a definition: it is heavily based on ideas by @arielb1 and @ubsan, and of course takes into account the lessons I learned last year when I took my first stab at defining such a model, dubbed “Types as Contracts”.
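As a rough illustration of the kind of question such a model has to answer (my own sketch, not code from the post): which of the two pointers below may still be used for a write, and at which point does the answer change?

```rust
fn main() {
    let mut x = 42u32;
    let raw = &mut x as *mut u32; // raw pointer derived from a fresh mutable borrow of `x`
    unsafe { *raw = 1 };          // fine: the raw pointer is still "active" for this location
    let r = &mut x;               // creating a new mutable borrow invalidates `raw`
    *r = 2;
    unsafe { *raw = 3 };          // rejected by Stacked Borrows (and reported by Miri)
}
```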
This summer, I am again working on Rust full-time, and again I will work (amongst other things) on a “memory model” for Rust/MIR. However, before I can talk about the ideas I have for this year, I have to finally take the time and dispel the myth that “pointers are simple: they are just integers”. Both parts of this statement are false, at least in languages with unsafe features like Rust or C: Pointers are neither simple nor (just) integers.
I also want to define a piece of the memory model that has to be fixed before we can even talk about some of the more complex parts: Just what is the data that is stored in memory? It is organized in bytes, the minimal addressable unit and the smallest piece that can be accessed (at least on most platforms), but what are the possible values of a byte? Again, it turns out “it’s just an 8-bit integer” does not actually work as the answer.
I hope that by the end of this post, you will agree with me on both of these statements. :)
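To give a taste of the first claim, here is a small Rust sketch of my own: two pointers can have the same address and still not be interchangeable, because each pointer carries provenance that restricts it to the allocation it was derived from.

```rust
fn main() {
    let x = [1u8; 4];
    let y = [2u8; 4];
    let one_past_x = x.as_ptr().wrapping_add(4); // one-past-the-end of `x`
    let y_ptr = y.as_ptr();
    // The two addresses may or may not coincide, depending on how the compiler
    // lays out `x` and `y`...
    if one_past_x as usize == y_ptr as usize {
        // ...but even if they do, reading `y` through `one_past_x` would be
        // undefined behavior: its provenance only covers `x`'s allocation.
        println!("same address, different provenance");
    }
}
```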
For some time now (since the 1.26 release, to be precise), Rust has had very powerful machinery for CTFE, or compile-time function evaluation. Since then, there have been various discussions about which operations should be allowed during CTFE, which checks the compiler should do, how this all relates to promotion, and which kinds of guarantees we should be able to expect around CTFE. This post is my take on those topics, and it should not be surprising that I am going to take a very type-system centric view. Expect something like a structured brain dump, so there are some unanswered questions towards the end as well.
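To ground the terminology with a sketch of my own: CTFE means the compiler itself evaluates Rust code, for example the body of a const fn used to compute a constant. (The example uses loops in const fn, which a current compiler accepts; the feature set was smaller when this post was written.)

```rust
// Evaluated entirely at compile time; the binary only contains the result.
const fn fib(n: u32) -> u32 {
    let mut a = 0u32;
    let mut b = 1u32;
    let mut i = 0;
    while i < n {
        let next = a + b;
        a = b;
        b = next;
        i += 1;
    }
    a
}

const FIB_10: u32 = fib(10);

fn main() {
    assert_eq!(FIB_10, 55);
}
```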
While I was busy doing Rust-unrelated research, RustBelt continued to move and recently found another bug (after a missing impl !Sync that we found previously): it turns out that Arc::get_mut did not perform sufficient synchronization, leading to a data race.
Notice that I am just the messenger here; the bug was actually found by Hai and Jacques-Henri. Still, I’d like to use this opportunity to talk a bit about weak memory, synchronization and data races. This is just a primer; there are tons of resources on the web that go into more detail (for example here).
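In that spirit, here is a minimal example of my own of the kind of synchronization that is at stake: the Release store and the Acquire load establish a happens-before relationship, which is what makes the relaxed write to DATA reliably visible; weaken both orderings to Relaxed and the assertion is no longer guaranteed to hold.

```rust
use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
use std::thread;

static DATA: AtomicU32 = AtomicU32::new(0);
static READY: AtomicBool = AtomicBool::new(false);

fn main() {
    let producer = thread::spawn(|| {
        DATA.store(42, Ordering::Relaxed);
        READY.store(true, Ordering::Release); // publishes the preceding write to DATA
    });

    // The Acquire load synchronizes-with the Release store once it reads `true`.
    while !READY.load(Ordering::Acquire) {
        std::hint::spin_loop();
    }
    assert_eq!(DATA.load(Ordering::Relaxed), 42); // guaranteed to observe 42

    producer.join().unwrap();
}
```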
After my internship ended last year, I naturally became somewhat less active in the Rust community, as I could not work on Rust full-time any more.
Well, for the following months I am going to be back full-time. :)
Thanks to @aturon, I am able to work as a research assistant for Mozilla during this summer (until the end of November).
I don’t really know what a “research assistant” is, but I am going to continue the work on Rust memory models, and hopefully also have some time to make progress on
In my last post, I talked about the new “pinned references” which guarantee that the data at the memory they point to will not, ever, be moved elsewhere. I explained how they enable giving a safe API to code that could previously only be exposed with unsafe, and how one could go about proving such a thing.
This post is about another application of pinned references—another API whose safety relies on the pinning guarantees: Intrusive collections.
It turns out that pinned references can almost be used for this, but not quite.
However, this can be fixed by extending the guarantees provided by pinned references, as suggested by @cramertj.
Recently, a new API for “pinned references” has landed as a new unstable feature in the standard library. The purpose of these references is to express that the data at the memory they point to will not, ever, be moved elsewhere. @withoutboats has written about how this solves a problem in the context of async IO. In this post, we take a closer, more formal look at that API: We are going to take a stab at extending the RustBelt model of types with support for pinning.
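Using today's stabilized std::pin API (the post predates it and discusses an earlier, unstable incarnation), the guarantee can be sketched roughly like this:

```rust
use std::marker::PhantomPinned;
use std::pin::Pin;

struct Entry {
    data: u32,
    _pin: PhantomPinned, // opts out of `Unpin`: once pinned, an `Entry` must never move again
}

fn main() {
    let entry: Pin<Box<Entry>> = Box::pin(Entry { data: 7, _pin: PhantomPinned });
    // `entry` promises that the `Entry` stays at its current address until it is dropped.
    // Because `Entry` is `!Unpin`, safe code cannot obtain a `&mut Entry` from `entry`,
    // and hence cannot `mem::swap` or otherwise move the value out.
    println!("{}", entry.data);
}
```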
This post is about an aspect of the RustBelt paper. Concretely, I’d like to share some of our thoughts on the nature of types and shared references. Let’s see how well this goes. :)
Shared references are an extremely powerful mechanism in the Rust type system, and we’ve had quite some trouble finding a good way of handling them in our formal model. In this post, I will present the model that we came up with.
Last week, I was in Los Angeles at POPL 2018 to present our paper RustBelt: Securing the Foundations of the Rust Programming Language. The talk has been recorded, so if you want to get a 20-minute summary of the paper, go watch it on YouTube. If you have any questions, feel free to join the forum thread linked below :)
Yesterday was the last day of my Rust internship at Mozilla. I can hardly believe that twelve weeks have passed since that post, but my calendar insists. If you want to see what I have done, you can go read the posts that I wrote during the internship. (I almost kept up with the plan of bi-weekly blog posts…) You can also watch the 10-minute video version of this that I made for the internship presentation. The recording is available on Air Mozilla; my talk starts at 2h 14:30.
Some weeks ago, I described Types as Contracts as an approach for how to go about defining Rust’s aliasing-related undefined behavior. One key property of this approach is that it is executable, which means we can actually have a program (in this case, miri) tell us whether some particular Rust test case exhibits undefined behavior or not. I have since then spent most of my time completing and refining this implementation, and running it on miri’s test suite to exercise various bits of the standard library and see whether they are actually following the rules I have been suggesting.
This post is my report on what I found, and how the approach itself changed in response to these findings. It also explains how you can run miri yourself and have it check your code for undefined behavior according to this model. Finally, I discuss some of the things that I would like to look at going forward.
Over the last couple of weeks of my internship, I have been working on a proposal for the “Unsafe Code Guidelines”. I got some very encouraging feedback at the Mozilla All-Hands, and now I’d like to put this proposal out there and start having a discussion.
Last year, the Rust unsafe code guidelines strike team was founded, and I am on it. :-) So, finally, just one year later, this post is my take on what the purpose of that team is.
Just yesterday, we submitted our paper RustBelt: Securing the Foundations of the Rust Programming Language. Quoting from the abstract:
Rust is a new systems programming language that promises to overcome the seemingly fundamental tradeoff between high-level safety guarantees and low-level control over resource management. Unfortunately, none of Rust’s safety claims have been formally proven, and there is good reason to question whether they actually hold. Specifically, Rust employs a strong, ownership-based type system, but then extends the expressive power of this core type system through libraries that internally use unsafe features. In this paper, we give the first formal (and machine-checked) safety proof for a language representing a realistic subset of Rust. Our proof is extensible in the sense that, for each new Rust library that uses unsafe features, we can say what verification condition it must satisfy in order for it to be deemed a safe extension to the language. We have carried out this verification for some of the most important libraries that are used throughout the Rust ecosystem.
A couple of weeks ago, our ongoing effort to formalize Rust’s type system led to us actually discovering a bug in the Rust standard library: MutexGuard implemented Sync in cases where it should not. This could lead to data races in safe programs. Ouch.
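To sketch why this mattered (my own reconstruction, not code from the post): before the fix, MutexGuard's Sync-ness did not require the inner type to be Sync, so a reference to a MutexGuard<Cell<i32>> could be shared with another thread, giving two threads unsynchronized access to the same Cell. With today's corrected bound, the offending line simply does not compile.

```rust
use std::cell::Cell;
use std::sync::Mutex;

#[allow(dead_code)]
fn assert_sync<T: Sync>(_: &T) {}

fn main() {
    let m = Mutex::new(Cell::new(0));
    let guard = m.lock().unwrap();
    // Before the fix, `MutexGuard<Cell<i32>>` was `Sync`, so the line below
    // would have compiled, allowing `&guard` -- and with it the inner `Cell` --
    // to be shared across threads without synchronization.
    // assert_sync(&guard); // today: error, because `Cell<i32>` is not `Sync`
    guard.set(1);
    assert_eq!(guard.get(), 1);
}
```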
It’s now been two weeks since my internship started (two weeks already, can you believe it?). In other words, if I want to post “weekly or bi-weekly” updates, I had better write one today. ;)
As already mentioned, the goal for this internship is to experiment with unsafe code guidelines by implementing them in miri. Before I tackle that, however, it seemed sensible for me to grab some low-hanging fruit in miri just to make myself familiar with the codebase. It turns out I entered “unsafe code guidelines” territory much quicker than expected. This post is about what I found, and it also serves as a nice demonstration of what we envision my unsafe code guidelines workflow to look like.
This summer, I have been given the awesome opportunity of spending three months in the Mozilla offices in Portland, working on Rust. I am extremely grateful that Mozilla is providing this opportunity to me; hopefully I can give something back by making some hands-on contributions to the Rust ecosystem.
This week, I have been at the Paris Rust Meetup. Meeting all sorts of Rust people was great fun, and the Mozilla offices in Paris are absolutely impressive. You should totally check them out if you have a chance.
At that meetup, I gave a short talk about the current status of my formalization of the Rust type system.
I’d like to talk about an important aspect of dealing with unsafe code that still regularly seems to catch people on the wrong foot: When checking unsafe code, it is not enough to just check the contents of every unsafe block.
The “scope” in the title refers to the extent of the code that has to be manually checked for correctness, once unsafe is used. What I am saying is that the scope of unsafe is larger than the unsafe block itself.
It turns out that the underlying reason for this observation is also a nice illustration for the concept of semantic types that comes up in my work on formalizing Rust (or rather, its type system). Finally, this discussion will once again lead us to realize that we rely on our type systems to provide much more than just type safety.
Update (Jan 11th): Clarified the role of privacy; argued why evil is the problem.
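To illustrate the point with a hedged sketch in the spirit of the post (not its exact code): the evil method below contains no unsafe block at all, yet it is exactly what breaks the invariant that the unsafe code relies on, so it is part of what has to be checked.

```rust
pub struct MyVec {
    buf: Box<[u8]>,
    len: usize, // invariant: len <= buf.len()
}

impl MyVec {
    pub fn new() -> MyVec {
        MyVec { buf: vec![0u8; 4].into_boxed_slice(), len: 0 }
    }

    // Entirely safe code -- and yet it violates the invariant documented above.
    pub fn evil(&mut self) {
        self.len += 1;
    }

    pub fn last(&self) -> Option<u8> {
        if self.len == 0 {
            None
        } else {
            // This unsafe code is correct *only if* the whole module maintains
            // the invariant; `evil` silently breaks that assumption.
            Some(unsafe { *self.buf.get_unchecked(self.len - 1) })
        }
    }
}

fn main() {
    let mut v = MyVec::new();
    assert_eq!(v.last(), None);
    for _ in 0..5 {
        v.evil(); // after five calls, len == 5 > buf.len() == 4
    }
    // Calling `v.last()` now would read out of bounds through `get_unchecked`:
    // undefined behavior caused by a function that contains no unsafe code.
    // let _ = v.last();
}
```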
My current research project – and the main topic of my PhD thesis – is about developing a semantic model of the Rust programming language and, most importantly, its type system. Rust is an attempt by Mozilla to find a sweet spot in the design space of programming languages: A language that provides low-level resource management (making it a systems language), is convenient for programmers, and guards against memory errors and thread unsafety. Others have said and written a lot about why we need such a language, so I won’t lose any more words on this. Let me just use this opportunity for a shameless plug: If you are curious and want to learn Rust, check out Rust-101, a hands-on Rust tutorial I wrote. I am going to assume some basic familiarity with Rust in the following.
Why do we want to do research on Rust? First of all, I’m (becoming) a programming languages researcher, and Rust is an interesting new language to study. It’s going to be fun! Honestly, that’s enough of a reason for me. But there are other reasons: It shouldn’t be a surprise that bugs have been found in Rust. There are lots of things that can be done about such bugs – my take on this is that we should try to prove, in a mathematically rigorous way, that no such bugs exist in Rust. This goes hand-in-hand with other approaches like testing, fuzzing and static analysis. However, we (at my research group) are into formalizing things, so that’s what we are going to do as part of the RustBelt research project.
Update: Added link to RustBelt website.