
Flattening ASTs (and Other Compiler Data Structures)

2023-07-02 03:03:13

Normal and flattened ASTs for the expression a * b + c.

Arenas, a.k.a. regions, are everywhere in modern language implementations.
One form of arena is both super simple and surprisingly effective for compilers and compiler-like things.
Maybe because of its simplicity, I haven't seen the basic technique in many compiler courses, or anywhere else in a CS curriculum for that matter.
This post is an introduction to the idea and its many virtues.

Arenas or regions mean many different things to different people, so I'm going to call the specific flavor I'm interested in here data structure flattening.
Flattening uses an arena that only holds one type, so it's actually just a plain array, and you can use array indices where you would otherwise need pointers.
We'll focus here on flattening abstract syntax trees (ASTs), but the idea applies to any pointer-laden data structure.

To learn about flattening, we'll build a basic interpreter twice:
first the normal way, then the flat way.
Follow along with the code in this repository, where you can compare and contrast the two branches.
The key thing to notice is that the changes are pretty small,
but we'll see that they make a microbenchmark go 2.4× faster.
Beyond performance, flattening also brings some ergonomics advantages that I'll outline.

A Normal AST

Let's start with the textbook way to represent an AST. Consider the world's simplest language of arithmetic expressions, where all you can do is apply the four basic binary arithmetic operators to literal integers. Some "programs" you can write in this language include 42, 0 + 14 * 3, and (100 - 16) / 2.

Maybe the clearest way to write the AST for this language would be as an ML type declaration:

type binop = Add | Sub | Mul | Div
type expr = Binary of binop * expr * expr
          | Literal of int

But for this post, we'll use Rust instead. Here are the equivalent types in Rust:

enum BinOp { Add, Sub, Mul, Div }
enum Expr {
    Binary(BinOp, Box<Expr>, Box<Expr>),
    Literal(i64),
}

If you're not a committed Rustacean, Box<Expr> may look a little weird, but that's just Rust for "a plain ol' pointer to an Expr." In C, we'd write Expr* to mean morally the same thing; in Java or Python or OCaml, it would just be Expr because everything is a reference by default.

With the AST in hand, we can write all the textbook parts of a language implementation, like a parser, a pretty-printer, and an interpreter.
All of them are thoroughly unremarkable.
The whole interpreter is just one method on Expr:

fn interp(&self) -> i64 {
    match self {
        Expr::Binary(op, lhs, rhs) => {
            let lhs = lhs.interp();
            let rhs = rhs.interp();
            match op {
                BinOp::Add => lhs.wrapping_add(rhs),
                BinOp::Sub => lhs.wrapping_sub(rhs),
                BinOp::Mul => lhs.wrapping_mul(rhs),
                BinOp::Div => lhs.checked_div(rhs).unwrap_or(0),
            }
        }
        Expr::Literal(num) => *num,
    }
}

My language has keep-on-truckin' semantics; every expression eventually evaluates to an i64, even if it's not the number you wanted.

For extra credit, I also wrote a little random program generator. It's also not all that interesting to look at; it just uses a recursively increasing probability of generating a literal so it eventually terminates. Using fixed PRNG seeds, the random generator enables some easy microbenchmarking. By generating and then immediately evaluating an expression, we can measure the performance of AST manipulation without the I/O costs of parsing and pretty-printing.
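The generator itself isn't listed in the post, but a minimal sketch of the idea might look like this; the tiny `Rng` PRNG and the exact probability schedule are illustrative stand-ins, not the ones used in the repository:

```rust
// Sketch: a random expression generator that terminates by raising the
// probability of producing a literal as the recursion gets deeper.

enum BinOp { Add, Sub, Mul, Div }
enum Expr {
    Binary(BinOp, Box<Expr>, Box<Expr>),
    Literal(i64),
}

// A tiny xorshift PRNG; a fixed seed makes runs reproducible.
struct Rng(u64);
impl Rng {
    fn next(&mut self) -> u64 {
        self.0 ^= self.0 << 13;
        self.0 ^= self.0 >> 7;
        self.0 ^= self.0 << 17;
        self.0
    }
}

fn gen(rng: &mut Rng, depth: u32) -> Expr {
    // The chance of a literal grows with depth, so recursion always
    // bottoms out (it reaches 100% at depth 4 with this schedule).
    if rng.next() % 100 < (20 + depth * 20).min(100) as u64 {
        Expr::Literal((rng.next() % 100) as i64)
    } else {
        let op = match rng.next() % 4 {
            0 => BinOp::Add,
            1 => BinOp::Sub,
            2 => BinOp::Mul,
            _ => BinOp::Div,
        };
        Expr::Binary(op, Box::new(gen(rng, depth + 1)), Box::new(gen(rng, depth + 1)))
    }
}

fn size(e: &Expr) -> usize {
    match e {
        Expr::Binary(_, l, r) => 1 + size(l) + size(r),
        Expr::Literal(_) => 1,
    }
}

fn main() {
    let mut rng = Rng(0x2023_0702); // fixed seed for reproducibility
    let e = gen(&mut rng, 0);
    println!("generated {} nodes", size(&e));
}
```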

You can check out the relevant repo and try it out:

$ echo '(29 * 3) - 9 * 5' | cargo run
$ cargo run gen_interp  # Generate and immediately evaluate a random program.

Flattening the AST

The flattening idea has two pieces:

  • Instead of allocating Expr objects willy-nilly on the heap, we'll pack them into a single, contiguous array.
  • Instead of referring to children via pointers, Exprs will refer to their children using their indices in that array.
The same flat AST from earlier.

Let's look back at the doodle from the top of the post.
We want to use a single Expr array to hold all our AST nodes.
These nodes still need to point to each other; they'll now do that by referring to "earlier" slots in that array.
Plain old integers will take the place of pointers.

If that plan sounds simple, it is; it's probably even simpler than you're thinking.
The main thing we need is an array of Exprs.
I'll use Rust's newtype idiom to declare our arena type, ExprPool, as a shorthand for an Expr vector:

struct ExprPool(Vec<Expr>);

To keep things fancy, we'll also give a name to the plain old integers we'll use to index into an ExprPool:

struct ExprRef(u32);

The idea is that, everywhere we previously used a pointer to an Expr (i.e., Box<Expr> or sometimes &Expr), we'll use an ExprRef instead.
ExprRefs are just 32-bit unsigned integers, but by giving them this special name, we'll avoid confusing them with other u32s.
Most importantly, we need to change the definition of Expr itself:

 enum Expr {
-    Binary(BinOp, Box<Expr>, Box<Expr>),
+    Binary(BinOp, ExprRef, ExprRef),
     Literal(i64),
 }

Next, we need to add utilities to ExprPool to create Exprs (allocation) and look them up (dereferencing).
In my implementation, these little functions are called add and get, and their implementations are extremely boring.
To use them, we need to look over our code and find every place where we create new Exprs or follow a pointer to an Expr.
For example, our parse function used to be a method on Expr, but we'll make it a method on ExprPool instead:
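For reference, here's roughly what those two boring helpers can look like; this is a sketch consistent with the snippets in this post, and the repository's exact code may differ:

```rust
// A sketch of the arena type and its two tiny utilities:
// `add` allocates a node, `get` dereferences an ExprRef.

#[derive(Clone, Copy)]
struct ExprRef(u32);

enum BinOp { Add, Sub, Mul, Div }
enum Expr {
    Binary(BinOp, ExprRef, ExprRef),
    Literal(i64),
}

struct ExprPool(Vec<Expr>);

impl ExprPool {
    // Allocation: push the node and hand back its index.
    fn add(&mut self, expr: Expr) -> ExprRef {
        let idx = self.0.len();
        self.0.push(expr);
        ExprRef(idx as u32)
    }

    // Dereferencing: a plain index lookup in the backing vector.
    fn get(&self, eref: ExprRef) -> &Expr {
        &self.0[eref.0 as usize]
    }
}

fn main() {
    let mut pool = ExprPool(Vec::new());
    let one = pool.add(Expr::Literal(1));
    let two = pool.add(Expr::Literal(2));
    let sum = pool.add(Expr::Binary(BinOp::Add, one, two));
    // The pool now holds three nodes; `sum` refers to the last slot.
    if let Expr::Binary(BinOp::Add, l, r) = pool.get(sum) {
        println!("slot {} adds slots {} and {}", sum.0, l.0, r.0);
    }
}
```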

-fn parse(tree: Pair<Rule>) -> Self {
+fn parse(&mut self, tree: Pair<Rule>) -> ExprRef {

And where we used to return a newly allocated Expr directly, we'll now wrap that in self.add() to return an ExprRef instead.
Here's the match case for constructing a literal expression:

 Rule::number => {
     let num = tree.as_str().parse().unwrap();
-    Expr::Literal(num)
+    self.add(Expr::Literal(num))
 }

Our interpreter gets the same treatment.
It also becomes an ExprPool method, and we have to add self.get() to go from an ExprRef to an Expr we can pattern-match on:

-fn interp(&self) -> i64 {
+fn interp(&self, expr: ExprRef) -> i64 {
-    match self {
+    match self.get(expr) {
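Putting those diffs together, the whole flattened interpreter reads roughly like this (a sketch assembled from the snippets above, not a verbatim copy of the repository):

```rust
#[derive(Clone, Copy)]
struct ExprRef(u32);

enum BinOp { Add, Sub, Mul, Div }
enum Expr {
    Binary(BinOp, ExprRef, ExprRef),
    Literal(i64),
}

struct ExprPool(Vec<Expr>);

impl ExprPool {
    fn get(&self, eref: ExprRef) -> &Expr {
        &self.0[eref.0 as usize]
    }

    // The interpreter is now a method on the pool, taking an ExprRef,
    // instead of being a method on Expr itself.
    fn interp(&self, expr: ExprRef) -> i64 {
        match self.get(expr) {
            Expr::Binary(op, lhs, rhs) => {
                let lhs = self.interp(*lhs);
                let rhs = self.interp(*rhs);
                match op {
                    BinOp::Add => lhs.wrapping_add(rhs),
                    BinOp::Sub => lhs.wrapping_sub(rhs),
                    BinOp::Mul => lhs.wrapping_mul(rhs),
                    BinOp::Div => lhs.checked_div(rhs).unwrap_or(0),
                }
            }
            Expr::Literal(num) => *num,
        }
    }
}

fn main() {
    // (29 * 3) - 9, built bottom-up in the pool.
    let pool = ExprPool(vec![
        Expr::Literal(29),
        Expr::Literal(3),
        Expr::Binary(BinOp::Mul, ExprRef(0), ExprRef(1)),
        Expr::Literal(9),
        Expr::Binary(BinOp::Sub, ExprRef(2), ExprRef(3)),
    ]);
    println!("{}", pool.interp(ExprRef(4))); // prints 78
}
```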

That's about it.
I think it's pretty cool how few changes are required; see for yourself in the complete diff.
You replace Box<Expr> with ExprRef, insert add and get calls in the obvious places, and you've got a flattened version of your code.

However Why?

Flattened ASTs come with a bunch of benefits.
The classic ones most people cite are all about performance:

  1. Locality.
    Allocating normal pointer-based Exprs runs the risk of fragmentation.
    Flattened Exprs are packed together in a contiguous region of memory, which is great for spatial locality.
    Your data caches will work better because Exprs are more likely to share a cache line,
    and even simple prefetchers will do a better job of predicting which Exprs to load before you need them.
    A sufficiently smart memory allocator might achieve the same thing, especially if you allocate the whole AST up front and never add to it, but using a dense array removes all uncertainty.
  2. Smaller references.
    Normal data structures use pointers for references; on modern architectures, these are always 64 bits.
    After flattening, you can use smaller integers: if you're pretty sure you'll never need more than 4,294,967,295 AST nodes,
    you can get by with 32-bit references, like we did in our example.
    That's a 50% space savings for all your references, which can amount to a substantial overall memory reduction in pointer-heavy data structures like ASTs.
    Smaller memory footprints mean less bandwidth pressure and even better spatial locality.
    And you might save even more if you can get away with 16- or even 8-bit references for especially small data structures.
  3. Cheap allocation.
    In flatland, there is no need for a call to malloc every time you create a new AST node.
    Instead, provided you pre-allocate enough memory to hold everything, allocation can entail just bumping the tail pointer to make room for one more Expr.
    Again, a really fast malloc might be hard to compete with, but you basically can't beat bump allocation on sheer simplicity.
  4. Cheap deallocation.
    Our flattening setup assumes you never need to free individual Exprs.
    That's probably true for many, although not all, language implementations:
    you might build up new subtrees all the time, but you don't need to reclaim space from many old ones.
    ASTs tend to "die together," i.e., it suffices to deallocate the whole AST all at once.
    While freeing a normal AST entails traversing all the pointers to free each Expr individually, you can deallocate a flattened AST in one fell swoop by just freeing the whole ExprPool.
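Points 3 and 4 are easy to see concretely in the arena: with enough capacity reserved up front, each allocation is just a vector push (a length bump and a write), and dropping the pool frees every node in a single call. A small sketch, using a plain u64 payload as a stand-in for Expr:

```rust
fn main() {
    // Pre-reserving capacity makes each `push` a pure bump allocation:
    // no malloc call per node, just a length increment and a write.
    let mut pool: Vec<u64> = Vec::with_capacity(1_000_000);
    let base = pool.as_ptr();
    for i in 0..1_000_000u64 {
        pool.push(i);
    }
    // The backing buffer never moved: no reallocation ever happened.
    assert_eq!(base, pool.as_ptr());
    println!("allocated {} nodes with zero reallocations", pool.len());
    // When `pool` goes out of scope here, all one million nodes are
    // freed in one deallocation, with no tree traversal.
}
```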

I think it's interesting that many introductions to arena allocation tend to focus on cheap deallocation (#4) as the main reason to do it.
The Wikipedia page, for example, doesn't (yet!) mention locality (#1 or #2) at all.
You could make an argument that #4 may be the least important in a compiler setting: since ASTs tend to persist all the way to the end of compilation, you might not need to free them at all.

Beyond performance, there are also ergonomic advantages:

  1. Easier lifetimes.
    In the same way that it's easier for your computer to free a flattened AST all at once, it's also easier for humans to think about memory management at the granularity of an entire AST.
    An AST with n nodes has just one lifetime instead of n for the programmer to think about.
    This simplification is quadruply true in Rust, where lifetimes are not just in the programmer's head but in the code itself.
    Passing around a u32 is way less fiddly than carefully managing lifetimes for all your &Exprs: your code can rely instead on the much simpler lifetime of the ExprPool.
    I suspect this is why the technique is so popular in Rust projects.
    As a Rust partisan, however, I'll argue that the same simplicity advantage applies in C++ or any other language with manual memory management; it's just latent instead of explicit.
  2. Convenient deduplication.
    A flat array of Exprs can make it fun and easy to implement hash consing or even simpler techniques to avoid duplicating identical expressions.
    For example, if we find that we're using Literal expressions for the first 128 nonnegative integers a lot, we could reserve the first 128 slots in our ExprPool just for those.
    Then, when somebody needs the integer literal expression 42, our ExprPool doesn't need to construct a new Expr at all; we can just produce ExprRef(42) instead.
    This kind of game is possible with a normal pointer-based representation too, but it probably requires some kind of auxiliary data structure.
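As a sketch of that trick (the helper shapes here are hypothetical, not taken from the repository), a pool can pre-seed its first 128 slots and intercept small literals in `add`:

```rust
// Sketch: deduplicating small literals by reserving the first 128
// arena slots so that slot index == literal value.

#[derive(Clone, Copy, PartialEq, Debug)]
struct ExprRef(u32);

#[derive(Clone, Copy)]
enum Expr {
    Literal(i64),
    // Binary(...) omitted for brevity in this sketch.
}

struct ExprPool(Vec<Expr>);

impl ExprPool {
    fn new() -> Self {
        // Pre-seed slots 0..128 with the literals 0..128.
        ExprPool((0i64..128).map(Expr::Literal).collect())
    }

    fn add(&mut self, expr: Expr) -> ExprRef {
        // Small nonnegative literals never allocate: their slot index
        // equals their value, so we just hand back that index.
        if let Expr::Literal(n) = expr {
            if (0i64..128).contains(&n) {
                return ExprRef(n as u32);
            }
        }
        let idx = self.0.len();
        self.0.push(expr);
        ExprRef(idx as u32)
    }
}

fn main() {
    let mut pool = ExprPool::new();
    let a = pool.add(Expr::Literal(42));
    let b = pool.add(Expr::Literal(42));
    assert_eq!(a, b); // both refs point at the reserved slot 42
    println!("pool still has {} nodes", pool.0.len());
}
```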

Performance Results

Since we have two implementations of the same language, let's measure those performance advantages.
For a microbenchmark, I randomly generated a program with about 100 million AST nodes and fed it directly into the interpreter (the parser and pretty printer are not involved).
This benchmark isn't very realistic: all it does is generate and then immediately run one enormous program.
Some caveats include:

  • I reserved enough space in the Vec<Expr> to hold the whole program; in the real world, sizing your arena requires more guesswork.
  • I expect this microbenchmark to over-emphasize the performance advantages of cheap allocation and deallocation, because it does very little other work.
  • I expect it to under-emphasize the impact of locality, because the program is so big that only a tiny fraction of it can fit in the CPU cache at a time.

Still, maybe we can learn something.

Bar chart comparing the execution times of our normal and flat (and extra-flat) interpreters.

I used Hyperfine to compare the average running time over 10 executions on my laptop.
Here's a graph of the running times (please ignore the "extra-flat" bar; we'll cover that next).
The plot's error bars show the standard deviation over the 10 runs.
In this experiment, the normal version took 3.1 seconds and the flattened version took 1.3 seconds: a 2.4× speedup.
Not bad for such a straightforward code change!

Of that 2.4× performance advantage, I was curious to know how much comes from each of the four potential advantages I mentioned above.
Unfortunately, I don't know how to isolate most of these effects, but #4, cheaper deallocation, is especially tempting to isolate.
Since our interpreter is so simple, it seems silly that we're spending any time at all on freeing our Exprs after execution finishes: the program is about to shut down anyway, so leaking that memory is completely harmless.


Bar chart comparing versions of our interpreters with and without deallocation.

So let's build versions of both of our interpreters that skip deallocation altogether and see how much time they save.
Unsurprisingly, the "no-free" version of the flattened interpreter takes about the same amount of time as the standard version, suggesting that it doesn't spend much time on deallocation anyway.
For the normal interpreter, however, skipping deallocation takes the running time from 3.1 to 1.9 seconds: it was spending around 38% of its time just on freeing memory!

Even comparing the "no-free" versions head-to-head, however, the flattened interpreter is still 1.5× faster than the normal one.
So even if you don't care about deallocation, the other performance ingredients, like locality and cheap allocation, still have measurable effects.

Bonus: Exploiting the Flat Representation

So far, flattening has happened entirely "under the hood":
arenas and integer offsets serve as drop-in replacements for normal allocation and pointers.
What could we do if we broke this abstraction layer, exploiting things about the flattened representation that aren't true of the normal AST version?

That same flat AST, yet again.

The idea is to build a third kind of interpreter that exploits an extra fact about ExprPools that arises from the way we built them up.
Because Exprs are immutable, we have to construct trees of them "bottom-up":
we have to create all child Exprs before we can construct their parent.
If we build the expression a * b, a and b must appear earlier in their ExprPool than the * that refers to them.
Let's bring that doodle back again: visually, you can imagine that reference arrows always point backward in the array, and data always flows forward.

Let's write a new interpreter that exploits this invariant.
Instead of starting at the root of the tree and recursively evaluating each child, we can start at the beginning of the ExprPool and scan from left to right.
This iteration is guaranteed to visit parents after children, so we can be sure that the results for subexpressions will be ready when we need them.
Here's the whole thing:

fn flat_interp(self, root: ExprRef) -> i64 {
    let mut state: Vec<i64> = vec![0; self.0.len()];
    for (i, expr) in self.0.into_iter().enumerate() {
        let res = match expr {
            Expr::Binary(op, lhs, rhs) => {
                let lhs = state[lhs.0 as usize];
                let rhs = state[rhs.0 as usize];
                match op {
                    BinOp::Add => lhs.wrapping_add(rhs),
                    BinOp::Sub => lhs.wrapping_sub(rhs),
                    BinOp::Mul => lhs.wrapping_mul(rhs),
                    BinOp::Div => lhs.checked_div(rhs).unwrap_or(0),
                }
            }
            Expr::Literal(num) => num,
        };
        state[i] = res;
    }
    state[root.0 as usize]
}

We use a dense state table to hold one result value per Expr.
The state[i] = res line fills this vector up every time we finish an expression.
Critically, there's no recursion: binary expressions can get the values of their subexpressions by looking them up directly in state.
At the end, when state is completely full of results, all we need to do is return the one corresponding to the requested expression, root.

This "extra-flat" interpreter has two potential performance advantages over the recursive interpreter:
there's no stack bookkeeping for the recursive calls,
and the linear traversal of the ExprPool is probably good for locality.
On the other hand, it has to randomly access a very large state vector, which could be bad for locality.

The same bar chart, comparing the execution times of the normal, flat, and extra-flat interpreters.

To see if it wins overall, let's return to our bar chart from earlier.
The extra-flat interpreter takes 1.2 seconds, compared to 1.3 seconds for the recursive interpreter for the flat AST.
That's marginal compared to how much better flattening alone does over the pointer-based version,
but an 8.2% performance improvement ain't nothing.

My favorite observation about this program, due to a Reddit comment by Bob Nystrom, is that it essentially reinvents the idea of a bytecode interpreter.
The Expr structs are bytecode instructions, and they contain variable references encoded as u32s.
You could make this interpreter even better by swapping out our simple state table for some kind of stack, and then it would truly be no different from a bytecode interpreter you might design from first principles.
I just think it's pretty nifty that "merely" changing our AST data structure led us directly from the land of tree walking to the land of bytecode.
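To make that concrete, here's a hedged sketch of the stack-based variant. It only works if the pool is laid out in strict postorder, which holds for a tree built by a recursive parser but breaks once deduplication shares nodes:

```rust
#[derive(Clone, Copy)]
struct ExprRef(u32);

enum BinOp { Add, Sub, Mul, Div }
enum Expr {
    Binary(BinOp, ExprRef, ExprRef),
    Literal(i64),
}

// In strict postorder, the operand references become redundant: the two
// operands of each Binary are simply the two most recent results, so the
// dense state table can shrink to a stack (exactly a bytecode interpreter).
fn stack_interp(pool: Vec<Expr>) -> i64 {
    let mut stack: Vec<i64> = Vec::new();
    for expr in pool {
        match expr {
            Expr::Binary(op, _, _) => {
                let rhs = stack.pop().unwrap(); // right child pushed last
                let lhs = stack.pop().unwrap();
                stack.push(match op {
                    BinOp::Add => lhs.wrapping_add(rhs),
                    BinOp::Sub => lhs.wrapping_sub(rhs),
                    BinOp::Mul => lhs.wrapping_mul(rhs),
                    BinOp::Div => lhs.checked_div(rhs).unwrap_or(0),
                });
            }
            Expr::Literal(num) => stack.push(num),
        }
    }
    stack.pop().unwrap()
}

fn main() {
    // (29 * 3) - 9 in postorder layout.
    let pool = vec![
        Expr::Literal(29),
        Expr::Literal(3),
        Expr::Binary(BinOp::Mul, ExprRef(0), ExprRef(1)),
        Expr::Literal(9),
        Expr::Binary(BinOp::Sub, ExprRef(2), ExprRef(3)),
    ];
    println!("{}", stack_interp(pool)); // prints 78
}
```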

Further Reading

I asked on Mastodon a while back for pointers to other writing about data structure flattening,
and folks really came through (thanks, everyone!).
Here are some other places it has come up in a compilers context:

Beyond just language implementation, similar ideas show up in other performance-oriented domains.
I admit that I understand this stuff less well, especially the things from the world of video games:

  • A line of work from Purdue and Indiana is about compiling programs to operate directly on serialized data. Gibbon in particular is pretty much a translator from "normal"-looking code to flattened implementations.
  • Flattening-like ideas appear a lot in data-oriented design, a broadly defined concept that I only partially understand. For example, Andrew Kelley argues in a talk on the topic for using indices in place of pointers.
  • Check out this overview of arena libraries in Rust and its discussion of the ergonomics of arena-related lifetimes.
  • Here's a post comparing handles vs. pointers in game development that advocates for packing homogeneously typed objects into arrays and using indices to refer to them.
  • Similar ideas show up in entity-component systems (ECS), a big idea from game development that I also don't completely understand. This post covers many of the same locality-related themes as we did above.

When I published this post, many people pointed me toward a post from last year by Inanna Malick that shows the same technique applied to the same kind of toy "calculator" language implemented in Rust.
That post also uses recursion schemes, an elegant idea from the Haskell world that helps abstract over different concrete representations.
I highly recommend checking that post out.
