# An interactive introduction to the surface code

*by Phil Tadros*

Last July, the quantum team at Google released a milestone paper, in which they presented the first experimental demonstration of quantum error correction below threshold. What this means is that their experiment reached noise levels low enough that, by increasing the size of their code, they progressively decreased the number of errors in the encoded qubit. And how did they achieve such a milestone? You guessed it, by using the surface code to protect their qubits! The reasons they chose this code are plentiful: it can be laid out on a 2D lattice, it has a very high threshold compared to other codes, it doesn't require too many qubits in its smallest instances, and so on. And Google is far from the only company considering the surface code (or some of its variants) as part of their fault-tolerant architecture!

Apart from its experimental relevance, the surface code is also one of the most beautiful ideas of quantum computing, and if you ask me, of all physics. Inspired by the condensed matter concepts of topological order and anyons, it was discovered by Alexei Kitaev in 1997, in a paper in which he also introduced the idea of topological quantum computing. And indeed, the surface code has deep connections to many areas of maths and physics. For instance, in condensed matter theory, the surface code (more often called the *toric code* in this context) is used as a prime example of a topological phase of matter and is related to the more general family of spin liquids. While this post will be mainly concerned with the quantum error correction properties of the surface code, my friend Dominik Kufel wrote an excellent complementary post which describes the condensed matter perspective.

Finally, the surface code is the simplest example to illustrate the more general concept of topological quantum error correction. While we will only consider surface codes defined on a 2D square lattice here, they can actually be generalized to the cellulation of any manifold in any dimension (including hyperbolic ones!). Concepts from algebraic topology (such as homology and cohomology groups, chain complexes, and so on) can then be used to understand and analyze these codes. While we will take a brief look at topology here, I am planning to write a separate blog post fully dedicated to the maths behind topological quantum error correction. The goal of this post is to gain a first intuitive understanding of the surface code.

You got it, I love the surface code, and I am super excited to talk about it in this post!

To do this code the honour it deserves, I have decided to make use of the interactive code visualizer I have been developing for some time with my collaborator Eric Huang. I have embedded it in this post, and you will therefore be able to play with some of the lattices presented here (e.g. by inserting errors and stabilizers, visualizing decoding, and so on). I hope you will enjoy it!

So here is the plan for today's post. We will first look at the definition of the surface code on a torus by laying out its stabilizers, see how to detect errors, and delve into topology in order to understand the logical operators of the code. We will then study the decoding problem and see how it can be reduced to a matching problem on a graph. Next, we will introduce the planar code and the rotated code, variants of the surface code with more practical boundary conditions and a lower overhead. Finally, we will define the notion of error correction threshold and give its different values for the surface code.

This post assumes familiarity with the stabilizer formalism, so don't hesitate to read my blog post series on the subject if you need a reminder.

## Definition of the surface code

The surface code can be defined on a square grid of size `L \times L`, where qubits sit on the edges, as shown here for `L=4`:

The code is simpler to analyze at first when considering periodic boundary conditions. Therefore, in the picture above, we identify the left-most and right-most qubits, as well as the top-most and bottom-most qubits. The grid is then equivalent to a Pac-Man grid, or in topological terms, a **torus**. A torus, as you might imagine, can be obtained from the grid by wrapping it around top to bottom (gluing the top and bottom edges together), creating a cylinder, then left to right (gluing the left and right edges together) to finish the torus:

The next step is then to describe the **stabilizers** of the code. The surface code is indeed an example of a stabilizer code and can therefore be completely specified by its stabilizer group. In our case, we have two types of stabilizer generators: the **vertex stabilizers**, defined on every vertex of the lattice as a cross of four `Z` operators, and the **plaquette stabilizers**, defined on every face as a square of four `X` operators. Examples of vertex and plaquette stabilizers are shown below, where *red* means `X` and *blue* means `Z`.

To define a valid code, `X` and `Z` stabilizers must commute. Since they are Pauli operators, this means that they must always intersect on an even number of qubits. You can check that this is the case here: vertex and plaquette operators always intersect on either zero or two qubits.
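If you like verifying such claims in code, here is a minimal sketch (plain Python, independent of the visualizer) that checks this overlap property on a small toric lattice. The edge labels `("h", x, y)` and `("v", x, y)` are my own convention, not anything from the post:

```python
from itertools import product

L = 4  # illustrative lattice size

def star(x, y):
    """Support of the Z vertex stabilizer: the four edges touching vertex (x, y)."""
    return {("h", x, y), ("h", (x - 1) % L, y),
            ("v", x, y), ("v", x, (y - 1) % L)}

def plaquette(x, y):
    """Support of the X plaquette stabilizer: the four edges around face (x, y)."""
    return {("h", x, y), ("h", x, (y + 1) % L),
            ("v", x, y), ("v", (x + 1) % L, y)}

# Two Pauli operators commute iff their supports overlap on an even number
# of qubits; collect the overlap sizes of every vertex/plaquette pair.
overlaps = {len(star(*v) & plaquette(*p))
            for v in product(range(L), repeat=2)
            for p in product(range(L), repeat=2)}
print(overlaps)  # {0, 2}: every pair commutes
```

Representing a stabilizer by its support (a set of edges) is enough here, because for pure-`X` and pure-`Z` Paulis, commutation depends only on the overlap parity.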

Since the stabilizer group is a group, the product of stabilizers is also a stabilizer. Therefore, any product of plaquettes is also a stabilizer. Here is an example of a product of three plaquettes:

You can notice that this forms a loop! The reason is that when multiplying the plaquette operators, there are always two `X` operators applied on each qubit of the bulk, which cancel each other. This observation is true in general: all the `X` stabilizers are loops in the lattice! To convince yourself of that fact, try inserting plaquette stabilizers in the following panel:

Click on faces to add plaquette stabilizers
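As a complement to clicking around, the same fact can be checked numerically: multiplying `X` stabilizers amounts to taking the symmetric difference (mod-2 sum) of their supports, and the result always has even degree at every vertex, i.e. it is a closed loop. This sketch reuses a hypothetical `("h"/"v", x, y)` edge labeling on a small torus:

```python
from collections import Counter

L = 4  # illustrative lattice size

def plaquette(x, y):
    """Support of the X plaquette stabilizer on face (x, y) of an L x L torus."""
    return {("h", x, y), ("h", x, (y + 1) % L),
            ("v", x, y), ("v", (x + 1) % L, y)}

def endpoints(edge):
    kind, x, y = edge
    return [(x, y), ((x + 1) % L, y)] if kind == "h" else [(x, y), (x, (y + 1) % L)]

# Multiply three adjacent plaquettes: shared edges cancel in pairs (X^2 = I),
# which is exactly a symmetric difference of the supports.
support = set()
for face in [(0, 0), (1, 0), (1, 1)]:
    support ^= plaquette(*face)

# The result is a closed loop: every vertex touches an even number of edges.
degree = Counter(v for e in support for v in endpoints(e))
print(all(d % 2 == 0 for d in degree.values()))  # True
```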

What about `Z` stabilizers? They also form loops… if we look at them the right way! Try to understand why by inserting vertex stabilizers in the following panel:

Click on vertices to add vertex stabilizers

For instance, here is the product of three vertex operators:

Can you see the loop? If not, this picture should make it clearer:

The trick was to draw an edge orthogonal to every blue edge (dashed purple lines in the figure). Formally, this corresponds to representing the operator in the so-called **dual lattice**, a lattice formed by rotating each edge by 90° (dashed gray lattice in the figure). In this lattice, vertex stabilizers have a square-like shape, similar to the plaquette stabilizers of the primal lattice. Therefore, all the properties we can derive for `X` stabilizers, errors and logicals can be directly translated to `Z` operators by simply considering the dual lattice, where `Z` operators behave exactly like `X` operators.

So we have seen that all the `X` and `Z` stabilizers are loops in the lattice. But is the converse true, that is, do all the loops define stabilizers? Try to think about this question; we will come back to it later.

## Detecting errors

So what happens when errors start occurring on our code? Use the panel below to insert `X` and `Z` errors in the code:

Click on edges to add errors

Vertices lit up in yellow are the ones which anticommute with the error and are equal to `-1`. They are part of the **syndrome** (the set of measured stabilizer values) and are often called **excitations** or **defects**. Can you see a pattern in the way excitations relate to errors? Let me help you. Start by removing all the errors. Then, draw a path of errors. Can you see what happens?

Excitations only appear at the boundary of error paths! Here is an example of a pattern that should make that clear:

We can see that excitations are always created in pairs, and move through the lattice when the error string grows. What about `Z` errors? The same phenomenon occurs if error paths are created by adding errors on parallel edges:

As expected, when seen in the dual lattice (i.e. when rotating each edge by 90°), these `Z` paths correspond to regular strings of errors.

The fact that excitations live on the boundary of error paths also means that when a path forms a loop, the excitations disappear. In other words, loops always commute with all the stabilizers. And what is an operator that commutes with all the stabilizers? A logical operator!
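These observations are easy to reproduce numerically. Below is a small sketch (with my own `("h"/"v", x, y)` edge labeling, not the visualizer's) that computes the syndrome of an `X` error as the set of vertex stabilizers overlapping it on an odd number of qubits. Defects appear exactly at the endpoints of an open string, and vanish for a string wrapped into a closed loop:

```python
L = 4  # illustrative lattice size

def star(x, y):
    """Support of the Z vertex stabilizer at vertex (x, y) of an L x L torus."""
    return {("h", x, y), ("h", (x - 1) % L, y),
            ("v", x, y), ("v", x, (y - 1) % L)}

def syndrome(x_errors):
    """Vertices whose stabilizer anticommutes with the error (odd overlap)."""
    return {(x, y) for x in range(L) for y in range(L)
            if len(star(x, y) & x_errors) % 2 == 1}

path = {("h", 0, 0), ("h", 1, 0), ("h", 2, 0)}   # open string of X errors
print(sorted(syndrome(path)))    # [(0, 0), (3, 0)]: defects at the endpoints

loop = {("h", x, 0) for x in range(L)}           # string wrapped around the torus
print(sorted(syndrome(loop)))    # []: a closed loop triggers no stabilizer
```

Note that the second string, the one with an empty syndrome, wraps all the way around the torus — exactly the kind of loop the next section identifies as a logical operator.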

## Logical operators, loops and topology

A logical operator can either be trivial, in which case it is a stabilizer, or non-trivial, in which case it performs an `X`, `Y` or `Z` logical operation on one of the `k` qubits encoded in the code. For the surface code, the distinction between all these different operators depends on the type of loop they form. And this is where the connection to topology really begins. Let's first study the structure of loops on a general smooth manifold, before applying it to the surface code.

### Loops on a smooth manifold

Let's consider a smooth manifold `\mathcal{M}` (e.g. a torus). We say that two loops on `\mathcal{M}` are **equivalent** if there exists a smooth deformation of one loop into the other, meaning that we can smoothly move the first loop onto the other loop without cutting it. For instance, the following four loops (green) on the torus are equivalent:

Moreover, we say that a loop is **contractible**, or **trivial**, if it is equivalent to a point, that is, we can smoothly shrink it until it becomes a single point. All the loops in the figure above are examples of contractible loops on the torus. So what do non-contractible loops look like? Here are some examples:

One loop (blue) goes around the middle hole, while two loops (red) go around the hole formed by the inside of the donut. Note that these two types of loop (red and blue) are not equivalent to each other, and cannot be deformed to obtain any of the green loops of the first figure either.

As always when we define a notion of equivalence, it can be interesting to look at all the different equivalence classes that it leads to. As a reminder, an **equivalence class**, or **coset**, is a set containing all the objects equivalent to a reference object. So let's enumerate all the equivalence classes of loops on the torus. First, we have the contractible loops. They are all equivalent, since each of them can be shrunk to a point, and two points can always be smoothly moved to each other. So that's our first equivalence class, which we can call the **trivial class**. Then, we have the red and blue loops of the figure above: one that goes around the middle hole and the other that goes around the hole formed by the inside of the donut. That's two other equivalence classes. Note that a pair of loops is technically also a loop, so taking the red and blue loops together forms its own loop, which is not equivalent to either of them individually. This "double loop" can also be understood as (and is equivalent to) a single loop that goes around both holes, like the orange line in the picture below:

So that's a fourth equivalence class. Do we have more?

Yes! Loops going around a hole twice are not equivalent to loops going around the hole once. Therefore, for each `k \in \mathbb{N}`, we have a new class of loops going around one of the holes `k` times. Since in general loops are given a direction, we can also consider loops going around each hole in the opposite direction and take `k \in \mathbb{Z}`. Overall, there are infinitely many equivalence classes, which can be labeled by two integers `(k_1,k_2) \in \mathbb{Z}^2`, where each integer indicates how many times the loops go around the corresponding hole. In this notation, the trivial class corresponds to `(0,0)`, the blue and red non-trivial loops correspond to `(0,1)` and `(1,0)`, and the orange loop corresponds to `(1,1)`.

**Exercise 1**: What is the equivalence class of the following (purple) loop? (solution)

### Loops on the surface code

How can we apply what we have learned to the surface code? Compared to general smooth manifolds, the surface code has a more discrete structure, and the notion of *smoothly deforming a loop* doesn't directly apply here. We need a slightly different notion of equivalence. We say that two loops `\ell_1,\ell_2` on the surface code are equivalent if there exists a stabilizer `S \in \mathcal{S}` such that `\ell_1 = S \ell_2`. For instance, if we consider `X` errors and `X` stabilizers, two loops of errors are equivalent if we can apply a series of plaquettes to go from one to the other. As an exercise, show that the following two loops are equivalent, by finding some plaquettes that move one loop onto the other:

Note that this notion of equivalence is exactly the same as the notion of logical equivalence defined in Part II of the stabilizer formalism series: applying a stabilizer to a logical gives another representative of the same logical. So operationally, two loops are equivalent if they correspond to the same logical operator. Therefore, by looking at all the equivalence classes of loops, we will be able to classify the different logical operators of the code.

Now, we say that a loop is **contractible**, or **trivial**, if it is equivalent to the empty loop (no error). In other words, a loop is trivial if it is a stabilizer. In the case of `X` errors, a trivial loop corresponds to the boundary of a set of faces (the plaquettes that form the stabilizer).

We are now ready to answer our main question. What are the non-trivial loops of the surface code, or in other words, the non-trivial logical operators? For `X` errors, they look like this:

And since applying plaquette stabilizers doesn't change the logical operator, the following strings give other valid representatives of the same logicals:

As in the smooth case, what matters is that the non-trivial loops go around the torus. Indeed, you will not be able to write these operators as products of plaquette stabilizers. Operationally, each of these two loops (the "horizontal" and the "vertical" one) corresponds to applying a logical `X` operator to the code, and since they are not equivalent, they apply it to different logical qubits. Therefore, we have at least two logical qubits, one for the horizontal loop and one for the vertical loop. Do we have more?

This time, we only have four different cosets of loops. Indeed, contrary to the smooth case, looping around the lattice twice always gives a trivial loop. This can be seen in two ways. The first way consists in observing that a loop going around the lattice twice is always equivalent to two disjoint loops (this was also true in the smooth case).

The next step is then to show that such a two-loop pattern is always a stabilizer. For instance, try to find the plaquette stabilizers that give rise to the two loops in the right picture:

Click on faces to add plaquette stabilizers

Click on faces to add plaquette stabilizers

The second way to see this is to remember that a loop applies a logical operator `P`, and applying this logical operator twice gives `P^2=I`. It means that any double loop lives in the identity coset, and is therefore a stabilizer.

Thus, our equivalence classes for `X` errors can be labeled by two bits `(k_1,k_2) \in \mathbb{Z}_2 \times \mathbb{Z}_2`. The corresponding logical operators are `I`, `X_1`, `X_2` and `X_1 X_2`. The fact that our labels live in `\mathbb{Z}_2` instead of `\mathbb{Z}` can also be interpreted as a consequence of the fact that we have qubits. For qudits, the generalized Pauli operators obey `P^d=I`, and `\mathbb{Z}_2` is replaced by `\mathbb{Z}_d`. For error-correcting codes on continuous-variable systems (roughly, qudits with `d=\infty`), we recover `\mathbb{Z}` as our space of equivalence classes.

Let's summarize what we have learned. Loops of the surface code define logical operators. There are four non-equivalent types of loops: the trivial ones (stabilizers), the horizontal ones (`X_1` operator), the vertical ones (`X_2` operator), and both at the same time (`X_1 X_2` operator). Therefore, the surface code encodes `k=2` logical qubits.

Note that in general, the surface code can be defined on any smooth manifold `\mathcal{M}` by discretizing it. The number of logical qubits of the code is then directly related to the topological properties of the manifold, and in particular, to the number of holes, or in more technical terms, the **first Betti number** of the manifold. For instance, for the torus, the fact that `k=2` is a consequence of the presence of two holes.

So far, we have mainly discussed loops of `X` errors, but what about `Z` errors? As expected, the `Z_1` and `Z_2` logicals correspond to loops going around the torus when considered in the dual lattice:

On the left is the `Z_1` logical (which anticommutes with `X_1`) and on the right is the `Z_2` logical (which anticommutes with `X_2`). The anticommutation relation between `X_1` and `Z_1` can be seen in this picture:

Indeed, we can see that the `X_1` and `Z_1` logicals intersect on exactly one qubit (green), meaning that they anticommute. This property is independent of the specific logical representatives you choose: a "horizontal" `X` loop will always intersect a "vertical" `Z` loop on an odd number of qubits.
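This parity check is quick to reproduce in code. With the same hypothetical `("h"/"v", x, y)` edge labeling used earlier (my convention, not the visualizer's), a horizontal `X` loop and a dual-lattice `Z` loop overlap on exactly one edge:

```python
L = 4  # illustrative lattice size

# X_1 logical: X on every horizontal edge of row y = 0 (a loop around the torus).
x1 = {("h", x, 0) for x in range(L)}

# Z_1 logical: Z on the horizontal edges of column x = 0
# (a "vertical" loop when drawn in the dual lattice).
z1 = {("h", 0, y) for y in range(L)}

# Odd overlap means the two operators anticommute.
overlap = len(x1 & z1)
print(overlap, "-> anticommute" if overlap % 2 else "-> commute")  # 1 -> anticommute
```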

You now have everything you need to determine the parameters of the surface code!

**Exercise 2**: What are the `[[n,k,d]]` parameters of a surface code with lattice size `L`? (solution)

## Decoding the surface code

Let's imagine that you observe the following syndrome, and want to find a good correction operator.

We know that excitations always appear in pairs, and correspond to the boundary of strings of errors.

So the error must be a string that links these two excitations. However, the number of strings that could have given this syndrome is very large! Here are three examples of errors leading to the syndrome shown above:

Let's suppose that the first pattern was our actual error, but we chose the middle pattern instead as our correction operator. This is what the final pattern, corresponding to the error (red) plus the correction operator (yellow), would look like:

And as you can see, this is a stabilizer! So applying this correction operator puts us back in the original state and the correction is a success. On the other hand, here is what would happen if we had applied the last correction operator instead:

As you can see, this is a logical operator! So this is an example of correction failure, where we have changed the logical state of our code by applying the correction.
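We can mimic this success/failure story in code. The sketch below uses made-up error and correction strings in my own edge labeling (not the figures' actual patterns), and classifies error ⊕ correction as a stabilizer or a logical by counting, mod 2, how many times the residual loop crosses a fixed vertical cut of the torus:

```python
L = 4  # illustrative lattice size; ("h"/"v", x, y) label the edges of the torus

error = {("h", 0, 0), ("h", 1, 0)}                            # actual X error string
good  = {("v", 0, 0), ("h", 0, 1), ("h", 1, 1), ("v", 2, 0)}  # homologous correction
bad   = {("h", 2, 0), ("h", 3, 0)}                            # goes the other way around

results = []
for correction in (good, bad):
    residual = error ^ correction   # net operator applied after correcting
    # Winding parity: count crossings of the vertical cut between x = L-1 and x = 0.
    crossings = sum(1 for (kind, x, y) in residual if kind == "h" and x == L - 1)
    results.append("stabilizer (success)" if crossings % 2 == 0
                   else "logical operator (failure)")
print(results)  # ['stabilizer (success)', 'logical operator (failure)']
```

Both candidate corrections connect the same two defects; only the one whose residual loop is contractible (zero winding parity) restores the encoded state.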

What lessons can we draw from this example? The goal of the surface code decoding problem is to match the excitations such that the final operator is a stabilizer. Let's try to formalize this a little bit. I described the general decoding problem for stabilizer codes in a previous post, but a short reminder is probably warranted.

Similarly to how logical operators can be partitioned into cosets, we can also enumerate equivalence classes of errors fitting a given syndrome. For instance, the first and middle patterns in our example above are part of the same equivalence class, as they can be related by a plaquette stabilizer. On the other hand, the last string belongs to a different class. The goal of decoding is to find a correction operator that belongs to the same coset as the actual error. Indeed, the product `CE` of the correction operator with the error is equal to a stabilizer if and only if there is a stabilizer `S` such that `C=ES`, that is, if `C` and `E` belong to the same class. To solve this problem with the information we have, that is, only the syndrome and the error probabilities, the optimal decoding problem, also called **maximum-likelihood decoding**, can be formulated as finding the coset `\bm{\bar{C}}` with the highest probability:

```
\begin{aligned}
\max_{\bm{\bar{C}}} P(\bm{\bar{C}})
\end{aligned}
```

where `P(\bm{\bar{C}})` can be calculated as a sum over all the operators in the coset: `P(\bm{\bar{C}}) = \sum_{\bm{C} \in \bm{\bar{C}}} P(\bm{C})`.

Solving this problem exactly is computationally very hard, since it requires calculating a sum over an exponential number of terms (in the size of the lattice). But for the surface code, it can be approximated very well using tensor network decoders, which have a complexity of `O(n \chi^3)` (up to some logarithmic factor), with `n` the number of qubits and `\chi` a parameter quantifying the degree of approximation of the decoder (corresponding to the bond dimension of the tensor network). The main downside of this decoder is that it generalizes poorly to the case of imperfect syndrome measurements. In that case, measurements need to be repeated in time, leading to a 3D decoding problem that tensor networks cannot solve efficiently at the moment.

As discussed in my stabilizer decoding post, the maximum-likelihood decoding problem can also be approximated by solving for the error with the highest probability, instead of the whole coset. Assuming i.i.d. noise, finding the error with the highest probability is equivalent to finding the smallest error that fits the syndrome. In the case of the surface code, this corresponds to matching the excitations with chains of minimal weight.

As it happens, this is completely equivalent to solving a famous graph problem, known as **minimum-weight perfect matching**! This problem can be expressed as matching all the vertices of a weighted graph (with an even number of vertices), such that the total weight is minimized. In our case, the graph is constructed as a complete graph with a vertex for each excitation. The weight of each edge between two vertices is then given by the Manhattan distance between the two corresponding excitations. For instance, let's consider the following decoding problem:

The associated graph is then the following:

By enumerating all the possible matchings, you can quickly see that the one of minimal weight links vertices 1 and 2, and 3 and 4, with a total weight of 5:

From there, we can deduce our decoding solution:

It happens that minimum-weight perfect matching can be solved in polynomial time using the Blossom algorithm, which has a worst-case complexity of `O(n^3)`. While this complexity might seem quite high, a recent modification of the Blossom algorithm, proposed by Oscar Higgott and Craig Gidney, appears to have an average complexity of `O(n)`. It also generalizes very well to the imperfect syndrome case, making it one of the best decoders out there in terms of trade-off between speed and performance (the performance will be reviewed when talking about thresholds in the last section of the post).
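For a handful of defects like in the example above, you don't even need Blossom: a brute-force search over all pairings works. The sketch below uses made-up defect coordinates (the figure's coordinates aren't reproduced here) and Manhattan distances as weights:

```python
def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def mwpm(points):
    """Brute-force minimum-weight perfect matching; only viable for a few defects."""
    if not points:
        return 0, []
    first, rest = points[0], points[1:]
    best_weight, best_pairs = float("inf"), []
    # Pair the first defect with each candidate partner, recurse on the rest.
    for i, partner in enumerate(rest):
        weight, pairs = mwpm(rest[:i] + rest[i + 1:])
        weight += manhattan(first, partner)
        if weight < best_weight:
            best_weight, best_pairs = weight, [(first, partner)] + pairs
    return best_weight, best_pairs

defects = [(0, 0), (2, 1), (5, 5), (6, 7)]  # hypothetical defect positions
print(mwpm(defects))  # (6, [((0, 0), (2, 1)), ((5, 5), (6, 7))])
```

This enumeration scales factorially in the number of defects; on real syndromes, with boundary nodes and thousands of defects, a Blossom-based implementation is the way to go.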

You can play with the matching decoder in the following visualization:

Click on edges to add errors

There are many other surface code decoders out there with their own pros and cons, such as union-find, neural network-based decoders, belief propagation, etc. Describing all of them in detail is out of scope for this blog post, but I hope to write a separate post someday dedicated to decoding. I also haven't talked much about the decoding problem for imperfect syndromes, which I also leave for a separate blog post.

Let's now answer a question that you might have been wondering about this whole time: how the hell do we implement a toric lattice in practice? While it is in principle possible to implement an actual torus experimentally (for example with cold atoms), it is impractical for many quantum computing architectures. Fortunately, there exists a purely planar version of the surface code, which we will discuss now!

## Surface code with open boundaries

Consider the following version of the surface code, where vertex stabilizers at the top and bottom boundaries, and plaquette stabilizers at the left and right boundaries, are now supported on three qubits instead of four:

Click on edges to add errors

We call the top and bottom boundaries **smooth boundaries**, and the left and right boundaries **rough boundaries**. Feel free to play with this lattice and try to figure out what the main differences are compared to the toric code. In particular, can you determine the logical operators of this code? How many equivalence classes, or logical qubits, can you see?

The first difference to notice is that excitations can now be created at the boundary!

In this example, a vertex excitation is created at the rough boundary, and a plaquette excitation is created at the smooth boundary. You can see that excitations don't have to come in pairs anymore! This poses a slight challenge when decoding using minimum-weight perfect matching, but it can easily be overcome by adding some new boundary nodes to the matching graph.

More importantly, we can observe that vertex excitations can only be created or annihilated at the rough boundaries, and plaquette excitations only at the smooth boundaries. This means that `X` logicals have to join the rough boundaries, and `Z` logicals have to join the smooth boundaries. This is illustrated in the following figure, where we can see that there is no "vertical" `X` logical or "horizontal" `Z` logical.

Therefore, there are only two equivalence classes of logicals for each error type: the strings that join opposite boundaries, and the trivial loops. As a consequence, this non-periodic version of the surface code, also called the **planar code**, only encodes a single qubit. It is a `[[2L^2 - 2L + 1, 1, L]]`-code. While we have lost one qubit compared to the toric version, the fact that it can be laid out on a 2D surface makes it much more practical.

## A more compact version: the rotated surface code

If you have started looking at the surface code literature, you might have noticed that people often use a different representation, which looks roughly like the following:

Click on vertices to add errors

In this representation, qubits are on the vertices, and all the stabilizers are on the plaquettes. Feel free to play with this lattice to understand what's going on.

It happens that this lattice represents exactly the planar code that we saw before! Here are the two lattices on top of each other:

The idea is to turn each edge of the original representation into a vertex, each vertex into a yellow face, and each face into a rose face. As a result, both vertices and plaquettes become rotated squares, and qubits become vertices. This representation is known as the **rectified lattice**.

One advantage of this representation is that it allows us to come up with a different, more compact, version of the surface code. The idea is to take the following central piece of the rectified lattice:

We then rotate it and add a few boundary stabilizers. This gives the following code, called the **rotated surface code**:

Click on vertices to add errors

As always, feel free to familiarize yourself with this new code by playing with it in the visualization. You should be able to see that it also encodes a single logical qubit, and has a distance of `L`. However, this time, the number of physical qubits is exactly `L^2`. The rotated surface code is therefore a `[[L^2, 1, L]]`-code, which is roughly a factor-two improvement in overhead compared to the original surface code. This version of the surface code is therefore the preferred one to realize experimentally. For instance, its smallest instances, the `[[9,1,3]]` code and the `[[25,1,5]]` code, are among the ones recently realized by the Google team.

When considering a code family such as the surface code, the `[[n,k,d]]` parameters only tell part of the story. Another important characteristic of a code family is its set of thresholds.

## Thresholds of the surface code

The **threshold** of a code family for a given noise model and decoder is the maximal physical error rate `p_{th}` such that for all `p < p_{th}`, increasing the code size decreases the logical error rate. The threshold is often estimated numerically using plots that look like the following:

To make this figure, codes with distances `10, 20, 30` are simulated under a noise channel with varying physical error rate. Errors are then decoded, and logical errors are counted. We can see that above `p_{th} \approx 15.5\%`, increasing the code distance increases the logical error rate, while below `p_{th}`, the logical error rate decreases with the code distance.

So what is the threshold of the surface code? First of all, very importantly, there is no single threshold for the surface code: it highly depends on which noise channel and which decoder we are using. Let's start by discussing noise models.

We often make the distinction between three types of noise models:

- The **code-capacity** model, in which errors can occur on all the physical qubits of the code, but measurements are assumed to be perfect.
- The **phenomenological** noise model, in which each stabilizer measurement can also fail with a fixed probability.
- The **circuit-level** noise model, in which the circuits used to prepare the code and extract the syndrome are considered, and errors are assumed to occur with a certain probability after each physical gate.

The code-capacity threshold is the easiest to estimate, both in terms of implementation time and computational time, and allows one to get a rough idea of the performance of a given code or decoder. The phenomenological threshold gets us closer to the true threshold value and can be useful when comparing decoders that deal with measurement errors in interesting ways (such as single-shot decoders). Finally, circuit-level thresholds are the most realistic ones and approximate most accurately the actual noise level that experimentalists need to reach to make error correction work with a given code. While circuit-level thresholds were considered very hard to estimate for a long time, mainly due to the lack of very fast noisy Clifford circuit simulators, recent tools such as Stim have made these simulations much less cumbersome.

For each of these three models, we also need to specify the distribution of `X`, `Y` and `Z` errors.^{There are two very common choices here. The first is the depolarizing noise model, in which the three Paulis are assumed to occur with the same probability. Since `Y` is the product of `X` and `Z`, this implies that `P(Y)=P(X,Z) neq P(X)P(Z)`, or in other words, that `X` and `Z` errors are correlated. Another noise model is the independent `X`/`Z` model, in which `X` and `Z` errors are independent and occur with the same probability. The probability of getting a `Y` error is then fixed at `P(Y)=P(X)P(Z)=P(X)^2` and is therefore lower than for depolarizing noise.}
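To make the correlation concrete, here is a small numerical sketch (the variable names are mine, not from the post). It decomposes each Pauli error into its `X`-type and `Z`-type components (a `Y` counts as both, since `Y` is proportional to `XZ`) and compares the two models:

```python
# Depolarizing noise with total error probability p: X, Y and Z each
# occur with probability p/3. The marginal probability of having an
# X-type component is P(X) + P(Y), and similarly for Z.
p = 0.1
marginal_x = p / 3 + p / 3   # X alone, or Y
marginal_z = p / 3 + p / 3   # Z alone, or Y
p_y_depol = p / 3            # joint probability of X and Z components

# The components are correlated: P(X, Z) != P(X) * P(Z).
assert abs(p_y_depol - marginal_x * marginal_z) > 1e-4

# Independent X/Z noise: X and Z components are drawn independently,
# here with probabilities matching the depolarizing marginals above.
p_y_indep = marginal_x * marginal_z

# Y errors are rarer under independent X/Z than under depolarizing noise.
assert p_y_indep < p_y_depol
```

This is why decoders that exploit `X`/`Z` correlations can gain an edge under depolarizing noise, while under the independent model decoding the two error types separately loses nothing.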

Regarding decoders, we will consider two of them here for simplicity: the maximum-likelihood decoder and the matching decoder. As it happens, the code-capacity threshold for the maximum-likelihood decoder corresponds exactly to the phase transition of a certain statistical mechanics model. This *stat mech mapping* was established by Dennis et al. (a classic of the quantum error correction literature) in 2002. For the surface code subjected to independent `X`/`Z` errors, the equivalent stat mech model is the random-bond Ising model, whose phase transition had just been calculated at the time. They were therefore able to give this first surface code threshold without running any simulations themselves!

We are now ready to give the actual threshold values for the surface code! Here is a table with the code-capacity thresholds for the different noise models and decoders discussed previously:

Table 1: Code-capacity thresholds of the surface code

For phenomenological and circuit-level noise, I am only aware of matching decoder thresholds under depolarizing noise. For phenomenological noise, we have a threshold of about `3%`. For circuit-level noise, the threshold goes down to about `1%`, which is the value often cited as "the threshold of the surface code".

## Conclusion

In this post, we have defined the surface code and its different variants (toric, planar, rotated) and tried to understand its most important properties visually. We have seen that it encodes one or two logical qubits depending on the boundary conditions, and has a distance scaling as `sqrt{N}`. Stabilizers can be thought of as trivial (or contractible) loops on the underlying manifold, while the logical `X` and `Z` operators are the non-trivial loops going around the torus or joining the boundaries, drawing a connection between topology and codes. We have also studied the decoding problem for the surface code and how minimum-weight perfect matching can be used for this purpose. Finally, I have introduced the notion of error-correction threshold and given its value for different decoders and noise models.

The surface code is by far one of the most studied codes in the quantum error-correction literature, and there is much more to say about it! I haven't told you how to deal with measurement errors, how to prepare the code and measure the syndrome using quantum circuits, how to run logical gates on it, how to generalize it to different lattices and dimensions, how to make the connection to topology precise, and so on. The surface code is also a stepping stone to understanding more complicated codes, from the color code (the second most famous family of 2D codes) to hypergraph product codes and all the way to good LDPC codes. Now that you are equipped with the stabilizer formalism and have a good grasp of the surface code, the tree of possible learning trajectories has suddenly acquired many branches, and I hope to cover as many of them as possible in subsequent blog posts!

In the meantime, one direct follow-up to this post is Dominik Kufel's post on the condensed matter aspects of the toric code, where you will learn about the connection between codes and Hamiltonians, why *excitations* are called excitations and can be thought of as quasi-particles called *anyons*, what the ground state of the surface code looks like, and how it provides an example of a topological phase of matter. This connection is important to learn for any practicing quantum error-correcter, as it is used extensively in the literature and allows one to understand many computational aspects of the surface code (how to make gates by braiding anyons, why the circuit to prepare the surface code has polynomial size, and so on). So go read his post!

## Solutions to the exercises

**Exercise 1**: What is the equivalence class of the following (red) loop? (Back to section)

**Solution**: The loop goes once around the central hole, and three times around the hole forming the inside of the donut. Therefore, it belongs to the coset labelled by `(1,3)`.

**Exercise 2**: What are the `[[n,k,d]]` parameters of a surface code with lattice size `L`? (Back to section)

**Solution**: Since there are `L^2` horizontal and `L^2` vertical edges, we have `n=2L^2`. Then, we saw that there are exactly two non-equivalent types of logical operators, meaning that there are `k=2` logical qubits. Finally, the distance is the minimum size of a logical operator, which in our case is `d=L`. Therefore, the surface code is a `[[2L^2, 2, L]]` code.
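The counting in this solution can be captured in a tiny helper (a sketch of my own, using my own function name):

```python
def surface_code_parameters(L: int) -> tuple[int, int, int]:
    """[[n, k, d]] parameters of the toric (periodic) surface code on an
    L x L lattice: one qubit per edge (L^2 horizontal plus L^2 vertical),
    two logical qubits from the two non-contractible loop directions, and
    distance equal to the length L of the shortest non-trivial loop."""
    n = 2 * L**2   # physical qubits, one per edge
    k = 2          # logical qubits
    d = L          # code distance
    return n, k, d

assert surface_code_parameters(3) == (18, 2, 3)
assert surface_code_parameters(5) == (50, 2, 5)
```

Note how the encoding rate `k/n = 1/L^2` vanishes as the lattice grows, which is one motivation for the LDPC codes mentioned in the conclusion.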