
Recognizing and Avoiding Heap Fragmentation in Rust Applications

2023-04-06 10:25:18


The mystery of the stair-step profile.

We recently noticed one of our Rust projects, an axum service, exhibiting some odd behavior when it came to memory usage.
An odd-looking memory profile is the last thing I'd expect from a Rust program, but here we are.

Memory usage graph showing a sudden jump up, but otherwise flat

The service would run with "flat" memory for a period of time, then suddenly jump up to a new plateau.
This pattern would repeat over hours, sometimes under load, but not always.
The worrying part was that once we saw a sharp increase, it was rare for the memory to drop back down.
It was as if the memory was lost, or otherwise "leaked" from time to time.

Under normal circumstances, this "stair-step" profile was just odd looking, but at one point the memory usage climbed
disproportionately.
Unbounded memory growth can lead to services being forced to exit. When services exit abruptly, that can lower
availability... which is bad for business. I wanted to dig in and figure out what was happening.

Normally when I think of unexpected memory growth in a program, I think of leaks. Still, this seemed different.
With a leak, you tend to see a more steady, regular pattern of growth.

Memory usage graph showing a steady rise then sharp drop, repeated, indicative of a leaky process

Often this looks like a line sloping up and to the right. So, if our service wasn't leaking, what was it doing?

If I could identify the conditions that caused the jump in memory usage, maybe I could mitigate whatever was happening.

Digging in

I had two burning questions:

  • Did something change in our code to promote this behavior?
  • Otherwise, did a new traffic pattern emerge?

Looking at historical metrics, I could see similar patterns of sharp increases between long flat periods, but never
before did we have this sort of growth.
To know whether the growth itself was new (despite the "stair-step" pattern being normal for us), I'd need a reliable
way to reproduce this behavior.

If I could force the "step" to show itself, then I'd have a way to verify a change in behavior when I take steps to
curb the memory growth.
I'd also be able to backtrack through our git history and look for a point in time when the service didn't exhibit
seemingly unbounded growth.

The dimensions I used when running my load tests were:

  • The size of the POST bodies sent to the service.
  • The request rate (i.e., requests per second).
  • The number of concurrent client connections.

The magic combination for me was: larger request bodies and higher concurrency.

When running load tests on a local system, there are all sorts of limiting factors, including the finite number of
processors available for running both the clients and the server itself. Still, I was able to see the "stair-step" in the
memory on my local machine given just the right circumstances, even at a lower overall request rate.

Using a fixed-size payload and sending requests in batches, with a short rest between them, I was able to drive up the
memory of the service repeatedly, a step at a time.

I found it interesting that while I could grow the memory over time, I'd eventually hit a point of diminishing returns.
Eventually, there'd be some (still much higher than expected) ceiling to the growth.
Playing around a little more, I found I could reach an even higher ceiling by sending requests with varying payload sizes.

Once I had identified my inputs, I was able to work backward through our git history, eventually learning that our
production scare was not likely to be the result of recent changes on our end.

The details of the workload that triggers this "stair-step" are specific to the application itself, though I was able
to force a similar graph to happen with a toy project.

use axum::{
    http::StatusCode,
    response::{IntoResponse, Response},
    Json,
};

#[derive(serde::Deserialize, Clone)]
struct Widget {
    payload: serde_json::Value,
}

#[derive(serde::Serialize)]
struct WidgetCreateResponse {
    id: String,
    size: usize,
}

async fn create_widget(Json(widget): Json<Widget>) -> Response {
    (
        StatusCode::CREATED,
        Json(process_widget(widget.clone()).await),
    )
        .into_response()
}

async fn process_widget(widget: Widget) -> WidgetCreateResponse {
    let widget_id = uuid::Uuid::new_v4();
    let bytes = serde_json::to_vec(&widget.payload).unwrap_or_default();
    // An arbitrary sleep stands in for extra processing time; the duration is
    // tunable via the SLEEP_MS env var.
    tokio::time::sleep(std::time::Duration::from_millis(
        std::env::var("SLEEP_MS")
            .as_deref()
            .unwrap_or("150")
            .parse()
            .expect("invalid SLEEP_MS"),
    ))
    .await;
    WidgetCreateResponse {
        id: widget_id.to_string(),
        size: bytes.len(),
    }
}
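
For context, a handler like this needs only a little scaffolding to become a runnable service. Below is a minimal sketch assuming axum 0.6 and tokio; the route path and bind address are placeholders and may not match the toy project exactly.

#[tokio::main]
async fn main() {
    // Route POST requests to the handler shown above.
    let app = axum::Router::new().route("/widgets", axum::routing::post(create_widget));

    // Serve on a local port (axum 0.6-style; newer axum versions use axum::serve).
    axum::Server::bind(&"0.0.0.0:3000".parse().unwrap())
        .serve(app.into_make_service())
        .await
        .unwrap();
}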

It turned out that you didn't need much to get there. I managed to see a similar sharp (but in this case much smaller)
increase from an axum app with a single handler receiving a JSON body.

Memory usage graph with a sharp jump up circled in red

While the memory increases in my toy project were nowhere near as dramatic as what we saw in the production service, it was
enough to help me compare and contrast during the next phase of my investigation. It also helped to have the tighter
iteration loop of a smaller codebase while I experimented with different workloads. See the README for
details on how I ran my load tests.

I spent some time searching the web for bug reports or discussions that might describe a similar behavior.
A term that came up repeatedly was Heap Fragmentation, and after reading a bit more on the subject, it seemed like it
might match what I was seeing.

What’s Heap Fragmentation?

Folks of a certain age may have had the experience of watching a defrag utility on DOS or Windows move blocks around
on a hard disk to consolidate the "used" and "free" areas.

Defrag utility in action

In the case of this old PC hard drive, files of varying sizes were written to disk, then later moved or deleted, leaving
a "hole" of available space between other used regions. As the disk starts to fill up, you might try to create a new file
that doesn't quite fit in one of those smaller regions. In the heap fragmentation scenario, that will lead to an allocation failure, though the failure mode of disk fragmentation is slightly less drastic. On disk, the file will instead need to be split into smaller chunks, which makes access much less efficient (thanks wongarsu for the correction). The solution for the disk drive is to "defrag" (de-fragment) the drive in order to re-arrange those open blocks into continuous regions.

Something similar can happen when the allocator (the thing responsible for managing memory allocation in your program) allocates and frees values of varying sizes over a period of time.
Gaps that are too small and scattered throughout the heap can lead to new "fresh" blocks of memory being allocated to
accommodate a new value that won't fit otherwise. Unfortunately, because of how memory management works, a "defrag" is not possible.
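
As a contrived illustration (not what the service actually does, and the exact outcome depends on the allocator), interleaving allocations of different sizes and then freeing only some of them leaves behind exactly this kind of scattered free space:

fn main() {
    // Interleave small and large allocations so they become neighbors on the heap.
    let mut small = Vec::new();
    let mut large = Vec::new();
    for _ in 0..10_000 {
        small.push(vec![0u8; 64]);
        large.push(vec![0u8; 16 * 1024]);
    }

    // Free only the large blocks. The surviving small allocations keep their
    // regions in use, so the reclaimed space is scattered rather than contiguous.
    drop(large);

    // Even larger values may not fit in those scattered gaps, so the allocator
    // may have to request fresh memory from the OS instead of reusing the holes.
    let big: Vec<Vec<u8>> = (0..1_000).map(|_| vec![0u8; 64 * 1024]).collect();

    println!("retained {} small and {} big allocations", small.len(), big.len());
}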

The exact cause of the fragmentation could be any number of things: JSON parsing with serde, something at the
framework level in axum, something deeper in tokio, or even just a quirk of the specific allocator implementation
for the given system.
Even without knowing the root cause (if there is such a thing), the behavior is observable in our environment and
somewhat reproducible in a bare-bones app.

If this is what was happening to the process memory, what could be done about it?
It seems like it would be hard to change the workload to avoid fragmentation.
It also seems like it would be tricky to unwind all of my project's dependencies to find a root cause in the code
for how the fragmentation events come about. So, what could be done?


Jemalloc to the rescue

jemalloc is self-described as aiming to "[emphasize] fragmentation avoidance and scalable
concurrency support."

Concurrency was indeed part of the problem for my program, and avoiding fragmentation is the name of the game.
jemalloc sounds like it could be just what I need.

Since jemalloc is an allocator that goes out of its way to avoid fragmentation in the first place, the hope was that our
service might be able to run longer without continually growing its memory.

It's not so trivial to change the inputs to my program, or the pile of application dependencies. It is, however, trivial
to swap out the allocator.
Following the examples in the https://github.com/tikv/jemallocator readme, very little work was required to take it for
a test drive.

For my toy project, I added a cargo feature to optionally swap out the default allocator for jemalloc and re-ran my
load tests.
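
The swap itself boils down to a few lines. This is a sketch based on the tikv-jemallocator readme; the `jemalloc` feature name and the version number below are assumptions, not necessarily what the toy project uses.

// Cargo.toml (assumed):
//   [dependencies]
//   tikv-jemallocator = { version = "0.5", optional = true }
//
//   [features]
//   jemalloc = ["dep:tikv-jemallocator"]

#[cfg(feature = "jemalloc")]
use tikv_jemallocator::Jemalloc;

// With the feature enabled, every allocation in the program goes through
// jemalloc instead of the system allocator.
#[cfg(feature = "jemalloc")]
#[global_allocator]
static GLOBAL: Jemalloc = Jemalloc;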

Two memory usage graphs side by side. On the left is the default allocator with the "stair-step" jump up in usage, on the
right is the same program with jemalloc in use. The jemalloc graph shows a ragged line where memory rises and falls over
the duration of the test

Recording the resident memory during my simulated load shows the two distinct memory profiles.

Without jemalloc, we see the familiar stair-step profile. With jemalloc, we see the memory rise and fall repeatedly
as the test runs. More importantly, while there is a considerable difference between the memory usage with jemalloc
during load versus idle times, we don't "lose ground" as we did before, since the memory always comes back down to the
baseline.

Wrapping Up

If you happen to see a "stair-step" profile on a Rust service, consider taking jemalloc for a test drive.
If you happen to have a workload that promotes heap fragmentation, jemalloc may give a better result overall.

Separately, the toy project repo includes a benchmark.yml for use with the https://github.com/fcsonline/drill
load testing tool.
Try changing the concurrency, body size (and the arbitrary handler sleep duration in the service itself), and so on to see
how the change in allocator affects the memory profile.
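
If you'd rather start from scratch than use the repo's copy, a benchmark.yml along these lines exercises the same knobs. The keys are drill's, but the URL, body, and numbers here are placeholders rather than the values from the toy project:

concurrency: 50
iterations: 5000
rampup: 2
base: 'http://localhost:3000'

plan:
  - name: Create widget
    request:
      url: /widgets
      method: POST
      headers:
        Content-Type: 'application/json'
      body: '{"payload": {"filler": "aaaaaaaaaaaaaaaaaaaaaaaaaaaa"}}'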

As for real-world impact, you can clearly see the profile change when we rolled out the switch to jemalloc.

Memory usage graph before and after switching to jemalloc

Where the service used to show flat lines and large steps, often regardless of load, we now see a more ragged line that
follows the active workload more closely. Aside from the benefit of helping the service avoid unnecessary memory growth,
this change gave us better insight into how our service responds to load, so all in all, this was a positive outcome.


If you're interested in building a robust and scalable service using Rust, we're hiring! Check out our careers page for more info.

For more content like this, make sure to follow us on Twitter, GitHub, or RSS for the latest updates on the Svix webhook service, or join the discussion on our community Slack.


