Investigating Linux Phantom Disk Reads

2023-05-02 15:25:59

Not so long ago, one of our customers reached out about a case of weird hardware
utilization. They were using our ILP (InfluxDB Line Protocol)
client to insert rows into their QuestDB
database, but along with disk writes, they also saw significant disk reads.
That is definitely not expected from a write-only workload, so we had to get to
the bottom of this problem. Today we share this story, full of ups and downs,
as well as Linux kernel magic.



As you may already know, QuestDB has an append-only columnar
storage format for data persistence. In
practice, this means that as the database writes a new row, it appends its
column values to a number of column files.

This creates high disk write rates due to the sequential write pattern. When
handling a write-only load, the database is not expected to read much from the
disk, which is why the reported issue was so surprising to us.

After a few attempts, we managed to reproduce the issue. The symptoms looked
like the following iostat output:

iostat utility output with a high number of disk reads

Here we have 10-70 MB/s of disk writes, while reads are as high as 40-50 MB/s.
Read utilization should be close to zero in ingest-only scenarios. The high read
volume was, therefore, very much unexpected.


The first thing we wanted to know was the exact data being read by
the database. Luckily, nothing is impossible in Linux. The blktrace utility
combined with blkparse can be used to collect all reads on a given disk:
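A command along these lines does the job (the device name /dev/nvme0n1 is an
example; substitute the disk you want to trace):

```shell
# Trace read requests (-a read) on the given disk, writing the binary
# trace to stdout (-o -) and piping it into blkparse for readable output.
sudo blktrace -d /dev/nvme0n1 -a read -o - | blkparse -i -
```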

The above command prints all disk read events occurring on the system. The
output looks something like the following:

Each line here stands for a separate event. For simplicity, let's consider the
first line:
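Such a line has roughly the following shape. Only the fields discussed below
come from the actual trace; the leading fields (device, CPU, sequence number,
timestamp) are placeholders here:

```
<maj,min> <cpu> <seq> <timestamp> 425548  Q  RA  536514808 + 8 [questdb-ilpwrit]
```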

The relevant parts are the following:

  • 425548 – this event was generated by pid 425548.
  • Q RA – this event stands for a disk read request added (queued) to the I/O
    queue. The "A" suffix is not well documented, but it stands for a potential
    readahead operation. We'll learn what readahead is a bit later.
  • 536514808 + 8 – this read operation starts at block 536514808 and is 8
    blocks in size.
  • [questdb-ilpwrit] – the operation was started by a QuestDB ILP writer
    thread.

Armed with this knowledge, we can use debugfs to follow the block number and
find the corresponding inode:
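A debugfs invocation of this shape maps a filesystem block to an inode (the
partition name is an example, and note that blktrace reports 512-byte sectors,
so the number may need converting to the filesystem block size and adjusting
for the partition offset):

```shell
# Ask debugfs which inode owns the given filesystem block.
sudo debugfs -R 'icheck 536514808' /dev/nvme0n1p1
```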

Finally, we can check what stands behind inode 8270377:
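The ncheck command resolves an inode to a path on the same partition (again,
the device name is an example):

```shell
# Resolve inode 8270377 to the file path(s) that reference it.
sudo debugfs -R 'ncheck 8270377' /dev/nvme0n1p1
```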

These steps have to be done per read event, although writing a script to
automate them is easy. After looking at the events, we found that the disk
reads corresponded to the column files. So, while the database was writing
these append-only files, somehow we ended up with disk reads.

Another interesting fact is that the customer had quite a few tables, around 50
of them, each holding a few hundred columns. As a result, the database had to
deal with a lot of column files. We were quite confident that our ILP code
wasn't supposed to be reading from these files, only writing to them.

Who might be reading the files? Maybe the OS?


Like many other databases, QuestDB uses buffered I/O, such as
read, write, and mmap, to deal with disk
operations. This means that whenever we write something to a file, the kernel
writes the modified data to a number of pages in the page cache and marks them
as dirty.

The page cache is a special transparent in-memory cache used by Linux to keep
recently read data from the disk, as well as recently modified data to be
written to the disk. The cached data is organized in pages, which are 4KB in
size on most distributions and CPU architectures.
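You can check the page size on your own machine with getconf:

```shell
# Print the virtual memory page size in bytes (typically 4096).
getconf PAGE_SIZE
```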

It is also impossible to limit the amount of RAM used for the page cache, as
the kernel tries to use all available RAM for it. Older pages are evicted from
the page cache to allow new page allocations for applications or the OS. In
most cases, this happens transparently for the application.

With the default
cairo.commit.mode value,
QuestDB doesn't make explicit fsync/msync calls to flush the column
files to the disk, so it fully relies on the kernel to flush the dirty pages.
Hence, we need to understand better what to expect from the kernel before
hypothesizing about our "phantom reads" case.

As we already know, the OS doesn't write file data modifications to the disk
immediately. Instead, it writes them to the page cache. This is called the
write-back caching strategy. Write-back assumes that a background process is
responsible for writing the dirty pages back to the disk. In Linux, this is
done by pdflush, a set of kernel threads responsible for dirty page writeback.
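You can watch how much dirty data is waiting for writeback at any moment via
/proc/meminfo:

```shell
# Dirty: modified pages awaiting writeback; Writeback: pages being written now.
grep -E '^(Dirty|Writeback):' /proc/meminfo
```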

There are several pdflush settings available for configuration. Here are the
most important ones:

  • dirty_background_ratio – while the percentage of dirty pages is less than
    this setting, dirty pages stay in memory until they're old enough. Once the
    number of dirty pages exceeds this ratio, the kernel proactively wakes up
    pdflush. On Ubuntu and other Linux distros, this setting defaults to 10
    (10%).
  • dirty_ratio – when the percentage of dirty pages exceeds this ratio, the
    writes are no longer asynchronous. This means that the writing process (the
    database process in our case) will synchronously write pages to the disk.
    When this happens, the corresponding thread is put into "uninterruptible
    sleep" status (D status code in the top utility). This setting usually
    defaults to 20 (20%).
  • dirty_expire_centisecs – this defines the age in centiseconds for dirty
    pages to be old enough for writeback. This setting usually defaults to
    3000 (30 seconds).
  • dirty_writeback_centisecs – defines the interval at which the pdflush
    process wakes up. This setting usually defaults to 500 (5 seconds).

The current values of the above settings can be checked via the /proc virtual
file system:
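For example (the values shown in comments are common defaults; yours may
differ):

```shell
cat /proc/sys/vm/dirty_background_ratio    # e.g. 10
cat /proc/sys/vm/dirty_ratio               # e.g. 20
cat /proc/sys/vm/dirty_expire_centisecs    # e.g. 3000
cat /proc/sys/vm/dirty_writeback_centisecs # e.g. 500
```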

It's also important to note that the above percentages are calculated based on
the total reclaimable memory, not the total RAM available on the machine. If
your application doesn't produce a lot of dirty pages and there's plenty of
RAM, all disk writes are done asynchronously by pdflush.

However, if the amount of memory available to the page cache is low, pdflush
will be writing the data most of the time, with high chances of the application
being put into "uninterruptible sleep" status and blocked on writes.

Tweaking these values didn't give us much. Remember that the customer was
writing to a high number of columns? This means the database had to allocate
some memory per column to deal with out-of-order
(O3) writes, leaving less memory available to the page cache. We first checked
that, and indeed most of the RAM was used by the database process. Tweaking
cairo.o3.column.memory.size from the default 16MB down to 256KB helped to
reduce the disk read rate significantly, so the problem did indeed have
something to do with memory pressure. Don't worry if you don't understand this
paragraph in detail. Here is the most important bit: reducing the database
memory usage reduced the number of reads. That's a useful clue.

But what was the actual reason for the disk reads?


To answer this question, we need to understand the disk reading part of the
page cache better. To keep things simple, we'll focus on mmap-based I/O. As
soon as you mmap a file, the kernel allocates page table entries (PTEs) in the
virtual memory to reserve an address range for your file, but it doesn't read
the file contents at this point. The actual data is read into the page when you
access the allocated memory, i.e. start reading (LOAD instruction on x86) or
writing (STORE instruction on x86) the memory.

When this happens for the first time, the
memory management unit
(MMU) of your CPU signals a special event called a "page fault." A page fault
means that the accessed memory belongs to a PTE with no allocated physical
memory. The kernel handles a page fault in two different ways:

  • If the page cache already has the relevant file data in memory, perhaps
    belonging to the same file opened by another process, the kernel simply
    updates the PTE to map to the existing page. This is called a minor page
    fault.
  • If the file data isn't cached yet, the kernel has to block the application,
    read the page, and only then update the PTE. This is called a major page
    fault.
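You can observe both counters for a running process. In /proc/[pid]/stat,
fields 10 and 12 hold the minor and major fault counts (shown here for the awk
process reading its own entry):

```shell
# Fields 10 and 12 of /proc/self/stat are the minor/major page fault counters.
awk '{print "minor faults:", $10, " major faults:", $12}' /proc/self/stat
```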

As you can guess, major page faults are much more expensive than minor ones,
so Linux tries to minimize their number with an optimization called
readahead (sometimes also referred to
as "fault-ahead" or "pre-fault").

At the conceptual level, readahead is responsible for reading data that the
application hasn't explicitly requested. When you first access (either read
or write) a few bytes of a just-opened file, a major page fault happens, and
the OS reads the data corresponding to the requested page plus a number of
pages before and after it in the file. This is called "read-around." If you
keep accessing subsequent pages of the file, the kernel recognizes the
sequential access pattern and starts readahead, trying to read the following
pages in batches ahead of time.

By doing this, Linux tries to optimize for sequential access patterns, as well
as to increase the chances of hitting an already cached page in the case of
random access.

Remember the disk read event from the blktrace output? The "A" suffix in the
"RA" operation type suggested that the disk read was part of readahead.
However, we knew that our workload was write-only, dealing with a large volume
of data. The issue was much more noticeable when there was not much memory left
for page cache purposes.

What if the pages were being evicted from the page cache too early, leading to
redundant readahead reads on a subsequent page access?

We can check this hypothesis by disabling readahead. This is as simple as
making the madvise
system call with the MADV_RANDOM flag. This hint tells the kernel that the
application is going to access the mmapped file randomly, so it should disable
readahead for that file.

That was it! No more "phantom reads":

iostat utility output with a low number of disk reads


As a result, we found sub-optimal kernel readahead behavior: ingestion into a
high number of column files under memory pressure led the kernel to start
readahead disk read operations, which you wouldn't expect from a write-only
load. The rest was as simple as
using madvise in our code to
disable readahead in the table writers.

What's the moral of this story? First, we wouldn't have been able to fix this
issue if we weren't aware of how buffered I/O works in Linux. Once you know
what to expect from the OS and what to check, finding and using the right
utilities is trivial. Second, any customer complaint, even one that's hard to
reproduce, is an opportunity to learn something new and make QuestDB better.

As usual, we encourage you to try out the latest QuestDB release and share your
feedback with our Slack Community. You can also
play with our live demo to see how fast it executes
your queries. And, of course, contributions to our
open-source project on GitHub are more
than welcome.
