
Lowering Online Serving Latency from 1.11s to 123.6ms on a Distributed SQL Database

2023-01-12 03:44:26

TL;DR:

This post tells how a website running on a distributed database reduced its online serving latency from 1.11 s to 417.7 ms, and then to 123.6 ms. We found that some lessons learned on MySQL can be applied throughout the optimization process. But when we optimize a distributed database, we need to consider more.

The OSS Insight website displays the data changes of GitHub events in real time. It's powered by TiDB Cloud, a MySQL-compatible distributed SQL database for elastic scale and real-time analytics.

Recently, to save costs, we tried to use lower-specification machines without affecting query efficiency and user experience. But our website and query responses slowed down.

The repository analysis page was loading, loading, and loading

How could we solve these problems on a distributed database? Could we use the methodology we learned on MySQL?

Analyzing the SQL execution plan

To identify the slow SQL statements, we used TiDB Cloud's Diagnosis page to sort SQL queries by their average latency.

For example, after the API server received a request, it executed the following SQL statement to obtain the number of issues in the vscode repository:

SELECT
    COUNT(DISTINCT number)
FROM github_events
WHERE
    repo_id = 41881900
    AND type = 'IssuesEvent';

However, if the open source repository is large, this query may take several seconds or more to execute.

Using EXPLAIN ANALYZE to troubleshoot query performance issues

In MySQL, when we troubleshoot query performance issues, we usually use the EXPLAIN ANALYZE <sql> statement to view the SQL statement's execution plan. We can use the execution plan to locate the problem. The same works for TiDB.

We executed the EXPLAIN ANALYZE statement:

EXPLAIN ANALYZE SELECT
    COUNT(DISTINCT number)
FROM github_events
WHERE
    repo_id = 41881900
    AND type = 'IssuesEvent';

The result showed that the query took 1.11 seconds to execute.

The query result

You can see that TiDB's EXPLAIN ANALYZE statement execution result was completely different from MySQL's. TiDB's execution plan gave us a clearer understanding of how this SQL statement was executed.

The execution plan shows:

  • This SQL statement was split into several subtasks. Some were on the root node, and others were on the tikv node.
  • The query fetched data from the partition:issue_event partition table.
  • This query did a range scan through the index index_github_events_on_repo_id(repo_id). This let the query narrow down the data scan quickly. This process only took 59 ms. It was the sum of the execution times of multiple concurrent tasks.
  • Besides IndexRangeScan, the query also used TableRowIDScan. This scan took 4.69 s, the sum of execution times for multiple concurrent subtasks.

From the execution times above, we determined that the query performance bottleneck was in the TableRowIDScan step.

We reran the EXPLAIN ANALYZE statement and found that the query was faster the second time. Why?

Why did TableRowIDScan take so long?

To find the reason why TableRowIDScan took so long, we need some basic knowledge of TiDB's underlying storage.

In TiDB, a table's data entries and indexes are stored on TiKV nodes as key-value pairs.

  • For an index, the key is the combination of the index value and the row_id (for a non-clustered index) or the primary key (for a clustered index). The row_id or primary key indicates where the data is stored.
  • For a data entry, the key is the combination of the table ID and the row_id or primary key. The value part is the combination of this row of data.
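
To make this more concrete, the key layout described in the TiDB documentation looks roughly like the sketch below. This is a simplification; the real encoding adds prefixes and type information:

Row record:            key = t{table_id}_r{row_id or primary key}                      → value = [col1, col2, col3, ...]
Secondary index entry: key = t{table_id}_i{index_id}_{indexed column values}_{row_id}  → value = null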

This graph shows how IndexLookup is executed in the execution plan:

The logical structure

This is the logical structure, not the physical storage structure.

In the query above, TiDB uses the query condition repo_id=41881900 to filter out all the row numbers (row_id) related to the repository in the secondary index index_github_events_on_repo_id. The query needs the number column data, but the secondary index doesn't provide it. Therefore, TiDB must execute IndexLookup to find the corresponding rows in the table based on the obtained row_id values (the TableRowIDScan step).

The rows are probably scattered across different data blocks and stored on the hard disk. This causes TiDB to perform a large number of I/O operations to read data from different data blocks or even different machine nodes.

Why was EXPLAIN ANALYZE faster the second time?

In EXPLAIN ANALYZE's execution result, we saw that the “execution info” column corresponding to the TableRowIDScan step contained this information:

block: {cache_hit_count: 2755559, read_count: 179510, read_byte: 4.07 GB}

We thought this had something to do with TiKV. TiKV had read a very large number of data blocks from the disk. Because the data blocks read from the disk were cached in memory during the first execution, 2.75 million data blocks could be read directly from memory instead of being retrieved from the hard disk. This made the TableRowIDScan step much faster, and the query was faster overall.

However, we believed that user queries are random. For example, a user might look up data for the vscode repository and then go to a kubernetes repository. TiKV's memory couldn't cache all the data blocks on all the drives. Therefore, this didn't solve our problem, but it reminded us that when we analyze SQL execution efficiency, we need to exclude cache effects.

Using a covering index to avoid executing TableRowIDScan

Could we avoid executing TableRowIDScan in IndexLookup?

In MySQL, a covering index prevents the database from doing index lookups after index filtering. We wanted to apply this to OSS Insight. In our TiDB database, we tried to create a composite index to achieve index coverage.

When we created a composite index with multiple columns, we needed to pay attention to the column order. Our goals were to allow a composite index to be used by as many queries as possible, to help these queries narrow the scope of data scans as quickly as possible, and to provide as many fields as possible in the query. When we created a composite index, we followed this order (an illustrative example follows the list):

  1. Columns that had high differentiation and could be used as equality conditions in the WHERE clause, like repo_id
  2. Columns that didn't have high differentiation but could be used as equality conditions in the WHERE clause, like type and action
  3. Columns that could be used as range query conditions in the WHERE clause, like created_at
  4. Redundant columns that were not used as filter conditions but were used in the query, such as number and push_size
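
For illustration only (this is a sketch of the ordering principle, not the index we create in the next step), an index on github_events that includes all four kinds of columns could look like this:

CREATE INDEX index_ge_on_repo_id_type_action_created_at_number ON github_events(repo_id, type, action, created_at, number);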

We used the CREATE INDEX statement to create a composite index in the database:

CREATE INDEX index_github_events_on_repo_id_type_number ON github_events(repo_id, type, number);

After we created the index and ran the SQL statement again, the query speed was significantly faster. We viewed the execution plan through EXPLAIN ANALYZE and found that the execution plan became simpler. The IndexLookup and TableRowIDScan steps were gone. The query took only 417.7 ms.

The result of the EXPLAIN query. This query cost 417.7 ms

So we knew that our query could get all the data it needed by performing an IndexRangeScan on the new index. Because this composite index included the number field, TiDB did not need to perform IndexLookup to get data from the table. This reduced a lot of I/O operations.

IndexRangeScan in the non-clustered table
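
As a side note: if the optimizer had not picked the new index on its own, one way to check index coverage would have been to force the index with a MySQL-style USE INDEX hint, which TiDB also accepts, and re-run EXPLAIN ANALYZE. This is a sketch, not a step we actually needed:

EXPLAIN ANALYZE SELECT
    COUNT(DISTINCT number)
FROM github_events USE INDEX (index_github_events_on_repo_id_type_number)
WHERE
    repo_id = 41881900
    AND type = 'IssuesEvent';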

Pushing down computing to further reduce query latency

For a query that needed to obtain 270,000 rows of data, 417.7 ms was quite a short execution time. But could we improve it even more?

We thought this depended on TiDB's architecture, which separates the computing and storage layers. This is different from MySQL.


In TiDB:

  • The tidb-server node computes data. It corresponds to root in the execution plan.
  • The tikv-server node stores the data. It corresponds to cop[tikv] in the execution plan.

Generally, an SQL statement is split into multiple steps that execute with the cooperation of computing and storage nodes.

When we executed the SQL statement in this article, TiDB obtained the data of the github_events table from tikv-server and performed the aggregate calculation of the COUNT function on tidb-server.

SELECT
    COUNT(DISTINCT number)
FROM github_events
WHERE
    repo_id = 41881900
    AND type = 'IssuesEvent';

The execution plan indicated that when TiDB was performing IndexReader, tidb-server needed to read 270,000 rows of data from tikv-server through the network. This was time-consuming.

tidb-server read 270,000 rows of data from tikv-server

How could we avoid such a large network transmission? Although the query needed to obtain a large amount of data, the final calculation result was only a number. Could we complete the COUNT aggregation calculation on tikv-server and return only the result to tidb-server?

TiDB has implemented this idea through the coprocessor on tikv-server. This optimization process is called computing pushdown.

The execution plan indicated that our SQL query didn't do this. Why? We checked the TiDB documentation and learned that:

Usually, aggregate functions with the DISTINCT option are executed in the TiDB layer in a single-threaded execution model.

This meant that our SQL statement couldn't use computing pushdown.

SELECT
    COUNT(DISTINCT number)
FROM github_events
WHERE
    repo_id = 41881900
    AND type = 'IssuesEvent';

Therefore, we removed the DISTINCT keyword.

For the github_events table, an issue only generated one event with the IssuesEvent type and opened action. We could get the total number of unique issues by adding the condition action = 'opened'. This way, we didn't need to use the DISTINCT keyword for deduplication.

SELECT
    COUNT(number)
FROM github_events
WHERE
    repo_id = 41881900 -- vscode
    AND type = 'IssuesEvent'
    AND action = 'opened';

The composite index we created lacked the action column. This caused the query's index coverage to fail. So we created a new composite index:

CREATE INDEX index_github_events_on_repo_id_type_action_number ON github_events(repo_id, type, action, number);

After we created the index, we checked the execution plan of the modified SQL statement through the EXPLAIN ANALYZE statement. We found that:

  • Because we added a new filter action='opened', the number of rows to scan decreased from 270,000 to 140,000.
  • tikv-server executed the StreamAgg operator, which performed the aggregate calculation of the COUNT function. This indicated that the calculation had been pushed down to the TiKV coprocessor for execution.
  • tidb-server only needed to obtain two rows of data from tikv-server through the network. This greatly reduced the amount of data transmitted.
  • The query only took 123.6 ms.
+-------------------------+---------+---------+-----------+-------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------+-----------+------+
| id | estRows | actRows | task | access object | execution info | operator info | memory | disk |
+-------------------------+---------+---------+-----------+-------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------+-----------+------+
| StreamAgg_28 | 1.00 | 1 | root | | time:123.6ms, loops:2 | funcs:count(Column#43)->Column#34 | 388 Bytes | N/A |
| └─IndexReader_29 | 1.00 | 2 | root | partition:issues_event | time:123.6ms, loops:2, cop_task: {num: 2, max: 123.5ms, min: 1.5ms, avg: 62.5ms, p95: 123.5ms, max_proc_keys: 131360, p95_proc_keys: 131360, tot_proc: 115ms, tot_wait: 1ms, rpc_num: 2, rpc_time: 125ms, copr_cache_hit_ratio: 0.50, distsql_concurrency: 15} | index:StreamAgg_11 | 590 Bytes | N/A |
|   └─StreamAgg_11 | 1.00 | 2 | cop[tikv] | | tikv_task:{proc max:116ms, min:8ms, avg: 62ms, p80:116ms, p95:116ms, iters:139, tasks:2}, scan_detail: {total_process_keys: 131360, total_process_keys_size: 23603556, total_keys: 131564, get_snapshot_time: 1ms, rocksdb: {delete_skipped_count: 320, key_skipped_count: 131883, block: {cache_hit_count: 307, read_count: 1, read_byte: 63.9 KB, read_time: 60.2µs}}} | funcs:count(gharchive_dev.github_events.number)->Column#43 | N/A | N/A |
|     └─IndexRangeScan_15 | 7.00 | 141179 | cop[tikv] | table:github_events, index:index_ge_on_repo_id_type_action_created_at_number(repo_id, type, action, created_at, number) | tikv_task:{proc max:116ms, min:8ms, avg: 62ms, p80:116ms, p95:116ms, iters:139, tasks:2} | range:[41881900 "IssuesEvent" "opened",41881900 "IssuesEvent" "opened"], keep order:false | N/A | N/A |
+-------------------------+---------+---------+-----------+-------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------+-----------+------+

Applying what we learned to other queries

Through our analysis and optimizations, the query latency was significantly reduced:

1.11 s → 417.7 ms → 123.6 ms

We applied what we learned to other queries and created the following composite indexes in the github_events table (a sketch of one of them as a CREATE INDEX statement follows the list):

index_ge_on_repo_id_type_action_pr_merged_created_at_add_del

index_ge_on_repo_id_type_action_created_at_number_pdsize_psize

index_ge_on_repo_id_type_action_created_at_actor_login

index_ge_on_creator_id_type_action_merged_created_at_add_del

index_ge_on_actor_id_type_action_created_at_repo_id_commits
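
For example, the third index above presumably corresponds to a statement like the following. The column list here is inferred from the index name, so treat it as a hypothetical sketch rather than our exact DDL:

CREATE INDEX index_ge_on_repo_id_type_action_created_at_actor_login ON github_events(repo_id, type, action, created_at, actor_login);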

These composite indexes covered more than 20 analytical queries on the repository analysis and personal analysis pages of the OSS Insight website. This improved our website's overall loading speed.

Some lessons we learned on MySQL can be applied throughout the optimization process. But we need to consider more when we optimize query performance in a distributed database. We also recommend that you read Performance Tuning in the TiDB documentation. It will give you a more professional and comprehensive guide to performance optimization.
