
Choosing a Postgres Primary Key

2023-01-20 04:55:30

Primary keys are vital. They uniquely identify rows of data in tables, and make it easy to fetch data. The job of a database is to store and retrieve data, and you're going to have a tough time finding data without a good primary key or a good index.

Sometimes it makes sense to use a "natural key" (like an email column in a users table), and sometimes it's better to use a "surrogate key", a value made for the purpose of identifying a row (which has no other meaning).

At first glance, the question of which primary key to use is easy! Just throw an integer/serial on there, right? Numeric IDs are cool, but what about random-value IDs or Universally Unique IDentifiers (UUIDs)?

It turns out the question of which identifier (and in this case, which UUID) to use is complicated. We'll dive into some of the complexity and inherent trade-offs, and figure things out:

  • What are the choices for identifiers?
  • If we choose to use/add UUIDs, which ones should we choose?
  • How do we get these UUIDs into Postgres?
  • Which UUIDs perform best?

But first, a quick history lesson.

A brief history of identifiers and why we use them


Let's think about identifying rows of data from first principles. What's the first way you might think of identifying things? Assigning them numbers!

We can set up a table like this:

-- Let's enable access to case-insensitive text
CREATE EXTENSION IF NOT EXISTS citext;

-- Here's a basic users table
CREATE TABLE users (
  id integer PRIMARY KEY,
  email citext NOT NULL CHECK (LENGTH(email) < 255),
  name text NOT NULL
);

-- Let's assume we don't want two users with the exact same email
CREATE UNIQUE INDEX users_email_uniq ON users USING BTREE (email);

This looks great, but what should id be on new rows? I don't know; maybe the application can figure it out? What if it stores some value in memory? That doesn't seem right.

Maybe we could work out the next integer from what's in the table itself; we just need to be able to "count" upwards. We do have all of the users table's rows in there, so we should be able to do it:

INSERT INTO users (id, email, name)
SELECT COUNT(*) + 1, '', 'new_user' FROM users;

After running that query, we can double-check our results:

 id | email | name
----+-------+----------
  1 |       | new user
  2 |       | new user

Using COUNT(*) in our query is not the most efficient (or even easiest) solution though, and hopefully it's clear why: counting out a sequence of numbers for primary keys is a feature built into Postgres!
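Beyond efficiency, COUNT(*) + 1 isn't even correct under concurrency. Here's a toy sketch in Python (a stand-in for two concurrent transactions, no database involved):

```python
# The "table" is just a list; both "transactions" compute COUNT(*) + 1
# before either commits.
table = [{"id": 1}, {"id": 2}]

def next_id(rows):
    """Mimics SELECT COUNT(*) + 1 FROM users."""
    return len(rows) + 1

# Two concurrent inserts both run their COUNT first...
id_a = next_id(table)
id_b = next_id(table)

# ...and collide on the same primary key value.
assert id_a == id_b == 3
```

With a PRIMARY KEY constraint, the second insert would fail outright; without one, you'd silently get duplicate IDs.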

serial/bigserial is the right tool in our toolbox for maintaining a shared, auto-incrementing sequence of numbers. Let's pretend we read the Postgres documentation and use those instead.


serial is essentially a convenient macro for using Postgres sequences, a database-managed auto-incrementing stream of integers.

Let's hear it from the docs:

The data types smallserial, serial and bigserial are not true types, but merely a notational convenience for creating unique identifier columns (similar to the AUTO_INCREMENT property supported by some other databases).

Using a serial column to create the users table would look like this:

CREATE TABLE users (
  id serial PRIMARY KEY,
  email citext NOT NULL CHECK (LENGTH(email) < 255),
  name text NOT NULL
);

OK, now let's try inserting into it; we shouldn't have to specify id:

INSERT INTO users (email, name)
VALUES ('', 'new user');

 id | email | name
----+-------+----------
  1 |       | new user

It works, as you might expect; now the application doesn't have to somehow magically know the right ID to use when inserting.

But what does serial actually do? Using a serial column is operationally similar to the following SQL:

CREATE SEQUENCE tablename_colname_seq AS integer;
CREATE TABLE tablename (
    colname integer NOT NULL DEFAULT nextval('tablename_colname_seq')
);
ALTER SEQUENCE tablename_colname_seq OWNED BY tablename.colname;

Back in application land, the INSERT statement returns and provides the new id the database assigned to our new row. Multiple application instances don't need to coordinate which ID to use: they just insert, and find out from the database.

We've taken a somewhat meandering path to get here, but this is the standard solution for most reasonable database schemas.

Why not stop at serial?

There are a few issues with sequences:

  • When writing automation that simply iterates through id values, note that serial columns can have gaps, even if you never DELETE (e.g. if an INSERT was rolled back; sequences live outside transactions).
  • When used from outside code, serial may leak some data or give attackers an edge (e.g., if one ID works, why not try the next one?)
  • serial is PostgreSQL-specific (i.e. not SQL-standards compliant)

Don't be too put off by these reasons; serial is still the go-to for most use cases.

Even the last point, about serial not being standards compliant, is solved in Postgres 10+ by using…

integer/biginteger (again!)

Postgres 10 added support for the IDENTITY column syntax in CREATE TABLE (EDB has a great writeup on the addition).

This means we can happily go back to using integer/biginteger:

CREATE TABLE users (
  id integer PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
  email citext NOT NULL CHECK (LENGTH(email) < 255),
  name text NOT NULL
);

So how does this work? Postgres does the same thing under the covers: it generates a sequence. As this syntax is standards compliant, it's also the recommended practice for DBAs going forward, for the sake of the realm.

So integers are great, but data leakage is still a problem. How do we fix that? Well, make the numbers random, obviously.

Random Numeric IDs

Let's say the application using the DB has some Python code like the following:

from random import randrange

from models import User

MAX_RANDOM_USER_ID = 1_000_000_000

def create_user():
    """Add a new user to the database"""
    user_id = randrange(1, MAX_RANDOM_USER_ID)
    user = User(id=user_id, email="", name="new user")

That looks good, but there's a problem: random is a pseudo-random generator.

Pseudo-random numbers are not what we want for ensuring user IDs can't be easily guessed or made to collide. It's possible to get the exact same sequence of values out of a pseudo-random number generator by using the same seed value.

Sometimes you want pseudo-random behavior (for example for testing or fuzzing), but it's generally not desired for production systems that may run from a duplicated, identical application image, since they could have weak pseudo-random seed initialization.
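Here's a small illustration of that failure mode (a hypothetical sketch, not code from the application above):

```python
from random import Random

# Two application instances that initialize from the same seed (say, a
# seed baked into a duplicated container image) produce identical
# "random" ID sequences:
instance_a = Random(1234)
instance_b = Random(1234)

ids_a = [instance_a.randrange(1, 1_000_000_000) for _ in range(5)]
ids_b = [instance_b.randrange(1, 1_000_000_000) for _ in range(5)]

assert ids_a == ids_b  # fully predictable, and guaranteed to collide
```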

(Secure) Random Numeric IDs

At the very least we need properly secure random numbers; for that we want Python's secrets module:

from secrets import randbelow

# ...

def create_user():
    """Add a new user to the database (using secure random numbers)"""
    user_id = randbelow(MAX_RANDOM_USER_ID) + 1
    user = User(id=user_id, email="", name="new user")

Now we have secure random values coming in for our user IDs. Values like 583247 and 8923916 are cool and all, but there are a number of problems:

  • These numbers are random and fairly inscrutable
  • The keyspace is fairly small (maybe fine for comments on a popular website, but not for IDs!)
  • People can still technically check all of them (the guessing space is 1 to MAX_RANDOM_USER_ID!)
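The "fairly small keyspace" point can be made concrete with the birthday problem. A back-of-the-envelope sketch (the formula is the standard approximation; the numbers are illustrative):

```python
from math import exp

# Birthday-problem approximation: with a keyspace of N values and n IDs
# already issued, P(at least one collision) ≈ 1 - exp(-n**2 / (2 * N)).
def collision_probability(n_ids: int, keyspace: int) -> float:
    return 1 - exp(-(n_ids ** 2) / (2 * keyspace))

# With a 1-billion keyspace, ~37,000 users already gives roughly
# even odds of at least one collision:
p = collision_probability(37_000, 1_000_000_000)
```

In other words, a billion possible values sounds like a lot until your application has to dodge collisions on every insert.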

We need something better.

(Secure) Random UUIDs

Along come UUIDs. You're probably used to seeing them by now: values like this UUIDv4:

468e8075-5815-4fe2-80d3-45a31827954b

They're very random (almost always generated with secure random sources), and while they're even worse to remember, they're near impossible to practically guess: the search space is just too large!

More importantly, UUIDs introduce method to the madness: different versions of UUID are derived in different ways, combined with other sources of randomness or known values.

There are many versions of UUID, but let's focus on the ones we're most likely to use/see day to day.


UUIDv1

Version 1 UUIDs have two components:

  • a 60-bit date-time (at nanosecond precision)
  • a 48-bit MAC address

But where's the randomness? Well, v1s assume that you won't generate a ton of values in the same nanosecond (and there are some extra bits reserved for differentiating even when you do), but another source is the MAC address. MAC addresses (usually) uniquely identify network cards, which is a security risk, so those bits can be made random.
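Python's standard uuid module makes the v1 structure easy to poke at. This sketch pulls the embedded timestamp back out (the conversion constant is the standard RFC 4122 offset between the Gregorian epoch and the Unix epoch):

```python
import uuid
from datetime import datetime, timezone

u = uuid.uuid1()  # uses the host MAC by default; pass node= to randomize

# u.time is the 60-bit count of 100-nanosecond intervals since
# 1582-10-15, the Gregorian epoch RFC 4122 uses.
GREGORIAN_TO_UNIX_100NS = 0x01B21DD213814000
unix_seconds = (u.time - GREGORIAN_TO_UNIX_100NS) / 10_000_000
created = datetime.fromtimestamp(unix_seconds, tz=timezone.utc)

assert u.version == 1  # the creation time round-trips out of the value
```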

Here's what a UUIDv1 looks like:


You can generate v1 UUIDs in Postgres natively thanks to the uuid-ossp contrib module. Here's how to generate a v1 UUID with a random MAC address:


SELECT uuid_generate_v1mc();

(1 row)


UUIDv4

Version 4 UUIDs use all of the available bits for randomness: 122 bits' worth!

UUIDv4s look like this:


These don't have a time component, but what they lack in time they make up for in randomness: it's very unlikely for them to collide, so they make for excellent Globally Unique IDentifiers ("GUID"s).
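For completeness, the "122 bits" figure can be checked from Python's standard uuid module (a side demonstration, not part of the Postgres setup):

```python
import uuid

u = uuid.uuid4()

# Of the 128 bits, 4 encode the version and 2 encode the variant;
# the remaining 122 come from the OS's secure random source.
assert u.version == 4
assert u.variant == uuid.RFC_4122
```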

We can generate them in Postgres like this (with uuid-ossp):

SELECT uuid_generate_v4();

(1 row)

Since Postgres would catch a collision on a PRIMARY KEY or UNIQUE INDEX column, we're done, right? If we want to generate UUIDs, all we have to do is choose UUIDv1 or v4, and we won't leak any schema structure information to the outside world, right?

It's a workable solution, but as you might expect, it isn't that simple.

The post-UUIDv1/v4 era: A Cambrian explosion of identifiers


UUIDv1 and v4 were a start, but they weren't enough for many companies out there. There are a couple of shortcomings that plague both v1 and v4:

  • UUIDs are twice the size of bigint/bigserial
  • UUIDv1s contain a time component, but they're not lexicographically sortable (this means they SORT terribly, relative to an integer or a timestamp column)
  • UUIDv1s are less random than UUIDv4s, and can collide/overlap in close enough time intervals, at large scale
  • UUIDv4s index terribly, as they're essentially random values (obviously, they SORT terribly as well)
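The lexicographic-sorting complaint about UUIDv1 is easy to demonstrate: RFC 4122 puts the low 32 bits of the timestamp first, so text order breaks whenever those bits roll over. A constructed example (the node and clock-sequence values here are arbitrary):

```python
import uuid

def v1_from_timestamp(ts: int) -> uuid.UUID:
    """Build a UUIDv1 from a 60-bit timestamp (100ns units)."""
    time_low = ts & 0xFFFFFFFF           # low 32 bits come FIRST in the text form
    time_mid = (ts >> 32) & 0xFFFF
    time_hi_version = ((ts >> 48) & 0x0FFF) | 0x1000  # version 1
    return uuid.UUID(fields=(time_low, time_mid, time_hi_version,
                             0x80, 0x00, 0x123456789ABC))

t_early = 0x1_FFFF_FFFF        # just before time_low rolls over
t_late = t_early + 1           # one 100ns tick later

early, late = v1_from_timestamp(t_early), v1_from_timestamp(t_late)

# The later UUID sorts *before* the earlier one as text:
assert str(late) < str(early)
```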

Many of the world's biggest companies generated UUIDs at speeds that made all of these deficiencies a problem.

A Cambrian explosion of UUIDs resulted, as seen by the IETF; this led to the new UUID formats (v6, v7, v8) being published in 2021.

Here's a quick list (from that IETF doc):

That is… a lot of UUIDs. They're all slightly different, but the innovation was summed up by the IETF:

An inspection of these implementations details the following trends that help define this standard:

  • Timestamps MUST be k-sortable. That is, values within or close to the same timestamp are ordered properly by sorting algorithms.
  • Timestamps SHOULD be big-endian with the most-significant bits of the time embedded as-is without reordering.
  • Timestamps SHOULD utilize millisecond precision and Unix Epoch as timestamp source. Although, there is some variation to this among implementations depending on the application requirements.
  • The ID format SHOULD be lexicographically sortable while in the textual representation.
  • IDs MUST ensure proper embedded sequencing to facilitate sorting when multiple UUIDs are created during a given timestamp.
  • IDs MUST NOT require unique network identifiers as part of achieving uniqueness.
  • Distributed nodes MUST be able to create collision-resistant Unique IDs without consulting a centralized resource.

The IETF went on to introduce three new types of UUIDs that have the properties these companies were looking for: UUIDv6, UUIDv7, and UUIDv8.

So what's the difference, you ask?

  • UUIDv6 – 62 bits of Gregorian time + 48 bits of randomness
  • UUIDv7 – 36 bits of big-endian unix timestamp (seconds since epoch + leap seconds, w/ optional sub-second precision) + variable randomness up to 62 bits
  • UUIDv8 – variable-size timestamp (32/48/60/64 bits) + variable-size clock (8/12 bits) + variable randomness (54/62 bits)

It's not quite easy to work out what all this means, but let's boil it down:

  • All of these UUIDs sort properly (the "high bits" of the time come first, like putting the year before the month: "2022/07")
  • UUIDv6 requires randomness
  • The data contained in the UUID can be variable (e.g. UUIDv8); this means you can include bytes that mean something else (e.g. an encoding of the compute region you're running in)
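To see why the "high bits first" layout matters, here's a hand-rolled sketch of a UUIDv7-style value. Note it follows the final published layout (48-bit big-endian millisecond timestamp) rather than the 36-bit seconds field from the 2021 draft quoted above, and it's illustrative only, not how pg_idkit does it:

```python
import os
import time

def uuid7_like(unix_ms: int) -> bytes:
    """Build a UUIDv7-style value: 48-bit big-endian millisecond
    timestamp first, then version/variant bits, then randomness."""
    rand = bytearray(os.urandom(10))
    rand[0] = (rand[0] & 0x0F) | 0x70  # version nibble -> 7
    rand[2] = (rand[2] & 0x3F) | 0x80  # RFC variant bits
    return unix_ms.to_bytes(6, "big") + bytes(rand)

now_ms = int(time.time() * 1000)
earlier, later = uuid7_like(now_ms), uuid7_like(now_ms + 1)

# Because the high timestamp bits come first, byte order (and therefore
# hex-string order) matches creation order:
assert earlier < later
```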

Alright, done hearing about UUIDs? Let's get to the fun part.

Benchmarking ID generation with uuid-ossp and pg_idkit

With the history lesson behind us, let's benchmark these ID generation mechanisms against each other! For UUIDv1 and UUIDv4 we can use uuid-ossp.


Unfortunately, uuid-ossp isn't quite so advanced as to have many of the newer UUIDs we've been discussing, so we'll pull in pg_idkit here.

pg_idkit is built with Rust, so it gives us access to the following ID generation crates:

For each kind of UUID, we'll test the following:

  • Generation speed: How fast can I generate IDs (for example, 1,000,000 of them)?
  • Table & index size: How much larger do tables and associated indices get?

Generation speed

Generation speed is pretty easy to test: we can enable timing mode in psql and run a simple benchmark with generate_series:

-- Enable psql timing mode
\timing

-- Generate IDs 1 million times with ksuid
SELECT COUNT(idkit_ksuid_generate()) FROM generate_series(1, 1000000);

Running all of the ID generation mechanisms on a single core of my machine (which happens to be an Oryx Pro), the lowest of 5 runs for each ID looks like this:

Generation speed test

To be fair, generation speed shouldn't be a deal breaker, since it's unlikely to be the bottleneck for most applications. That said, it's good to have some data on where each ID generation mechanism lands.

Table & index size

We can check the size of our tables & related indices with this query (after running VACUUM):

SELECT
   relname as table_name,
   pg_size_pretty(pg_total_relation_size(relid)) as "Table Size",
   pg_size_pretty(pg_indexes_size(relid)) as "Index Size",
   pg_size_pretty(pg_relation_size(relid)) as "Total Size"
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC;

Here are the sizes in tabular form:

Sizes in tabular form

These numbers are largely a reflection of the lengths of the default settings of pg_idkit, but they're probably worth having in front of you anyway.

With this, we probably have enough information to make a decision (and a new library to generate our UUIDs with)!

Which ID should you use?

As usual, it depends; you didn't think it would be that easy, did you?

All I can offer are some general rules of thumb that will hopefully work for you:

  • integers and serial have obvious benefits for simplicity, storage, and sortability. You might not want to expose them to the world, though.
  • If you want the ultimate in collision avoidance, UUIDv4 is OK
  • UUIDv1 could have been great, but it doesn't lexicographically sort.
  • The best time-based ID seems to be xid, with good performance and sort friendliness
  • If you want to be a little more standards-oriented, UUIDv6/v7

As usual, the best results will come from weighing all the options, finding what's best for your use case, and doing appropriate testing on your own data.

Possible Improvements

We've done some good exploration so far, but here are some ideas for interesting use cases for pg_idkit, and for measuring the impact of ID generation using it.

Use case: Deriving our created_at columns from our IDs

One interesting feature would be using (at least partially) time-based UUIDs for created_at columns; we could save space by virtualizing our created_at columns:

-- At table creation
CREATE TABLE users (
  id text PRIMARY KEY DEFAULT idkit_ksuid_generate(),
  name text,
  email text
);

-- An example query for a specific KSUID that uses created_at
SELECT *, idkit_ksuid_extract_timestamptz(id)
FROM users
WHERE id = '0F755149A55730412B0AEC0E3B5B089C14B5B58D';

Ideally we'd use the GENERATED ALWAYS AS ( ... ) syntax for generated columns while creating the table, but as of the time of this post Postgres does not yet support virtual generated columns (only stored ones).

Benchmarking: Measuring index fragmentation

How fragmented do our indices get after using each of these methods?

Luckily for us, Postgres has the pgstattuple extension, so we can find out (thanks to Laurenz Albe on StackOverflow).

Integrating and including these checks in the pg_idkit README would greatly help people looking to make a decision.

Benchmarking: Measuring SORT friendliness

Another great metric to measure would be the performance of these indices on certain common SORT patterns. While this is inherently workload-specific, it would be great to pick a workload and see what we get.

In most code bases, simple WHERE queries with SORTs abound, and one of the big benefits of UUIDv6, UUIDv7, and the other alternatives is lexicographic sorting, after all.

Just how good a given ID generation method is at maintaining locality would also be good to know.

Creating a function like idkit_uuidv1_extract_timestamptz and using it in a "functional index" (an index on an expression) could solve the sort unfriendliness of UUIDv1 as well!


Identifiers have a long history and are surprisingly unsolved at present. It can be confusing, but thanks to the power of Postgres we don't have to over-think it: tables can be migrated from one ID pattern to another, and we can use DDL statements in transactions.

Hopefully this article helps you head off some bikeshedding with your teammates the next time you discuss which IDs are best to use.
