Building message streaming in Rust

2024-01-04 09:29:34


Over half a year ago (in April, to be exact), I finally decided to learn Rust for good. My previous attempt, during the 2022 Advent of Code, had failed rather quickly after just a few days of completing the exercises – I finally realized that I needed a real project to work on. For the past few years, I have been dealing with different kinds of distributed systems (mostly using C#), including the typical microservices architecture as well as Web3. Regardless of their nature, some form of messaging between the independent components was always required. I had a chance to use tools such as RabbitMQ, ZeroMQ, Kafka or Aeron (just to name a few), as well as to implement low-level peer-to-peer communication protocols.

After a few months of trying to decide (or just staying in limbo, I suppose) what would be the best project to work on, I decided to build a message streaming platform (keep in mind that streaming is not the same as a regular message broker). The other reason (besides getting to know Rust) was to truly understand the internals of messaging systems and the trade-offs made by their developers – some of them being the direct implication of the theory of distributed systems (ordering, consistency, partitioning and so on), while others the result of implementation details (programming language, OS, hardware and so on).

And that is how Iggy was born. The name is an abbreviation of the Italian Greyhound (yes, I own two of them), small yet extremely fast dogs, the best in their class.


Therefore, what I want, or actually what we want (since there is more than one of us working on it already), Iggy to be is the best message streaming platform in its class. Lightweight in terms of resource consumption, fast (and predictable) when it comes to throughput and latency, and easy to use when speaking of its API, SDK and the configuration of the project.


At the very beginning, Iggy had rather limited functionality, and everything was handled using the QUIC protocol based on the Quinn library. You could connect multiple applications to the server and start exchanging messages between them, simply by appending the data to the stream (from the producer's perspective) and fetching the records on the consumer side by providing an offset (a numeric value specifying from which element in the stream you want to query the data) – this is pretty much the very basics of how a message streaming platform works in terms of the underlying infrastructure.
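The append/poll-by-offset model described above can be sketched in a few lines of self-contained Rust. This is purely an illustration of the concept, not the Iggy server or SDK API – the `Stream` type and its methods are made up for the example:

```rust
// Conceptual sketch of an append-only stream with offset-based polling.
struct Stream {
    messages: Vec<Vec<u8>>, // each entry is one message payload
}

impl Stream {
    fn new() -> Self {
        Self { messages: Vec::new() }
    }

    // Producer side: append a payload, returning the offset it was stored at.
    fn append(&mut self, payload: &[u8]) -> u64 {
        self.messages.push(payload.to_vec());
        (self.messages.len() - 1) as u64
    }

    // Consumer side: poll up to `count` messages starting from `offset`.
    fn poll(&self, offset: u64, count: usize) -> &[Vec<u8>] {
        let start = (offset as usize).min(self.messages.len());
        let end = (start + count).min(self.messages.len());
        &self.messages[start..end]
    }
}

fn main() {
    let mut stream = Stream::new();
    for i in 0..5 {
        stream.append(format!("message-{i}").as_bytes());
    }
    // Poll 2 messages starting from offset 3.
    let polled = stream.poll(3, 2);
    assert_eq!(polled.len(), 2);
    assert_eq!(polled[0], b"message-3");
}
```

The consumer is in full control of where it reads from: re-polling from an older offset simply replays the data, which is exactly what distinguishes a streaming log from a queue that deletes messages on delivery.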

After spending a few weeks on building the initial version, and then another few weeks on rewriting its core part (yes, prototyping + validation repeated in a continuous loop worked quite well), I managed to implement a persistent streaming server capable of parallel writes/reads to/from independent streams, supporting many distinct apps connected to it. Simply put, one could easily have many applications, and even thousands of streams (depending on how you decide to split your data between them, e.g. one stream for user related events, another one for the payment events and so on) and start producing & consuming messages without interfering with one another.

On top of this, support for the TCP and HTTP protocols was added. Under the hood, the typical architecture of streams – consisting of topics being split into partitions, which eventually operate on raw file data using so-called segments – has been implemented as well.
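To make the stream → topic → partition → segment hierarchy concrete, here is a simplified sketch of how an offset can be mapped to a segment file, assuming each segment holds a fixed number of messages. The real server sizes segments in bytes and keeps per-segment indexes; the naming scheme below is illustrative only:

```rust
// Illustrative only: locating a message inside a partition that is split
// into fixed-capacity segments stored as raw files on disk.
const MESSAGES_PER_SEGMENT: u64 = 1000;

// Returns (base offset of the segment, offset relative to that segment).
fn segment_for_offset(offset: u64) -> (u64, u64) {
    // Segments are commonly named by the offset of their first message.
    let base_offset = (offset / MESSAGES_PER_SEGMENT) * MESSAGES_PER_SEGMENT;
    let relative_offset = offset - base_offset;
    (base_offset, relative_offset)
}

fn segment_file_name(base_offset: u64) -> String {
    // Zero-padded so the files sort in offset order on disk.
    format!("{base_offset:020}.log")
}

fn main() {
    // Offset 2500 lands in the segment starting at offset 2000,
    // at position 500 within that segment's file.
    let (base, relative) = segment_for_offset(2500);
    assert_eq!(base, 2000);
    assert_eq!(relative, 500);
    assert_eq!(segment_file_name(base), "00000000000000002000.log");
}
```

Splitting a partition into segments is what makes retention cheap: expiring old data becomes deleting whole files rather than rewriting one giant log.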


It was one of the "aha" moments, when reimplementing parallel access to the data with the usage of the underlying synchronization mechanisms (RwLock etc.), optimized data structures e.g. for dealing with bytes, together with the Tokio work stealing approach, yielded great improvements to the overall throughput.
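The reader/writer locking mentioned above can be illustrated with the standard library's `RwLock` (the server itself works with async primitives on top of Tokio, but the principle is identical): many concurrent readers proceed under shared locks without blocking one another, while a write takes exclusive access.

```rust
use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    // Shared, append-only message log guarded by a reader-writer lock.
    let log: Arc<RwLock<Vec<String>>> = Arc::new(RwLock::new(Vec::new()));

    // One writer appends messages under an exclusive lock...
    let writer = {
        let log = Arc::clone(&log);
        thread::spawn(move || {
            for i in 0..100 {
                log.write().unwrap().push(format!("message-{i}"));
            }
        })
    };

    // ...while multiple readers poll concurrently under shared locks.
    let readers: Vec<_> = (0..4)
        .map(|_| {
            let log = Arc::clone(&log);
            thread::spawn(move || log.read().unwrap().len())
        })
        .collect();

    writer.join().unwrap();
    for r in readers {
        // Each reader observed some consistent prefix of the log.
        assert!(r.join().unwrap() <= 100);
    }
    assert_eq!(log.read().unwrap().len(), 100);
}
```

Since reads vastly outnumber writes in a polling-heavy workload, letting them share the lock is where much of the throughput improvement comes from.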

I do believe that somewhere at this point I realized that Iggy might actually become something useful – not just a toy project to be abandoned after reaching its initial goal (which was sort of already achieved).

let polled_messages = client.poll_messages(&PollMessages {
  stream_id: Identifier::numeric(1)?,
  topic_id: Identifier::named("orders")?,
  consumer: Consumer::group(Identifier::named("payments")?),
  partition_id: None,
  strategy: PollingStrategy::offset(0),
  count: 10,
  auto_commit: true,
}).await?;

After running some benchmarks (yes, we have a dedicated app for benchmarking purposes) and seeing the promising numbers (a range of 2-6 GB/s for both writes & reads when processing millions of messages), I eventually decided to give it a long-term shot. Being fully aware that there is still lots to be done (speaking of many months, or even years), I could not be happier to find out that there is also someone else out there who would like to contribute to the project and become a part of the team.



At the time of writing this post, Iggy consists of around 10 members contributing to its different parts. Some of us work on the core streaming server, while others are focused on SDKs for the different programming languages, or tooling such as the Web UI or CLI – all these projects are equally important, as they add up to the overall ecosystem. But how do you actually gather a team of open source contributors who are willing to spend their free time working on it?

Well, I wish I had an answer to that question – honestly, in the case of Iggy I wasn't actually looking for anyone, as I did not think this could be an interesting project to work on (besides for myself). Then how did that happen anyway? There were only 2 things in common – all the people who joined the project were part of the same Discord communities, but more importantly, all of them shared a passion for programming, and I am not talking about the Rust language specifically. From junior to senior, from embedded to front-end developers – regardless of the years of experience and current occupation, everyone has found a way to implement something meaningful.


For example, when I asked one guy what was the reason behind building an SDK in Go, the answer was the need to play with and learn a new language. Why a C# SDK? Well, another guy wanted to dive more into the low-level concepts and decided to squeeze great performance out of the managed runtime. Why build the Web UI in Svelte? "At work, I mostly use React, and I wanted to learn a new framework" – another member said.

My point is – as long as you believe in what you are building, and you are consistent about it (it was one of the main reasons why I have been contributing to Iggy every day since its inception, and I am still doing so), there is a chance that someone out there will notice it and happily join you in your efforts. Lead by example, or whatever you call it.

At the same time, we have started receiving the first external contributions from all over the globe – whether talking about the simpler tasks, or the more sophisticated ones requiring a significant amount of time spent on both the implementation and the discussions to eventually ship the code.

It gave us even more confidence that there are other people (outside our internal bubble) who find this project interesting and worth spending their time on. And without all these amazing contributors, it would be much harder (or even impossible) to deliver so many features.


First, let me just point out some of the properties and features that are part of the core streaming server:

  • Highly performant, persistent append-only log for message streaming
  • Very high throughput for both writes and reads
  • Low latency and predictable resource usage thanks to Rust being a compiled language (no GC)
  • User authentication and authorization with granular permissions and PATs (Personal Access Tokens)
  • Support for multiple streams, topics and partitions
  • Support for multiple transport protocols (QUIC, TCP, HTTP)
  • Fully operational RESTful API which can be optionally enabled
  • Available client SDKs in multiple languages
  • Works directly with binary data (no enforced schema and serialization/deserialization)
  • Configurable server features (e.g. caching, segment size, data flush interval, transport protocols etc.)
  • Possibility of storing the consumer offsets on the server
  • Multiple ways of polling the messages:
    • By offset (using the indexes)
    • By timestamp (using the time indexes)
    • First/Last N messages
    • Next N messages for the specific consumer
  • Possibility of auto committing the offset (e.g. to achieve at-most-once delivery)
  • Consumer groups providing message ordering and horizontal scaling across the connected clients
  • Message expiry with auto deletion based on the configurable retention policy
  • Additional features such as server-side message deduplication
  • TLS support for all transport protocols (TCP, QUIC, HTTPS)
  • Optional server-side as well as client-side data encryption using AES-256-GCM
  • Optional metadata support in the form of message headers
  • Built-in CLI to manage the streaming server
  • Built-in benchmarking app to test the performance
  • Single binary deployment (no external dependencies)
  • Running as a single node (no cluster support yet)
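One of the features listed above – message expiry with retention-based auto deletion – boils down to a simple rule: anything older than the configured retention period becomes eligible for removal. A minimal sketch, with names made up for illustration (the server applies this per segment rather than per individual message):

```rust
// Illustrative sketch of retention-based message expiry.
struct StoredMessage {
    timestamp: u64, // seconds since epoch
    payload: Vec<u8>,
}

// Drop every message whose age (relative to `now`) has reached the
// configured retention period.
fn expire(messages: &mut Vec<StoredMessage>, now: u64, retention_secs: u64) {
    messages.retain(|m| now.saturating_sub(m.timestamp) < retention_secs);
}

fn main() {
    let mut messages = vec![
        StoredMessage { timestamp: 100, payload: b"old".to_vec() },
        StoredMessage { timestamp: 900, payload: b"fresh".to_vec() },
    ];
    // With now = 1000 and a 200-second retention, only the fresh message survives.
    expire(&mut messages, 1000, 200);
    assert_eq!(messages.len(), 1);
    assert_eq!(messages[0].payload, b"fresh");
}
```

Because a segment only ever contains a contiguous range of offsets, expiring at the segment level turns this scan into deleting whole files once their newest message has aged out.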

And as already mentioned, we have been working on SDKs for multiple programming languages:

Please keep in mind, though, that some of them, e.g. for Rust or C#, are more up to date with the recent server changes, while the other ones might still need to do some catching up with the latest features. However, given the amount of available methods in the server's API and the underlying TCP/UDP stack with custom serialization to be implemented from scratch (besides the HTTP transport, that is the easier one), I would say we are doing quite okay, and I cannot stress enough how grateful I am to all the contributors for their huge amount of work!

But wait, there is even more – what would a message streaming platform be without some additional tooling for managing it? We have also been developing the CLI.


As well as a modern Web UI to make it happen 🙂


Last but not least, we have got a fully-featured CI/CD pipeline responsible for running all the tests and checks on multiple platforms, and eventually producing the release artifacts and Docker images.



At first glance, it might look like there are lots of features already in place, but for anyone who has ever worked with message streaming infrastructure before, this might be just the tip of an iceberg, thus let's discuss the roadmap.


After gaining some traction a few months ago (mostly thanks to landing on the GitHub trending page in July), we have talked to some users potentially interested in making Iggy a part of their infrastructure (there is even one company using it already), and discussed what features would be a good addition to the current stack.


Considering what is already there, being worked on, or planned for the future releases – such as an interactive CLI, modern Web UI, optional data compression and archivization, plugin support or multiple SDKs – there are at least three additional challenges to overcome:

Clustering – the possibility of having a highly available and fault tolerant distributed message streaming platform in a production environment is typically one of the most important aspects when considering a particular tool. While it would not be too difficult to implement an extension (think of a simple proxy/load balancer) allowing to monitor and deliver the data either to the primary or secondary replica (treated as a fallback server) and switch between them when one of the nodes goes down, such a solution would still result in a SPOF and would not really scale. Instead, we have started experimenting with the Raft consensus mechanism (de facto the industry standard) in a dedicated repository, which should allow us to deliver the truly fault tolerant, distributed infrastructure with additional data replication at the partition level (the so-called unit of parallelization).
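The core replication rule that Raft builds on – an entry only counts as committed once a strict majority of the cluster has acknowledged it – fits in a single line (a conceptual sketch, not our clustering code):

```rust
// Raft's commit rule in miniature: an entry is committed once it has been
// replicated to a strict majority of the cluster (leader included).
fn is_committed(acks: usize, cluster_size: usize) -> bool {
    acks > cluster_size / 2
}

fn main() {
    // In a 5-node cluster, 3 acknowledgements form a quorum...
    assert!(is_committed(3, 5));
    // ...while 2 do not, and the entry must wait.
    assert!(!is_committed(2, 5));
    // A 3-node cluster therefore tolerates one node failure.
    assert!(is_committed(2, 3));
}
```

It is exactly this majority requirement that removes the SPOF of the primary/fallback design: any two majorities overlap, so a new leader always sees every committed entry.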


Low-level I/O – although the current results (based on the benchmarking tool measuring the throughput etc.) are satisfying, we strongly believe that there is still (potentially huge) room for improvement. We are planning to use io_uring for all I/O operations (disk or network related). The brand new, completion based API (available in the recent Linux kernels) shows a significant boost when compared to the existing solutions such as epoll or kqueue – at the end of the day, the streaming server at its core is all about writing & reading data to/from disk and sending it via the network buffer. We have decided to give the monoio runtime a try, as it seems to be the most performant one. Going further, we would like to incorporate techniques such as zero-copy, kernel bypass and all the other goodies e.g. from DPDK or other similar frameworks.

Thread-per-core – in order to avoid the rather costly context switches caused by the usage of synchronization mechanisms when accessing the data from different threads (e.g. via Tokio's work stealing mechanism), we are planning to explore (or actually, are already exploring, in the previously mentioned repository serving as the clustering sandbox) the thread-per-core architecture, once again delivered as part of the monoio runtime. The overall idea can be described in two words – share nothing (or as little as possible). For example, the streams could be tied to particular CPU cores, resulting in no additional overhead (in terms of Mutexes, RwLocks etc.) when writing or reading the data. As good as it might sound, there are always some tradeoffs – what if some specific streams are more frequently accessed than the others? Would the remaining cores stay idle instead of doing something useful? On the other hand, tools such as ScyllaDB or Redpanda seem to be leveraging this model quite effectively (both are using the same Seastar framework). We will be looking for the answers before deciding which approach (thread-per-core or work stealing) suits Iggy better in the future.
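The share-nothing placement described above can be sketched as a simple hash-based assignment of streams to core-local shards, so the thread owning a shard accesses its data without any locks. The scheme below is illustrative, not Iggy's actual placement logic:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Share-nothing sketch: every stream is pinned to exactly one core-local
// shard, so the owning thread needs no Mutex or RwLock to touch its data.
fn shard_for_stream(stream_id: u32, num_cores: usize) -> usize {
    let mut hasher = DefaultHasher::new();
    stream_id.hash(&mut hasher);
    (hasher.finish() as usize) % num_cores
}

fn main() {
    let num_cores = 8;
    // A given stream always lands on the same shard, so requests for it can
    // be routed to one thread and handled without cross-thread coordination.
    let shard = shard_for_stream(42, num_cores);
    assert_eq!(shard, shard_for_stream(42, num_cores));
    assert!(shard < num_cores);
}
```

The tradeoff raised in the paragraph is visible right in this sketch: the hash fixes placement up front, so a disproportionately hot stream cannot borrow capacity from the idle cores next door.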


Why build yet another message streaming platform then? A few months ago, I would probably answer – strictly for fun. Yet, after exploring the status quo more in depth, what we would like to achieve is sort of twofold – on the one hand, it would be great to deliver a general-purpose tool, similar to Kafka. On the other hand, why not try to really push the OS and hardware to their limits when speaking of performance, reliability, throughput and latency, something that e.g. Aeron does? And what if we could put this all together into an easy-to-use, unified platform, supporting the most popular programming languages, with the addition of a modern CLI and Web UI for managing it?

Only time will tell, but we are already excited enough to challenge ourselves. We would love to hear your thoughts, ideas and feedback – anything that could help us in building the best message streaming platform in Rust that you will enjoy using! Feel free to join our Discord community and let us know what you think 🙂

