Raytracing on AMD’s RDNA 2/3, and Nvidia’s Turing and Pascal – Chips and Cheese
Raytracing aims to accurately model light by following the paths of light rays and seeing what they hit. However, raytracing is quite expensive compared to, say, calculating light values by taking a pixel's distance to light sources and doing an inverse square root. You need to send off a lot of rays to get enough rays to hit every pixel. On top of that, a ray could potentially hit any object in the scene. Obviously, you don't want to do intersection checks against every piece of geometry. That's where a bounding volume hierarchy (BVH) comes in to implement a divide-and-conquer approach to seeing what a ray hits.
A BVH is a tree, or a structure where each node connects to several child nodes. Your GPU starts at the node at the top of the tree, and checks whether the ray in question intersects any child nodes. If it does, it follows those child links and repeats the process until it gets to the bottom of the tree. Each child node represents a rectangular region, which is why these upper level nodes are often called box nodes. The bottom nodes, known as leaf nodes, contain geometry primitives that are checked to determine whether the ray actually hit something. Those geometry primitives are triangles, so the leaf nodes are known as triangle nodes.
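As a toy illustration (not either vendor's actual node format), traversal amounts to a slab test against each child's bounding box, recursing into whichever children the ray touches:

```python
# Toy BVH traversal sketch. Box nodes hold children with AABBs; leaf nodes
# hold triangle references that get handed to an exact ray/triangle test.
def ray_hits_box(origin, inv_dir, lo, hi):
    """Slab test: the ray hits the box if it enters all three axis slabs
    before leaving any of them."""
    tmin, tmax = 0.0, float("inf")
    for axis in range(3):
        t1 = (lo[axis] - origin[axis]) * inv_dir[axis]
        t2 = (hi[axis] - origin[axis]) * inv_dir[axis]
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

def traverse(node, origin, inv_dir, hits):
    if node["kind"] == "leaf":
        hits.extend(node["triangles"])   # candidates for exact intersection tests
        return
    for child in node["children"]:       # box node: test each child's AABB
        if ray_hits_box(origin, inv_dir, child["lo"], child["hi"]):
            traverse(child, origin, inv_dir, hits)
```

The payoff is that whole subtrees whose bounding boxes the ray misses are skipped entirely, which is the divide-and-conquer part.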
The concept is quite simple, but there are a lot of different ways you can build a BVH. If you put more boxes into a box node, you can subdivide the scene more. But that means more computation (intersection checks) at each level. You can add more levels to cut down on computational cost, but then you'd make more jumps from one node to another. Here, we're going to use developer tools from both AMD and Nvidia to take a peek at how each manufacturer has chosen to build their BVHes.
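The tradeoff can be put in rough numbers. For a scene with N leaves and branching factor b, a balanced tree is about log_b(N) levels deep, and a ray walking one path does roughly b box tests per level:

```python
import math

def traversal_cost(num_leaves, branching_factor):
    """Rough cost model for one ray walking a single path through a balanced
    BVH. Returns (levels, box tests along that path)."""
    levels = math.ceil(math.log(num_leaves, branching_factor))
    return levels, levels * branching_factor

# One million leaves: a narrow 4-wide tree versus a much wider one.
for b in (4, 64):
    levels, tests = traversal_cost(1_000_000, b)
    print(f"branching {b:>2}: {levels} levels, ~{tests} box tests")
```

A wider tree means fewer dependent hops but more intersection tests per hop, which is exactly the fork in the road AMD and Nvidia take below.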
AMD RDNA 2 and RDNA 3
AMD implements raytracing acceleration by adding intersection test instructions to the texture units. Instead of dealing with textures though, these instructions take a box or triangle node in a predefined format. Box nodes can represent four boxes, and triangle nodes can represent four triangles. The instruction computes intersection test results for everything in that node, and hands the results back to the shader. Then, the shader is responsible for traversing the BVH and handing the next node to the texture units. RDNA 3 additionally has specialized LDS instructions to make managing the traversal stack faster.
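A rough sketch of that division of labor, with illustrative names rather than actual ISA: `hw_intersect_node` stands in for the texture unit's intersection instruction, while the shader owns the loop and the stack.

```python
# Sketch of the RDNA-style split: fixed-function hardware tests one node's
# four boxes or four triangles; the shader runs the traversal loop and keeps
# the stack (in LDS on real hardware).
def hw_intersect_node(node, ray):
    """Stand-in for the intersection instruction: returns surviving child
    pointers for a box node, or hit triangles for a triangle node."""
    if node["kind"] == "box":
        return [c for c in node["children"] if ray["hits_box"](c)]
    return [t for t in node["triangles"] if ray["hits_tri"](t)]

def shader_traverse(root, ray):
    stack, hits = [root], []
    while stack:
        node = stack.pop()
        results = hw_intersect_node(node, ray)   # one instruction per node
        if node["kind"] == "box":
            stack.extend(results)     # shader decides what to visit next
        else:
            hits.extend(results)
    return hits
```

Each loop iteration depends on the previous node's results, which is why node-to-node latency matters so much in the sections that follow.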
Cyberpunk 2077
Cyberpunk 2077 makes extensive use of raytracing, and is one of the best showcases of what the technology can enable. Turning on raytracing often produces an immediately noticeable difference, as reflections that the game doesn't handle with rasterization become visible. To do raytracing, Cyberpunk 2077 uses the DirectX Raytracing (DXR) API. It defines BVHes in two structures – a top level acceleration structure (TLAS), and a bottom level acceleration structure (BLAS). Traversing the TLAS gets you to a BLAS, which eventually gets you to the relevant geometry.

With a capture from inside "The Mox", we get a TLAS that covers a wide portion of the gameplay environment. Most of Night City appears to be included, as well as the surrounding areas. The TLAS has 70,720 nodes, of which 22,404 are box nodes and 48,316 are "instance nodes" that link to BLAS instances. Traversing the TLAS leads you to 8,315 BLAS instances, which together represent 11,975,029 triangles. The TLAS occupies 11 MB of storage, while everything together (including the BLASes) occupies 795 MB.

The first level of four bounding boxes divides the city into two parts, and the large unoccupied area at the bottom of the screen into two parts as well. Those areas don't have a lot of geometry to do further subdivision on, so the TLAS only goes a couple of levels deeper before you get BLAS pointers.

Of course, the city is the fun part. Zooming into the depths of Night City, we see a tree represented by a BLAS. To be clear, I'm talking about a tree in the game, not a data structure. Cache and memory latency is going to be a concern here, because getting to the BLAS requires 12 pointer chasing jumps between nodes.

Within the BLAS, taking another six jumps gets you to a triangle node, or leaf node, with four triangles. Each triangle probably represents a set of leaves on the actual tree. From the top, getting to the actual geometry takes 19 hops, including jumping from the TLAS to the BLAS.
AMD thus uses a fairly deep BVH with a lot of subdividing. That means less demand on intersection test throughput. But cache and memory latency can have a significant impact, because each jump between nodes depends on the intersection test results from the previous node. GPUs have high cache and memory latency compared to CPUs, so RDNA 2 will need to keep a lot of rays in flight to hide that latency.
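Little's law gives a back-of-envelope feel for how many rays that takes: in-flight work equals latency times throughput. The numbers below are illustrative targets, not measurements:

```python
def rays_needed(latency_ns, node_visits_per_ns):
    """Little's law: if each ray stalls for latency_ns between dependent node
    fetches, sustaining node_visits_per_ns visits requires this many rays in
    flight per execution slice."""
    return latency_ns * node_visits_per_ns

# Illustrative: ~80 ns of L2 latency against a target of one node visit per ns.
print(rays_needed(80, 1.0))   # 80 rays in flight just to cover L2 latency
```

Every extra level of BVH depth multiplies the number of these dependent, latency-bound hops, which is the cost of AMD's narrow-tree approach.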
We previously looked at how RDNA 2 handled raytracing at the hardware level, and noted that the L2 cache played a very significant role. Looking at the raytracing structure sizes, the hitrates we observed make a lot of sense. The TLAS alone is 11 MB, so unless a lot of rays happen to go in the same direction, caches with capacities in the kilobyte range will probably have difficulty coping.

Our testing also shows that the RX 6900 XT's L2 cache has a load-to-use latency just above 80 ns. While that's good for a multi-megabyte GPU cache, it's close to memory latency for CPUs. In the end, the 6900 XT averaged 28.7 billion box tests and 3.21 billion triangle tests per second during that call, despite being underclocked to 1800 MHz. AMD says each CU can perform four box tests or one triangle test per cycle, so RT core utilization could be anywhere from 7.2% to 29%, depending on whether the counters increment for each intersection test or for each node.
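That 7.2% to 29% range falls out of simple arithmetic. A rough sketch, assuming the 6900 XT's 80 CUs at the locked 1800 MHz, with the counter ambiguity modeled by a `tests_per_tick` parameter:

```python
def rt_utilization(box_rate_g, tri_rate_g, cus=80, clock_ghz=1.8, tests_per_tick=1):
    """Fraction of peak intersection test throughput actually used.
    Peak: 4 box tests or 1 triangle test per CU per cycle (per AMD).
    tests_per_tick=4 models counters that tick once per 4-wide node."""
    peak_box = cus * 4 * clock_ghz    # billions of box tests per second
    peak_tri = cus * 1 * clock_ghz    # billions of triangle tests per second
    return (box_rate_g * tests_per_tick / peak_box
            + tri_rate_g * tests_per_tick / peak_tri)

low = rt_utilization(28.7, 3.21, tests_per_tick=1)    # counters tick per test
high = rt_utilization(28.7, 3.21, tests_per_tick=4)   # counters tick per node
print(f"{low:.1%} to {high:.1%}")                     # ~7.2% to ~28.8%
```

Either way, the RT hardware spends most of its time waiting rather than testing, which points back at latency and occupancy rather than raw throughput.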
Ideally, AMD would keep more rays in flight to hide latency and improve execution unit utilization, but vector register file capacity limits occupancy to 10 waves per SIMD out of the maximum of 16. That's one motivation behind RDNA 3's increased vector register file capacity. Checking Cyberpunk 2077 through RDNA 3's lenses, we see occupancy improve to 12/16.

Additionally, RDNA 3 enjoys a noticeable increase in L0 and L1 hitrates. These caches have doubled in size compared to RDNA 2, while also offering improved latency. RDNA 3 ends up averaging 45.2 billion box tests and 5.22 billion triangle tests per second in that shader. That's quite an impressive performance uplift, though it's hard to quantify the difference in RT core utilization because the 7900 XTX was not locked to known clocks.
Hogwarts Legacy
Hogwarts Legacy is a recent game release that also uses raytracing. Titanic has done the profiling on this game using his 7900 XTX, since I don't own the game. Both Hogwarts Legacy and Cyberpunk 2077 feature similar TLAS sizes, but Hogwarts has a much smaller memory footprint once you count the BLASes.

Just as with Cyberpunk 2077, the TLAS covers a wide area. In Hogwarts Legacy, it appears to include Hogwarts Castle as well as the surrounding grounds, and some far-off mountains. The mountains don't have a lot of complex geometry and are covered by a single box (and BLAS). Three of the four boxes at the top level cover the mountains, and one covers most of the gameplay area.

Titanic took the capture from the courtyard. Zooming into that area reveals a massive amount of geometric detail, including animate objects like humans and a cat.

As with Cyberpunk, getting down to the bottom level involves quite a few hops. A Hogwarts student looking at the fountain is 13 levels in. Again, memory latency is going to be a distinct culprit, because even hitting the L1 cache on RDNA 3 costs 35 ns.

Interestingly, the student's (her?) body is represented by three BLASes. One represents most of her body. Another covers her hair, while yet another contains her face and hands.
| Bottom Level Acceleration Structure (BLAS) | Depth | Nodes |
|---|---|---|
| Face and Hands, 316 KB | 7 average, 11 maximum | 955 box nodes, 1991 triangles |
| Hair, 169 KB | 6 average, 9 maximum | 486 box nodes, 1060 triangles |
| Body, 788 KB | 8 average, 14 maximum | 2359 box nodes, 4958 triangles |
The body BLAS is the largest, and goes down to very small subdivisions like parts of a button. I hope raytracing in this game looks incredible, because it takes 10 jumps to get to that part of a button.

This level of detail isn't universal throughout the BVH. There's also a cat in the courtyard, 12 levels down in the TLAS. A single BLAS represents the entire cat. Compared to the human BLASes, the cat BLAS is much simpler and represents an order of magnitude less geometry. It occupies 73 KB, and only has 196 box nodes covering a total of 460 triangles. It ends up being 6 levels deep on average, with a maximum depth of 9 levels.

Clearly, the developers behind Hogwarts Legacy chose to optimize performance by deciding that cats are less important than humans.
Inline vs Indirect Raytracing
Microsoft's DirectX Raytracing API supports raytracing via two main methods. The first is inline raytracing, where one big shader program handles both traversing the BVH and performing the appropriate lighting calculations when rays hit or miss things. The second is indirect raytracing, where separate shader programs get invoked to handle ray hits and misses. This is a simplified explanation of course, so consider checking out Microsoft's explanation of the API or Nvidia's suggestions on which approach to use. But long story short, indirect raytracing suffers from the overhead of launching separate shader programs, while inline raytracing may suffer from a single shader program getting too large.
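The contrast can be sketched conceptually in Python rather than real DXR HLSL — `trace`, `hit_shader`, and `miss_shader` are illustrative stand-ins, not API names:

```python
# Conceptual contrast between inline and indirect raytracing, not real DXR
# code. Inline keeps traversal and shading in one program; indirect invokes
# separate hit/miss programs, trading launch overhead for smaller programs.
def trace(ray):
    """Stand-in for BVH traversal; returns a hit record or None."""
    return {"object": "wall"} if ray["aim"] == "wall" else None

def inline_raytrace(rays):
    colors = []
    for ray in rays:                  # one big shader does everything
        hit = trace(ray)
        colors.append("lit" if hit else "sky")
    return colors

def indirect_raytrace(rays, hit_shader, miss_shader):
    colors = []
    for ray in rays:                  # separate programs get invoked per result
        hit = trace(ray)
        colors.append(hit_shader(hit) if hit else miss_shader(ray))
    return colors
```

Both paths compute the same result; the difference that matters on real hardware is where the program boundaries fall.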
Cyberpunk 2077 uses inline raytracing, via DispatchRays<Unified> calls. Hogwarts Legacy, much like Unreal Engine 5's city demo, uses indirect raytracing via ExecuteIndirect<Rays><Indirect> calls.

Indirect raytracing also gives us a pretty clear indication of raytracing costs. Ray traversal is the most expensive item by far, but Hogwarts Legacy also spends quite a bit of time handling hits once they're found.
Nvidia’s Turing
Turing was Nvidia's first architecture to implement hardware raytracing acceleration. With Turing's launch, Nvidia made a significant investment both directly in raytracing acceleration, and in supporting technologies like DLSS. Since then, they've continued to double down on this area. It would be interesting to see whether Nvidia's raytracing implementation has changed in their newer cards, but I don't have an Ampere or Ada card. I happened to replace my GTX 1080 at the height of the cryptomining boom in 2021, and Ampere was ridiculously overpriced at the time. For sure, RDNA 2 was overpriced too, just to a less ridiculous extent.

Nvidia didn't document how the acceleration works at a low level, but we can at least examine the acceleration structures by using Nsight Graphics to take frame captures. The TLAS again covers the entire city, which isn't a surprise because the application decides what the raytracing acceleration structures should cover. Just like AMD's TLAS, Nvidia's occupies about 11 MB.

However, Nvidia takes a very different approach to BVH construction. The way Nsight Graphics presents the BVH suggests it's an extremely wide tree. Expanding the top node immediately reveals thousands of bounding boxes, each of which points to a BLAS. Each BLAS then contains anywhere from a few dozen to thousands of primitives. If Nsight's representation corresponds to the actual raytracing structure, then Nvidia can unravel its acceleration structure in just three hops. That makes Nvidia's implementation far less sensitive to cache and memory latency.

To enable this approach, Nvidia likely has more flexible hardware, or is handling a lot more work with the general purpose vector units. Unlike AMD, where nodes can only point to four children, Nvidia doesn't have fixed size nodes. One node can point to two triangle nodes, while another points to six. A single triangle node can contain hundreds of triangles.
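Three hops can still reach an enormous amount of geometry when fanout at each level is large. A back-of-envelope sketch with made-up (but plausible) fanouts:

```python
def triangles_reachable(tlas_fanout, blas_fanout, tris_per_leaf):
    """Upper bound on triangles reachable in three hops of a wide tree:
    root -> BLAS -> triangle node -> triangles."""
    return tlas_fanout * blas_fanout * tris_per_leaf

# Illustrative: thousands of BLAS pointers at the root, a handful of fat
# triangle nodes per BLAS, hundreds of triangles per node.
print(triangles_reachable(8000, 6, 300))   # 14,400,000 - the same order as
                                           # Cyberpunk's ~12M triangle scene
```

Compare that with the 19 dependent hops we counted in AMD's Cyberpunk BVH: the wide tree trades pointer chasing for brute-force testing.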
However, this approach demands a lot more intersection test throughput. That's actually a wise choice, because GPUs are designed to take advantage of tons of explicit parallelism. Unlike CPUs, latency optimizations aren't a priority, and that's very visible in the memory hierarchy. L2 cache accesses on Nvidia often go well over 100 ns. In the RTX 2060 Mobile's case, L2 latency is around 120 to 143 ns, depending on whether you're going through the TMUs.

To counter that, Nvidia has a larger first level cache. But on Turing and many other Nvidia architectures, the L1 and local memory (shared memory) are carved out of the same block of SRAM. Shared memory is Nvidia's equivalent of AMD's LDS, and both are great places to store a BVH traversal stack. So there's a chance that Nvidia won't be able to maximize L1 cache sizes in raytracing workloads. Zooming into the longest DispatchRays call, we can see that Nvidia does achieve decent cache hitrates.

Turing's L1 cache is large enough that it achieves a better hitrate than AMD's L1. That's not surprising because Turing can still provide 24 KB of L1 capacity even when prioritizing shared memory. The 6900 XT in the example above got a cumulative L0 and L1 hitrate of 78.6%, so AMD's mid-level L1 cache does help to even things out. But of course, AMD's L1 is technically a second level cache, and hitting it will be slower than hitting Nvidia's first level cache. At L2, AMD has an advantage because the 6900 XT's L2 is larger and has lower latency. But that's completely expected of a newer GPU with higher performance targets.
Turing's basic building blocks are Streaming Multiprocessors, or SMs. They're vaguely comparable to WGPs on AMD. However, Turing SMs are smaller than WGPs, and smaller than SMs on Pascal (the previous Nvidia generation). An RDNA 2 WGP or Pascal SM can keep 64 waves in flight, but a Turing SM can only track 32. In this DispatchRays call, Turing averaged an occupancy of 17.37 waves out of 32, or 54%. SM utilization was poor, with the execution ports active only 11.35% of the time.

The biggest culprits are stalls for "short scoreboard" and "long scoreboard". "Short scoreboard" refers to short duration, variable latency operations like shared memory accesses. Nvidia may be using shared memory to track BVH traversal state, so a chunk of this is probably down to shared memory latency, which is around 15.57 ns on Turing. That's quite good compared to the regular caches, but still long in absolute terms.

"Long scoreboard" points to global memory latency. Global memory is backed by VRAM, and corresponds to what we think of as memory on a CPU. Ray traversal involves plenty of pointer chasing, so that's not surprising. The third largest reason, "MIO throttle", is harder to understand because it doesn't point to a clear, single cause. Nsight describes the stall as:
Warp was stalled waiting for the MIO (memory input/output) instruction queue to be not full. This stall reason is high in cases of extreme utilization of the MIO pipelines, which include special math instructions, dynamic branches, as well as shared memory instructions. When caused by shared memory accesses, trying to use fewer but wider loads can reduce pipeline pressure.
An Nvidia SM probably sends a variety of longer latency operations to an MIO queue. In this case, the RTX 2060's SMs were often bottlenecked by something behind that queue. It could be shared memory, special math instructions like inverse square roots or reciprocals, or even branches.

Another cause of poor SM utilization is that Turing's SMs are just weaker in general. With Turing, Nvidia moved to statically split integer and floating point paths. For comparison, Pascal and RDNA can execute both integer and FP32 operations at full rate, flexibly giving execution bandwidth to whatever's needed in the moment. In contrast, Turing has to stall every other cycle if confronted with a long sequence of FP32 instructions. To get the most out of Turing, you need a perfect mix of alternating INT32 and FP32 instructions. Maybe that didn't happen.
Changes in Ampere
Turing's struggles do help provide perspective on how Nvidia approached Ampere. First, they doubled FP32 throughput in each SM. That should allow much better scheduler port utilization, because the SM no longer needs a perfect 1:1 ratio of INT32 to FP32 instructions to achieve peak throughput. In practice, FP32 instructions are far more common than INT32 ones, so this is a good move that should benefit a wide range of graphics applications beyond raytracing.

On the raytracing front, Ampere doubles the ray-triangle intersection test rate. This makes a lot of sense if Nvidia is using a fat tree where triangle nodes can contain thousands of triangles. Such a strategy minimizes memory latency costs, but demands more intersection test throughput. Ampere's change fits that. I have little doubt that Ampere is a far stronger architecture for both rasterization and raytracing than Turing.

Turing does have a rough time, but that's to be expected of a first generation raytracing implementation. No one buys the first generation of a new technology expecting anything more than a rough experience that gets rapidly outdated as the technology matures. That applied to early digital cameras, early aircraft, and so on. Turing is no exception. But what's important is that lessons learned with Turing informed major changes in Ampere, allowing Nvidia to create a very strong second generation raytracing implementation.
Nvidia’s Pascal
Nvidia's Pascal architecture supports raytracing, but doesn't have any specialized hardware to assist with it. Instead, raytracing is done entirely with compute shaders, using the same vector ALUs used for traditional rasterization. I wasn't able to profile Cyberpunk 2077's raytracing with Pascal, because the game says it's not supported on the GTX 1080. Instead, I have some results from Nvidia's RTX demos. I'm looking at the Star Wars demo specifically.
Like Turing, Pascal uses a very fat BVH. You can get from the root node to a triangle node in just three hops. Nvidia didn't develop a separate raytracing strategy for Pascal, probably because all GPU raytracing implementations solve fundamentally similar problems, regardless of whether they have dedicated hardware to help. Pascal doesn't have significantly different memory latency characteristics than Turing. Like any other GPU, Pascal is also optimized to take advantage of explicit parallelism in order to hide latency and feed a lot of vector execution units.
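Without intersection instructions, every box and triangle test becomes a sequence of ordinary ALU operations. As a rough illustration, a standard Möller-Trumbore ray/triangle test looks like this:

```python
def ray_triangle(orig, d, v0, v1, v2, eps=1e-7):
    """Möller-Trumbore ray/triangle test written as plain ALU math, the kind
    of work Pascal's compute shaders must do with no dedicated intersection
    hardware. Returns the hit distance t along the ray, or None on a miss."""
    def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def cross(a, b): return (a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0])
    def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(d, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:            # ray parallel to the triangle's plane
        return None
    inv_det = 1.0 / det
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:        # outside the triangle in barycentric u
        return None
    qvec = cross(tvec, e1)
    v = dot(d, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:    # outside in barycentric v
        return None
    t = dot(e2, qvec) * inv_det
    return t if t > eps else None
```

A handful of cross products, dot products, and a reciprocal per triangle adds up quickly when leaf nodes hold hundreds of triangles, which is why dedicated intersection hardware pays off.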
Again, we see raytracing acceleration structures cover almost everything. Unlike Cyberpunk and Hogwarts Legacy though, the total area covered is far smaller, because the demo is much smaller in scope. The TLAS consists of 180 instances, covering a total of 3,447,421 triangles. Because the demo is smaller in scope, Nvidia could afford to put a lot more detail into each object. The closest stormtrooper is represented by 555,698 triangles organized into 17 sub-boxes. Contrast that with Hogwarts Legacy, which represents a human with just over 8K triangles. Subdivisions on the stormtrooper go down to the visor on the helmet, the scope on the gun, and other minute details.

Compared to what we see in games, the stormtrooper has been lovingly modeled with a ridiculous amount of geometry, giving it a very clean, rounded shape. Each of the stormtrooper's triangle nodes has anywhere from a couple hundred to tens of thousands of triangles. Some nodes have more triangles than the entire human in Hogwarts Legacy. For example, the helmet alone accounts for 110K triangles. This level of detail may be impossible to field in an open world game, but it's interesting to see in a scoped-down technical demo.

Pascal struggles hard to render this technical demo, with one DispatchRays call taking a staggering 26.72 ms. Each SM tracked 13.23 waves on average out of a theoretical 64, meaning occupancy was very poor and the architecture couldn't keep a lot of work in flight. A lot of warps were stalled on global memory latency (long scoreboard), just as with Turing, but a few other deficiencies show up too. There's also a lot of stalling because of "no instructions" (instruction cache misses). Pascal suffers because it has smaller instruction caches relative to Turing, and because it has to do intersection tests without special instructions.
"Wait" is the third biggest stall reason, and points to instruction execution latency. Pascal has a latency of six cycles for the most common floating point and integer instructions, while Turing and Ampere bring that down to four cycles. The term "wait" is used because Nvidia handles instruction dependencies by embedding static scheduling information that tells the SM scheduler to stall a thread for a specified number of cycles. With low occupancy and plenty of stalling, Pascal's SMs average just 1.09 IPC. According to Nsight, the SMs achieved about 22.3% of their theoretical efficiency. While that's better than what Turing got in Cyberpunk 2077, Turing gets by because it's throwing a lot of weaker SMs at the problem.

For sure, Pascal and Turing both suffer stalls due to shared memory, and it does look like Pascal is doing a lot of shared memory accesses during raytracing. But on Pascal, that cost is peanuts compared to regular cache and memory latency.

The main culprit is Pascal's L1 texture cache. Internally, a Pascal SM is partitioned into four SM sub partitions (SMSPs), which are analogous to an RDNA WGP's SIMDs. Each pair of SMSPs shares a memory pipeline and a 24 KB L1 texture cache. While this cache is larger than RDNA's L0 cache, hitrate is quite poor in this workload. On an L1 miss, we go straight to L2, because there's no mid-level cache to cushion the impact of first level cache misses.

Even on an L1 hit, Pascal is at a significant disadvantage because there's no way to bypass the texture units. The Pascal architecture is a lot less optimized for latency sensitive compute, and unfortunately raytracing is an example of that.

Once we get out to L2, Pascal does turn in decent performance. Pascal's L2 has acceptable latency, and achieves good hitrates.

There's also nuance to Pascal's performance. Pascal's SMs achieved much better throughput in the other two DispatchRays calls, where L1 hitrates were a bit higher. All of the calls pointed to the same BVH. Additionally, Pascal achieved excellent compute utilization in the later RTReflectionFilter pass.
| Call | SM IPC | Occupancy | Cache Hitrates |
|---|---|---|---|
| E9076 DispatchRays, 26.68 ms, 24,962 warps launched (compute) | 1.09, 14.8% dual issue | 13.23 / 64 | L1 Tex: 55%, L2: 84.1% |
| E8551 DispatchRays, 6.01 ms, 53,851 warps launched (compute) | 2.17, 13.8% dual issue | 13.21 / 64 | L1 Tex: 68.9%, L2: 86% |
| E8808 DispatchRays, 3.66 ms, 53,802 warps launched (compute) | 1.97, 15.2% dual issue | 13.29 / 64 | L1 Tex: 68.7%, L2: 77.3% |
| RTReflectionFilter DrawIndexed, 5.21 ms, 15 vertex shader invocations, 7,843,200 pixel shader invocations, 88 compute warps | 4.04, 1.15% dual issue | 32.34 / 64 | L1 Tex: 51.6%, L2: 84.4% |
So Pascal is capable of ripping through workloads with its powerful vector units, using its dual issue capability to make sure the schedulers can feed the primary FP32 or INT32 pipes. However, the architecture suffers from low occupancy, likely due to its limited register file capacity and lack of separate scalar registers. Cache latency and instruction cache capacity also present problems. All of these were addressed to some extent in Turing, though at the cost of losing Pascal's 32-wide vector units and dual issue capability.
Final Words
Raytracing is an exciting new area of development for graphics rendering. Tracing rays promises to deliver more accurate lighting effects with less work on the developer's part. However, optimizing raytracing workloads is difficult. AMD and Nvidia both use acceleration structures that adopt a divide-and-conquer approach, but the two manufacturers differ a lot in just how they do the dividing.

AMD takes a more conventional approach that wouldn't be out of place on a CPU, and uses a rigid BVH format that allows for simpler hardware. AMD's RT accelerators don't have to cope with variable length nodes. However, AMD's BVH makes it more vulnerable to cache and memory latency, one of a GPU's traditional weaknesses. RDNA 3 counters this by hitting the problem from all sides. Cache latency has gone down, while capacity has gone up. Raytracing specific LDS instructions help reduce latency within the shader that's handling ray traversal. Finally, increased vector register file capacity lets each WGP hold state for more threads, letting it keep more rays in flight to hide latency. A lot of these optimizations will help a wide range of workloads beyond raytracing. It's hard to see what wouldn't benefit from higher occupancy and better caching.

Nvidia takes a radically different approach that plays to a GPU's advantages. By using a very wide tree, Nvidia shifts emphasis away from cache and memory latency, and toward compute throughput. With Ada, that's where I suspect Nvidia's approach really starts to shine. Ampere already had a staggering SM count, with RT cores that doubled triangle test throughput compared to Turing. Ada pushes things further, with an even higher SM count and triangle test throughput doubled again. There's a lot less divide and conquer in Nvidia's approach, but there's a lot more parallelism available. And Nvidia is bringing tons of dedicated hardware to plow through those intersection tests.

But that doesn't mean Nvidia's weak on the caching front either. Ada has an excellent cache hierarchy. I suspect Nvidia is bringing that together with the factors above to stay roughly one generation ahead of AMD in raytracing workloads. With regard to raytracing strategy, and how exactly to implement a BVH, I don't think there's a fundamentally right or wrong approach. Nvidia and AMD have both made significant strides toward better raytracing performance. As raytracing sees wider adoption and more use though, we may see AMD's designs trend toward bigger investments in raytracing.

If you like our articles and journalism, and you want to support us in our endeavors, then consider heading over to our Patreon or our PayPal if you want to toss a few dollars our way. If you would like to talk with the Chips and Cheese staff and the people behind the scenes, then consider joining our Discord.