Low Discrepancy Sequences
Random numbers can be useful in graphics and game development, but they have a pesky and sometimes undesirable habit of clumping together.
This is a problem in path tracing and Monte Carlo integration when you take N samples, but the samples aren't well spread across the sampling range.
This can also be a problem for situations like when you are randomly placing objects in the world or generating treasure for a treasure chest. You don't want your randomly placed trees to only be in one part of the forest, and you don't want a player to get only trash items or only godly items when they open a treasure chest. Ideally you want to have some randomness, but you don't want the random number generator to give you all the same or similar random numbers.
The problem is that random numbers can be TOO random, like in the image below where you can see clumps and large gaps between the 100 samples.
For cases like that, when you want random numbers that are a little bit more nicely distributed, you might find some use in low discrepancy sequences.
The standalone C++ code (one source file, standard headers, no libraries to link against) I used to generate the data and images is at the bottom of this post, along with some links to more information.
In this context, discrepancy is a measurement of the highest or lowest density of points in a sequence. High discrepancy means that there is either a large area of empty space, or an area with a high density of points. Low discrepancy means that there are neither, and that your points are more or less evenly distributed.
The lowest discrepancy possible has no randomness at all, and in the 1 dimensional case means that the points are evenly distributed on a grid. For Monte Carlo integration and the game dev usage cases I mentioned, we do want some randomness, we just want the random points to be spread out a little more evenly.
If more formal math notation is your thing, discrepancy is defined as:
D_N = \sup_{0 \le a \le b \le 1} \left| \frac{\#\{\, i : s_i \in [a,b] \,\}}{N} - (b - a) \right|
You can read more about the formal definition here: Wikipedia: Equidistributed sequence
For Monte Carlo integration specifically, this is the behavior each option gives you:
- High Discrepancy: Random Numbers / White Noise aka Uniform Distribution – At lower sample counts, convergence is slower (and has higher variance) due to the possibility of not getting good coverage over the area you are integrating. At higher sample counts, this problem disappears. (Hint: real time graphics and preview renderings use a smaller number of samples.)
- Lowest Discrepancy: Regular Grid – This will cause aliasing, unlike the other "random" based sampling, which trades aliasing for noise. Noise is preferred over aliasing.
- Low Discrepancy: Low Discrepancy Sequences – At lower sample counts, this will have faster convergence by having better coverage of the sampling space, but will use randomness to get rid of aliasing by introducing noise.
Also interesting to note, Quasi Monte Carlo has provably better asymptotic convergence than regular Monte Carlo integration.
We'll first look at 1 dimensional sequences.
Grid
Here are 100 samples evenly spaced:
Random Numbers (White Noise)
This is actually a high discrepancy sequence. To generate this, you just use a standard random number generator to pick 100 points between 0 and 1. I used std::mt19937 with a std::uniform_real_distribution from 0 to 1:
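The full program is at the bottom of the post; here is just a minimal sketch of that idea (the helper name WhiteNoise1D is mine, just for illustration):

#include <array>
#include <random>

// Plain white noise: 100 independent uniform random samples in [0, 1).
std::array<float, 100> WhiteNoise1D ()
{
    static std::random_device rd;
    static std::mt19937 mt(rd());
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);

    std::array<float, 100> samples;
    for (float& sample : samples)
        sample = dist(mt);
    return samples;
}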
Subrandom Numbers
Subrandom numbers are ways to decrease the discrepancy of white noise.
One way to do this is to break the sampling space in half. You then generate even numbered samples in the first half of the space, and odd numbered samples in the second half of the space.
There's no reason you can't generalize this into more divisions of space though; a short code sketch of the generalized version follows the images below.
This splits the space into 4 regions:
8 regions:
16 regions:
32 regions:
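Here's a minimal sketch of the generalized version (the same idea as the TestSubRandomA1D function in the full code at the bottom; the helper name is just for illustration):

#include <cstddef>
#include <random>
#include <vector>

// Subrandom numbers: sample i goes into region (i % numRegions),
// with a random offset inside that region.
std::vector<float> SubRandomRegions1D (size_t numSamples, size_t numRegions)
{
    static std::mt19937 mt{ std::random_device{}() };
    std::uniform_real_distribution<float> dist(0.0f, 1.0f / float(numRegions));

    std::vector<float> samples(numSamples);
    for (size_t i = 0; i < numSamples; ++i)
        samples[i] = float(i % numRegions) / float(numRegions) + dist(mt);
    return samples;
}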
There are other ways to generate subrandom numbers though. One way is to generate random numbers between 0 and 0.5, and add them to the last sample, plus 0.5. This gives you a random walk type setup.
Here is that:
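As a sketch, that random walk variant looks like this (mirroring TestSubRandomB1D in the full code; the helper name is illustrative):

#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// Random-walk style subrandom numbers: add 0.5 plus a random value in [0, 0.5)
// to the previous sample, then wrap back into [0, 1).
std::vector<float> SubRandomWalk1D (size_t numSamples)
{
    static std::mt19937 mt{ std::random_device{}() };
    std::uniform_real_distribution<float> dist(0.0f, 0.5f);

    std::vector<float> samples(numSamples);
    float sample = dist(mt);
    for (size_t i = 0; i < numSamples; ++i)
    {
        sample = std::fmod(sample + 0.5f + dist(mt), 1.0f);
        samples[i] = sample;
    }
    return samples;
}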
Uniform Sampling + Jitter
If you take the first subrandom idea to its logical maximum, you break your sample space up into N sections and place one point randomly within each of those N sections to make a low discrepancy sequence made up of N points.
Another way to look at this is that you do uniform sampling, but add some random jitter to the samples, between +/- half a uniform sample size, to keep each sample in its own region.
Here is that:
Interestingly, I've heard that Pixar invented this technique.
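A minimal sketch of jittered uniform sampling, matching what TestUniform1D does with jitter turned on in the full code (the helper name is illustrative):

#include <cstddef>
#include <random>
#include <vector>

// Uniform sampling plus jitter: one sample per cell, jittered by up to +/- half a cell.
std::vector<float> JitteredUniform1D (size_t numSamples)
{
    static std::mt19937 mt{ std::random_device{}() };
    const float cellSize = 1.0f / float(numSamples + 1);
    std::uniform_real_distribution<float> jitter(-cellSize * 0.5f, cellSize * 0.5f);

    std::vector<float> samples(numSamples);
    for (size_t i = 0; i < numSamples; ++i)
        samples[i] = float(i + 1) / float(numSamples + 1) + jitter(mt);
    return samples;
}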
Irrational Numbers
Rational numbers are numbers which can be described as fractions, such as 0.75 which can be expressed as 3/4. Irrational numbers are numbers which CANNOT be described as fractions, such as pi, or the golden ratio, or the square root of a prime number.
Interestingly, you can use irrational numbers to generate low discrepancy sequences. You start with some value (it could be 0, or it could be a random number), add the irrational number, and take the result modulo 1.0. To get the next sample you add the irrational value again, and take modulo 1.0 again. Rinse and repeat until you get as many samples as you want. (There's a short code sketch of this after the images below.)
Some values work better than others though, and apparently the golden ratio is provably the best choice (1.61803398875…), says Wikipedia.
Here is the golden ratio, used with 4 different random (white noise) starting values:
Here I've used the square root of 2, with 4 different random starting values again:
Finally, here is pi, with 4 random starting values:
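Here's a short sketch of the additive recurrence described above (the same thing TestIrrational1D does in the full code; the default constant and the helper name are just for illustration):

#include <cmath>
#include <cstddef>
#include <vector>

// Additive recurrence: repeatedly add an irrational value and take the result mod 1.
// The default is the golden ratio mod 1, aka (sqrt(5) - 1) / 2.
std::vector<float> IrrationalSequence1D (size_t numSamples, float seed = 0.0f,
                                         float irrational = 0.61803398875f)
{
    std::vector<float> samples(numSamples);
    float sample = seed;
    for (size_t i = 0; i < numSamples; ++i)
    {
        sample = std::fmod(sample + irrational, 1.0f);
        samples[i] = sample;
    }
    return samples;
}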
Van der Corput Sequence
The Van der Corput sequence is the 1d equivalent of the Halton sequence, which we'll talk about later.
The way you generate values in the Van der Corput sequence is you convert the index of your sample into some base.
For instance if it was base 2, you would convert your index to binary. If it was base 16, you would convert your index to hexadecimal.
Now, instead of treating the digits as the 1's digit, the B's digit, the B^2's digit and so on (where B is the base), you instead treat them as the 1/B digit, the 1/B^2 digit, the 1/B^3 digit and so on. In other words, you multiply each digit by a fraction and add up the results.
To show a couple quick examples, let's say we wanted sample 6 in the sequence of base 2.
First we convert 6 to binary, which is 110. From right to left, we have 3 digits: a 0 in the 1's place, a 1 in the 2's place, and a 1 in the 4's place. 1*4 + 1*2 + 0*1 = 6, so we can see that 110 is in fact 6 in binary.
To get the Van der Corput value for this, instead of treating those as the 1's, 2's and 4's digits, we treat them as the 1/2, 1/4 and 1/8 digits.
0*(1/2) + 1*(1/4) + 1*(1/8) = 1/4 + 1/8 = 3/8.
So, sample 6 in the Van der Corput sequence using base 2 is 3/8.
Let's try sample 21 in base 3.
First we convert 21 to base 3, which is 210. We can verify this is right by seeing that 2*9 + 1*3 + 0*1 = 21.
Instead of a 1's, 3's and 9's digit, we are going to treat these as the 1/3, 1/9 and 1/27 digits.
0*(1/3) + 1*(1/9) + 2*(1/27) = 3/27 + 2/27 = 5/27.
So, sample 21 in the Van der Corput sequence using base 3 is 5/27.
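In code, that works out to a short loop over the digits. Here's a sketch matching what the TestVanDerCorput function in the full code does (the helper name is illustrative):

#include <cstddef>

// Van der Corput value for a sample index in a given base:
// starting from the least significant digit, each base-B digit of the index
// contributes digit/B, digit/B^2, digit/B^3, and so on.
float VanDerCorput (size_t index, size_t base)
{
    float result = 0.0f;
    float denominator = float(base);
    size_t n = index;
    while (n > 0)
    {
        size_t digit = n % base;
        result += float(digit) / denominator;
        n /= base;
        denominator *= float(base);
    }
    return result;
}

For example, VanDerCorput(6, 2) returns 0.375 (3/8) and VanDerCorput(21, 3) returns 5/27, matching the worked examples above.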
Here is the Van der Corput sequence for base 2:
Here it is for base 3:
Base 4:
Base 5:
Sobol
One dimensional Sobol is actually just the Van der Corput sequence base 2 re-arranged a little bit, but it's generated differently.
You start with 0 (either using it as sample 0 or sample -1, doesn't matter which), and for each sample you do this:
- Calculate the Ruler function value for the current sample's index (more info on that in a second)
- Make the direction vector by shifting 1 left (in binary) 31 minus ruler times.
- XOR the last sample by the direction vector to get the new sample
- To interpret the sample as a floating point number you divide it by 2^32
That might sound completely different than the Van der Corput sequence but it actually is the same thing – just re-ordered.
In the final step when dividing by 2^32, we are really just interpreting the binary number as a fraction like before, but it's the LEFT most digit that is the 1/2 spot, not the RIGHT most digit.
The Ruler Function goes like: 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, …
It's pretty easy to calculate too. The ruler function for an index (starting at 1) is just the zero based index of the right most 1 digit after converting the number to binary.
1 in binary is 001 so Ruler(1) is 0.
2 in binary is 010 so Ruler(2) is 1.
3 in binary is 011 so Ruler(3) is 0.
4 in binary is 100 so Ruler(4) is 2.
5 in binary is 101 so Ruler(5) is 0.
and so forth.
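Putting the ruler function and the XOR step together, here's a sketch of the 1D Sobol generation (the same as TestSobol1D in the full code; helper names are illustrative):

#include <cstddef>
#include <cstdint>
#include <vector>

// Zero-based index of the rightmost 1 bit (the "ruler function"), for n >= 1.
size_t Ruler (size_t n)
{
    size_t ret = 0;
    while (n != 0 && (n & 1) == 0)
    {
        n /= 2;
        ++ret;
    }
    return ret;
}

// 1D Sobol: XOR the previous sample with a direction value of 1 shifted left
// (31 - ruler) bits, then interpret the 32 bit integer as a fraction by dividing by 2^32.
std::vector<float> Sobol1D (size_t numSamples)
{
    std::vector<float> samples(numSamples);
    uint32_t sampleInt = 0;
    for (size_t i = 0; i < numSamples; ++i)
    {
        uint32_t direction = uint32_t(1) << (31 - Ruler(i + 1));
        sampleInt ^= direction;
        samples[i] = float(sampleInt) / 4294967296.0f;  // divide by 2^32
    }
    return samples;
}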
Here is 1D Sobol:
Hammersley
In one dimension, the Hammersley sequence is the same as the base 2 Van der Corput sequence, and in the same order. If it sounds strange that they are the same, it's because Hammersley is a 2d sequence that I broke down into a 1d sequence for comparison. The one thing Hammersley has that makes it unique in the 1d case is that you can truncate bits.
It doesn't seem that useful for 1d Hammersley to truncate bits, but knowing that it's possible is useful information too I guess. Look at the 2d version of Hammersley to get a fairer look at it, because it's meant to be a 2d sequence.
Here is Hammersley:
With 1 bit truncated:
With 2 bits truncated:
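Here's a sketch of 1D Hammersley with optional bit truncation (matching TestHammersley1D in the full code; with truncateBits set to 0 it's just the base 2 Van der Corput digits):

#include <cstddef>

// 1D Hammersley: treat the bits of the index as binary fractions (low bit = 1/2),
// optionally shifting off (truncating) some of the low bits first.
float Hammersley1D (size_t index, size_t truncateBits)
{
    size_t n = index >> truncateBits;
    float base = 1.0f / 2.0f;
    float result = 0.0f;
    while (n)
    {
        if (n & 1)
            result += base;
        n /= 2;
        base /= 2.0f;
    }
    return result;
}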
Poisson Disc
Poisson disc points are points which are densely packed, but have a minimum distance from each other.
Computer scientists are still working out good algorithms to generate these points efficiently.
I use "Mitchell's Best-Candidate" algorithm, which means that when you want to generate a new point in the sequence, you generate N new candidate points, and choose whichever of those is farthest away from the points you've generated so far.
Here it is where N is 100:
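Here's a sketch of Mitchell's Best-Candidate in 1D (the same idea as TestPoisson1D in the full code; the helper name and the brute force distance search are just for illustration):

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// Mitchell's Best-Candidate: each new point is the best of N random candidates,
// where "best" means largest distance to the closest already-placed point.
std::vector<float> BestCandidate1D (size_t numSamples, size_t candidatesPerSample = 100)
{
    static std::mt19937 mt{ std::random_device{}() };
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);

    std::vector<float> samples;
    samples.reserve(numSamples);
    while (samples.size() < numSamples)
    {
        float bestX = dist(mt);
        float bestMinDistance = -1.0f;
        for (size_t candidate = 0; candidate < candidatesPerSample; ++candidate)
        {
            float x = dist(mt);

            // distance to the closest existing sample (2.0 is larger than any distance in [0,1])
            float minDistance = 2.0f;
            for (float existing : samples)
                minDistance = std::min(minDistance, std::fabs(existing - x));

            if (minDistance > bestMinDistance)
            {
                bestX = x;
                bestMinDistance = minDistance;
            }
        }
        samples.push_back(bestX);
    }
    return samples;
}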
Next up, let's look at some 2 dimensional sequences.
Grid
Below are 2d uniform samples on a grid.
Note that a uniform grid is not particularly low discrepancy for the 2d case! More info here: Is it expected that uniform points would have non zero discrepancy?
Random
Here are 100 random points:
Uniform Grid + Jitter
Here is a uniform grid that has random jitter applied to the points. Jittered grid is a pretty commonly used low discrepancy sampling technique that has good success.
Subrandom
Just like in 1 dimension, you can apply the subrandom ideas to 2 dimensions, where you divide the X and Y axis into however many sections, and randomly choose points within those sections.
If you divide X and Y into the same number of sections though, you are going to have a problem because some regions are not going to have any points in them.
@Reedbeta pointed out that instead of using i%x and i%y, you could use i%x and (i/x)%y to make it pick points in all regions.
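I haven't generated images for that variant, but a sketch of the suggestion might look like this (this is not part of the code at the bottom of the post; the helper name is illustrative):

#include <array>
#include <cstddef>
#include <random>
#include <vector>

// Sketch of the suggested indexing: X region = i % regionsX, Y region = (i / regionsX) % regionsY,
// so consecutive samples walk through every combination of X and Y regions.
std::vector<std::array<float, 2>> SubRandomRegions2D (size_t numSamples, size_t regionsX, size_t regionsY)
{
    static std::mt19937 mt{ std::random_device{}() };
    std::uniform_real_distribution<float> distX(0.0f, 1.0f / float(regionsX));
    std::uniform_real_distribution<float> distY(0.0f, 1.0f / float(regionsY));

    std::vector<std::array<float, 2>> samples(numSamples);
    for (size_t i = 0; i < numSamples; ++i)
    {
        samples[i][0] = float(i % regionsX) / float(regionsX) + distX(mt);
        samples[i][1] = float((i / regionsX) % regionsY) / float(regionsY) + distY(mt);
    }
    return samples;
}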
Picking different numbers of sections for X and Y can be another way to get good results. Here's dividing X and Y into 2 and 3 sections respectively:
If you choose co-prime numbers for the number of divisions on each axis you can get a maximal period of repeats. 2 and 3 are coprime so the last example is a good example of that, but here is 3 and 11:
Here is 3 and 97. 97 is large enough that with only 100 samples, we are almost doing jittered grid on the y axis.
Here is the other subrandom number setup from 1d, where we start with a random value for X and Y, and then add a random number between 0 and 0.5 to each, also adding 0.5, to make a "random walk" type setup again:
Halton
The Halton sequence is just the Van der Corput sequence, but using a different base on each axis.
Here is the Halton sequence where X and Y use bases 2 and 3:
Here it is using bases 5 and 7:
Here are bases 13 and 9:
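Since Halton is just Van der Corput with a different base per axis, a sketch of a 2d Halton point can reuse the VanDerCorput helper sketched earlier (this matches what the TestHalton function in the full code does):

#include <array>
#include <cstddef>

float VanDerCorput (size_t index, size_t base);  // from the sketch in the Van der Corput section

// A 2d Halton sample is the Van der Corput value of the index in two different bases, one per axis.
std::array<float, 2> Halton2D (size_t index, size_t baseX, size_t baseY)
{
    return { VanDerCorput(index, baseX), VanDerCorput(index, baseY) };
}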
The irrational numbers technique can be used for 2d as well, but I wasn't able to find out how to make it give decent looking output that didn't have an obvious diagonal pattern in it. Bart Wronski shared a neat paper that explains how to use the golden ratio in 2d with great success: Golden Ratio Sequences For Low-Discrepancy Sampling
This uses the golden ratio for the X axis and the square root of 2 for the Y axis. Below that is the same thing, but with a random starting point, to make it give a different sequence.
Here the X axis uses the square root of 2 and the Y axis uses the square root of 3. Below that is a random starting point, which gives the same discrepancy.
Hammersley
In 2 dimensions, the Hammersley sequence uses the 1d Hammersley sequence for the X axis: instead of treating the binary version of the index as binary, you treat the digits as fractions like you do for Van der Corput and sum up the fractions.
For the Y axis, you just reverse the bits and then do the same!
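Here's a sketch of a single 2d Hammersley point (matching TestHammersley2D in the full code; numBits is how many bits the sample count needs, e.g. 7 bits for 100 samples, and the helper name is illustrative):

#include <array>
#include <cstddef>

// 2d Hammersley: X treats the index's bits as binary fractions (low bit = 1/2),
// Y does the same but reads the bits in reverse order (high bit = 1/2).
std::array<float, 2> Hammersley2D (size_t index, size_t numBits)
{
    float x = 0.0f;
    float y = 0.0f;
    float base = 1.0f / 2.0f;
    for (size_t bit = 0; bit < numBits; ++bit)
    {
        if (index & (size_t(1) << bit))
            x += base;                               // low bits first for X
        if (index & (size_t(1) << (numBits - 1 - bit)))
            y += base;                               // high bits first for Y (bit reversal)
        base /= 2.0f;
    }
    return { x, y };
}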
Here is the Hammersley sequence. Note that we would have to take 128 samples (not just the 100 we did) if we wanted it to fill the entire square with samples.
Truncating bits in 2d is a bit useful. Here is 1 bit truncated:
2 bits truncated:
Poisson Disc
Using the same method that we did for 1d, we can generate points in 2d space:
N Rooks
There is a sampling pattern called N-Rooks where you put N rooks onto a chess board and arrange them such that no two are in the same row or column.
A way to generate these samples is to realize that there will be only one rook per row, and that none of them will ever be in the same column. So, you make an array with the numbers 0 to N-1, and then shuffle the array. The index into the array is the row, and the value in the array is the column.
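Here's a sketch of that shuffle based construction (matching TestRooks2D in the full code; the helper name is illustrative):

#include <algorithm>
#include <array>
#include <cstddef>
#include <numeric>
#include <random>
#include <vector>

// N-Rooks sampling: row i gets one rook, and its column comes from a shuffled
// list of 0..N-1, so no two samples share a row or a column.
std::vector<std::array<float, 2>> NRooks (size_t numSamples)
{
    static std::mt19937 mt{ std::random_device{}() };

    std::vector<size_t> columns(numSamples);
    std::iota(columns.begin(), columns.end(), size_t(0));  // 0, 1, 2, ... N-1
    std::shuffle(columns.begin(), columns.end(), mt);

    std::vector<std::array<float, 2>> samples(numSamples);
    for (size_t i = 0; i < numSamples; ++i)
    {
        samples[i][0] = float(columns[i]) / float(numSamples - 1);  // column
        samples[i][1] = float(i) / float(numSamples - 1);           // row
    }
    return samples;
}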
Here are 100 rooks:
Sobol
Sobol in two dimensions is more complex to explain, so I'll link you to the source I used: Sobol Sequences Made Simple.
The 1D Sobol already covered is used for the X axis, and then something more complex is used for the Y axis:
Links
Bart Wronski has a really great series on a related topic: Dithering in Games
Wikipedia: Low Discrepancy Sequence
Wikipedia: Van der Corput Sequence
Using Fibonacci Sequence To Generate Colors
Deeper info and usage cases for low discrepancy sequences
Low discrepancy sequences are related to blue noise. Where white noise contains all frequencies evenly, blue noise has more high frequencies and fewer low frequencies. Blue noise is essentially the ultimate in low discrepancy, but can be expensive to compute. Here are some pages on blue noise:
The problem with 3D blue noise
Vegetation placement in “The Witness”
Here are some links from @marc_b_reynolds:
Sobol (low-discrepancy) sequence in 1-3D, stratified in 2-4D.
Classic binary-reflected Gray code.
Sobol.h
Code
#define _CRT_SECURE_NO_WARNINGS

#include <windows.h>  // for bitmap headers.  Sorry non windows people!
#include <stdio.h>
#include <cmath>
#include <vector>
#include <stdint.h>
#include <random>
#include <array>
#include <algorithm>
#include <stdlib.h>
#include <set>

typedef uint8_t uint8;

#define NUM_SAMPLES 100  // to simplify some 2d code, this must be a square
#define NUM_SAMPLES_FOR_COLORING 100

// Turning this on will slow things down significantly because it's an O(N^5) operation for 2d!
#define CALCULATE_DISCREPANCY 0

#define IMAGE1D_WIDTH 600
#define IMAGE1D_HEIGHT 50
#define IMAGE2D_WIDTH 300
#define IMAGE2D_HEIGHT 300
#define IMAGE_PAD 30

#define IMAGE1D_CENTERX ((IMAGE1D_WIDTH+IMAGE_PAD*2)/2)
#define IMAGE1D_CENTERY ((IMAGE1D_HEIGHT+IMAGE_PAD*2)/2)
#define IMAGE2D_CENTERX ((IMAGE2D_WIDTH+IMAGE_PAD*2)/2)
#define IMAGE2D_CENTERY ((IMAGE2D_HEIGHT+IMAGE_PAD*2)/2)

#define AXIS_HEIGHT 40
#define DATA_HEIGHT 20
#define DATA_WIDTH 2

#define COLOR_FILL SColor(255,255,255)
#define COLOR_AXIS SColor(0, 0, 0)

//======================================================================================
struct SImageData
{
    SImageData ()
        : m_width(0)
        , m_height(0)
    { }

    size_t m_width;
    size_t m_height;
    size_t m_pitch;
    std::vector<uint8> m_pixels;
};

struct SColor
{
    SColor (uint8 _R = 0, uint8 _G = 0, uint8 _B = 0)
        : R(_R), G(_G), B(_B)
    { }

    uint8 B, G, R;
};

//======================================================================================
bool SaveImage (const char *fileName, const SImageData &image)
{
    // open the file if we can
    FILE *file;
    file = fopen(fileName, "wb");
    if (!file) {
        printf("Could not save %s\n", fileName);
        return false;
    }

    // make the header info
    BITMAPFILEHEADER header;
    BITMAPINFOHEADER infoHeader;

    header.bfType = 0x4D42;
    header.bfReserved1 = 0;
    header.bfReserved2 = 0;
    header.bfOffBits = 54;

    infoHeader.biSize = 40;
    infoHeader.biWidth = (LONG)image.m_width;
    infoHeader.biHeight = (LONG)image.m_height;
    infoHeader.biPlanes = 1;
    infoHeader.biBitCount = 24;
    infoHeader.biCompression = 0;
    infoHeader.biSizeImage = (DWORD)image.m_pixels.size();
    infoHeader.biXPelsPerMeter = 0;
    infoHeader.biYPelsPerMeter = 0;
    infoHeader.biClrUsed = 0;
    infoHeader.biClrImportant = 0;

    header.bfSize = infoHeader.biSizeImage + header.bfOffBits;

    // write the data and close the file
    fwrite(&header, sizeof(header), 1, file);
    fwrite(&infoHeader, sizeof(infoHeader), 1, file);
    fwrite(&image.m_pixels[0], infoHeader.biSizeImage, 1, file);
    fclose(file);

    return true;
}

//======================================================================================
void ImageInit (SImageData& image, size_t width, size_t height)
{
    image.m_width = width;
    image.m_height = height;
    image.m_pitch = 4 * ((width * 24 + 31) / 32);
    image.m_pixels.resize(image.m_pitch * image.m_height);
    std::fill(image.m_pixels.begin(), image.m_pixels.end(), 0);
}

//======================================================================================
void ImageClear (SImageData& image, const SColor& color)
{
    uint8* row = &image.m_pixels[0];
    for (size_t rowIndex = 0; rowIndex < image.m_height; ++rowIndex)
    {
        SColor* pixels = (SColor*)row;
        std::fill(pixels, pixels + image.m_width, color);

        row += image.m_pitch;
    }
}

//======================================================================================
void ImageBox (SImageData& image, size_t x1, size_t x2, size_t y1, size_t y2, const SColor& color)
{
    for (size_t y = y1; y < y2; ++y)
    {
        uint8* row = &image.m_pixels[y * image.m_pitch];
        SColor* start = &((SColor*)row)[x1];
        std::fill(start, start + x2 - x1, color);
    }
}

//======================================================================================
float Distance (float x1, float y1, float x2, float y2)
{
    float dx = (x2 - x1);
    float dy = (y2 - y1);

    return std::sqrtf(dx*dx + dy*dy);
}

//======================================================================================
SColor DataPointColor (size_t sampleIndex)
{
    SColor ret;
    float percent = (float(sampleIndex) / (float(NUM_SAMPLES_FOR_COLORING) - 1.0f));

    ret.R = uint8((1.0f - percent) * 255.0f);
    ret.G = 0;
    ret.B = uint8(percent * 255.0f);

    float mag = (float)sqrt(ret.R*ret.R + ret.G*ret.G + ret.B*ret.B);
    ret.R = uint8((float(ret.R) / mag)*255.0f);
    ret.G = uint8((float(ret.G) / mag)*255.0f);
    ret.B = uint8((float(ret.B) / mag)*255.0f);

    return ret;
}

//======================================================================================
float RandomFloat (float min, float max)
{
    static std::random_device rd;
    static std::mt19937 mt(rd());
    std::uniform_real_distribution<float> dist(min, max);
    return dist(mt);
}

//======================================================================================
size_t Ruler (size_t n)
{
    size_t ret = 0;
    while (n != 0 && (n & 1) == 0)
    {
        n /= 2;
        ++ret;
    }
    return ret;
}

//======================================================================================
float CalculateDiscrepancy1D (const std::array<float, NUM_SAMPLES>& samples)
{
    // some info about calculating discrepancy
    // https://math.stackexchange.com/questions/1681562/how-to-calculate-discrepancy-of-a-sequence

    // Calculates the discrepancy of this data.
    // Assumes the data is [0,1) for valid sample range
    std::array<float, NUM_SAMPLES> sortedSamples = samples;
    std::sort(sortedSamples.begin(), sortedSamples.end());

    float maxDifference = 0.0f;
    for (size_t startIndex = 0; startIndex <= NUM_SAMPLES; ++startIndex)
    {
        // startIndex 0 = 0.0f.  startIndex 1 = sortedSamples[0]. etc
        float startValue = 0.0f;
        if (startIndex > 0)
            startValue = sortedSamples[startIndex - 1];

        for (size_t stopIndex = startIndex; stopIndex <= NUM_SAMPLES; ++stopIndex)
        {
            // stopIndex 0 = sortedSamples[0].  stopIndex N = 1.0f. etc
            float stopValue = 1.0f;
            if (stopIndex < NUM_SAMPLES)
                stopValue = sortedSamples[stopIndex];

            float length = stopValue - startValue;

            // open interval (startValue, stopValue)
            size_t countInside = 0;
            for (float sample : samples)
            {
                if (sample > startValue && sample < stopValue)
                    ++countInside;
            }
            float density = float(countInside) / float(NUM_SAMPLES);
            float difference = std::abs(density - length);
            if (difference > maxDifference)
                maxDifference = difference;

            // closed interval [startValue, stopValue]
            countInside = 0;
            for (float sample : samples)
            {
                if (sample >= startValue && sample <= stopValue)
                    ++countInside;
            }
            density = float(countInside) / float(NUM_SAMPLES);
            difference = std::abs(density - length);
            if (difference > maxDifference)
                maxDifference = difference;
        }
    }

    return maxDifference;
}

//======================================================================================
float CalculateDiscrepancy2D (const std::array<std::array<float, 2>, NUM_SAMPLES>& samples)
{
    // some info about calculating discrepancy
    // https://math.stackexchange.com/questions/1681562/how-to-calculate-discrepancy-of-a-sequence

    // Calculates the discrepancy of this data.
    // Assumes the data is [0,1) for valid sample range.

    // Get the sorted list of unique values on each axis
    std::set<float> setSamplesX;
    std::set<float> setSamplesY;
    for (const std::array<float, 2>& sample : samples)
    {
        setSamplesX.insert(sample[0]);
        setSamplesY.insert(sample[1]);
    }
    std::vector<float> sortedXSamples;
    std::vector<float> sortedYSamples;
    sortedXSamples.reserve(setSamplesX.size());
    sortedYSamples.reserve(setSamplesY.size());
    for (float f : setSamplesX)
        sortedXSamples.push_back(f);
    for (float f : setSamplesY)
        sortedYSamples.push_back(f);

    // Get the sorted list of samples on the X axis, for faster interval testing
    std::array<std::array<float, 2>, NUM_SAMPLES> sortedSamplesX = samples;
    std::sort(sortedSamplesX.begin(), sortedSamplesX.end(),
        [] (const std::array<float, 2>& itemA, const std::array<float, 2>& itemB)
        {
            return itemA[0] < itemB[0];
        }
    );

    // calculate discrepancy
    float maxDifference = 0.0f;
    for (size_t startIndexY = 0; startIndexY <= sortedYSamples.size(); ++startIndexY)
    {
        float startValueY = 0.0f;
        if (startIndexY > 0)
            startValueY = *(sortedYSamples.begin() + startIndexY - 1);

        for (size_t startIndexX = 0; startIndexX <= sortedXSamples.size(); ++startIndexX)
        {
            float startValueX = 0.0f;
            if (startIndexX > 0)
                startValueX = *(sortedXSamples.begin() + startIndexX - 1);

            for (size_t stopIndexY = startIndexY; stopIndexY <= sortedYSamples.size(); ++stopIndexY)
            {
                float stopValueY = 1.0f;
                if (stopIndexY < sortedYSamples.size())
                    stopValueY = sortedYSamples[stopIndexY];

                for (size_t stopIndexX = startIndexX; stopIndexX <= sortedXSamples.size(); ++stopIndexX)
                {
                    float stopValueX = 1.0f;
                    if (stopIndexX < sortedXSamples.size())
                        stopValueX = sortedXSamples[stopIndexX];

                    // calculate area
                    float length = stopValueX - startValueX;
                    float height = stopValueY - startValueY;
                    float area = length * height;

                    // open interval (startValue, stopValue)
                    size_t countInside = 0;
                    for (const std::array<float, 2>& sample : samples)
                    {
                        if (sample[0] > startValueX &&
                            sample[1] > startValueY &&
                            sample[0] < stopValueX &&
                            sample[1] < stopValueY)
                        {
                            ++countInside;
                        }
                    }
                    float density = float(countInside) / float(NUM_SAMPLES);
                    float difference = std::abs(density - area);
                    if (difference > maxDifference)
                        maxDifference = difference;

                    // closed interval [startValue, stopValue]
                    countInside = 0;
                    for (const std::array<float, 2>& sample : samples)
                    {
                        if (sample[0] >= startValueX &&
                            sample[1] >= startValueY &&
                            sample[0] <= stopValueX &&
                            sample[1] <= stopValueY)
                        {
                            ++countInside;
                        }
                    }
                    density = float(countInside) / float(NUM_SAMPLES);
                    difference = std::abs(density - area);
                    if (difference > maxDifference)
                        maxDifference = difference;
                }
            }
        }
    }

    return maxDifference;
}

//======================================================================================
void Test1D (const char* fileName, const std::array<float, NUM_SAMPLES>& samples)
{
    // create and clear the image
    SImageData image;
    ImageInit(image, IMAGE1D_WIDTH + IMAGE_PAD * 2, IMAGE1D_HEIGHT + IMAGE_PAD * 2);

    // setup the canvas
    ImageClear(image, COLOR_FILL);

    // calculate the discrepancy
    #if CALCULATE_DISCREPANCY
        float discrepancy = CalculateDiscrepancy1D(samples);
        printf("%s Discrepancy = %0.2f%%\n", fileName, discrepancy*100.0f);
    #endif

    // draw the sample points
    size_t i = 0;
    for (float f : samples)
    {
        size_t pos = size_t(f * float(IMAGE1D_WIDTH)) + IMAGE_PAD;
        ImageBox(image, pos, pos + 1, IMAGE1D_CENTERY - DATA_HEIGHT / 2, IMAGE1D_CENTERY + DATA_HEIGHT / 2, DataPointColor(i));
        ++i;
    }

    // draw the axis lines. horizontal first, then the two vertical ends
    ImageBox(image, IMAGE_PAD, IMAGE1D_WIDTH + IMAGE_PAD, IMAGE1D_CENTERY, IMAGE1D_CENTERY + 1, COLOR_AXIS);
    ImageBox(image, IMAGE_PAD, IMAGE_PAD + 1, IMAGE1D_CENTERY - AXIS_HEIGHT / 2, IMAGE1D_CENTERY + AXIS_HEIGHT / 2, COLOR_AXIS);
    ImageBox(image, IMAGE1D_WIDTH + IMAGE_PAD, IMAGE1D_WIDTH + IMAGE_PAD + 1, IMAGE1D_CENTERY - AXIS_HEIGHT / 2, IMAGE1D_CENTERY + AXIS_HEIGHT / 2, COLOR_AXIS);

    // save the image
    SaveImage(fileName, image);
}

//======================================================================================
void Test2D (const char* fileName, const std::array<std::array<float,2>, NUM_SAMPLES>& samples)
{
    // create and clear the image
    SImageData image;
    ImageInit(image, IMAGE2D_WIDTH + IMAGE_PAD * 2, IMAGE2D_HEIGHT + IMAGE_PAD * 2);

    // setup the canvas
    ImageClear(image, COLOR_FILL);

    // calculate the discrepancy
    #if CALCULATE_DISCREPANCY
        float discrepancy = CalculateDiscrepancy2D(samples);
        printf("%s Discrepancy = %0.2f%%\n", fileName, discrepancy*100.0f);
    #endif

    // draw the sample points
    size_t i = 0;
    for (const std::array<float, 2>& sample : samples)
    {
        size_t posx = size_t(sample[0] * float(IMAGE2D_WIDTH)) + IMAGE_PAD;
        size_t posy = size_t(sample[1] * float(IMAGE2D_WIDTH)) + IMAGE_PAD;
        ImageBox(image, posx - 1, posx + 1, posy - 1, posy + 1, DataPointColor(i));
        ++i;
    }

    // horizontal lines
    ImageBox(image, IMAGE_PAD - 1, IMAGE2D_WIDTH + IMAGE_PAD + 1, IMAGE_PAD - 1, IMAGE_PAD, COLOR_AXIS);
    ImageBox(image, IMAGE_PAD - 1, IMAGE2D_WIDTH + IMAGE_PAD + 1, IMAGE2D_HEIGHT + IMAGE_PAD, IMAGE2D_HEIGHT + IMAGE_PAD + 1, COLOR_AXIS);

    // vertical lines
    ImageBox(image, IMAGE_PAD - 1, IMAGE_PAD, IMAGE_PAD - 1, IMAGE2D_HEIGHT + IMAGE_PAD + 1, COLOR_AXIS);
    ImageBox(image, IMAGE_PAD + IMAGE2D_WIDTH, IMAGE_PAD + IMAGE2D_WIDTH + 1, IMAGE_PAD - 1, IMAGE2D_HEIGHT + IMAGE_PAD + 1, COLOR_AXIS);

    // save the image
    SaveImage(fileName, image);
}

//======================================================================================
void TestUniform1D (bool jitter)
{
    // calculate the sample points
    const float c_cellSize = 1.0f / float(NUM_SAMPLES+1);
    std::array<float, NUM_SAMPLES> samples;
    for (size_t i = 0; i < NUM_SAMPLES; ++i)
    {
        samples[i] = float(i+1) / float(NUM_SAMPLES+1);
        if (jitter)
            samples[i] += RandomFloat(-c_cellSize*0.5f, c_cellSize*0.5f);
    }

    // save bitmap etc
    if (jitter)
        Test1D("1DUniformJitter.bmp", samples);
    else
        Test1D("1DUniform.bmp", samples);
}

//======================================================================================
void TestUniformRandom1D ()
{
    // calculate the sample points
    const float c_halfJitter = 1.0f / float((NUM_SAMPLES + 1) * 2);
    std::array<float, NUM_SAMPLES> samples;
    for (size_t i = 0; i < NUM_SAMPLES; ++i)
        samples[i] = RandomFloat(0.0f, 1.0f);

    // save bitmap etc
    Test1D("1DUniformRandom.bmp", samples);
}

//======================================================================================
void TestSubRandomA1D (size_t numRegions)
{
    const float c_randomRange = 1.0f / float(numRegions);

    // calculate the sample points
    const float c_halfJitter = 1.0f / float((NUM_SAMPLES + 1) * 2);
    std::array<float, NUM_SAMPLES> samples;
    for (size_t i = 0; i < NUM_SAMPLES; ++i)
    {
        samples[i] = RandomFloat(0.0f, c_randomRange);
        samples[i] += float(i % numRegions) / float(numRegions);
    }

    // save bitmap etc
    char fileName[256];
    sprintf(fileName, "1DSubRandomA_%zu.bmp", numRegions);
    Test1D(fileName, samples);
}

//======================================================================================
void TestSubRandomB1D ()
{
    // calculate the sample points
    std::array<float, NUM_SAMPLES> samples;
    float sample = RandomFloat(0.0f, 0.5f);
    for (size_t i = 0; i < NUM_SAMPLES; ++i)
    {
        sample = std::fmodf(sample + 0.5f + RandomFloat(0.0f, 0.5f), 1.0f);
        samples[i] = sample;
    }

    // save bitmap etc
    Test1D("1DSubRandomB.bmp", samples);
}

//======================================================================================
void TestVanDerCorput (size_t base)
{
    // calculate the sample points
    std::array<float, NUM_SAMPLES> samples;
    for (size_t i = 0; i < NUM_SAMPLES; ++i)
    {
        samples[i] = 0.0f;
        float denominator = float(base);
        size_t n = i;
        while (n > 0)
        {
            size_t multiplier = n % base;
            samples[i] += float(multiplier) / denominator;
            n = n / base;
            denominator *= base;
        }
    }

    // save bitmap etc
    char fileName[256];
    sprintf(fileName, "1DVanDerCorput_%zu.bmp", base);
    Test1D(fileName, samples);
}

//======================================================================================
void TestIrrational1D (float irrational, float seed)
{
    // calculate the sample points
    std::array<float, NUM_SAMPLES> samples;
    float sample = seed;
    for (size_t i = 0; i < NUM_SAMPLES; ++i)
    {
        sample = std::fmodf(sample + irrational, 1.0f);
        samples[i] = sample;
    }

    // save bitmap etc
    char irrationalStr[256];
    sprintf(irrationalStr, "%f", irrational);
    char seedStr[256];
    sprintf(seedStr, "%f", seed);
    char fileName[256];
    sprintf(fileName, "1DIrrational_%s_%s.bmp", &irrationalStr[2], &seedStr[2]);
    Test1D(fileName, samples);
}

//======================================================================================
void TestSobol1D ()
{
    // calculate the sample points
    std::array<float, NUM_SAMPLES> samples;
    size_t sampleInt = 0;
    for (size_t i = 0; i < NUM_SAMPLES; ++i)
    {
        size_t ruler = Ruler(i + 1);
        size_t direction = size_t(size_t(1) << size_t(31 - ruler));
        sampleInt = sampleInt ^ direction;
        samples[i] = float(sampleInt) / std::pow(2.0f, 32.0f);
    }

    // save bitmap etc
    Test1D("1DSobol.bmp", samples);
}

//======================================================================================
void TestHammersley1D (size_t truncateBits)
{
    // calculate the sample points
    std::array<float, NUM_SAMPLES> samples;
    size_t sampleInt = 0;
    for (size_t i = 0; i < NUM_SAMPLES; ++i)
    {
        size_t n = i >> truncateBits;
        float base = 1.0f / 2.0f;
        samples[i] = 0.0f;
        while (n)
        {
            if (n & 1)
                samples[i] += base;
            n /= 2;
            base /= 2.0f;
        }
    }

    // save bitmap etc
    char fileName[256];
    sprintf(fileName, "1DHammersley_%zu.bmp", truncateBits);
    Test1D(fileName, samples);
}

//======================================================================================
float MinimumDistance1D (const std::array<float, NUM_SAMPLES>& samples, size_t numSamples, float x)
{
    // Used by poisson.
    // This returns the minimum distance that point (x) is away from the sample points, from [0, numSamples)
    float minimumDistance = 0.0f;
    for (size_t i = 0; i < numSamples; ++i)
    {
        float distance = std::abs(samples[i] - x);
        if (i == 0 || distance < minimumDistance)
            minimumDistance = distance;
    }
    return minimumDistance;
}

//======================================================================================
void TestPoisson1D ()
{
    // every time we want to place a point, we generate this many points and choose the one farthest away from all the other points (largest minimum distance)
    const size_t c_bestOfAttempts = 100;

    // calculate the sample points
    std::array<float, NUM_SAMPLES> samples;
    for (size_t sampleIndex = 0; sampleIndex < NUM_SAMPLES; ++sampleIndex)
    {
        // generate some random points and keep the one that has the largest minimum distance from any of the existing points
        float bestX = 0.0f;
        float bestMinDistance = 0.0f;
        for (size_t attempt = 0; attempt < c_bestOfAttempts; ++attempt)
        {
            float attemptX = RandomFloat(0.0f, 1.0f);
            float minDistance = MinimumDistance1D(samples, sampleIndex, attemptX);

            if (minDistance > bestMinDistance)
            {
                bestX = attemptX;
                bestMinDistance = minDistance;
            }
        }
        samples[sampleIndex] = bestX;
    }

    // save bitmap etc
    Test1D("1DPoisson.bmp", samples);
}

//======================================================================================
void TestUniform2D (bool jitter)
{
    // calculate the sample points
    std::array<std::array<float, 2>, NUM_SAMPLES> samples;
    const size_t c_oneSide = size_t(std::sqrt(NUM_SAMPLES));
    const float c_cellSize = 1.0f / float(c_oneSide+1);
    for (size_t iy = 0; iy < c_oneSide; ++iy)
    {
        for (size_t ix = 0; ix < c_oneSide; ++ix)
        {
            size_t sampleIndex = iy * c_oneSide + ix;

            samples[sampleIndex][0] = float(ix + 1) / (float(c_oneSide + 1));
            if (jitter)
                samples[sampleIndex][0] += RandomFloat(-c_cellSize*0.5f, c_cellSize*0.5f);

            samples[sampleIndex][1] = float(iy + 1) / (float(c_oneSide) + 1.0f);
            if (jitter)
                samples[sampleIndex][1] += RandomFloat(-c_cellSize*0.5f, c_cellSize*0.5f);
        }
    }

    // save bitmap etc
    if (jitter)
        Test2D("2DUniformJitter.bmp", samples);
    else
        Test2D("2DUniform.bmp", samples);
}

//======================================================================================
void TestUniformRandom2D ()
{
    // calculate the sample points
    std::array<std::array<float, 2>, NUM_SAMPLES> samples;
    const size_t c_oneSide = size_t(std::sqrt(NUM_SAMPLES));
    const float c_halfJitter = 1.0f / float((c_oneSide + 1) * 2);
    for (size_t i = 0; i < NUM_SAMPLES; ++i)
    {
        samples[i][0] = RandomFloat(0.0f, 1.0f);
        samples[i][1] = RandomFloat(0.0f, 1.0f);
    }

    // save bitmap etc
    Test2D("2DUniformRandom.bmp", samples);
}

//======================================================================================
void TestSubRandomA2D (size_t regionsX, size_t regionsY)
{
    const float c_randomRangeX = 1.0f / float(regionsX);
    const float c_randomRangeY = 1.0f / float(regionsY);

    // calculate the sample points
    std::array<std::array<float, 2>, NUM_SAMPLES> samples;
    for (size_t i = 0; i < NUM_SAMPLES; ++i)
    {
        samples[i][0] = RandomFloat(0.0f, c_randomRangeX);
        samples[i][0] += float(i % regionsX) / float(regionsX);

        samples[i][1] = RandomFloat(0.0f, c_randomRangeY);
        samples[i][1] += float(i % regionsY) / float(regionsY);
    }

    // save bitmap etc
    char fileName[256];
    sprintf(fileName, "2DSubRandomA_%zu_%zu.bmp", regionsX, regionsY);
    Test2D(fileName, samples);
}

//======================================================================================
void TestSubRandomB2D ()
{
    // calculate the sample points
    float samplex = RandomFloat(0.0f, 0.5f);
    float sampley = RandomFloat(0.0f, 0.5f);
    std::array<std::array<float, 2>, NUM_SAMPLES> samples;
    for (size_t i = 0; i < NUM_SAMPLES; ++i)
    {
        samplex = std::fmodf(samplex + 0.5f + RandomFloat(0.0f, 0.5f), 1.0f);
        sampley = std::fmodf(sampley + 0.5f + RandomFloat(0.0f, 0.5f), 1.0f);
        samples[i][0] = samplex;
        samples[i][1] = sampley;
    }

    // save bitmap etc
    Test2D("2DSubRandomB.bmp", samples);
}

//======================================================================================
void TestHalton (size_t basex, size_t basey)
{
    // calculate the sample points
    std::array<std::array<float, 2>, NUM_SAMPLES> samples;
    const size_t c_oneSide = size_t(std::sqrt(NUM_SAMPLES));
    const float c_halfJitter = 1.0f / float((c_oneSide + 1) * 2);
    for (size_t i = 0; i < NUM_SAMPLES; ++i)
    {
        // x axis
        samples[i][0] = 0.0f;
        {
            float denominator = float(basex);
            size_t n = i;
            while (n > 0)
            {
                size_t multiplier = n % basex;
                samples[i][0] += float(multiplier) / denominator;
                n = n / basex;
                denominator *= basex;
            }
        }

        // y axis
        samples[i][1] = 0.0f;
        {
            float denominator = float(basey);
            size_t n = i;
            while (n > 0)
            {
                size_t multiplier = n % basey;
                samples[i][1] += float(multiplier) / denominator;
                n = n / basey;
                denominator *= basey;
            }
        }
    }

    // save bitmap etc
    char fileName[256];
    sprintf(fileName, "2DHalton_%zu_%zu.bmp", basex, basey);
    Test2D(fileName, samples);
}

//======================================================================================
void TestSobol2D ()
{
    // calculate the sample points

    // x axis
    std::array<std::array<float, 2>, NUM_SAMPLES> samples;
    size_t sampleInt = 0;
    for (size_t i = 0; i < NUM_SAMPLES; ++i)
    {
        size_t ruler = Ruler(i + 1);
        size_t direction = size_t(size_t(1) << size_t(31 - ruler));
        sampleInt = sampleInt ^ direction;
        samples[i][0] = float(sampleInt) / std::pow(2.0f, 32.0f);
    }

    // y axis
    // Code adapted from http://web.maths.unsw.edu.au/~fkuo/sobol/
    // uses numbers: new-joe-kuo-6.21201

    // Direction numbers
    std::vector<size_t> V;
    V.resize((size_t)ceil(log((double)NUM_SAMPLES) / log(2.0)));
    V[0] = size_t(1) << size_t(31);
    for (size_t i = 1; i < V.size(); ++i)
        V[i] = V[i - 1] ^ (V[i - 1] >> 1);

    // Samples
    sampleInt = 0;
    for (size_t i = 0; i < NUM_SAMPLES; ++i)
    {
        size_t ruler = Ruler(i + 1);
        sampleInt = sampleInt ^ V[ruler];
        samples[i][1] = float(sampleInt) / std::pow(2.0f, 32.0f);
    }

    // save bitmap etc
    Test2D("2DSobol.bmp", samples);
}

//======================================================================================
void TestHammersley2D (size_t truncateBits)
{
    // figure out how many bits we are working in.
    size_t value = 1;
    size_t numBits = 0;
    while (value < NUM_SAMPLES)
    {
        value *= 2;
        ++numBits;
    }

    // calculate the sample points
    std::array<std::array<float, 2>, NUM_SAMPLES> samples;
    size_t sampleInt = 0;
    for (size_t i = 0; i < NUM_SAMPLES; ++i)
    {
        // x axis
        samples[i][0] = 0.0f;
        {
            size_t n = i >> truncateBits;
            float base = 1.0f / 2.0f;
            while (n)
            {
                if (n & 1)
                    samples[i][0] += base;
                n /= 2;
                base /= 2.0f;
            }
        }

        // y axis
        samples[i][1] = 0.0f;
        {
            size_t n = i >> truncateBits;
            size_t mask = size_t(1) << (numBits - 1 - truncateBits);

            float base = 1.0f / 2.0f;
            while (mask)
            {
                if (n & mask)
                    samples[i][1] += base;
                mask /= 2;
                base /= 2.0f;
            }
        }
    }

    // save bitmap etc
    char fileName[256];
    sprintf(fileName, "2DHammersley_%zu.bmp", truncateBits);
    Test2D(fileName, samples);
}

//======================================================================================
void TestRooks2D ()
{
    // make and shuffle rook positions
    std::random_device rd;
    std::mt19937 mt(rd());
    std::array<size_t, NUM_SAMPLES> rookPositions;
    for (size_t i = 0; i < NUM_SAMPLES; ++i)
        rookPositions[i] = i;
    std::shuffle(rookPositions.begin(), rookPositions.end(), mt);

    // calculate the sample points
    std::array<std::array<float, 2>, NUM_SAMPLES> samples;
    for (size_t i = 0; i < NUM_SAMPLES; ++i)
    {
        // x axis
        samples[i][0] = float(rookPositions[i]) / float(NUM_SAMPLES-1);

        // y axis
        samples[i][1] = float(i) / float(NUM_SAMPLES - 1);
    }

    // save bitmap etc
    Test2D("2DRooks.bmp", samples);
}

//======================================================================================
void TestIrrational2D (float irrationalx, float irrationaly, float seedx, float seedy)
{
    // calculate the sample points
    std::array<std::array<float, 2>, NUM_SAMPLES> samples;
    float samplex = seedx;
    float sampley = seedy;
    for (size_t i = 0; i < NUM_SAMPLES; ++i)
    {
        samplex = std::fmodf(samplex + irrationalx, 1.0f);
        sampley = std::fmodf(sampley + irrationaly, 1.0f);
        samples[i][0] = samplex;
        samples[i][1] = sampley;
    }

    // save bitmap etc
    char irrationalxStr[256];
    sprintf(irrationalxStr, "%f", irrationalx);
    char irrationalyStr[256];
    sprintf(irrationalyStr, "%f", irrationaly);
    char seedxStr[256];
    sprintf(seedxStr, "%f", seedx);
    char seedyStr[256];
    sprintf(seedyStr, "%f", seedy);
    char fileName[256];
    sprintf(fileName, "2DIrrational_%s_%s_%s_%s.bmp", &irrationalxStr[2], &irrationalyStr[2], &seedxStr[2], &seedyStr[2]);
    Test2D(fileName, samples);
}

//======================================================================================
float MinimumDistance2D (const std::array<std::array<float, 2>, NUM_SAMPLES>& samples, size_t numSamples, float x, float y)
{
    // Used by poisson.
    // This returns the minimum distance that point (x,y) is away from the sample points, from [0, numSamples)
    float minimumDistance = 0.0f;
    for (size_t i = 0; i < numSamples; ++i)
    {
        float distance = Distance(samples[i][0], samples[i][1], x, y);
        if (i == 0 || distance < minimumDistance)
            minimumDistance = distance;
    }
    return minimumDistance;
}

//======================================================================================
void TestPoisson2D ()
{
    // every time we want to place a point, we generate this many points and choose the one farthest away from all the other points (largest minimum distance)
    const size_t c_bestOfAttempts = 100;

    // calculate the sample points
    std::array<std::array<float, 2>, NUM_SAMPLES> samples;
    for (size_t sampleIndex = 0; sampleIndex < NUM_SAMPLES; ++sampleIndex)
    {
        // generate some random points and keep the one that has the largest minimum distance from any of the existing points
        float bestX = 0.0f;
        float bestY = 0.0f;
        float bestMinDistance = 0.0f;
        for (size_t attempt = 0; attempt < c_bestOfAttempts; ++attempt)
        {
            float attemptX = RandomFloat(0.0f, 1.0f);
            float attemptY = RandomFloat(0.0f, 1.0f);
            float minDistance = MinimumDistance2D(samples, sampleIndex, attemptX, attemptY);

            if (minDistance > bestMinDistance)
            {
                bestX = attemptX;
                bestY = attemptY;
                bestMinDistance = minDistance;
            }
        }
        samples[sampleIndex][0] = bestX;
        samples[sampleIndex][1] = bestY;
    }

    // save bitmap etc
    Test2D("2DPoisson.bmp", samples);
}

//======================================================================================
int main (int argc, char **argv)
{
    // 1D tests
    {
        TestUniform1D(false);
        TestUniform1D(true);

        TestUniformRandom1D();

        TestSubRandomA1D(2);
        TestSubRandomA1D(4);
        TestSubRandomA1D(8);
        TestSubRandomA1D(16);
        TestSubRandomA1D(32);

        TestSubRandomB1D();

        TestVanDerCorput(2);
        TestVanDerCorput(3);
        TestVanDerCorput(4);
        TestVanDerCorput(5);

        // golden ratio mod 1 aka (sqrt(5) - 1)/2
        TestIrrational1D(0.618034f, 0.0f);
        TestIrrational1D(0.618034f, 0.385180f);
        TestIrrational1D(0.618034f, 0.775719f);
        TestIrrational1D(0.618034f, 0.287194f);

        // sqrt(2) - 1
        TestIrrational1D(0.414214f, 0.0f);
        TestIrrational1D(0.414214f, 0.385180f);
        TestIrrational1D(0.414214f, 0.775719f);
        TestIrrational1D(0.414214f, 0.287194f);

        // PI mod 1
        TestIrrational1D(0.141593f, 0.0f);
        TestIrrational1D(0.141593f, 0.385180f);
        TestIrrational1D(0.141593f, 0.775719f);
        TestIrrational1D(0.141593f, 0.287194f);

        TestSobol1D();

        TestHammersley1D(0);
        TestHammersley1D(1);
        TestHammersley1D(2);

        TestPoisson1D();
    }

    // 2D tests
    {
        TestUniform2D(false);
        TestUniform2D(true);

        TestUniformRandom2D();

        TestSubRandomA2D(2, 2);
        TestSubRandomA2D(2, 3);
        TestSubRandomA2D(3, 11);
        TestSubRandomA2D(3, 97);

        TestSubRandomB2D();

        TestHalton(2, 3);
        TestHalton(5, 7);
        TestHalton(13, 9);

        TestSobol2D();

        TestHammersley2D(0);
        TestHammersley2D(1);
        TestHammersley2D(2);

        TestRooks2D();

        // X axis = golden ratio mod 1 aka (sqrt(5)-1)/2
        // Y axis = sqrt(2) mod 1
        TestIrrational2D(0.618034f, 0.414214f, 0.0f, 0.0f);
        TestIrrational2D(0.618034f, 0.414214f, 0.775719f, 0.264045f);

        // X axis = sqrt(2) mod 1
        // Y axis = sqrt(3) mod 1
        TestIrrational2D(std::fmodf((float)std::sqrt(2.0f), 1.0f), std::fmodf((float)std::sqrt(3.0f), 1.0f), 0.0f, 0.0f);
        TestIrrational2D(std::fmodf((float)std::sqrt(2.0f), 1.0f), std::fmodf((float)std::sqrt(3.0f), 1.0f), 0.775719f, 0.264045f);

        TestPoisson2D();
    }

    #if CALCULATE_DISCREPANCY
        printf("\n");
        system("pause");
    #endif
}