Family Photos vs 256 KB of RAM

Discussion on Hacker News and Lobsters.
Some time ago I was thinking about making something nice for my wife as a Valentine's Day gift. My eye caught a rather cool-looking 7-color eInk display with a Raspberry Pi Pico W on board – the Pimoroni Inky Frame. At first glance it looked quite polished, with ready-to-use SDKs in Python and C++. I thought it would be interesting to make a dynamic family photo frame out of it, something my wife could have on her desk, with the ability to switch photos from time to time at the press of a button.
I bought it the same day. Oh boy, I didn't know what I was getting myself into…
Disclaimer: This is not a promotional post and there are no affiliate links in this article. I actually hate this device quite a bit! Read along to find out why.
First impressions
Label on the packaging of the Inky Frame. I think it looks fantastic! I repurposed my box for other parcels, so this one is from another buyer on Pimoroni's website.
The first hour with the device was actually quite nice. The packaging was on point, and the frame itself felt well-built, with a pirate-let's-hack-this-together vibe. I liked it! However, for some reason, the battery pack didn't work. This was not a big deal for me, because my wife's desk has built-in USB power, which I intended to use with the frame. Pimoroni did ship me a replacement battery pack free of charge, but that didn't work either. There is probably some fault in the board itself; I'm not sure, and again, I didn't really care at the time. The way the frame is powered turned out to be extremely important later, but I didn't know that yet.
What do you mean a single photo is 3 MB?
A chonky file size for a chonky lady. This beauty is named Guinness; she is a lovely creature I had the pleasure of catsitting for a while when I lived in Ireland.
The frame itself includes an SD card slot. My original plan was to load all the photos onto it and read them into the Raspberry Pi Pico W's RAM on demand. The Pico would then convert the image into the 7-color palette and transfer it onto the eInk screen. Pimoroni did a great job with their SDK – it's an absolute beast of a library. It already includes the logic for decoding JPEG images and implements a very decent dithering from the RGB color space into the 7 colors of the eInk screen. There is even an example of an image gallery – problem solved, right?
Or so I thought. The example image gallery uses photos from the James Webb Space Telescope. There is an interesting property of these images apart from their beauty – they are mostly black, which allows them to compress very well. Some of them are only 8 KB, which comfortably fits into the 256 KB of RAM on the Raspberry Pi Pico W. Even after cropping to the size of the screen (600 x 448 pixels), some of my family photos were still at least 130 KB of JPEG. Which by all means should fit into 256 KB of RAM, but it actually doesn't.
The thing is, the Pimoroni SDK itself takes up a considerable portion of RAM. It provides an interface to “draw” on an internal buffer and then flush that buffer onto the screen. You cannot turn this buffer off, probably because writing a stream of bytes directly to the eInk screen is a very niche use case. From my experiments, we have around 130 KB of RAM to spare, which is too close to the photo size. My naive version coded in MicroPython ran out of memory about half of the time I turned the frame on.
Hacking a solution close to a deadline
I think it was around the 12th of February when I discovered the problem with the image size. I needed a solution, and fast, so I came up with the simplest one I could think of. If a single image doesn't fit into RAM, let's cut it into multiple smaller ones and read them one by one. The process of adding a photo to the frame was:
- Open GIMP
- Scale the image so that an interesting part of it fits the 600 x 448 crop area
- Crop the image to 600 x 448
- Save it with the lowest possible quality
- Run the imagemagick incantation
convert input.jpeg -crop 75x56 +repage +adjoin output_%d.jpeg
to cut the image into 64 parts (75 x 56 pixel tiles, an 8 x 8 grid)
- Repeat
I knew this would bite me in the future, but at the time I just rolled with it. Unfortunately, the resulting images did not come out at 130 KB / 64 each. The smaller the image, the less context the compression algorithm has to work with, so the resulting parts turned out to be around 18 KB each.
The Raspberry Pi Pico W then read all 64 images from the SD card sequentially (which took at least 10 seconds in total) and put each of them into the corresponding part of the buffer. It worked. It was dog slow, but it worked.
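For the curious, the placement logic was roughly the following. The original was MicroPython and is long gone, so this is only a sketch of the idea in C++; draw_jpeg_at() is a hypothetical stand-in for whatever JPEG-decode-and-blit helper the SDK provides, not a real SDK call:

#include <cstdio>

// Hypothetical helper: decode one JPEG tile and draw it at (x, y) in the SDK's buffer.
void draw_jpeg_at(const char* filename, int x, int y);

void display_tiled_photo() {
  const int TILE_W = 600 / 8;  // 75
  const int TILE_H = 448 / 8;  // 56
  char filename[32];
  // imagemagick writes the tiles row by row: output_0.jpeg .. output_63.jpeg
  for (int i = 0; i < 64; i++) {
    snprintf(filename, sizeof(filename), "output_%d.jpeg", i);
    draw_jpeg_at(filename, (i % 8) * TILE_W, (i / 8) * TILE_H);
  }
}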
There was one thing I didn't fully understand though. For some reason, none of the code examples from Pimoroni contained any loops. I was expecting to see at least one loop in the image gallery example, because you have to check periodically whether any of the buttons have been pressed and change the image if they have. Again, I didn't pay much attention to it because of the deadline, and just wrote a busy loop checking for button inputs. I didn't care about power consumption either, thanks to the USB power, so it was good enough. As it would turn out later, I dodged an enormous rabbit hole of undocumented behaviour by doing this. I mean, of course I did step into it later down the line, but at least not before the deadline.
Finally, I presented the photo frame, backed by a hacky solution, to my wife, and she loved it! It brought me a lot of joy to walk past her desk every day and see that she had changed the photo to a different one.
After roughly a month, the inevitable happened. My wife told me that the frame had stopped working.
Designing a proper solution
The back of the Inky Frame with the Raspberry Pi Pico W soldered onto it – a brutal, but honest solution.
Instead of debugging the problem and optimizing the memory usage of my MicroPython code, I did the expected thing and decided to rewrite the whole thing in C++. This was mainly because I don't have much experience with MicroPython, and it's far easier for me to predict the memory usage of C++ code. With MicroPython, I was calling gc.collect() everywhere in the hope that it would magically work.
I wanted to solve the problem of images randomly causing out-of-memory situations for good. For that, I needed an image format that would have a stable size and still fit into the memory of the Raspberry Pi Pico W. The naive approach of just writing RGB values for all pixels clearly doesn't work: there are 600 * 448 pixels to describe and each of them requires 3 bytes – one for each of the RGB components – which adds up to roughly 806 KB, clearly exceeding the Pico's capabilities. However, the eInk screen can't display the full RGB range – it only has 7 colors. What if we encoded our image using only these colors? 7 colors fit into 3 bits per pixel, so in total we need 600 * 448 * 3 / 8 bytes, which is about 100 KB. Even with the Pimoroni SDK's memory usage accounted for, we'd be able to fit our image into RAM! A bonus of this solution is that we don't have to do any decoding on the Pico's side. We can just read the raw bytes from the file and interpret them as color indexes.
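To make the layout concrete, here is a minimal sketch of the packing arithmetic (illustrative only – the helper names are mine, and the real encoder is the web UI described below):

#include <cstdint>
#include <cstddef>
#include <vector>

constexpr size_t WIDTH = 600;
constexpr size_t HEIGHT = 448;
// 600 * 448 pixels * 3 bits each = 100,800 bytes (~100 KB).
constexpr size_t PACKED_SIZE = WIDTH * HEIGHT * 3 / 8;

// Store one 3-bit palette index (0..6) for the pixel at (x, y).
void pack_pixel(std::vector<uint8_t>& packed, size_t x, size_t y, uint8_t color3bit) {
  size_t bit = (y * WIDTH + x) * 3;
  for (size_t i = 0; i < 3; i++) {
    packed[(bit + i) / 8] |= ((color3bit >> i) & 1) << ((bit + i) % 8);
  }
}

// Read it back - the same arithmetic the firmware uses via extract_bit() below.
uint8_t unpack_pixel(const std::vector<uint8_t>& packed, size_t x, size_t y) {
  size_t bit = (y * WIDTH + x) * 3;
  uint8_t color = 0;
  for (size_t i = 0; i < 3; i++) {
    color |= ((packed[(bit + i) / 8] >> ((bit + i) % 8)) & 1) << i;
  }
  return color;
}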
The obvious drawback of this approach is that now we need to do the encoding ourselves. Cropping and rescaling the image is pretty simple, but to get from RGB into the 7-color space we also need to implement dithering, which was previously handled by the Pimoroni SDK. Luckily, the dithering logic is relatively simple and can be easily extracted from the SDK code. The only thing I couldn't get immediately were the precomputed thresholds for dithering. I took a simple approach and just printed them to the console from the Raspberry Pi Pico W running a program that does only that.
With all the necessary data for encoding images, I had two paths in front of me:
1. Write a Python script that does the encoding and call it after cropping/scaling the image in GIMP. This would work, but I would remain the bottleneck whenever my wife wanted to update the photos on the frame
2. Create a web UI friendly to a non-technical person, which would do all the cropping, scaling and encoding
I decided to go with route (2), because it allows my wife to pick new photos whenever she wants, edit them in the UI and then simply copy them to the SD card.
This turned out to be simpler than I thought! All the heavy lifting was done by cropper.js, which already implements drag-and-drop controls with crop and scale functionality. The only thing left for me to do was the encoding part, which was very simple to port from the C++ in the Pimoroni SDK to vanilla JavaScript. The resulting user flow is very simple: upload an image (to the browser – no data is sent to a server), edit it however you like and download the encoded result. I decided to publish it on my website for simplicity, so feel free to check it out here. You can also explore the dithering logic in the main.js file; it's just a few lines of code really doing all the work:
const canvas = cropper.getCroppedCanvas({
  width: 600,
  height: 448,
});
const rawPixels = canvas.getContext("2d").getImageData(0, 0, 600, 448).data;
const inkyPixels = new Uint8Array(600 * 448 * 3 / 8);

// Set a single bit in the packed output buffer.
function setBitByIndex(index, value) {
  const byteIndex = Math.floor(index / 8);
  const bitIndex = index % 8;
  inkyPixels[byteIndex] |= (value << bitIndex);
}

for (let y = 0; y < 448; y++) {
  for (let x = 0; x < 600; x++) {
    // candidateCache holds the precomputed dither candidates per RGB value and
    // dither16Pattern is the 4x4 ordered-dither table (both defined elsewhere
    // in main.js); the exact cache key derivation is omitted here.
    const cacheKey = getCacheKey(rawPixels, x, y);
    const patternIndex = (x & 0b11) | ((y & 0b11) << 2);
    const color3Bit = candidateCache[cacheKey][dither16Pattern[patternIndex]];
    const startBitIndex = y * 600 * 3 + x * 3;
    setBitByIndex(startBitIndex, color3Bit & 1);
    setBitByIndex(startBitIndex + 1, (color3Bit >> 1) & 1);
    setBitByIndex(startBitIndex + 2, (color3Bit >> 2) & 1);
  }
}
After the image is encoded, I simply read the whole thing from the SD card into a ~100 KB buffer on the Raspberry Pi Pico W side and then copy it into the Pimoroni SDK's eInk screen buffer. The whole thing is just a couple of functions, most of which is handling IO errors:
#include <cstdio>

#include "ff.h"            // FatFs, for the SD card
#include "inky_frame.hpp"  // Pimoroni Inky Frame SDK

#define IMAGE_WIDTH 600
#define IMAGE_HEIGHT 448
#define IMAGE_SIZE ((IMAGE_WIDTH * IMAGE_HEIGHT * 3) / 8)

InkyFrame inky;

uint8_t extract_bit(const uint8_t* bytes, size_t index) {
  size_t byte_index = index / 8;
  size_t bit_index = index % 8;
  return (bytes[byte_index] >> bit_index) & 1;
}

void display_image(const char* filename) {
  FIL file;
  FRESULT res = f_open(&file, filename, FA_READ);
  if (res != FR_OK) {
    printf("[ERROR] Failed to open image '%s', code: %d\n", filename, res);
    return;
  }

  // ~100 KB is too large for the stack, so keep the buffer static.
  static uint8_t buffer[IMAGE_SIZE];
  UINT read = 0;
  res = f_read(&file, buffer, IMAGE_SIZE, &read);
  f_close(&file);
  if (res != FR_OK) {
    printf("[ERROR] Failed to read contents of image '%s', code: %d\n", filename, res);
    return;
  }
  if (read != IMAGE_SIZE) {
    printf("[ERROR] Image '%s' has the wrong size, actual: %u, expected: %d\n", filename, read, IMAGE_SIZE);
    return;
  }

  for (size_t y = 0; y < IMAGE_HEIGHT; y++) {
    for (size_t x = 0; x < IMAGE_WIDTH; x++) {
      // Each pixel is a 3-bit color index packed into the buffer.
      size_t base = (y * IMAGE_WIDTH + x) * 3;
      uint8_t color = extract_bit(buffer, base)
                    | (extract_bit(buffer, base + 1) << 1)
                    | (extract_bit(buffer, base + 2) << 2);
      inky.set_pen(color);
      inky.set_pixel(Point(x, y));
    }
  }
  inky.update(true);
}
The only thing left to do was to implement some simple controls for switching to a different photo using the buttons on the frame. Obviously, after inventing a custom image format and a whole frontend for it, this would be the easiest part, right? Right?..
Sweet dreams and wake-ups
Remember how I told you that there are no loops in the Pimoroni SDK examples? Even the image gallery example just exits after displaying one image. This got me curious – wasn't the image gallery supposed to react to user input and, well, change the image? I tried to copy the same approach into my code and it didn't work. As expected, the frame displayed a single image and immediately exited. Thinking that the problem must be on my side, I tried to remove as much as possible from my code. In the end, it was just blinking an LED, but it still didn't work. Losing my faith and sanity, I tried to copy the code from the example as is – it still didn't work!
After lurking around the Pimoroni forums for a while, my eye caught a curious term mentioned in the context of the Inky Frame – RTC. I assumed that RTC here probably meant Real-Time Clock. I remembered that the Pimoroni SDK has a method inky.sleep(NUMBER_OF_SECONDS), which, curiously, was always called just before the final return statement in the main function. I lurked around some more and found the phrase “RTC wake-up”. A sudden realization started to crawl into my head.
Real-Time Clocks are usually external devices. External devices are powered by a battery. No battery – no external device. No RTC – no wake-ups. No wake-ups – no code running. Since my battery pack didn't work, the frame was powered by USB the whole time.
I searched the forum for these specific keywords and, sure enough, there was a thread about it. Every time you ask the frame to inky.sleep(..), the Raspberry Pi Pico W is fully powered down. Button presses and the RTC on the board can cause the Pico W to “wake up”, running the code from the top. No wonder there were no loops in the examples – the frame hardware provides the guarantee that the code runs in a loop. But only if a battery is attached, which, of course, was not documented anywhere. Actually, none of this was documented anywhere. I literally pieced the whole thing together from replies on the Pimoroni forums, and that is not okay for a product with a retail price of ~$100.
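In other words, with a battery attached the intended program shape is roughly the following. This is a sketch based on my reading of the forum threads and the examples, not of any official documentation; WAKE_UP_DELAY is a placeholder and the units of sleep()'s argument are whatever the SDK expects:

#include "inky_frame.hpp"  // Pimoroni Inky Frame C++ SDK

InkyFrame inky;

constexpr int WAKE_UP_DELAY = 60;  // placeholder; units per the SDK's sleep()

int main() {
  inky.init();

  // Runs from the top on every wake-up: pick the next image and draw it.
  // (drawing code omitted)

  // Power the whole board down. With the battery pack attached, the on-board
  // RTC or a button press powers the Pico W back up and main() starts again
  // from the beginning - the hardware itself is the loop.
  inky.sleep(WAKE_UP_DELAY);

  return 0;  // on battery power, effectively never reached
}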
I was not thrilled to discover this. Running out of options, I implemented the busy loop again. A true Grug Brained Developer move would have been to implement the busy loop from the start, but we all have something to learn, I suppose.
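The loop itself is nothing fancy – roughly this shape (a sketch, not the verbatim firmware; button_pressed() and next_filename() are hypothetical helpers standing in for the button reading and the SD card file iteration, and display_image() is the function from the listing above):

#include "pico/stdlib.h"

bool button_pressed();                 // hypothetical: reads the frame's buttons
const char* next_filename();           // hypothetical: cycles through files on the SD card
void display_image(const char* name);  // from the listing above

int main() {
  stdio_init_all();
  display_image(next_filename());

  // No battery means no RTC wake-ups, so just poll the buttons forever.
  // Power draw doesn't matter on USB power.
  while (true) {
    if (button_pressed()) {
      display_image(next_filename());
    }
    sleep_ms(50);
  }
}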
Conclusion
I'm quite happy with how this project turned out. The new version written in C++ is stable and has been working fine for over a month now with no issues. My wife can easily add new photos or remove old ones whenever she wants. She has grown fond of this little thing, and it warms my heart.
Regarding the frame itself – I'm honestly not sure who this product is for. The RAM of the Raspberry Pi Pico W is clearly no match for modern photos. There is no external RAM chip on the board and no interface to stream an image onto the screen chip. The screen is clearly not fit for any interactive applications, because the update time is 30 seconds and it doesn't support partial updates. The only two use cases left are calendars (seriously?) and photos from the James Webb Space Telescope. Once you go off the advertised path, you are completely on your own, with no documentation and little hope that whatever you come up with fits within the device's capabilities. One could call this “a true hacker spirit”. In my opinion, “a true hacker spirit” is reverse-engineering a chip that was made before you were born. This is just a poorly documented product with limited potential.