
The Quest for Netflix on Asahi Linux


By David Buchanan, 9th of March 2023

“do not violate the DMCA challenge 2023”

Note: If you’re here because you just want to watch Netflix on Asahi, install this, and then scroll down to the “Netflix-Specific Annoyances” section.

About 6 months ago, the macOS install on my 1-year-old MacBook decided to soft-brick itself.

Rather than wiping and reinstalling, I took the opportunity to switch to Asahi Linux, and I haven’t looked back.

Something that bugged me was that I couldn’t use the official Spotify app anymore, and similarly, that I couldn’t watch Netflix.

Truth be told, I don’t care very much about Netflix – the UX offered by BitTorrent is superior. On the other hand, I’m quite invested in the Spotify ecosystem, and while there are third-party open-source clients that run great on Asahi, I do prefer the official interface (which is a controversial take, I gather).

While Spotify doesn’t yet offer a native client for aarch64 Linux, their web app is perfectly usable—or at least, it would be, if it didn’t show this wonderful error message:

Playback of protected content is not enabled.

The root cause of this error is that the Widevine DRM module is not installed (which is also why Netflix doesn’t work).

Thus begins the “do not violate the DMCA challenge 2023”. The goal of this challenge is to figure out how to watch Netflix on Asahi Linux without bypassing or otherwise breaking DRM.

You may notice that this article is somewhat longer than my 280-character post on doing the latter, from 2019.

Installing Widevine

Clearly, the solution to all our problems is to install and use Widevine. Unfortunately, one does not simply Install Widevine on a platform that isn’t officially supported by Google.

The only officially supported way to use Widevine on Linux is using Chrome on an x86_64 CPU.

The astute reader may ask, roughly in this order:

  • How come it also works in Chromium and Firefox, on x86_64 Linux?
  • How come it also works on Android, which is aarch64 Linux under the hood?
  • How come it also works on Raspberry Pi, which is aarch64 Linux?

Wow, you ask great questions! I’ll address them, in order:

Widevine in Firefox on x86_64 Linux?

Webpages access DRM modules through the Encrypted Media Extensions API, which is a W3C standard. It specifies how webpages talk to the browser about doing DRM Things™.

In the case of Chrome, the browser doesn’t implement the DRM itself, but delegates it to a native library called a CDM (Content Decryption Module).

In the case of Widevine-in-Chrome-on-Linux, this CDM takes the form of a dynamic library called libwidevinecdm.so. This library is an opaque proprietary blob that we’re forbidden to look inside (at least, that’s how they’d prefer it to be).

Graciously, as part of the Chromium project, Google provides the C++ headers required to interface with The Blob. This interface allows other projects like Firefox to implement support for Widevine, via the EME API, using the exact same blob as Chrome does. That’s convenient, but it doesn’t help us on aarch64 Linux – there’s no corresponding libwidevinecdm.so for us to borrow from Chrome, because Widevine-in-Chrome-on-Linux-on-aarch64 is not an officially supported platform.

Widevine on Android?

The short answer is: the DRM works differently on Android. Due to the differences between the Android platform and desktop Linux, the Android implementations of Widevine won’t be useful to us, unless you’re planning on losing the Do Not Violate The DMCA Challenge (that’s left as an exercise for the reader).

Widevine on Raspberry Pi?

Earlier, I said that “Widevine-in-Chrome-on-Linux-on-aarch64” is not an officially supported platform.

I lied.

Chromebooks exist, many have aarch64 CPUs, they run Chrome on Linux (sort of), and they officially support Widevine. At some point, somebody noticed that there’s a perfectly cromulent libwidevinecdm.so available inside Google’s Chromebook recovery images, and wrote tools to download and extract it from the publicly available images. As far as I can tell, the Raspberry Pi folks use scripts like that to obtain the CDM, and helpfully package it in a .deb file for easy installation on a Pi.

However, there’s another catch: Although Chromebooks have aarch64 CPUs and kernels, they run an armv7l userspace. This means the CDM blob is also armv7l. That’s fine on Pis because they can also run an armv7l userspace (I think it’s even still the default?). Unfortunately, Apple Silicon cannot run an armv7l userspace natively; the hardware simply doesn’t support it.

Wait! I lied once more!

Or rather, that was all true at the time when I first investigated Widevine-on-Asahi, several months ago. A few weeks ago, Google decided to enter the 21st century and started shipping aarch64 userspaces on certain Chromebook models. This means that “Widevine-in-Chrome-on-Linux-on-aarch64” does exist. The ChromeOS blob extraction process works as before, and the Pi Foundation conveniently packages it as a .deb for Pi users.

This is a viable path forward. Onwards!

Widevine on Arch Linux ARM

The next hurdle is that ChromeOS is not quite the same thing as desktop Linux. ChromeOS’s glibc has some weird patches that make it incompatible with the glibc you’ll find on a standard Arch Linux ARM box. If you try to load libwidevinecdm.so, you’ll be hit with a hard-to-debug segfault somewhere in glibc.

The glibc-widevine AUR package neatly solves this problem, by patching glibc to work around ChromeOS’s idiosyncrasies. As far as I can tell, Raspbian’s glibc has similar compatibility patches out-of-the-box.

Since it’s not an officially supported platform, the upstream Chromium sources are not configured to support Widevine on aarch64 by default. However, re-enabling that support at build-time is trivial, and fortunately that’s what happens in the Arch Linux ARM Chromium package, thanks to a forward-thinking patch.

If you haven’t looked at the position of the scroll bar on this article, you might think we’re almost there! If you have a non-Apple-Silicon aarch64 device (a Linux’d Nintendo Switch?), you can probably just install the widevine-aarch64 AUR package (which grabs the CDM blob from the Pi repos and sets things up), and you’ll have a fully functioning Widevine installation.

To recap, the chain of events so far is:

  • Google publishes a ChromeOS image with an aarch64 userspace.
  • The Pi Foundation (presumably) uses a script to extract the Widevine blob from this image and packages it for Raspbian as a .deb.
  • glibc is patched to work around ChromeOS’s weirdness.
  • The Chromium build config is patched to enable Widevine support on ARM.
  • Chrom{e,ium} and Firefox can now use Widevine DRM (yay?)
  • Netflix and Spotify work (yay!)

All these steps had already been figured out by various helpful people before I came along. But, this isn’t the end of the story for Asahi users…

Widevine on Asahi-flavoured Arch Linux ARM

We’re on the home stretch now, right?


Not quite: there’s one final showstopper for Asahi users, and it’s a big one: Asahi Linux is built to use 16K page sizes. The Widevine blobs available to us only support 4K pages. While it is possible to run a 4K kernel on Apple Silicon, it’s a bit of a bodge right now, and it’s not something I’m prepared to do on my daily-driver machine – and I assume most other people don’t want to either.
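If you want to check which page size your own kernel uses, you can run `getconf PAGESIZE` from a shell, or ask from Python:

```python
import os

def kernel_page_size() -> int:
    """Return the kernel's page size in bytes
    (4096 on most distros, 16384 on Asahi Linux)."""
    return os.sysconf("SC_PAGESIZE")

print(kernel_page_size())
```

On an Asahi machine this prints 16384; on a typical x86_64 box, 4096.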

This incompatibility is such a significant hurdle that Asahi contributor chadmed told me not to bother:

07:58 <chadmed> Retr0id: dont bother its LOAD sections are 0x1000 aligned
                (i already tried)
07:58 <Retr0id> im trying harder :P
07:59 <chadmed> :D

I have a strange affliction whereby I can only solve a problem if somebody else implies that I can’t, so this was exactly what I needed to hear.

Earlier, I described the CDM as an “opaque proprietary blob that we’re forbidden to look inside”. If we did look, this would happen:

live reenactment

How can we overcome this page alignment issue, without being obliterated by the full force of viewing obfuscated code?

We can’t gaze directly into the abyss today, but we can tiptoe around the edge – both in order to preserve our sanity, and to make sure we don’t get nerd-sniped into losing the Do Not Violate The DMCA Challenge.

Before we continue, I’d like to explain what the actual problem is. Before that, I need to introduce the Problem Factory:

ELF Loading

ELF stands for Executable and Linkable Format, and it’s the de facto format for storing executable code on Linux; our CDM library (libwidevinecdm.so) is no exception.

Typically, an ELF is loaded by a loader (wow!), which could be the Linux kernel itself, or a runtime loader like ld.so. In either case, the ELF headers tell the loader how to load the code (and data) into memory, and prepare it for execution. libwidevinecdm.so is always loaded dynamically at runtime (which is typical for anything plugin-like). Specifically, browsers load it using dlopen(), which is provided by glibc.

When dlopen() is invoked, there are three main phases that occur:

  • Loading
  • Linking and Relocation
  • Execution

The ELF contains a table of Program Headers, each of which describes a Segment of the program. These headers are parsed by the loader, and each LOAD segment (a specific segment type) contains metadata telling the loader how it should be loaded into memory, and which permissions the memory for the Segment should have (read/write/execute). It’s the alignment of these LOAD segments that’s causing issues with Asahi’s 16K pages (more on this later!)
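To make the structure concrete, here’s a minimal sketch of parsing Elf64_Phdr entries with Python’s struct module. The field layout comes from the ELF-64 spec; the demo values are the first PT_LOAD segment of my CDM copy (this is an illustrative sketch, not the actual patcher script):

```python
import struct

# Elf64_Phdr layout (little-endian): p_type, p_flags, p_offset, p_vaddr,
# p_paddr, p_filesz, p_memsz, p_align
PHDR_FMT = "<IIQQQQQQ"   # 56 bytes per header
PT_LOAD = 1

def parse_phdrs(data, e_phoff, e_phentsize, e_phnum):
    """Yield (type, flags, offset, vaddr, filesz, memsz, align) per header."""
    for i in range(e_phnum):
        p_type, p_flags, p_offset, p_vaddr, _paddr, p_filesz, p_memsz, p_align = \
            struct.unpack_from(PHDR_FMT, data, e_phoff + i * e_phentsize)
        yield (p_type, p_flags, p_offset, p_vaddr, p_filesz, p_memsz, p_align)

# Demo: one r-x LOAD segment (flags 5 = R|X), matching the CDM's first PT_LOAD.
demo = struct.pack(PHDR_FMT, PT_LOAD, 5, 0, 0, 0, 0x904290, 0x904290, 0x1000)
for hdr in parse_phdrs(demo, 0, 56, 1):
    print(hdr)
```

Real tools would first read e_phoff / e_phentsize / e_phnum out of the ELF header, but the idea is the same.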

Libraries get loaded at a random base address. The job of the linker is to adjust the code and data to account for this arbitrary new location, resolving references within the library, but also potentially references to other libraries (e.g. where one library calls a function from another). The dynamic linker (part of glibc, in this case) parses a bunch of ELF structures to make this happen, including but not limited to the Section Header Table, and the Dynamic Section. These headers (of various kinds) enumerate Relocations, which tell the linker specifically where and how the code/data should be adjusted, to make everything work.

Side note: Earlier, chadmed meant to say “LOAD segments”, not “LOAD sections”. In short, Segments relate to how code is mapped into memory, and Sections relate to how it’s linked and relocated. It’s an easy mistake to make, and honestly I use them interchangeably when I’m not writing articles like this one. Whoever designed ELF really should have picked better names for things. Anyway, moving on…

Although a shared library is not an “executable” per se, it still has an INIT_ARRAY, which is an array of function pointers that are called sequentially by the loader, after the ELF has been linked. The library uses these INIT_ARRAY entries to do whatever startup initialisation tasks it requires.

After that, the whole loading process is complete, and the “host” program can start calling functions from the newly loaded library.

The Problem

During this process, the loader uses the mmap() syscall to map subsections of the ELF file into memory at a particular address, as directed by the LOAD segments. If we RTFM (for mmap), we find out that:

offset must be a multiple of the page size

(offset being the offset into the file to start from.) And:

addr must be suitably aligned: for most architectures a multiple of the page size is sufficient

(addr being the memory address to map it at.)

As a consequence of these limitations, the dynamic linker imposes a related restriction on LOAD segments:

case PT_LOAD:
    /* A load command tells us to map in a part of the file.
       We record the load commands and process them all later.  */
    if (__glibc_unlikely (((ph->p_vaddr - ph->p_offset)
         & (GLRO(dl_pagesize) - 1)) != 0))
      {
        errstring = N_("ELF load command address/offset not page-aligned");
        goto lose;
      }

To avoid becoming a goto loser, we must ensure that (vaddr - offset) % pagesize == 0, where vaddr is the Virtual (memory) Address to map a given segment at, and offset is the offset within the ELF file where the data to be mapped resides.

Here’s a listing of the ELF Program Headers in my copy of libwidevinecdm.so:

Type             Offset     VAddr      FileSize   MemSize    Align      Prot
PT_PHDR          0x00000040 0x00000040 0x00000230 0x00000230 0x00000008 r--

PT_LOAD          0x00000000 0x00000000 0x00904290 0x00904290 0x00001000 r-x
PT_LOAD          0x00904290 0x00905290 0x00007500 0x00007500 0x00001000 rw-
PT_LOAD          0x0090b790 0x0090d790 0x00000df0 0x00c36698 0x00001000 rw-

PT_TLS           0x00904290 0x00905290 0x00000018 0x00000018 0x00000008 r--
PT_DYNAMIC       0x00909618 0x0090a618 0x00000220 0x00000220 0x00000008 rw-
PT_GNU_RELRO     0x00904290 0x00905290 0x00007500 0x00007d70 0x00000001 r--
PT_GNU_EH_FRAME  0x00524a24 0x00524a24 0x000010fc 0x000010fc 0x00000004 r--
PT_GNU_STACK     0x00000000 0x00000000 0x00000000 0x00000000 0x00000000 rw-
PT_NOTE          0x00000270 0x00000270 0x00000024 0x00000024 0x00000004 r--

The three PT_LOAD segments are the ones to focus on (I added some whitespace to help).

When pagesize == 0x1000 (4KB), the constraint holds true for all load segments in libwidevinecdm.so. However, when we increase pagesize to 0x4000 (16KB), as it is under Asahi Linux, it no longer holds true (in the second and third PT_LOAD segments).

You may notice that none of the individual fields have any particular alignment properties. That’s fine, because there isn’t a 1:1 correspondence between segments and calls to mmap() – the loader has logic to figure out the actual mappings required.
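Plugging the numbers from the table into the constraint makes the failure easy to see – a quick sanity-check sketch, with the offsets and vaddrs copied from the listing above:

```python
# (p_offset, p_vaddr) of the three PT_LOAD segments from the listing above
load_segments = [
    (0x00000000, 0x00000000),
    (0x00904290, 0x00905290),
    (0x0090b790, 0x0090d790),
]

def misaligned(segments, pagesize):
    """Return the segments that violate (vaddr - offset) % pagesize == 0."""
    return [(off, vaddr) for off, vaddr in segments
            if (vaddr - off) % pagesize != 0]

print(misaligned(load_segments, 0x1000))  # 4K pages: [] – all fine
print(misaligned(load_segments, 0x4000))  # 16K pages: segments 2 and 3 fail
```

For the second segment, vaddr − offset is 0x1000: a multiple of 0x1000, but not of 0x4000.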

The Solution

We can’t do anything that affects the relative positions of the segments in memory – this would make the CDM very angry, because it would break the relative offsets used to reference data in one segment from another. Furthermore, CDMs generally get upset if you make any changes to their code that are detectable at runtime. Soothing the CDM’s rage could run afoul of the DMCA (we may not “circumvent” any “technological protection measures”), and so we must avoid enraging it in the first place.

As a reminder, we need some way to uphold the (vaddr - offset) % pagesize == 0 constraint. Changing vaddr is not an option (because it would affect the runtime memory layout), but we can adjust offset – i.e. by moving the data around within the ELF file itself.

The first PT_LOAD segment is already perfectly aligned, but if we look at the second we find that 0x00905290 - 0x00904290 = 0x1000, which is no good.

We can fix this by inserting 0x1000 bytes of padding into the ELF file, between where the first and second segments are stored. We adjust the offset field accordingly, giving 0x00905290 - 0x00905290 == 0. We do a similar thing to fix the third and final segment.

That’s slightly easier said than done, because once you start inserting padding bytes into the ELF file, lots of other offsets in the ELF headers become invalid and need to be adjusted (since anything after the padding-insertion point has moved).

Importantly, these changes only affect the layout of the code in the ELF file. Once the code is loaded into memory, the layout should be identical to that of an unpatched binary loaded on a 4K-page system. The CDM runtime only ever gets to inspect the loaded version of itself, so it remains unbothered by any load-time shenanigans.
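The amount of padding each segment needs can be computed mechanically. Here’s a sketch of the idea (not the actual patcher script – a real tool must also fix up every subsequent header offset, as noted above):

```python
def plan_padding(segments, pagesize):
    """For each (offset, vaddr) pair describing a PT_LOAD segment, compute
    how many zero bytes to insert just before it in the file so that
    (vaddr - offset) % pagesize == 0. Earlier insertions shift the file
    offsets of every later segment, which the running `shift` accounts for."""
    shift = 0
    plan = []
    for offset, vaddr in segments:
        offset += shift                    # where the segment sits after earlier padding
        pad = (vaddr - offset) % pagesize  # padding needed to realign this segment
        plan.append((offset, pad))         # insert `pad` zero bytes at `offset`
        shift += pad
    return plan

# The three PT_LOAD (offset, vaddr) pairs from the listing above:
segs = [(0x000000, 0x000000), (0x904290, 0x905290), (0x90b790, 0x90d790)]
print([(hex(o), hex(p)) for o, p in plan_padding(segs, 0x4000)])
# → [('0x0', '0x0'), ('0x904290', '0x1000'), ('0x90c790', '0x1000')]
```

Note how the second insertion lands at 0x90c790, not 0x90b790 – the first 0x1000 bytes of padding have already pushed the third segment along.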

Permission Granularity

Each page of memory has its own permissions (read/write/execute).

On a system with 4K pages, each 4KB range can have its own permissions. The binary was compiled with this in mind.

However, we’ve just loaded the same layout onto 16K pages, so we now have less granularity than was expected. This causes two interrelated issues:

The .text section (which needs to be executable) and the .data section (which needs to be writable) end up partially occupying the same 16KB page. This single page needs permissions that handle both use cases, and so we must make its permissions “RWX”. We can do this, and it works, but it’s a bit unfortunate because RWX pages are a significant security risk. I don’t see any way around this right now. (Edit: I thought of something! Watch this space…)

Similarly, the “RELRO” security mitigation (read-only relocations) tries to make the GOT read-only after relocations have been applied. Unfortunately, at 16K granularity, this also overlaps with the end of the .text section, so we must disable the RELRO mitigation. Again, this is a solution, but it’s not ideal for security.

To elaborate on the security concerns: If somebody is trying to pwn you through a malicious webpage by leveraging a browser vuln, they might start by constructing an “arbitrary read” primitive, followed by an “arbitrary write” primitive. The next thing they might want to do is execute native code. If RWX mappings exist, they can use their arbwrite to write malicious code into that mapping. Overwriting a GOT entry would be a convenient way to redirect execution into this new code (due to lack of RELRO). At that point, they’d either try to escape the browser sandbox, or directly exploit the kernel over any un-sandboxed attack surface.

So, while we’re not introducing any new vulnerabilities, we are somewhat weakening existing security mitigations. If somebody was trying to exploit you, they’d have a much easier time (although they’d still need at least one browser vuln). Realistically, you’re not that worried about this in practice, and neither am I. If this does bother you, maybe use a dedicated browser for Netflix’ing?

The CDM could detect these page permission inconsistencies and get mad at us, but it has no real incentive to, and in practice it doesn’t seem to mind.

Applying the ELF Patches

Initially, I tried using LIEF to perform these modifications. I’m not sure whether I was just using it wrong, or if I was running into LIEF bugs, but I wasn’t able to make it work. It kept giving me broken ELFs that segfaulted at link-time.

In a caffeine-fueled trance, I whipped out hexedit and performed the required adjustments by hand (with some bonus Python code to insert the padding bytes). Much to my surprise, my hand-edited ELF actually worked!

Unfortunately, I don’t think I’m allowed to distribute my patched ELF, but I didn’t want to have to repeat that process for every Widevine update anyway, so I wrote a Python script that automates the whole process.

After running the script, we have a functioning libwidevinecdm.so that’s loadable by Firefox or Chrome on Asahi Linux!

The Finishing Touches

Due to the aforementioned ChromeOS glibc weirdness, one of the hacks required to make it work on other distros involves LD_PRELOAD-ing a small library that contains the functions __aarch64_ldadd4_acq_rel and __aarch64_swp4_acq_rel. The reason for this requirement is not terribly interesting, but I don’t like having to LD_PRELOAD things unnecessarily, so I devised a way to include these functions within the binary itself.
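For the curious: those two symbols are “outline atomics” helpers that aarch64 compilers normally expect the runtime (libgcc/compiler-rt) to provide – a 32-bit atomic fetch-and-add and an atomic swap, with acquire-release ordering. In rough Python terms, their semantics are (a model only, standing in for 44 bytes of aarch64 machine code):

```python
import threading

_lock = threading.Lock()  # stands in for the hardware's atomicity guarantee

def ldadd4_acq_rel(mem, i, val):
    """Model of __aarch64_ldadd4_acq_rel: atomically add `val` to the
    32-bit word mem[i], returning the *old* value (fetch-and-add)."""
    with _lock:
        old = mem[i]
        mem[i] = (old + val) & 0xFFFFFFFF
        return old

def swp4_acq_rel(mem, i, val):
    """Model of __aarch64_swp4_acq_rel: atomically swap mem[i] with
    `val`, returning the old value."""
    with _lock:
        old = mem[i]
        mem[i] = val & 0xFFFFFFFF
        return old
```

On real hardware each of these is a single LSE instruction (LDADDAL / SWPAL) plus a return, which is why 44 bytes is plenty for both.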

Recall that we added 0x1000 bytes of padding between the first and second segments. These bytes still get loaded into memory, and they’ll end up inside an executable memory page, so we can actually insert a small amount of code there. I was worried that doing this would upset the CDM, but it was fine in practice.

So, I put the code for these two functions (all 44 bytes of it!) inside the empty space. Normally, the program expects to be able to call these library functions via the Global Offset Table (GOT), which is effectively a table of function pointers, populated at link-time. The ELF contains a Relocation entry that says “please point this GOT entry at the __aarch64_ldadd4_acq_rel symbol” – a symbol that isn’t included in any of the libraries normally available on Arch Linux. I patched the Relocation to instead say “please point this GOT entry at [offset of my inserted code]” (and of course, something similar for __aarch64_swp4_acq_rel).

The end result is that rather than trying to resolve nonexistent symbols, the linker simply redirects them to the code we added.

The code to do this is also included in my Python script, linked above.
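One plausible way to express that relocation patch is to turn the symbol-based GLOB_DAT entry into a RELATIVE one whose addend points at the inserted code. The relocation type constants below are standard AArch64 ELF values, but treating this as exactly what the script does is my assumption – it’s a sketch of the technique:

```python
import struct

RELA_FMT = "<QQq"            # Elf64_Rela: r_offset, r_info, r_addend
R_AARCH64_GLOB_DAT = 1025    # "point this GOT slot at a named symbol"
R_AARCH64_RELATIVE = 1027    # "point this GOT slot at base + addend"

def retarget_rela(rela: bytes, stub_vaddr: int) -> bytes:
    """Rewrite a symbol-based GOT relocation as a RELATIVE one whose
    addend points at stub_vaddr (the offset of our inserted code)."""
    r_offset, r_info, _addend = struct.unpack(RELA_FMT, rela)
    assert (r_info & 0xFFFFFFFF) == R_AARCH64_GLOB_DAT
    return struct.pack(RELA_FMT, r_offset, R_AARCH64_RELATIVE, stub_vaddr)

# Demo: a GLOB_DAT relocation against symbol #5, retargeted at 0x904290.
orig = struct.pack(RELA_FMT, 0x90A618, (5 << 32) | R_AARCH64_GLOB_DAT, 0)
off, info, addend = struct.unpack(RELA_FMT, retarget_rela(orig, 0x904290))
print(hex(off), info, hex(addend))  # → 0x90a618 1027 0x904290
```

The GOT slot's address (r_offset) stays put; only the relocation type and addend change, so the linker fills the slot without ever looking up a symbol.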

After talking to the maintainer of the widevine-aarch64 AUR package, we added my script to the package so that Asahi users (or anyone else with 16K pages) can have working Widevine out-of-the-box. Furthermore, users on 4K aarch64 platforms can still benefit from my LD_PRELOAD avoidance patch (the patcher script doesn’t apply the RWX or no-RELRO hacks in that case, avoiding any unnecessary security holes).

So now all you have to do is install the widevine-aarch64 package, and you have a working Widevine installation on Asahi Linux ready to go, without LD_PRELOAD hacks!

Netflix-Specific Annoyances

Even after all that, Netflix will refuse to work at all unless you have a ChromeOS user agent. I used this one:

Mozilla/5.0 (X11; CrOS aarch64 15236.80.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.5414.125 Safari/537.36

Most other services don’t require this; Netflix is just being difficult here (e.g. Spotify’s web player Just Works).

The Widevine version we’ve installed is known as “L3”, the least-secure tier of Widevine you can use. Going higher requires hardware support (and yes, Apple Silicon has the relevant hardware for doing “stronger” DRM, but the software/firmware plumbing isn’t there to make use of it under Linux, which is perhaps for the best).

Most streaming platforms will limit you to “HD” content on L3 (as opposed to 4K on L1). On Netflix, this upper limit is 1080p (although it might depend on the specific content you’re trying to watch?), but you’re further limited to a mere 720p by default. For some reason, you can only get 1080p if your client asks nicely for it (at the protocol level), and there are browser extensions that do this for you automatically.

I don’t know why it’s like this; my best guess is that there are some devices out there that have performance issues with 1080p playback via Widevine L3 (which necessitates software video decoding), and so they disabled it across the board to reduce the customer support burden.

Dear Netflix: please add a secret menu item to override this behaviour somewhere.


To summarise the above in meme format:


Disclaimer: This meme does not constitute legal advice. That said, I don’t think this process violates the letter or spirit of the DMCA, nor any other relevant legislation, to the best of my knowledge.


I didn’t expect my first Widevine-related blog post to be about using the DRM as intended.

Nevertheless, the most technically interesting method of streaming media on aarch64 Linux right now is to wrangle DRM into doing its job.

This should be alarming to anyone with a stake in content “protection”. Look how many hoops I had to jump through just to legally watch Netflix as a paying customer!

It would have been orders of magnitude more convenient for me to use a torrent client, but that would be a boring article. Making the DRM work should not be the “interesting” path!

Dear Google: There are simple steps you can take to improve this situation. Adding aarch64 Ubuntu LTS (or something similarly boring) to your matrix of supported platforms really shouldn’t be that hard, given that you already support x86_64 Linux and aarch64 ChromeOS. That way, distro maintainers can spend less time wrangling glibc compatibility issues, and perhaps it would be easier for Spotify to release a native client for aarch64 Linux!

Addendum: The EME API is a good thing! (kinda)

A common objection to the mere existence of the EME API goes something along the lines of:

DRM??!?!?! In my W3C standards??? I don’t like DRM so this is Very Bad.

I also don’t like DRM, and I’d love to live in a world where it doesn’t exist (and doesn’t need to exist). Unfortunately, it does exist. Worse, market forces will ensure that it gets crammed into every available DRM-shaped hole. If a DRM-shaped hole doesn’t exist, one will be created. I’d rather have my DRM-shaped holes emplaced by a coordinated standardisation process, than by forcing DRM vendors to do it ad-hoc (and inevitably, as opaquely as possible).

My attitude may be slightly defeatist, but if the EME API didn’t exist, we probably wouldn’t be able to watch Netflix on Asahi Linux at all – at least, not without violating the DMCA.
