Rendering the First glTF Mesh

May 16, 2023

Now that we’ve seen how to draw a triangle in the first post and hook up camera controls so we can look around in the second post, we’re at the point where the avocado really hits the screen and we can start drawing our first glTF primitives! I say the avocado hits the screen because that’s the glTF test model we’ll be using. You can grab it from the Khronos glTF samples repo. glTF files come in two flavors (not counting other extension-specific variations): a standard “.gltf” version that stores the JSON header in one file and binary data and textures in separate files, and a “.glb” version that combines the JSON header and all binary or texture data into a single file. We’ll be loading .glb files in this series to simplify how many files we have to deal with to get a model into the renderer, so grab the glTF-Binary Avocado.glb and let’s get started!

Figure 1:
It takes quite a bit to get Avocado.glb on the screen, but this beautiful image of our expected final (and delicious) result should be enough motivation to keep us going!

The glTF format was designed for efficient transfer of 3D content, and consists of a JSON part that describes the objects and a binary part (or parts) containing the data for them. The binary data can be stored in separate binary files, embedded in the JSON as Base64, or, in the case of glB (the focus of this series), appended as binary in the same file following the JSON data. We’ll look at glB specifically in more detail later, but first we need to understand what’s in the JSON part of the glTF file that describes the scene.

I recommend taking a look through the glTF 2.0 cheat sheet, which is a great resource to get a quick overview of what’s in a glTF file. For even more details, check out the glTF 2.0 spec. The cheat sheet is excellent, but it can be a bit overwhelming since it covers everything that glTF supports, which is a lot! We’re going to be starting small for this post and just getting our first GLTFMesh on the screen, so we can ignore much of the file right now.

The sketch below is adapted from the glTF cheat sheet concepts sketch to show just the parts of the file that we’ll be looking at for this post: Meshes, Primitives, Accessors, BufferViews, and Buffers.

Figure 2:
The glTF concepts we’ll be looking at in this post and their relationships. A mesh is made of one or more primitives. Primitives can reference multiple accessors that provide data for their attributes. Accessors provide type information for data referenced in buffer views. Buffer views reference regions of a binary buffer provided with the file.

The Meshes, Accessors, BufferViews, and Buffers are all top-level objects in the JSON, as shown below for the Avocado.glb scene. The Scenes, Nodes, Cameras, Materials, Textures, Images, Samplers, Skins, and Animations are also top-level JSON objects, but we won’t be looking into those yet. The Avocado includes a few of these objects, which you can see below.

To view the header of a glb file you can open it in a text editor, which will display the JSON part as readable text followed by a bunch of junk for the binary data. Alternatively, I wrote a small python script to print out the JSON part of a glb file to the console, which you can download here. gltf.report is another useful website where you can explore the content of gltf/glb files.
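The linked script is Python, but the same idea is easy to do in Node if you prefer to stay in JavaScript. The sketch below is my own illustration, not the linked script: it reads the 8 byte JSON chunk header that starts at byte 12 of the file (we’ll walk through the glb layout in detail later) and prints the decoded JSON chunk.

// print-glb-json.js (sketch, not the linked script): print a .glb file's JSON chunk
// Usage: node print-glb-json.js Avocado.glb
const fs = require("fs");

const bytes = fs.readFileSync(process.argv[2]);
// Bytes 12-19 hold the JSON chunk header: chunkLength, then chunkType
const jsonChunkLength = bytes.readUInt32LE(12);
// The JSON string itself occupies bytes 20 to 20 + jsonChunkLength
const json = JSON.parse(bytes.toString("utf8", 20, 20 + jsonChunkLength));
console.log(JSON.stringify(json, null, 4));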

{
  "accessors": [
    ...
  ],
  "bufferViews": [
    ...
  ],
  "buffers": [
    ...
  ],
  "images": [
    ...
  ],
  "meshes": [
    {
      "primitives": [
        ...
      ],
      ...
    }
  ],
  "materials": [
    ...
  ],
  "nodes": [
    ...
  ],
  "scene": 0,
  "scenes": [
    ...
  ],
  "textures": [
    ...
  ]
}

Let’s take a look at the objects we’ll be loading for this post in a bit more detail. We’ll use the Avocado.glb as the example here and work our way from the bottom to the top. We want to be able to load other files than just the Avocado, so we’ll also talk about potential properties of these objects that may not be used by the Avocado scene.

Buffers

The buffers entry for a glb file is pretty simple. A glb file that doesn’t use any extensions will have a single binary buffer following the JSON data. The length of this buffer is specified in bytes, in the byteLength member of the Buffer object. glTF files using separate binary files or Base64 encoding can have multiple entries here, referring to different binary files or containing different Base64-encoded binary data. The buffer entry for the Avocado.glb is shown below.

"buffers": [
  {
    "byteLength": 8326600
  }
],

BufferViews

The binary data for our glTF file is packed into a single binary buffer, which can contain vertex data, textures, etc. The buffer views are used to create virtual sub-buffers, or views, of this single large binary buffer to access specific vertex data, texture data, and so on.

A bufferview must specify the buffer it references, buffer, and the size of the view in bytes, byteLength. The buffer view can optionally specify an offset from the start of the buffer at which the view begins, byteOffset, and the stride between elements in the buffer, byteStride. The byteOffset and byteLength together allow defining views of subregions of the buffer. The byteStride can be used to define a view containing interleaved data, for example a buffer that interleaves positions and normals for each vertex as position0, normal0, position1, normal1, etc. If the byteStride isn’t provided, the buffer is assumed to be tightly packed with elements of the size defined by the accessor. Note that byteStride is required if multiple accessors reference a single buffer view.
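To make the byte arithmetic concrete, here’s a small sketch of the ranges a buffer view describes. This is just my illustration, not part of the loader we’ll write later.

// Sketch: byte ranges described by a buffer view JSON object.
// The view covers [byteOffset, byteOffset + byteLength) within its buffer.
function viewRange(view) {
    var offset = view["byteOffset"] !== undefined ? view["byteOffset"] : 0;
    return [offset, offset + view["byteLength"]];
}

// For interleaved data, element i of an attribute starts byteStride bytes
// after element i - 1. For tightly packed data no byteStride is given and
// the element size defined by the accessor acts as the stride.
function elementStart(view, elementSize, i) {
    var offset = view["byteOffset"] !== undefined ? view["byteOffset"] : 0;
    var stride = view["byteStride"] !== undefined ? view["byteStride"] : elementSize;
    return offset + i * stride;
}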

There are some much less frequent elective properties that may also be specified, see the spec for particulars.

The bufferview objects for the Avocado.glb are shown below. We don’t have interleaved buffers for the Avocado, but Khronos provides an example if you’re curious.

"bufferViews": [
  {
    "buffer": 0,
    "byteLength": 3158729
  },
  {
    "buffer": 0,
    "byteOffset": 3158729,
    "byteLength": 1655059
  },
  {
    "buffer": 0,
    "byteOffset": 4813788,
    "byteLength": 3489232
  },
  {
    "buffer": 0,
    "byteOffset": 8303020,
    "byteLength": 3248
  },
  {
    "buffer": 0,
    "byteOffset": 8306268,
    "byteLength": 4872
  },
  {
    "buffer": 0,
    "byteOffset": 8311140,
    "byteLength": 6496
  },
  {
    "buffer": 0,
    "byteOffset": 8317636,
    "byteLength": 4872
  },
  {
    "buffer": 0,
    "byteOffset": 8322508,
    "byteLength": 4092
  }
],

Accessors

An accessor takes the data defined by a buffer view and specifies how it should be interpreted by the application. The accessor specifies the component type of the data, componentType (e.g., int, float), the type of the elements, type (scalar, vec2, vec3, etc.), and the number of elements, count. Accessors can optionally specify an additional offset, byteOffset, from the start of the referenced buffer view. More details about these parameters and optional ones can be found in the spec.

The accessor offset can be used to apply an offset to access different elements in an interleaved buffer, or to apply an additional absolute offset to access a set of elements at some offset within the same buffer view. In WebGPU we have two options for where we pass the byte offset for a vertex attribute, and where we pass it will depend on whether the accessor is referring to interleaved data or is applying an additional absolute offset. For simplicity, later in this post we’ll pass it in the most generic location.

The illustration below shows an interleaved attribute buffer case. We have a single 32 byte buffer, within which is some interleaved vertex data containing a vec2 and a scalar for each vertex. The file provides a buffer view specifying the offset to this data and the stride between elements (12 bytes). It then includes two accessors referencing this buffer view, one for the green vec2 attribute and one for the orange scalar attribute. A hypothetical JSON encoding of this layout is shown after the figure.

Figure 3:
Buffer, BufferView and Accessor configuration for a buffer storing interleaved vertex attributes.
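As a hypothetical example (this is not from the Avocado file), the configuration in Figure 3 could be encoded in the JSON as shown below: a single buffer view with a 12 byte stride, and two accessors that pick out the vec2 and scalar attributes through their relative byte offsets. Note that byteStride is required here because two accessors reference the same buffer view.

"bufferViews": [
  {
    "buffer": 0,
    "byteOffset": 4,
    "byteLength": 24,
    "byteStride": 12
  }
],
"accessors": [
  {
    "bufferView": 0,
    "byteOffset": 0,
    "componentType": 5126,
    "count": 2,
    "type": "VEC2"
  },
  {
    "bufferView": 0,
    "byteOffset": 8,
    "componentType": 5126,
    "count": 2,
    "type": "SCALAR"
  }
]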

The illustration below shows a case of using an accessor to apply an additional absolute offset. In this case, we again have a 32 byte buffer, but within the buffer is data containing position and normal data for a vertex. This data is specified as two vec3’s, which are not interleaved. Our buffer view can be made with the same parameters as above; however, our normal accessor now applies a larger byte offset to reach the desired region of the buffer view.

Figure 4:
Buffer, BufferView and Accessor configuration for a buffer storing packed vertex attributes. The accessor offset is used to access the normal vector attribute within the same buffer view.

It’s important to note that the two approaches can also be combined. We could have an interleaved buffer like the interleaved example, but use an accessor to apply a large offset containing both the absolute and element offsets to access some subregion of the buffer view. This is illustrated below, where we have accessors made specifically for the second elements in the interleaved case. These possibilities will impact our choice of how we specify the accessor offsets in WebGPU.

Figure 5:
Buffer, BufferView and Accessor configuration for accessing interleaved vertex attribute data at an additional absolute offset within a buffer view.

Fortunately, the accessors for the Avocado are quite simple. Each accessor references a different buffer view containing the packed elements for its data, as shown below. The componentType values come from the GL enums: 5126 is FLOAT and 5123 is UNSIGNED_SHORT.

"accessors": [
  {
    "bufferView": 3,
    "componentType": 5126,
    "count": 406,
    "type": "VEC2"
  },
  {
    "bufferView": 4,
    "componentType": 5126,
    "count": 406,
    "type": "VEC3"
  },
  {
    "bufferView": 5,
    "componentType": 5126,
    "count": 406,
    "type": "VEC4"
  },
  {
    "bufferView": 6,
    "componentType": 5126,
    "count": 406,
    "type": "VEC3",
    "max": [
      0.02128091,
      0.06284806,
      0.0138090011
    ],
    "min": [
      -0.02128091,
      -4.773855e-05,
      -0.013809
    ]
  },
  {
    "bufferView": 7,
    "componentType": 5123,
    "count": 2046,
    "type": "SCALAR"
  }
],

Meshes and Primitives

Finally, we can take a look at how meshes are specified in glTF. A mesh object itself is just a list of primitives, an optional name, and a few other optional parameters that we won’t need here (see the spec). The real work of specifying the geometry for a mesh is done by its primitives.

The primitives array of the mesh specifies the different geometric primitives that make up the mesh. The attributes member of each primitive maps each attribute’s name to an accessor that provides its data. The POSITION attribute is required; other optional attributes are defined by the spec, and applications can add custom attributes as well. A list of vertex indices for indexed rendering can be provided by specifying the accessor referencing the index data. Each primitive can also specify a material and topology mode (e.g., points, lines, triangles), along with other optional parameters.

The meshes for the Avocado are listed below. The Avocado contains a single primitive, whose POSITION attribute references accessor 3. This accessor (above) provides vec3 float data (this format is required by the spec) for the vertex positions.

"meshes": [
  {
    "primitives": [
      {
        "attributes": {
          "TEXCOORD_0": 0,
          "NORMAL": 1,
          "TANGENT": 2,
          "POSITION": 3
        },
        "indices": 4,
        "material": 0
      }
    ],
    "name": "Avocado"
  }
],

Typical glTF files separate the JSON and binary data into different files, requiring multiple network requests or disk accesses to load the data. The glB format was designed to address this issue by combining the JSON header and all binary data into a single file. This combination has the added benefit of making the data easier to manage as well, since we only need to keep track of one file. However, it also means we need to do a bit more byte-level access when reading the data so that we can properly access the JSON and binary data for the file.

A glb is required to contain a JSON chunk followed by a binary chunk, in that order (spec). Additional chunks for extensions can follow the binary chunk, if needed.

Visually, a binary glTF file is laid out as shown below.

Figure 6:
The glb file layout.

The first 12 bytes are the glb header, used to identify the file as a glb file and specify its total length in bytes (including all headers, JSON, and binary). The JSON chunk begins at byte 12, and includes a chunk header specifying its length and type, followed by a JSON string. The JSON string occupies bytes 20 to 20 + jsonChunkLength. The binary chunk follows at byte 20 + jsonChunkLength, and contains its own header specifying the binary chunk length and type, followed by the binary data for the file.

Now that we’re familiar with the parts of the glTF file we need and know how to read a binary glTF file, we’re ready to load up a glb file and import the data to render our Avocado! We’re going to start by simply loading the primitives for the first mesh we find in the file. This will work for our Avocado and a number of other simple single-mesh glTF files available in the Khronos test model repo and online.

We’ll load the file from bottom to top, in the same order that we discussed the components in detail above. The final uploadGLB function and supporting classes can be found in the repo on GitHub.

First, we need to read the glb header, load the JSON chunk, and create a buffer corresponding to the binary chunk. Our function uploadGLB takes an ArrayBuffer, buffer, containing the glb file data, and the WebGPU Device, device, to upload the data to.

First, we create a Uint32Array over the glb file data that contains both the glb header and the JSON chunk header.

export function uploadGLB(buffer, device) {
    // glB has a JSON chunk and a binary chunk, potentially followed by
    // other chunks specifying extension specific data, which we ignore
    // since we don't support any extensions.
    // Read the glB header and the JSON chunk header together
    // glB header:
    // - magic: u32 (expect: 0x46546C67)
    // - version: u32 (expect: 2)
    // - length: u32 (size of the entire file, in bytes)
    // JSON chunk header
    // - chunkLength: u32 (size of the chunk, in bytes)
    // - chunkType: u32 (expect: 0x4E4F534A for the JSON chunk)
    var header = new Uint32Array(buffer, 0, 5);
    // Validate glb file contains correct magic value
    if (header[0] != 0x46546C67) {
        throw Error("Provided file is not a glB file")
    }
    if (header[1] != 2) {
        throw Error("Provided file is not a glTF 2.0 file");
    }
    // Validate that first chunk is JSON
    if (header[4] != 0x4E4F534A) {
        throw Error("Invalid glB: The first chunk of the glB file is not a JSON chunk!");
    }

    // Decode the JSON chunk of the glB file to a JSON object
    var jsonChunk =
        JSON.parse(new TextDecoder("utf-8").decode(new Uint8Array(buffer, 20, header[3])));

    // Read the binary chunk header
    // - chunkLength: u32 (size of the chunk, in bytes)
    // - chunkType: u32 (expect: 0x004E4942 for the binary chunk)
    var binaryHeader = new Uint32Array(buffer, 20 + header[3], 2);
    if (binaryHeader[1] != 0x004E4942) {
        throw Error("Invalid glB: The second chunk of the glB file is not a binary chunk!");
    }

Reading the Buffer and BufferViews

We’ll introduce two classes, GLTFBuffer and GLTFBufferView, to represent the glTF buffer and buffer view objects in our app.

First we can create the GLTFBuffer. Although the glb spec allows the JSON to reference other external buffers in addition to the single embedded buffer, we’re targeting just the simple and common use case where there is a single buffer, which is the binary chunk.

The GLTFBuffer class is a straightforward mapping of the glTF buffer object. We create a new Uint8Array view over the glb buffer passed to the constructor, at the binary chunk’s starting offset and with the binary chunk’s size.

// in glb.js, outside uploadGLB
export class GLTFBuffer {
    constructor(buffer, offset, size) {
        this.buffer = new Uint8Array(buffer, offset, size);
    }
}

We can then create a GLTFBuffer referencing the data in the binary chunk.

// inside uploadGLB
// Make a GLTFBuffer that is a view of the entire binary chunk's data,
// we'll use this to create buffer views within the chunk for memory referenced
// by objects in the glTF scene. The binary data starts at 28 + header[3]:
// 12 bytes of glb header, 8 bytes of JSON chunk header, header[3] bytes
// of JSON, and 8 bytes of binary chunk header.
var binaryChunk = new GLTFBuffer(buffer, 28 + header[3], binaryHeader[0]);

The next objects we need to read are the buffer views, which we represent with the GLTFBufferView class. The constructor for GLTFBufferView makes a new Uint8Array view over just the region of the binary chunk that the view covers. Note that the subarray API creates a view over the underlying ArrayBuffer; it doesn’t make a copy. Another design choice here is that we don’t need to track the offset for buffer views after creating the view, because this offset is baked into the view object we create in the constructor.

The GLTFBufferView provides two additional methods, addUsage and upload. The latter is self-descriptive: it creates a GPU buffer and uploads the buffer view to it. The addUsage method is used when parsing the rest of the scene data to ensure that the GPU buffer we create will have the correct usage flags set, e.g., to allow binding it as a vertex or index buffer.

We also track a flag, needsUpload, to determine which buffer views actually need to be uploaded to the GPU. Image data is also accessed through buffer views in glb files; however, we don’t need to upload the PNG or JPG binary data to the GPU since we’ll instead decode it to a texture. When parsing the rest of the scene we’ll flag the buffers that need to be uploaded to the GPU so that we can upload just what we need.

// in glb.js, outside uploadGLB
export class GLTFBufferView {
    constructor(buffer, view) {
        this.length = view["byteLength"];
        this.byteStride = 0;
        if (view["byteStride"] !== undefined) {
            this.byteStride = view["byteStride"];
        }

        // Create the buffer view. Note that subarray creates a new typed
        // view over the same array buffer, we do not make a copy here.
        var viewOffset = 0;
        if (view["byteOffset"] !== undefined) {
            viewOffset = view["byteOffset"];
        }
        this.view = buffer.buffer.subarray(viewOffset, viewOffset + this.length);

        this.needsUpload = false;
        this.gpuBuffer = null;
        this.usage = 0;
    }

    // When this buffer is referenced as vertex data or index data we
    // add the corresponding usage flag here so that the GPU buffer can
    // be created properly.
    addUsage(usage) {
        this.usage = this.usage | usage;
    }

    // Upload the buffer view to a GPU buffer
    upload(device) {
        // Note: must align to 4 byte size when mappedAtCreation is true
        var buf = device.createBuffer({
            size: alignTo(this.view.byteLength, 4),
            usage: this.usage,
            mappedAtCreation: true
        });
        new (this.view.constructor)(buf.getMappedRange()).set(this.view);
        buf.unmap();
        this.gpuBuffer = buf;
        this.needsUpload = false;
    }
}
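The alignTo utility used above isn’t shown in the post; a minimal implementation along these lines should do (my sketch, the version in the repo may differ), rounding the buffer size up to the next multiple of the alignment:

// Round val up to the nearest multiple of align,
// e.g., alignTo(6, 4) == 8, alignTo(8, 4) == 8
export function alignTo(val, align) {
    return Math.ceil(val / align) * align;
}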

Next we can loop through the bufferViews specified in the JSON chunk and create corresponding GLTFBufferView objects for them. The buffer view constructor takes the buffer to make a view over and the JSON object describing the buffer view being created.

// inside uploadGLB
// Create GLTFBufferView objects for all the buffer views in the glTF file
var bufferViews = [];
for (var i = 0; i < jsonChunk.bufferViews.length; ++i) {
    bufferViews.push(new GLTFBufferView(binaryChunk, jsonChunk.bufferViews[i]));
}

At the end of uploadGLB, after we’ve loaded all the meshes and scene objects, we loop through the buffer views and upload those that need to be uploaded to the GPU, based on which ones were marked as needsUpload during the scene loading step.

// at the end of uploadGLB before returning the mesh
// Upload the buffer views used by the mesh
for (var i = 0; i < bufferViews.length; ++i) {
    if (bufferViews[i].needsUpload) {
        bufferViews[i].upload(device);
    }
}

Reading the Accessors

The next object up the chain is the accessor, which we represent with the GLTFAccessor class shown below. The constructor takes the GLTFBufferView and the JSON object describing the accessor and constructs the object. The object is a direct mapping of the JSON accessor data, with the addition of storing a reference to the buffer view instead of just an index to it.

The accessor also provides a utility getter, byteStride, to compute the stride in bytes between elements referenced by the accessor. If the buffer view specifies a byte stride we use that stride; otherwise the elements are assumed to be packed and we use the size of the accessor type as the stride. If no byte stride is specified for the buffer view it will default to 0, thus we pick between the two with a max.

The utility functions gltfTypeSize and gltfVertexType are omitted from the post to keep it focused; these can be found on Github.
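If you’d like a picture of what they do without leaving the post, the sketch below is my rough reconstruction of those helpers, not the exact code from the repo. The componentType values are the GL enum values, and the vertex types map to WebGPU vertex format strings.

// My sketch of the omitted helpers; the versions on Github may differ
const GLTFComponentType = {
    BYTE: 5120,
    UNSIGNED_BYTE: 5121,
    SHORT: 5122,
    UNSIGNED_SHORT: 5123,
    INT: 5124,
    UNSIGNED_INT: 5125,
    FLOAT: 5126
};

// Map the accessor type string to its number of components
export function parseGltfType(type) {
    switch (type) {
        case "SCALAR": return 1;
        case "VEC2": return 2;
        case "VEC3": return 3;
        case "VEC4": return 4;
        default: throw Error(`Unhandled glTF type ${type}`);
    }
}

// Size in bytes of one element made of numComponents components
export function gltfTypeSize(componentType, numComponents) {
    var componentSize = 0;
    switch (componentType) {
        case GLTFComponentType.BYTE:
        case GLTFComponentType.UNSIGNED_BYTE:
            componentSize = 1;
            break;
        case GLTFComponentType.SHORT:
        case GLTFComponentType.UNSIGNED_SHORT:
            componentSize = 2;
            break;
        case GLTFComponentType.INT:
        case GLTFComponentType.UNSIGNED_INT:
        case GLTFComponentType.FLOAT:
            componentSize = 4;
            break;
        default:
            throw Error(`Unhandled component type ${componentType}`);
    }
    return componentSize * numComponents;
}

// WebGPU vertex format string for the accessor, e.g., float32x3 for a
// VEC3 of FLOATs. A SCALAR UNSIGNED_SHORT/UNSIGNED_INT maps to uint16/
// uint32, which doubles as the index format for index accessors.
export function gltfVertexType(componentType, numComponents) {
    var base = null;
    switch (componentType) {
        case GLTFComponentType.FLOAT: base = "float32"; break;
        case GLTFComponentType.UNSIGNED_SHORT: base = "uint16"; break;
        case GLTFComponentType.UNSIGNED_INT: base = "uint32"; break;
        default: throw Error(`Unsupported component type ${componentType}`);
    }
    return numComponents == 1 ? base : `${base}x${numComponents}`;
}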

// in glb.js, outside uploadGLB
export class GLTFAccessor {
    constructor(view, accessor) {
        this.count = accessor["count"];
        this.componentType = accessor["componentType"];
        this.gltfType = parseGltfType(accessor["type"]);
        this.view = view;
        this.byteOffset = 0;
        if (accessor["byteOffset"] !== undefined) {
            this.byteOffset = accessor["byteOffset"];
        }
    }

    get byteStride() {
        var elementSize = gltfTypeSize(this.componentType, this.gltfType);
        return Math.max(elementSize, this.view.byteStride);
    }

    // Size in bytes of the data referenced by the accessor, used when
    // binding its vertex or index buffer for rendering
    get byteLength() {
        return this.count * this.byteStride;
    }

    // Get the vertex attribute type for accessors that are
    // used as vertex attributes
    get vertexType() {
        return gltfVertexType(this.componentType, this.gltfType);
    }
}

Back inside uploadGLB, we can create the accessors after the loop creating the buffer views. The process is the same: we loop through the JSON data describing the accessors and create the objects, passing the referenced buffer view and the accessor JSON object to the constructor.

// inside uploadGLB
// Create GLTFAccessor objects for the accessors in the glTF file
// We need to handle possible errors being thrown here if a model is using
// accessors for types we don't support yet. For example, a model with animation
// may have a MAT4 accessor, which we currently don't support.
var accessors = [];
for (var i = 0; i < jsonChunk.accessors.length; ++i) {
    var accessorInfo = jsonChunk.accessors[i];
    var viewID = accessorInfo["bufferView"];
    accessors.push(new GLTFAccessor(bufferViews[viewID], accessorInfo));
}

Reading the Mesh’s Primitives

With the accessors, buffer views, and buffers in place we can now load our mesh’s primitives. We’ll represent each primitive with the GLTFPrimitive class shown below. We’re just going to render the mesh geometry to start, and ignore any additional attributes. The GLTFPrimitive constructor takes the accessors for the vertex indices and positions, and the rendering topology (triangles, triangle strip, etc.). We’ll also start by only supporting triangles or triangle strips.

In the constructor we also mark the required buffer usages for the views and mark them as needing upload to the GPU so that our primitive can use the data during rendering. If indices are provided for the primitive, we must add the index buffer usage flag to the underlying buffer view. Similarly, we must add the vertex buffer usage flag to the position accessor’s buffer view. The index and vertex buffers will need to be uploaded to the GPU, so we also set needsUpload on their buffer views.

The GLTFPrimitive has two methods that we’ll use later for rendering, buildRenderPipeline and render. We’ll look at these in detail later when we get to rendering our mesh.

// in glb.js, outside uploadGLB
export class GLTFPrimitive {
    constructor(positions, indices, topology) {
        this.positions = positions;
        this.indices = indices;
        this.topology = topology;
        this.renderPipeline = null;
        // Set usage for the positions data and flag it as needing upload
        this.positions.view.needsUpload = true;
        this.positions.view.addUsage(GPUBufferUsage.VERTEX);

        if (this.indices) {
            // Set usage for the indices data and flag it as needing upload
            this.indices.view.needsUpload = true;
            this.indices.view.addUsage(GPUBufferUsage.INDEX);
        }
    }

    buildRenderPipeline(device,
                        shaderModule,
                        colorFormat,
                        depthFormat,
                        uniformsBGLayout)
    {
        // More on this later!
    }

    render(renderPassEncoder, uniformsBG) {
        // More on this later!
    }
}

The GLTFMesh class is pretty simple. It just takes the name of the mesh and the list of primitives that make up the mesh.


// in glb.js, outside uploadGLB
export class GLTFMesh {
    constructor(name, primitives) {
        this.name = name;
        this.primitives = primitives;
    }

    buildRenderPipeline(device,
                        shaderModule,
                        colorFormat,
                        depthFormat,
                        uniformsBGLayout)
    {
        // More on this later!
    }

    render(renderPassEncoder, uniformsBG) {
        // More on this later!
    }
}

With everything in place we can now load the glTF primitives and the mesh from the file. In this post we’re just going to load the first mesh defined in the file, so we take jsonChunk.meshes[0] and then loop through its primitives to create the GLTFPrimitive objects. Another restriction we’ll have is that we only support triangle and triangle strip topologies for now. Our primitive importing loop will throw an error if we encounter an unsupported primitive type.

To create each primitive we then need to find the accessors for its indices (if provided) and vertex positions (required). The indices are provided as a distinct member of the primitive’s JSON object, while the positions are listed in the primitive’s attributes map as the POSITION attribute. The POSITION attribute is required by the glTF spec to be provided.

Once we’ve imported all the primitives we can create the mesh. The GLTFRenderMode constants are omitted for brevity, and can be found on Github.

// inside uploadGLB
// Load the first mesh
var mesh = jsonChunk.meshes[0];
var meshPrimitives = [];
// Loop through the mesh's primitives and load them
for (var i = 0; i < mesh.primitives.length; ++i) {
    var prim = mesh.primitives[i];
    var topology = prim["mode"];
    // Default is triangles if mode is not specified
    if (topology === undefined) {
        topology = GLTFRenderMode.TRIANGLES;
    }
    if (topology != GLTFRenderMode.TRIANGLES &&
        topology != GLTFRenderMode.TRIANGLE_STRIP) {
        throw Error(`Unsupported primitive mode ${prim["mode"]}`);
    }

    // Find the vertex indices accessor if provided
    var indices = null;
    if (jsonChunk["accessors"][prim["indices"]] !== undefined) {
        indices = accessors[prim["indices"]];
    }

    // Loop through all the attributes to find the POSITION attribute.
    // While we only want the position attribute right now, we'll load
    // the others later as well.
    var positions = null;
    for (var attr in prim["attributes"]) {
        var accessor = accessors[prim["attributes"][attr]];
        if (attr == "POSITION") {
            positions = accessor;
        }
    }

    // Add the primitive to the mesh's list of primitives
    meshPrimitives.push(new GLTFPrimitive(positions, indices, topology));
}
// Create the GLTFMesh
var mesh = new GLTFMesh(mesh["name"], meshPrimitives);

Finally, before we return the mesh we loaded, we have to loop through the buffer views and upload any that were marked as needing upload to the GPU so that the data will be available during rendering.

// at the end of uploadGLB
// Upload the buffers as mentioned above before returning the mesh
// Upload the buffer views used by the mesh
for (var i = 0; i < bufferViews.length; ++i) {
    if (bufferViews[i].needsUpload) {
        bufferViews[i].upload(device);
    }
}

return mesh;

We’ve put in a lot of work getting our mesh data loaded from the glb file, but we’re almost there! To render the mesh we need to render each of its primitives. To render each primitive we’re going to implement the two methods we saw earlier: GLTFPrimitive.buildRenderPipeline, responsible for creating the rendering pipeline for the primitive, and GLTFPrimitive.render, which will encode the rendering commands for the primitive.

You might be wondering, “won’t this be really inefficient for large scenes where my glTF file has hundreds to thousands (or more) of primitives?”, and you’re right! Creating a rendering pipeline for each individual primitive isn’t a scalable approach. We’ll come back to how we can optimize this simple approach in a later post, but it’s enough to get us started for now.

Building a Render Pipeline for Each Primitive

First we need to build a render pipeline for each primitive. This is done in the GLTFPrimitive.buildRenderPipeline method, shown below. To allow some re-use across primitives we’ll use the same shader for all of them, since right now we don’t need to handle variations in attributes or material usage between primitives. This is passed as the shaderModule parameter. We’ll also be sharing the same uniform buffer containing the camera parameters across all primitives; this bind group layout is passed as the uniformsBGLayout. We also need the output color format and depth format to build the render pipeline. The app rendering our primitive passes these through to us as colorFormat and depthFormat, respectively.

The setup here is actually not that different from the previous posts. The main changes are that we now use the positions accessor’s byteStride as the vertex attribute array stride, and compute the vertex type for it from the accessor’s type. It’s worth noting that for glTF, the POSITION attribute must always be float32x3.

A key point here is that we don’t pass the position accessor’s byteOffset as the attribute offset in the vertex state. In WebGPU, there are two ways to pass a byte offset for a vertex attribute. It can either be set in the vertex state, where it’s treated as an offset within arrayStride for accessing interleaved attributes, or as a global offset applied to the buffer when calling setVertexBuffer. The attribute offset set in the vertex state is specifically for interleaved attributes, and thus WebGPU requires that offset + sizeof(format) is less than the arrayStride of the buffer. However, the offset used in setVertexBuffer has no such restriction, since it applies an absolute offset in bytes from the start of the buffer. Passing the offset in setVertexBuffer doesn’t prevent us from supporting interleaved attributes, but it does mean we need to bind the same buffer twice. Since we’re only supporting one attribute right now anyway, the position, we’ll simply apply the offset in setVertexBuffer to simplify handling the different offset possibilities in glTF.
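To make the distinction concrete, here’s a small sketch (my own, not code from this renderer) of the two places the offset can go:

// Option 1: offset in the vertex state. This is an offset within
// arrayStride, meant for interleaved attributes sharing one binding,
// so offset + sizeof(format) must fit within arrayStride.
var interleavedVertexState = {
    buffers: [{
        arrayStride: 12,
        attributes: [
            {format: "float32x2", offset: 0, shaderLocation: 0},
            {format: "float32", offset: 8, shaderLocation: 1}
        ]
    }]
};

// Option 2: offset in setVertexBuffer. An absolute byte offset from the
// start of the buffer, with no restriction relative to arrayStride:
// renderPassEncoder.setVertexBuffer(slot, gpuBuffer, byteOffset, byteLength);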

The last difference is that, if we’re rendering a triangle strip, WebGPU requires us to include the index format as part of the pipeline. This is handled when creating the primitive object for the pipeline descriptor.

// in glb.js GLTFPrimitive.buildRenderPipeline implementation
buildRenderPipeline(device,
                    shaderModule,
                    colorFormat,
                    depthFormat,
                    uniformsBGLayout)
{
    // Vertex attribute state and shader stage
    var vertexState = {
        // Shader stage info
        module: shaderModule,
        entryPoint: "vertex_main",
        // Vertex buffer info
        buffers: [{
            arrayStride: this.positions.byteStride,
            attributes: [
                // Note: We do not pass the positions.byteOffset here, as its
                // meaning can vary in different glB files, i.e., if it's
                // being used for interleaved element offset or an absolute
                // offset.
                {
                    format: this.positions.vertexType,
                    offset: 0,
                    shaderLocation: 0
                }
            ]
        }]
    };

    var fragmentState = {
        // Shader info
        module: shaderModule,
        entryPoint: "fragment_main",
        // Output render target info
        targets: [{format: colorFormat}]
    };

    // Our loader only supports triangle lists and strips, so by default we set
    // the primitive topology to triangle list, and check if it's
    // instead a triangle strip
    var primitive = {topology: "triangle-list"};
    if (this.topology == GLTFRenderMode.TRIANGLE_STRIP) {
        primitive.topology = "triangle-strip";
        primitive.stripIndexFormat = this.indices.vertexType;
    }

    var layout = device.createPipelineLayout({
        bindGroupLayouts: [uniformsBGLayout]
    });

    this.renderPipeline = device.createRenderPipeline({
        layout: layout,
        vertex: vertexState,
        fragment: fragmentState,
        primitive: primitive,
        depthStencil: {
            format: depthFormat,
            depthWriteEnabled: true,
            depthCompare: "less"
        }
    });
}

Rendering Each Primitive

Now we can use our rendering pipeline in GLTFPrimitive.render to render our primitive! The render method takes as input a render pass encoder to encode our rendering commands into and the global uniforms bind group to bind.

The rendering process is similar to what we’ve seen in previous posts, with the main changes being that we need to set the accessor’s byte offset and length when binding the vertex buffer, and potentially use an index buffer for indexed rendering. When binding the index buffer we must also apply its accessor’s byte offset and length.

Then we can draw our primitive!

// in glb.js GLTFPrimitive.render implementation
render(renderPassEncoder, uniformsBG) {
    renderPassEncoder.setPipeline(this.renderPipeline);
    renderPassEncoder.setBindGroup(0, uniformsBG);

    // Apply the accessor's byteOffset here to handle both global and interleaved
    // offsets for the buffer. Setting the offset here allows handling both cases,
    // with the downside that we must repeatedly bind the same buffer at different
    // offsets if we're dealing with interleaved attributes.
    // Since we only handle positions at the moment, this isn't a problem.
    renderPassEncoder.setVertexBuffer(0,
        this.positions.view.gpuBuffer,
        this.positions.byteOffset,
        this.positions.byteLength);

    if (this.indices) {
        renderPassEncoder.setIndexBuffer(this.indices.view.gpuBuffer,
            this.indices.vertexType,
            this.indices.byteOffset,
            this.indices.byteLength);
        renderPassEncoder.drawIndexed(this.indices.count);
    } else {
        renderPassEncoder.draw(this.positions.count);
    }
}

Putting it Together to Render the Entire Mesh

The GLTFMesh versions of buildRenderPipeline and render are straightforward. We’ve delegated all the work to the primitives, where the actual geometry data lives, so the mesh just loops through its primitives and calls the respective functions.

// in glb.js GLTFMesh.buildRenderPipeline and GLTFMesh.render implementations
buildRenderPipeline(device,
                    shaderModule,
                    colorFormat,
                    depthFormat,
                    uniformsBGLayout)
{
    // We take a pretty simple approach to start. Just loop through
    // all the primitives and build their respective render pipelines
    for (var i = 0; i < this.primitives.length; ++i) {
        this.primitives[i].buildRenderPipeline(device,
            shaderModule,
            colorFormat,
            depthFormat,
            uniformsBGLayout);
    }
}

render(renderPassEncoder, uniformsBG) {
    // We take a pretty simple approach to start. Just loop through
    // all the primitives and call their individual draw methods
    for (var i = 0; i < this.primitives.length; ++i) {
        this.primitives[i].render(renderPassEncoder, uniformsBG);
    }
}

With all that done, we’re ready to get the Avocado on the screen! To render the glb file we’re going to need to fetch it from the network or have the user upload it through a form, then set up its render pipeline(s) and render it. Most of our application code is the same as the previous post, with the changes that we can remove the triangle buffers and render pipeline. For the full app code, see app.js on Github.

Loading a glb File from the Network or the User

To render a glb file we need to get one into our app, either by fetching it over the network or letting users of the app upload their own files to try out. I’ve packed the Avocado.glb file in the lesson repo and have used webpack to bundle everything into an app. At the top of app.js we import it with webpack:

// top of app.js in the imports section
import avocadoGlb from "./Avocado.glb";
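For this import to work, webpack needs a rule telling it to treat .glb files as file assets whose URL is returned by the import. A minimal sketch of such a rule, assuming webpack 5’s asset modules (your config may differ):

// webpack.config.js (sketch): emit .glb files as separate assets and
// resolve the import to their URL, so fetch(avocadoGlb) can load the file
module.exports = {
    module: {
        rules: [
            {test: /\.glb$/, type: "asset/resource"}
        ]
    }
};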

Then, after we’ve set up our shader module, bind group layout, swap chain, etc. as before, we can fetch the file, load it, and build the render pipeline.

// in the async lambda in app.js
// Load the packaged GLB file, Avocado.glb
var glbMesh = await fetch(avocadoGlb)
    .then(res => res.arrayBuffer()).then(buf => uploadGLB(buf, device));

glbMesh.buildRenderPipeline(device,
    shaderModule,
    swapChainFormat,
    depthFormat,
    bindGroupLayout);

I’ve also included a file upload form in the example app so users can upload their own glb files to try out. To support that, we find the file upload form (an input element with the id uploadGLB) by its id and attach an onchange listener.

// in the async lambda in app.js
// Setup onchange listener for file uploads
document.getElementById("uploadGLB").onchange =
    function () {
        var reader = new FileReader();
        reader.onerror = function () {
            throw Error("Error reading GLB file");
        };
        reader.onload = function () {
            glbMesh = uploadGLB(reader.result, device);
            glbMesh.buildRenderPipeline(device,
                shaderModule,
                swapChainFormat,
                depthFormat,
                bindGroupLayout);
        };
        if (this.files[0]) {
            reader.readAsArrayBuffer(this.files[0]);
        }
    };

Coloring the Mesh by Geometry Normal

In the previous posts we passed position and color data through as vertex attributes to make our triangle look a bit more interesting. However, we now only have position data for our mesh file and need to update our shader’s vertex inputs. We could just shade it with a solid color, but it would be hard to see the surface details of the meshes to tell if we’ve loaded them properly.

Instead, we can compute the geometry normal on the fly in the fragment shader by computing fragment derivatives on the world space position of the object. We can take the derivative of the position along the x and y axes and compute the cross product to find the normal. After normalizing it, the normal will be in the [-1, 1] range, which we can rescale into [0, 1] for use as a color.

Our updated shader code is shown below. We’ve removed the color attribute from the VertexInput struct to match the render pipeline, which now only provides position data. The position data also comes in as a float3 from glTF, so we’ve changed the type of position as well. To compute the geometry normal in the fragment shader we now output the world space position from the vertex shader in the VertexOutput member. The fragment shader then computes fragment derivatives of world_pos to compute the normal.

alias float4 = vec4<f32>;
alias float3 = vec3<f32>;

struct VertexInput {
    @location(0) position: float3,
};

struct VertexOutput {
    @builtin(position) position: float4,
    @location(0) world_pos: float3,
};

struct ViewParams {
    view_proj: mat4x4<f32>,
};

@group(0) @binding(0)
var<uniform> view_params: ViewParams;

@vertex
fn vertex_main(vert: VertexInput) -> VertexOutput {
    var out: VertexOutput;
    out.position = view_params.view_proj * float4(vert.position, 1.0);
    out.world_pos = vert.position.xyz;
    return out;
}

@fragment
fn fragment_main(in: VertexOutput) -> @location(0) float4 {
    // Compute the normal by taking the cross product of the
    // dx & dy vectors computed through fragment derivatives
    let dx = dpdx(in.world_pos);
    let dy = dpdy(in.world_pos);
    let n = normalize(cross(dx, dy));
    return float4((n + 1.0) * 0.5, 1.0);
}

Rendering the Mesh

The rest of our render pass and camera setup is the same as before. The Avocado mesh location is a bit different than the triangle we were rendering earlier, so I tweaked the camera parameters a bit to start it in a better position.

// in the async lambda in app.js
// Adjust camera position and near/far planes to have a better view
// of the Avocado when it's loaded
var camera =
    new ArcballCamera([0, 0, 0.2], [0, 0, 0], [0, 1, 0],
                      0.5, [canvas.width, canvas.height]);
var proj = mat4.perspective(
    mat4.create(), 50 * Math.PI / 180.0,
    canvas.width / canvas.height, 0.01, 1000);

Our render loop is the same as before: we wait for the animationFrame promise, update the view parameters, and start encoding a render pass. Then we simply call glbMesh.render to render our mesh.

// in the async lambda in app.js
// Render!
while (true) {
    await animationFrame();

    // Update camera buffer
    projView = mat4.mul(projView, proj, camera.camera);

    var upload = device.createBuffer(
        {size: 16 * 4, usage: GPUBufferUsage.COPY_SRC, mappedAtCreation: true});
    {
        var map = new Float32Array(upload.getMappedRange());
        map.set(projView);
        upload.unmap();
    }

    renderPassDesc.colorAttachments[0].view =
        context.getCurrentTexture().createView();

    var commandEncoder = device.createCommandEncoder();
    commandEncoder.copyBufferToBuffer(upload, 0, viewParamsBuffer, 0, 16 * 4);

    var renderPass = commandEncoder.beginRenderPass(renderPassDesc);

    // Render our mesh!
    glbMesh.render(renderPass, viewParamBG);

    renderPass.end();
    device.queue.submit([commandEncoder.finish()]);
}

Finally, after all that work, we’ve got our Avocado on the screen! I’ve embedded the app below as an iframe so you can try it out. You can also grab the code on Github to run it locally or view it online directly.

Feel free to ask any questions or post comments about the post on the GitHub discussion board.

