GPU: re-add parens to wiki intro

cosmonaut 2024-10-04 12:27:51 -07:00
parent 00385951a1
commit e8c852d77c

@@ -30,24 +30,24 @@
*
* A basic workflow might be something like this:
*
- * The app creates a GPU device with SDL_GPUCreateDevice, and assigns it to a
- * window with SDL_ClaimWindowForGPUDevice--although strictly speaking you can
+ * The app creates a GPU device with SDL_GPUCreateDevice(), and assigns it to a
+ * window with SDL_ClaimWindowForGPUDevice()--although strictly speaking you can
* render offscreen entirely, perhaps for image processing, and not use a
* window at all.
*
* Next the app prepares static data (things that are created once and used
* over and over). For example:
*
- * - Shaders (programs that run on the GPU): use SDL_CreateGPUShader.
+ * - Shaders (programs that run on the GPU): use SDL_CreateGPUShader().
* - Vertex buffers (arrays of geometry data) and other data rendering will
- * need: use SDL_UploadToGPUBuffer.
- * - Textures (images): use SDL_UploadToGPUTexture.
- * - Samplers (how textures should be read from): use SDL_CreateGPUSampler.
+ * need: use SDL_UploadToGPUBuffer().
+ * - Textures (images): use SDL_UploadToGPUTexture().
+ * - Samplers (how textures should be read from): use SDL_CreateGPUSampler().
* - Render pipelines (precalculated rendering state): use
- * SDL_CreateGPUGraphicsPipeline
+ * SDL_CreateGPUGraphicsPipeline()
*
* To render, the app creates one or more command buffers, with
- * SDL_AcquireGPUCommandBuffer. Command buffers collect rendering instructions
+ * SDL_AcquireGPUCommandBuffer(). Command buffers collect rendering instructions
* that will be submitted to the GPU in batch. Complex scenes can use multiple
* command buffers, maybe configured across multiple threads in parallel, as
* long as they are submitted in the correct order, but many apps will just
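In application code, the setup steps this first hunk describes look roughly like the sketch below. It assumes the SDL 3.2 headers, SPIR-V shaders, and a window already created with SDL_CreateWindow(); error handling is reduced to logging, and the names are placeholders rather than canonical usage.

    #include <SDL3/SDL.h>

    /* Sketch: create a GPU device, attach it to an existing window, and note
     * where the static resources would be created. */
    static SDL_GPUDevice *setup_gpu(SDL_Window *window)
    {
        /* Debug mode on; let SDL pick the backend driver (NULL name). */
        SDL_GPUDevice *device = SDL_CreateGPUDevice(SDL_GPU_SHADERFORMAT_SPIRV, true, NULL);
        if (!device) {
            SDL_Log("Couldn't create GPU device: %s", SDL_GetError());
            return NULL;
        }

        if (!SDL_ClaimWindowForGPUDevice(device, window)) {
            SDL_Log("Couldn't claim window: %s", SDL_GetError());
            SDL_DestroyGPUDevice(device);
            return NULL;
        }

        /* Static data -- shaders, vertex buffers, textures, samplers, and
         * graphics pipelines -- would be created once here and reused. */
        return device;
    }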
@@ -56,7 +56,7 @@
* Rendering can happen to a texture (what other APIs call a "render target")
* or it can happen to the swapchain texture (which is just a special texture
* that represents a window's contents). The app can use
- * SDL_AcquireGPUSwapchainTexture to render to the window.
+ * SDL_AcquireGPUSwapchainTexture() to render to the window.
*
* Rendering actually happens in a Render Pass, which is encoded into a
* command buffer. One can encode multiple render passes (or alternate between
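Acquiring the swapchain texture happens each frame, inside a freshly acquired command buffer. Continuing the sketch above (same assumptions; the out-parameter signature is the one shipped in SDL 3.2), it might look like this:

    /* Sketch: get a command buffer and the window's swapchain texture.
     * A NULL texture on success (e.g. a minimized window) means "skip this
     * frame"; the caller should still submit the command buffer. */
    static SDL_GPUTexture *acquire_swapchain(SDL_GPUDevice *device, SDL_Window *window,
                                             SDL_GPUCommandBuffer **out_cmdbuf,
                                             Uint32 *out_w, Uint32 *out_h)
    {
        SDL_GPUCommandBuffer *cmdbuf = SDL_AcquireGPUCommandBuffer(device);
        SDL_GPUTexture *swapchain = NULL;

        if (!SDL_AcquireGPUSwapchainTexture(cmdbuf, window, &swapchain, out_w, out_h)) {
            SDL_Log("Couldn't acquire swapchain texture: %s", SDL_GetError());
        }
        *out_cmdbuf = cmdbuf;
        return swapchain; /* may be NULL even when the call succeeds */
    }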
@@ -66,52 +66,52 @@
* simultaneously. If the set of textures being rendered to needs to change,
* the Render Pass must be ended and a new one must be begun.
*
- * The app calls SDL_BeginGPURenderPass. Then it sets states it needs for each
+ * The app calls SDL_BeginGPURenderPass(). Then it sets states it needs for each
* draw:
*
- * - SDL_BindGPUGraphicsPipeline
- * - SDL_SetGPUViewport
- * - SDL_BindGPUVertexBuffers
- * - SDL_BindGPUVertexSamplers
+ * - SDL_BindGPUGraphicsPipeline()
+ * - SDL_SetGPUViewport()
+ * - SDL_BindGPUVertexBuffers()
+ * - SDL_BindGPUVertexSamplers()
* - etc
*
* Then, make the actual draw commands with these states:
*
- * - SDL_DrawGPUPrimitives
- * - SDL_DrawGPUPrimitivesIndirect
- * - SDL_DrawGPUIndexedPrimitivesIndirect
+ * - SDL_DrawGPUPrimitives()
+ * - SDL_DrawGPUPrimitivesIndirect()
+ * - SDL_DrawGPUIndexedPrimitivesIndirect()
* - etc
*
* After all the drawing commands for a pass are complete, the app should call
- * SDL_EndGPURenderPass. Once a render pass ends all render-related state is
+ * SDL_EndGPURenderPass(). Once a render pass ends all render-related state is
* reset.
*
* The app can begin new Render Passes and make new draws in the same command
* buffer until the entire scene is rendered.
*
* Once all of the render commands for the scene are complete, the app calls
- * SDL_SubmitGPUCommandBuffer to send it to the GPU for processing.
+ * SDL_SubmitGPUCommandBuffer() to send it to the GPU for processing.
*
* If the app needs to read back data from texture or buffers, the API has an
* efficient way of doing this, provided that the app is willing to tolerate
- * some latency. When the app uses SDL_DownloadFromGPUTexture or
- * SDL_DownloadFromGPUBuffer, submitting the command buffer with
- * SubmitGPUCommandBufferAndAcquireFence will return a fence handle that the
+ * some latency. When the app uses SDL_DownloadFromGPUTexture() or
+ * SDL_DownloadFromGPUBuffer(), submitting the command buffer with
+ * SubmitGPUCommandBufferAndAcquireFence() will return a fence handle that the
* app can poll or wait on in a thread. Once the fence indicates that the
* command buffer is done processing, it is safe to read the downloaded data.
- * Make sure to call SDL_ReleaseGPUFence when done with the fence.
+ * Make sure to call SDL_ReleaseGPUFence() when done with the fence.
*
- * The API also has "compute" support. The app calls SDL_GPUBeginComputePass
+ * The API also has "compute" support. The app calls SDL_GPUBeginComputePass()
* with compute-writeable textures and/or buffers, which can be written to in
* a compute shader. Then it sets states it needs for the compute dispatches:
*
- * - SDL_BindGPUComputePipeline
- * - SDL_BindGPUComputeStorageBuffers
- * - SDL_BindGPUComputeStorageTextures
+ * - SDL_BindGPUComputePipeline()
+ * - SDL_BindGPUComputeStorageBuffers()
+ * - SDL_BindGPUComputeStorageTextures()
*
* Then, dispatch compute work:
*
- * - SDL_DispatchGPUCompute
+ * - SDL_DispatchGPUCompute()
*
* For advanced users, this opens up powerful GPU-driven workflows.
*
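Putting the pieces from this last hunk together: a per-frame render pass that targets the swapchain texture acquired in the earlier sketch, followed by a compute dispatch submitted with a fence. Again only a sketch under the same SDL 3.2 assumptions; pipeline, compute_pipeline, and storage_buffer stand in for objects created during setup.

    /* Sketch: record one render pass into the command buffer acquired above,
     * targeting the swapchain texture, then submit. */
    static void draw_frame(SDL_GPUCommandBuffer *cmdbuf, SDL_GPUTexture *swapchain,
                           Uint32 w, Uint32 h, SDL_GPUGraphicsPipeline *pipeline)
    {
        SDL_GPUColorTargetInfo color_target;
        SDL_zero(color_target);
        color_target.texture = swapchain;
        color_target.clear_color = (SDL_FColor){ 0.0f, 0.0f, 0.0f, 1.0f };
        color_target.load_op = SDL_GPU_LOADOP_CLEAR;   /* clear before drawing */
        color_target.store_op = SDL_GPU_STOREOP_STORE; /* keep the result */

        SDL_GPURenderPass *pass = SDL_BeginGPURenderPass(cmdbuf, &color_target, 1, NULL);
        SDL_BindGPUGraphicsPipeline(pass, pipeline);

        SDL_GPUViewport viewport = { 0.0f, 0.0f, (float)w, (float)h, 0.0f, 1.0f };
        SDL_SetGPUViewport(pass, &viewport);

        SDL_DrawGPUPrimitives(pass, 3, 1, 0, 0); /* one triangle, one instance */
        SDL_EndGPURenderPass(pass);

        SDL_SubmitGPUCommandBuffer(cmdbuf);
    }

    /* Sketch: a compute dispatch that writes into a storage buffer, submitted
     * with a fence so the app knows when the GPU has finished -- the same
     * pattern used when reading back downloaded data. */
    static void run_compute(SDL_GPUDevice *device,
                            SDL_GPUComputePipeline *compute_pipeline,
                            SDL_GPUBuffer *storage_buffer)
    {
        SDL_GPUCommandBuffer *cmdbuf = SDL_AcquireGPUCommandBuffer(device);

        SDL_GPUStorageBufferReadWriteBinding binding;
        SDL_zero(binding);
        binding.buffer = storage_buffer;

        SDL_GPUComputePass *pass = SDL_BeginGPUComputePass(cmdbuf, NULL, 0, &binding, 1);
        SDL_BindGPUComputePipeline(pass, compute_pipeline);
        SDL_DispatchGPUCompute(pass, 64, 1, 1); /* 64 workgroups on X */
        SDL_EndGPUComputePass(pass);

        SDL_GPUFence *fence = SDL_SubmitGPUCommandBufferAndAcquireFence(cmdbuf);
        if (fence) {
            SDL_WaitForGPUFences(device, true, &fence, 1); /* or poll SDL_QueryGPUFence() */
            SDL_ReleaseGPUFence(device, fence);
        }
    }

For ordinary presentation, SDL_SubmitGPUCommandBuffer() is enough; the fence-returning submit matters when the app needs to know the GPU has finished its work, as with the download functions described in the hunk above.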