This better reflects how HDR content is actually used: most content is in the SDR range, with specular highlights and bright details extending beyond it into the HDR headroom.
This more closely matches how HDR is handled on Apple platforms, as EDR.
This also greatly simplifies application code, which no longer has to think about color scaling. SDR content is rendered at the appropriate brightness automatically, and HDR content is scaled to the correct range for the display's HDR headroom.
The renderer will always use the sRGB colorspace for drawing, and will default to the sRGB output colorspace. If you want blending in linear space and HDR support, you can select the scRGB output colorspace, which is supported by the direct3d11 and direct3d12 renderers.
You can't do blending directly in PQ space, which means you have to create a scene render target in linear space and use shaders to convert PQ texture data to linear, etc. All of this is out of scope for the SDL 2D renderer at the moment.
This allows color operations to happen in linear space between sRGB input and sRGB output. This is currently supported on the direct3d11, direct3d12 and opengl renderers.
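As a rough illustration, opting into linear/scRGB output might look like the sketch below with SDL3's property-based renderer creation. The property and constant names here (SDL_PROP_RENDERER_CREATE_OUTPUT_COLORSPACE_NUMBER, SDL_COLORSPACE_SRGB_LINEAR) are the current SDL3 names and may not match the tree this change landed in exactly:

```c
/* Minimal sketch, assuming SDL3's property-based renderer creation and the
 * current SDL_COLORSPACE_SRGB_LINEAR constant; exact names may differ from
 * the code this commit was written against. */
#include <SDL3/SDL.h>

SDL_Renderer *create_linear_renderer(SDL_Window *window)
{
    SDL_PropertiesID props = SDL_CreateProperties();
    SDL_SetPointerProperty(props, SDL_PROP_RENDERER_CREATE_WINDOW_POINTER, window);
    /* Request linear (scRGB) output: drawing still uses sRGB values, blending
     * happens in linear space, and values above 1.0 can reach the HDR headroom. */
    SDL_SetNumberProperty(props, SDL_PROP_RENDERER_CREATE_OUTPUT_COLORSPACE_NUMBER,
                          SDL_COLORSPACE_SRGB_LINEAR);
    SDL_Renderer *renderer = SDL_CreateRendererWithProperties(props);
    SDL_DestroyProperties(props);
    return renderer;
}
```

With linear output selected, draw colors above 1.0 (e.g. via SDL_SetRenderDrawColorFloat in current SDL3, which may postdate this change) land in the display's HDR headroom.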
This is a good resource on blending in linear space vs sRGB space:
https://blog.johnnovak.net/2016/09/21/what-every-coder-should-know-about-gamma/
Also added testcolorspace to verify colorspace changes
At the earliest place, immediately after driverdata is set.
(Doing it in SDL_render.c, after creation, would be too late, because there are renderers that already use/change those values in their CreateRenderer() function).
This means the allocator's caller doesn't need to use SDL_OutOfMemory directly
if the allocation fails.
This applies to the usual allocators: SDL_malloc, SDL_calloc, SDL_realloc
(all of these regardless of whether the app supplied a custom allocator or we're
using system malloc() or an internal copy of dlmalloc under the hood),
SDL_aligned_alloc, SDL_small_alloc, SDL_strdup, SDL_asprintf, SDL_wcsdup...
probably others. If it returns something you can pass to SDL_free, it should
work.
The caller might still need to use SDL_OutOfMemory if something that wasn't
SDL allocated the memory: operator new in C++ code, Objective-C's alloc
message, win32 GlobalAlloc, etc.
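For illustration, a sketch of the calling pattern this implies; the helper names below are hypothetical:

```c
#include <SDL3/SDL.h>
#include <stdlib.h>

/* SDL allocators now set the error themselves, so the caller only reports
 * out-of-memory for memory that didn't come from SDL. */
static char *dup_title(const char *title)
{
    char *copy = SDL_strdup(title);
    if (!copy) {
        return NULL;  /* SDL_strdup already called SDL_OutOfMemory() */
    }
    return copy;
}

static void *external_buffer(size_t len)
{
    void *mem = malloc(len);  /* not allocated through SDL */
    if (!mem) {
        SDL_OutOfMemory();    /* caller still sets the error itself here */
        return NULL;
    }
    return mem;
}
```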
Fixes #8642.
This uses the same `SDL_VerbNoun` format as the rest of SDL3, and also makes
a stronger effort to invalidate cached state in the backend, so cooperation
improves with apps that are using low-level rendering APIs directly.
Fixes #367.
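The new name isn't quoted here, but assuming it refers to SDL_FlushRenderer (SDL3's flush entry point), the intended usage pattern looks roughly like this sketch:

```c
#include <SDL3/SDL.h>

void draw_mixed(SDL_Renderer *renderer)
{
    SDL_SetRenderDrawColor(renderer, 255, 0, 0, 255);
    SDL_RenderFillRect(renderer, NULL);

    /* Flush SDL's queued commands before issuing low-level API calls directly;
     * the backend no longer assumes its cached state is still valid afterward. */
    SDL_FlushRenderer(renderer);

    /* ... raw OpenGL / Direct3D calls here ... */

    SDL_RenderPresent(renderer);
}
```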
Also switched the D3D11 and D3D12 renderers to use real NV12 textures for NV12 data.
The combination of these two changes allows us to implement 0-copy video decode and playback for D3D11 in testffmpeg without any access to the renderer internals.
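As a sketch of what "real NV12 textures" means for ordinary uploads (the zero-copy path in testffmpeg imports decoder output directly instead of copying pixels):

```c
#include <SDL3/SDL.h>

SDL_Texture *upload_nv12_frame(SDL_Renderer *renderer, int w, int h,
                               const Uint8 *y_plane, int y_pitch,
                               const Uint8 *uv_plane, int uv_pitch)
{
    SDL_Texture *texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_NV12,
                                             SDL_TEXTUREACCESS_STREAMING, w, h);
    if (!texture) {
        return NULL;
    }
    /* The Y and interleaved UV planes are uploaded as-is into a native NV12
     * texture and converted to RGB by the renderer at draw time. */
    SDL_UpdateNVTexture(texture, NULL, y_plane, y_pitch, uv_plane, uv_pitch);
    return texture;
}
```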
Also, for some reason ID3D11DeviceContext_OMGetRenderTargets() was failing in the second read pixels call in the "testautomation --filter render_testViewport" test.
We already know the target view, so just use that.
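Illustratively (the struct and field names below are hypothetical, not SDL's actual internals), the workaround amounts to something like:

```c
#define COBJMACROS
#include <d3d11.h>

/* Hypothetical per-renderer data; the real SDL struct differs. */
typedef struct {
    ID3D11DeviceContext *context;
    ID3D11RenderTargetView *currentRenderTargetView;
} RendererData;

static ID3D11Resource *GetReadbackTarget(RendererData *data)
{
    /* Skip ID3D11DeviceContext_OMGetRenderTargets() and resolve the resource
     * from the render target view we already track. */
    ID3D11Resource *resource = NULL;
    ID3D11RenderTargetView_GetResource(data->currentRenderTargetView, &resource);
    return resource;  /* caller releases it after copying to a staging texture */
}
```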