You can avoid blocking your entire program while waiting for defragmentation to finish by running it in the background, as long as you carefully fulfill the requirements described for function vmaDefragmentationBegin().
When using lost allocations, you may see some Vulkan validation layer warnings about overlapping regions of memory bound to different kinds of buffers and images. This is still valid as long as you implement proper handling of lost allocations (like in the example above) and don't use them.
You can create an allocation that is already in lost state from the beginning using function vmaCreateLostAllocation(). It may be useful if you need a "dummy" allocation that is not null.
There are some exceptions, though, when you should consider mapping memory only for a short period of time:
- When the operating system is Windows 7 or 8.x (Windows 10 is not affected because it uses WDDM2), the device is a discrete AMD GPU, and the memory type is the special 256 MiB pool of DEVICE_LOCAL + HOST_VISIBLE memory (selected when you use VMA_MEMORY_USAGE_CPU_TO_GPU): whenever a memory block allocated from this memory type stays mapped for the duration of any call to vkQueueSubmit() or vkQueuePresentKHR(), the block is migrated by WDDM to system RAM, which degrades performance. It doesn't matter whether that particular memory block is actually used by the command buffer being submitted.
@@ -167,10 +167,10 @@ Finding out if memory is mappable
}
-VkMemoryPropertyFlags preferredFlags
Flags that preferably should be set in a memory type chosen for an allocation.
Definition: vk_mem_alloc.h:2918
-uint32_t memoryType
Memory type index that this allocation was allocated from.
Definition: vk_mem_alloc.h:3272
+VkMemoryPropertyFlags preferredFlags
Flags that preferably should be set in a memory type chosen for an allocation.
Definition: vk_mem_alloc.h:2915
+uint32_t memoryType
Memory type index that this allocation was allocated from.
Definition: vk_mem_alloc.h:3269
void vmaGetMemoryTypeProperties(VmaAllocator allocator, uint32_t memoryTypeIndex, VkMemoryPropertyFlags *pFlags)
Given Memory Type Index, returns Property Flags of this memory type.
-@ VMA_MEMORY_USAGE_GPU_ONLY
Definition: vk_mem_alloc.h:2742
+@ VMA_MEMORY_USAGE_GPU_ONLY
Definition: vk_mem_alloc.h:2739
You can even use the VMA_ALLOCATION_CREATE_MAPPED_BIT flag while creating allocations that are not necessarily HOST_VISIBLE (e.g. using VMA_MEMORY_USAGE_GPU_ONLY). If the allocation ends up in a memory type that is HOST_VISIBLE, it will be persistently mapped and you can use it directly. If not, the flag is just ignored. Example:
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = sizeof(ConstantBuffer);
diff --git a/docs/html/quick_start.html b/docs/html/quick_start.html
index 4affdca..c362d0d 100644
--- a/docs/html/quick_start.html
+++ b/docs/html/quick_start.html
@@ -104,11 +104,11 @@ Initialization
-
Description of an Allocator to be created.
Definition: vk_mem_alloc.h:2422
-
VkPhysicalDevice physicalDevice
Vulkan physical device.
Definition: vk_mem_alloc.h:2427
-
VkInstance instance
Handle to Vulkan instance object.
Definition: vk_mem_alloc.h:2496
-
VkDevice device
Vulkan device.
Definition: vk_mem_alloc.h:2430
-
uint32_t vulkanApiVersion
Optional. The highest version of Vulkan that the application is designed to use.
Definition: vk_mem_alloc.h:2505
+
Description of an Allocator to be created.
Definition: vk_mem_alloc.h:2419
+
VkPhysicalDevice physicalDevice
Vulkan physical device.
Definition: vk_mem_alloc.h:2424
+
VkInstance instance
Handle to Vulkan instance object.
Definition: vk_mem_alloc.h:2493
+
VkDevice device
Vulkan device.
Definition: vk_mem_alloc.h:2427
+
uint32_t vulkanApiVersion
Optional. The highest version of Vulkan that the application is designed to use.
Definition: vk_mem_alloc.h:2502
Represents main object of this library initialized.
VkResult vmaCreateAllocator(const VmaAllocatorCreateInfo *pCreateInfo, VmaAllocator *pAllocator)
Creates Allocator object.
Only members physicalDevice
, device
, instance
are required. However, you should inform the library which Vulkan version you use by setting VmaAllocatorCreateInfo::vulkanApiVersion and which extensions you enabled by setting VmaAllocatorCreateInfo::flags (like VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT for VK_KHR_buffer_device_address). Otherwise, VMA will use only the features of Vulkan 1.0 core with no extensions.
@@ -130,10 +130,10 @@ Resource allocation
VkBuffer buffer;
vmaCreateBuffer(allocator, &bufferInfo, &allocInfo, &buffer, &allocation,
nullptr);
-Definition: vk_mem_alloc.h:2900
-VmaMemoryUsage usage
Intended usage of memory.
Definition: vk_mem_alloc.h:2908
+Definition: vk_mem_alloc.h:2897
+VmaMemoryUsage usage
Intended usage of memory.
Definition: vk_mem_alloc.h:2905
Represents single memory allocation.
-@ VMA_MEMORY_USAGE_GPU_ONLY
Definition: vk_mem_alloc.h:2742
+@ VMA_MEMORY_USAGE_GPU_ONLY
Definition: vk_mem_alloc.h:2739
VkResult vmaCreateBuffer(VmaAllocator allocator, const VkBufferCreateInfo *pBufferCreateInfo, const VmaAllocationCreateInfo *pAllocationCreateInfo, VkBuffer *pBuffer, VmaAllocation *pAllocation, VmaAllocationInfo *pAllocationInfo)
Don't forget to destroy your objects when no longer needed:
diff --git a/docs/html/resource_aliasing.html b/docs/html/resource_aliasing.html
index 08b05f0..a6c52b5 100644
--- a/docs/html/resource_aliasing.html
+++ b/docs/html/resource_aliasing.html
@@ -137,12 +137,12 @@ $(function() {
vkDestroyImage(allocator, img2, nullptr);
vkDestroyImage(allocator, img1, nullptr);
-
Definition: vk_mem_alloc.h:2900
-
VmaMemoryUsage usage
Intended usage of memory.
Definition: vk_mem_alloc.h:2908
+
Definition: vk_mem_alloc.h:2897
+
VmaMemoryUsage usage
Intended usage of memory.
Definition: vk_mem_alloc.h:2905
Represents single memory allocation.
VkResult vmaBindImageMemory(VmaAllocator allocator, VmaAllocation allocation, VkImage image)
Binds image to allocation.
void vmaFreeMemory(VmaAllocator allocator, const VmaAllocation allocation)
Frees memory previously allocated using vmaAllocateMemory(), vmaAllocateMemoryForBuffer(),...
-
@ VMA_MEMORY_USAGE_GPU_ONLY
Definition: vk_mem_alloc.h:2742
+
@ VMA_MEMORY_USAGE_GPU_ONLY
Definition: vk_mem_alloc.h:2739
VkResult vmaAllocateMemory(VmaAllocator allocator, const VkMemoryRequirements *pVkMemoryRequirements, const VmaAllocationCreateInfo *pCreateInfo, VmaAllocation *pAllocation, VmaAllocationInfo *pAllocationInfo)
General purpose memory allocation.
Remember that using resources that alias in memory requires proper synchronization. You need to issue a memory barrier to make sure commands that use img1
and img2
don't overlap on the GPU timeline. You also need to treat a resource after aliasing as uninitialized, containing garbage data. For example, if you use img1
and then want to use img2
, you need to issue an image memory barrier for img2
with oldLayout
= VK_IMAGE_LAYOUT_UNDEFINED
.
Additional considerations:
diff --git a/docs/html/vk__mem__alloc_8h_source.html b/docs/html/vk__mem__alloc_8h_source.html
index 3578c9e..fe61ac8 100644
--- a/docs/html/vk__mem__alloc_8h_source.html
+++ b/docs/html/vk__mem__alloc_8h_source.html
@@ -94,904 +94,907 @@ $(function() {
23 #ifndef AMD_VULKAN_MEMORY_ALLOCATOR_H
24 #define AMD_VULKAN_MEMORY_ALLOCATOR_H
-
-
-
-
-
-
-
-
- 2028 #ifndef VMA_RECORDING_ENABLED
- 2029 #define VMA_RECORDING_ENABLED 0
-
-
- 2032 #if !defined(NOMINMAX) && defined(VMA_IMPLEMENTATION)
-
-
-
- 2036 #if defined(__ANDROID__) && defined(VK_NO_PROTOTYPES) && VMA_STATIC_VULKAN_FUNCTIONS
- 2037 extern PFN_vkGetInstanceProcAddr vkGetInstanceProcAddr;
- 2038 extern PFN_vkGetDeviceProcAddr vkGetDeviceProcAddr;
- 2039 extern PFN_vkGetPhysicalDeviceProperties vkGetPhysicalDeviceProperties;
- 2040 extern PFN_vkGetPhysicalDeviceMemoryProperties vkGetPhysicalDeviceMemoryProperties;
- 2041 extern PFN_vkAllocateMemory vkAllocateMemory;
- 2042 extern PFN_vkFreeMemory vkFreeMemory;
- 2043 extern PFN_vkMapMemory vkMapMemory;
- 2044 extern PFN_vkUnmapMemory vkUnmapMemory;
- 2045 extern PFN_vkFlushMappedMemoryRanges vkFlushMappedMemoryRanges;
- 2046 extern PFN_vkInvalidateMappedMemoryRanges vkInvalidateMappedMemoryRanges;
- 2047 extern PFN_vkBindBufferMemory vkBindBufferMemory;
- 2048 extern PFN_vkBindImageMemory vkBindImageMemory;
- 2049 extern PFN_vkGetBufferMemoryRequirements vkGetBufferMemoryRequirements;
- 2050 extern PFN_vkGetImageMemoryRequirements vkGetImageMemoryRequirements;
- 2051 extern PFN_vkCreateBuffer vkCreateBuffer;
- 2052 extern PFN_vkDestroyBuffer vkDestroyBuffer;
- 2053 extern PFN_vkCreateImage vkCreateImage;
- 2054 extern PFN_vkDestroyImage vkDestroyImage;
- 2055 extern PFN_vkCmdCopyBuffer vkCmdCopyBuffer;
- 2056 #if VMA_VULKAN_VERSION >= 1001000
- 2057 extern PFN_vkGetBufferMemoryRequirements2 vkGetBufferMemoryRequirements2;
- 2058 extern PFN_vkGetImageMemoryRequirements2 vkGetImageMemoryRequirements2;
- 2059 extern PFN_vkBindBufferMemory2 vkBindBufferMemory2;
- 2060 extern PFN_vkBindImageMemory2 vkBindImageMemory2;
- 2061 extern PFN_vkGetPhysicalDeviceMemoryProperties2 vkGetPhysicalDeviceMemoryProperties2;
-
-
-
-
- 2066 #include <vulkan/vulkan.h>
-
-
-
-
-
- 2072 #if !defined(VMA_VULKAN_VERSION)
- 2073 #if defined(VK_VERSION_1_2)
- 2074 #define VMA_VULKAN_VERSION 1002000
- 2075 #elif defined(VK_VERSION_1_1)
- 2076 #define VMA_VULKAN_VERSION 1001000
-
- 2078 #define VMA_VULKAN_VERSION 1000000
-
-
-
- 2082 #if !defined(VMA_DEDICATED_ALLOCATION)
- 2083 #if VK_KHR_get_memory_requirements2 && VK_KHR_dedicated_allocation
- 2084 #define VMA_DEDICATED_ALLOCATION 1
-
- 2086 #define VMA_DEDICATED_ALLOCATION 0
-
-
-
- 2090 #if !defined(VMA_BIND_MEMORY2)
- 2091 #if VK_KHR_bind_memory2
- 2092 #define VMA_BIND_MEMORY2 1
-
- 2094 #define VMA_BIND_MEMORY2 0
-
-
-
- 2098 #if !defined(VMA_MEMORY_BUDGET)
- 2099 #if VK_EXT_memory_budget && (VK_KHR_get_physical_device_properties2 || VMA_VULKAN_VERSION >= 1001000)
- 2100 #define VMA_MEMORY_BUDGET 1
-
- 2102 #define VMA_MEMORY_BUDGET 0
-
-
-
-
- 2107 #if !defined(VMA_BUFFER_DEVICE_ADDRESS)
- 2108 #if VK_KHR_buffer_device_address || VMA_VULKAN_VERSION >= 1002000
- 2109 #define VMA_BUFFER_DEVICE_ADDRESS 1
-
- 2111 #define VMA_BUFFER_DEVICE_ADDRESS 0
-
-
-
-
- 2116 #if !defined(VMA_MEMORY_PRIORITY)
- 2117 #if VK_EXT_memory_priority
- 2118 #define VMA_MEMORY_PRIORITY 1
-
- 2120 #define VMA_MEMORY_PRIORITY 0
-
-
-
-
- 2125 #if !defined(VMA_EXTERNAL_MEMORY)
- 2126 #if VK_KHR_external_memory
- 2127 #define VMA_EXTERNAL_MEMORY 1
-
- 2129 #define VMA_EXTERNAL_MEMORY 0
-
-
-
-
-
-
-
-
- 2138 #ifndef VMA_CALL_PRE
- 2139 #define VMA_CALL_PRE
+
+
+
+
+
+
+
+
+ 2025 #ifndef VMA_RECORDING_ENABLED
+ 2026 #define VMA_RECORDING_ENABLED 0
+
+
+ 2029 #if !defined(NOMINMAX) && defined(VMA_IMPLEMENTATION)
+
+
+
+ 2033 #if defined(__ANDROID__) && defined(VK_NO_PROTOTYPES) && VMA_STATIC_VULKAN_FUNCTIONS
+ 2034 extern PFN_vkGetInstanceProcAddr vkGetInstanceProcAddr;
+ 2035 extern PFN_vkGetDeviceProcAddr vkGetDeviceProcAddr;
+ 2036 extern PFN_vkGetPhysicalDeviceProperties vkGetPhysicalDeviceProperties;
+ 2037 extern PFN_vkGetPhysicalDeviceMemoryProperties vkGetPhysicalDeviceMemoryProperties;
+ 2038 extern PFN_vkAllocateMemory vkAllocateMemory;
+ 2039 extern PFN_vkFreeMemory vkFreeMemory;
+ 2040 extern PFN_vkMapMemory vkMapMemory;
+ 2041 extern PFN_vkUnmapMemory vkUnmapMemory;
+ 2042 extern PFN_vkFlushMappedMemoryRanges vkFlushMappedMemoryRanges;
+ 2043 extern PFN_vkInvalidateMappedMemoryRanges vkInvalidateMappedMemoryRanges;
+ 2044 extern PFN_vkBindBufferMemory vkBindBufferMemory;
+ 2045 extern PFN_vkBindImageMemory vkBindImageMemory;
+ 2046 extern PFN_vkGetBufferMemoryRequirements vkGetBufferMemoryRequirements;
+ 2047 extern PFN_vkGetImageMemoryRequirements vkGetImageMemoryRequirements;
+ 2048 extern PFN_vkCreateBuffer vkCreateBuffer;
+ 2049 extern PFN_vkDestroyBuffer vkDestroyBuffer;
+ 2050 extern PFN_vkCreateImage vkCreateImage;
+ 2051 extern PFN_vkDestroyImage vkDestroyImage;
+ 2052 extern PFN_vkCmdCopyBuffer vkCmdCopyBuffer;
+ 2053 #if VMA_VULKAN_VERSION >= 1001000
+ 2054 extern PFN_vkGetBufferMemoryRequirements2 vkGetBufferMemoryRequirements2;
+ 2055 extern PFN_vkGetImageMemoryRequirements2 vkGetImageMemoryRequirements2;
+ 2056 extern PFN_vkBindBufferMemory2 vkBindBufferMemory2;
+ 2057 extern PFN_vkBindImageMemory2 vkBindImageMemory2;
+ 2058 extern PFN_vkGetPhysicalDeviceMemoryProperties2 vkGetPhysicalDeviceMemoryProperties2;
+
+
+
+
+ 2063 #include <vulkan/vulkan.h>
+
+
+
+
+
+ 2069 #if !defined(VMA_VULKAN_VERSION)
+ 2070 #if defined(VK_VERSION_1_2)
+ 2071 #define VMA_VULKAN_VERSION 1002000
+ 2072 #elif defined(VK_VERSION_1_1)
+ 2073 #define VMA_VULKAN_VERSION 1001000
+
+ 2075 #define VMA_VULKAN_VERSION 1000000
+
+
+
+ 2079 #if !defined(VMA_DEDICATED_ALLOCATION)
+ 2080 #if VK_KHR_get_memory_requirements2 && VK_KHR_dedicated_allocation
+ 2081 #define VMA_DEDICATED_ALLOCATION 1
+
+ 2083 #define VMA_DEDICATED_ALLOCATION 0
+
+
+
+ 2087 #if !defined(VMA_BIND_MEMORY2)
+ 2088 #if VK_KHR_bind_memory2
+ 2089 #define VMA_BIND_MEMORY2 1
+
+ 2091 #define VMA_BIND_MEMORY2 0
+
+
+
+ 2095 #if !defined(VMA_MEMORY_BUDGET)
+ 2096 #if VK_EXT_memory_budget && (VK_KHR_get_physical_device_properties2 || VMA_VULKAN_VERSION >= 1001000)
+ 2097 #define VMA_MEMORY_BUDGET 1
+
+ 2099 #define VMA_MEMORY_BUDGET 0
+
+
+
+
+ 2104 #if !defined(VMA_BUFFER_DEVICE_ADDRESS)
+ 2105 #if VK_KHR_buffer_device_address || VMA_VULKAN_VERSION >= 1002000
+ 2106 #define VMA_BUFFER_DEVICE_ADDRESS 1
+
+ 2108 #define VMA_BUFFER_DEVICE_ADDRESS 0
+
+
+
+
+ 2113 #if !defined(VMA_MEMORY_PRIORITY)
+ 2114 #if VK_EXT_memory_priority
+ 2115 #define VMA_MEMORY_PRIORITY 1
+
+ 2117 #define VMA_MEMORY_PRIORITY 0
+
+
+
+
+ 2122 #if !defined(VMA_EXTERNAL_MEMORY)
+ 2123 #if VK_KHR_external_memory
+ 2124 #define VMA_EXTERNAL_MEMORY 1
+
+ 2126 #define VMA_EXTERNAL_MEMORY 0
+
+
+
+
+
+
+
+
+ 2135 #ifndef VMA_CALL_PRE
+ 2136 #define VMA_CALL_PRE
+
+ 2138 #ifndef VMA_CALL_POST
+ 2139 #define VMA_CALL_POST
- 2141 #ifndef VMA_CALL_POST
- 2142 #define VMA_CALL_POST
-
-
-
-
-
-
-
-
-
-
-
-
-
- 2156 #ifndef VMA_LEN_IF_NOT_NULL
- 2157 #define VMA_LEN_IF_NOT_NULL(len)
-
-
-
-
- 2162 #ifndef VMA_NULLABLE
-
- 2164 #define VMA_NULLABLE _Nullable
-
- 2166 #define VMA_NULLABLE
-
-
-
-
-
- 2172 #ifndef VMA_NOT_NULL
-
- 2174 #define VMA_NOT_NULL _Nonnull
-
- 2176 #define VMA_NOT_NULL
-
-
-
-
-
- 2182 #ifndef VMA_NOT_NULL_NON_DISPATCHABLE
- 2183 #if defined(__LP64__) || defined(_WIN64) || (defined(__x86_64__) && !defined(__ILP32__) ) || defined(_M_X64) || defined(__ia64) || defined (_M_IA64) || defined(__aarch64__) || defined(__powerpc64__)
- 2184 #define VMA_NOT_NULL_NON_DISPATCHABLE VMA_NOT_NULL
-
- 2186 #define VMA_NOT_NULL_NON_DISPATCHABLE
-
-
-
- 2190 #ifndef VMA_NULLABLE_NON_DISPATCHABLE
- 2191 #if defined(__LP64__) || defined(_WIN64) || (defined(__x86_64__) && !defined(__ILP32__) ) || defined(_M_X64) || defined(__ia64) || defined (_M_IA64) || defined(__aarch64__) || defined(__powerpc64__)
- 2192 #define VMA_NULLABLE_NON_DISPATCHABLE VMA_NULLABLE
-
- 2194 #define VMA_NULLABLE_NON_DISPATCHABLE
-
-
-
-
-
-
-
- 2212 uint32_t memoryType,
- 2213 VkDeviceMemory VMA_NOT_NULL_NON_DISPATCHABLE memory,
-
- 2215 void* VMA_NULLABLE pUserData);
-
-
- 2219 uint32_t memoryType,
- 2220 VkDeviceMemory VMA_NOT_NULL_NON_DISPATCHABLE memory,
-
- 2222 void* VMA_NULLABLE pUserData);
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- 2379 #if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
- 2380 PFN_vkGetBufferMemoryRequirements2KHR VMA_NULLABLE vkGetBufferMemoryRequirements2KHR;
- 2381 PFN_vkGetImageMemoryRequirements2KHR VMA_NULLABLE vkGetImageMemoryRequirements2KHR;
-
- 2383 #if VMA_BIND_MEMORY2 || VMA_VULKAN_VERSION >= 1001000
- 2384 PFN_vkBindBufferMemory2KHR VMA_NULLABLE vkBindBufferMemory2KHR;
- 2385 PFN_vkBindImageMemory2KHR VMA_NULLABLE vkBindImageMemory2KHR;
+
+
+
+
+
+
+
+
+
+
+
+
+ 2153 #ifndef VMA_LEN_IF_NOT_NULL
+ 2154 #define VMA_LEN_IF_NOT_NULL(len)
+
+
+
+
+ 2159 #ifndef VMA_NULLABLE
+
+ 2161 #define VMA_NULLABLE _Nullable
+
+ 2163 #define VMA_NULLABLE
+
+
+
+
+
+ 2169 #ifndef VMA_NOT_NULL
+
+ 2171 #define VMA_NOT_NULL _Nonnull
+
+ 2173 #define VMA_NOT_NULL
+
+
+
+
+
+ 2179 #ifndef VMA_NOT_NULL_NON_DISPATCHABLE
+ 2180 #if defined(__LP64__) || defined(_WIN64) || (defined(__x86_64__) && !defined(__ILP32__) ) || defined(_M_X64) || defined(__ia64) || defined (_M_IA64) || defined(__aarch64__) || defined(__powerpc64__)
+ 2181 #define VMA_NOT_NULL_NON_DISPATCHABLE VMA_NOT_NULL
+
+ 2183 #define VMA_NOT_NULL_NON_DISPATCHABLE
+
+
+
+ 2187 #ifndef VMA_NULLABLE_NON_DISPATCHABLE
+ 2188 #if defined(__LP64__) || defined(_WIN64) || (defined(__x86_64__) && !defined(__ILP32__) ) || defined(_M_X64) || defined(__ia64) || defined (_M_IA64) || defined(__aarch64__) || defined(__powerpc64__)
+ 2189 #define VMA_NULLABLE_NON_DISPATCHABLE VMA_NULLABLE
+
+ 2191 #define VMA_NULLABLE_NON_DISPATCHABLE
+
+
+
+
+
+
+
+ 2209 uint32_t memoryType,
+ 2210 VkDeviceMemory VMA_NOT_NULL_NON_DISPATCHABLE memory,
+
+ 2212 void* VMA_NULLABLE pUserData);
+
+
+ 2216 uint32_t memoryType,
+ 2217 VkDeviceMemory VMA_NOT_NULL_NON_DISPATCHABLE memory,
+
+ 2219 void* VMA_NULLABLE pUserData);
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ 2376 #if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
+ 2377 PFN_vkGetBufferMemoryRequirements2KHR VMA_NULLABLE vkGetBufferMemoryRequirements2KHR;
+ 2378 PFN_vkGetImageMemoryRequirements2KHR VMA_NULLABLE vkGetImageMemoryRequirements2KHR;
+
+ 2380 #if VMA_BIND_MEMORY2 || VMA_VULKAN_VERSION >= 1001000
+ 2381 PFN_vkBindBufferMemory2KHR VMA_NULLABLE vkBindBufferMemory2KHR;
+ 2382 PFN_vkBindImageMemory2KHR VMA_NULLABLE vkBindImageMemory2KHR;
+
+ 2384 #if VMA_MEMORY_BUDGET || VMA_VULKAN_VERSION >= 1001000
+ 2385 PFN_vkGetPhysicalDeviceMemoryProperties2KHR VMA_NULLABLE vkGetPhysicalDeviceMemoryProperties2KHR;
- 2387 #if VMA_MEMORY_BUDGET || VMA_VULKAN_VERSION >= 1001000
- 2388 PFN_vkGetPhysicalDeviceMemoryProperties2KHR VMA_NULLABLE vkGetPhysicalDeviceMemoryProperties2KHR;
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
-
+
-
+
-
+
-
-
-
-
- 2478 const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(
"VkPhysicalDeviceMemoryProperties::memoryHeapCount")
pHeapSizeLimit;
-
-
-
-
-
- 2506 #if VMA_EXTERNAL_MEMORY
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- 2563 const VkPhysicalDeviceProperties* VMA_NULLABLE * VMA_NOT_NULL ppPhysicalDeviceProperties);
-
-
-
- 2571 const VkPhysicalDeviceMemoryProperties* VMA_NULLABLE * VMA_NOT_NULL ppPhysicalDeviceMemoryProperties);
-
-
-
- 2581 uint32_t memoryTypeIndex,
- 2582 VkMemoryPropertyFlags* VMA_NOT_NULL pFlags);
-
-
-
- 2594 uint32_t frameIndex);
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- 2690 #ifndef VMA_STATS_STRING_ENABLED
- 2691 #define VMA_STATS_STRING_ENABLED 1
-
-
- 2694 #if VMA_STATS_STRING_ENABLED
-
-
-
-
- 2701 char* VMA_NULLABLE * VMA_NOT_NULL ppStatsString,
- 2702 VkBool32 detailedMap);
-
-
-
- 2706 char* VMA_NULLABLE pStatsString);
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+
+
+ 2475 const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(
"VkPhysicalDeviceMemoryProperties::memoryHeapCount")
pHeapSizeLimit;
+
+
+
+
+
+ 2503 #if VMA_EXTERNAL_MEMORY
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ 2560 const VkPhysicalDeviceProperties* VMA_NULLABLE * VMA_NOT_NULL ppPhysicalDeviceProperties);
+
+
+
+ 2568 const VkPhysicalDeviceMemoryProperties* VMA_NULLABLE * VMA_NOT_NULL ppPhysicalDeviceMemoryProperties);
+
+
+
+ 2578 uint32_t memoryTypeIndex,
+ 2579 VkMemoryPropertyFlags* VMA_NOT_NULL pFlags);
+
+
+
+ 2591 uint32_t frameIndex);
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ 2687 #ifndef VMA_STATS_STRING_ENABLED
+ 2688 #define VMA_STATS_STRING_ENABLED 1
+
+
+ 2691 #if VMA_STATS_STRING_ENABLED
+
+
+
+
+ 2698 char* VMA_NULLABLE * VMA_NOT_NULL ppStatsString,
+ 2699 VkBool32 detailedMap);
+
+
+
+ 2703 char* VMA_NULLABLE pStatsString);
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- 2967 uint32_t memoryTypeBits,
-
- 2969 uint32_t* VMA_NOT_NULL pMemoryTypeIndex);
-
-
-
- 2985 const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
-
- 2987 uint32_t* VMA_NOT_NULL pMemoryTypeIndex);
-
-
-
- 3003 const VkImageCreateInfo* VMA_NOT_NULL pImageCreateInfo,
-
- 3005 uint32_t* VMA_NOT_NULL pMemoryTypeIndex);
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- 3172 VmaPool VMA_NULLABLE * VMA_NOT_NULL pPool);
-
-
-
-
-
-
-
-
-
-
-
-
-
- 3200 size_t* VMA_NULLABLE pLostAllocationCount);
-
-
-
-
-
-
- 3227 const char* VMA_NULLABLE * VMA_NOT_NULL ppName);
-
-
-
-
- 3237 const char* VMA_NULLABLE pName);
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- 3331 const VkMemoryRequirements* VMA_NOT_NULL pVkMemoryRequirements,
-
-
-
-
-
-
- 3357 const VkMemoryRequirements* VMA_NOT_NULL VMA_LEN_IF_NOT_NULL(allocationCount) pVkMemoryRequirements,
-
- 3359 size_t allocationCount,
- 3360 VmaAllocation VMA_NULLABLE * VMA_NOT_NULL VMA_LEN_IF_NOT_NULL(allocationCount) pAllocations,
- 3361 VmaAllocationInfo* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) pAllocationInfo);
-
-
-
- 3371 VkBuffer VMA_NOT_NULL_NON_DISPATCHABLE buffer,
-
-
-
-
-
-
- 3379 VkImage VMA_NOT_NULL_NON_DISPATCHABLE image,
-
-
-
-
-
-
-
-
-
-
- 3404 size_t allocationCount,
- 3405 const VmaAllocation VMA_NULLABLE * VMA_NOT_NULL VMA_LEN_IF_NOT_NULL(allocationCount) pAllocations);
-
-
-
-
-
-
-
-
-
-
-
-
-
- 3462 void* VMA_NULLABLE pUserData);
-
-
-
-
-
-
-
-
- 3519 void* VMA_NULLABLE * VMA_NOT_NULL ppData);
-
-
-
-
-
-
-
-
- 3557 VkDeviceSize offset,
-
-
-
-
-
- 3584 VkDeviceSize offset,
-
-
-
-
- 3603 uint32_t allocationCount,
- 3604 const VmaAllocation VMA_NOT_NULL * VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) allocations,
- 3605 const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) offsets,
- 3606 const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) sizes);
-
-
-
- 3624 uint32_t allocationCount,
- 3625 const VmaAllocation VMA_NOT_NULL * VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) allocations,
- 3626 const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) offsets,
- 3627 const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) sizes);
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- 3740 VkDeviceMemory VMA_NOT_NULL_NON_DISPATCHABLE
memory;
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- 3878 const VmaAllocation VMA_NOT_NULL * VMA_NOT_NULL VMA_LEN_IF_NOT_NULL(allocationCount) pAllocations,
- 3879 size_t allocationCount,
- 3880 VkBool32* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) pAllocationsChanged,
-
-
-
-
-
-
- 3899 VkBuffer VMA_NOT_NULL_NON_DISPATCHABLE buffer);
-
-
-
-
- 3914 VkDeviceSize allocationLocalOffset,
- 3915 VkBuffer VMA_NOT_NULL_NON_DISPATCHABLE buffer,
- 3916 const void* VMA_NULLABLE pNext);
-
-
-
-
- 3933 VkImage VMA_NOT_NULL_NON_DISPATCHABLE image);
-
-
-
-
- 3948 VkDeviceSize allocationLocalOffset,
- 3949 VkImage VMA_NOT_NULL_NON_DISPATCHABLE image,
- 3950 const void* VMA_NULLABLE pNext);
-
-
-
- 3984 const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
-
- 3986 VkBuffer VMA_NULLABLE_NON_DISPATCHABLE * VMA_NOT_NULL pBuffer,
-
-
-
-
-
- 3998 const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
-
- 4000 VkDeviceSize minAlignment,
- 4001 VkBuffer VMA_NULLABLE_NON_DISPATCHABLE * VMA_NOT_NULL pBuffer,
-
-
-
-
-
- 4018 VkBuffer VMA_NULLABLE_NON_DISPATCHABLE buffer,
-
-
-
-
- 4024 const VkImageCreateInfo* VMA_NOT_NULL pImageCreateInfo,
-
- 4026 VkImage VMA_NULLABLE_NON_DISPATCHABLE * VMA_NOT_NULL pImage,
-
-
-
-
-
- 4043 VkImage VMA_NULLABLE_NON_DISPATCHABLE image,
-
-
-
-
-
-
-
-
-
- 4053 #if defined(__cplusplus) && defined(__INTELLISENSE__)
- 4054 #define VMA_IMPLEMENTATION
-
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ 2964 uint32_t memoryTypeBits,
+
+ 2966 uint32_t* VMA_NOT_NULL pMemoryTypeIndex);
+
+
+
+ 2982 const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
+
+ 2984 uint32_t* VMA_NOT_NULL pMemoryTypeIndex);
+
+
+
+ 3000 const VkImageCreateInfo* VMA_NOT_NULL pImageCreateInfo,
+
+ 3002 uint32_t* VMA_NOT_NULL pMemoryTypeIndex);
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ 3169 VmaPool VMA_NULLABLE * VMA_NOT_NULL pPool);
+
+
+
+
+
+
+
+
+
+
+
+
+
+ 3197 size_t* VMA_NULLABLE pLostAllocationCount);
+
+
+
+
+
+
+ 3224 const char* VMA_NULLABLE * VMA_NOT_NULL ppName);
+
+
+
+
+ 3234 const char* VMA_NULLABLE pName);
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ 3328 const VkMemoryRequirements* VMA_NOT_NULL pVkMemoryRequirements,
+
+
+
+
+
+
+ 3354 const VkMemoryRequirements* VMA_NOT_NULL VMA_LEN_IF_NOT_NULL(allocationCount) pVkMemoryRequirements,
+
+ 3356 size_t allocationCount,
+ 3357 VmaAllocation VMA_NULLABLE * VMA_NOT_NULL VMA_LEN_IF_NOT_NULL(allocationCount) pAllocations,
+ 3358 VmaAllocationInfo* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) pAllocationInfo);
+
+
+
+ 3368 VkBuffer VMA_NOT_NULL_NON_DISPATCHABLE buffer,
+
+
+
+
+
+
+ 3376 VkImage VMA_NOT_NULL_NON_DISPATCHABLE image,
+
+
+
+
+
+
+
+
+
+
+ 3401 size_t allocationCount,
+ 3402 const VmaAllocation VMA_NULLABLE * VMA_NOT_NULL VMA_LEN_IF_NOT_NULL(allocationCount) pAllocations);
+
+
+
+
+
+
+
+
+
+
+
+
+
+ 3459 void* VMA_NULLABLE pUserData);
+
+
+
+
+
+
+
+
+ 3516 void* VMA_NULLABLE * VMA_NOT_NULL ppData);
+
+
+
+
+
+
+
+
+ 3554 VkDeviceSize offset,
+
+
+
+
+
+ 3581 VkDeviceSize offset,
+
+
+
+
+ 3600 uint32_t allocationCount,
+ 3601 const VmaAllocation VMA_NOT_NULL * VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) allocations,
+ 3602 const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) offsets,
+ 3603 const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) sizes);
+
+
+
+ 3621 uint32_t allocationCount,
+ 3622 const VmaAllocation VMA_NOT_NULL * VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) allocations,
+ 3623 const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) offsets,
+ 3624 const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) sizes);
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ 3737 VkDeviceMemory VMA_NOT_NULL_NON_DISPATCHABLE
memory;
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ 3875 const VmaAllocation VMA_NOT_NULL * VMA_NOT_NULL VMA_LEN_IF_NOT_NULL(allocationCount) pAllocations,
+ 3876 size_t allocationCount,
+ 3877 VkBool32* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) pAllocationsChanged,
+
+
+
+
+
+
+ 3896 VkBuffer VMA_NOT_NULL_NON_DISPATCHABLE buffer);
+
+
+
+
+ 3911 VkDeviceSize allocationLocalOffset,
+ 3912 VkBuffer VMA_NOT_NULL_NON_DISPATCHABLE buffer,
+ 3913 const void* VMA_NULLABLE pNext);
+
+
+
+
+ 3930 VkImage VMA_NOT_NULL_NON_DISPATCHABLE image);
+
+
+
+
+ 3945 VkDeviceSize allocationLocalOffset,
+ 3946 VkImage VMA_NOT_NULL_NON_DISPATCHABLE image,
+ 3947 const void* VMA_NULLABLE pNext);
+
+
+
+ 3981 const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
+
+ 3983 VkBuffer VMA_NULLABLE_NON_DISPATCHABLE * VMA_NOT_NULL pBuffer,
+
+
+
+
+
+ 3995 const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
+
+ 3997 VkDeviceSize minAlignment,
+ 3998 VkBuffer VMA_NULLABLE_NON_DISPATCHABLE * VMA_NOT_NULL pBuffer,
+
+
+
+
+
+ 4015 VkBuffer VMA_NULLABLE_NON_DISPATCHABLE buffer,
+
+
+
+
+ 4021 const VkImageCreateInfo* VMA_NOT_NULL pImageCreateInfo,
+
+ 4023 VkImage VMA_NULLABLE_NON_DISPATCHABLE * VMA_NOT_NULL pImage,
+
+
+
+
+
+ 4040 VkImage VMA_NULLABLE_NON_DISPATCHABLE image,
+
+
+
+
+
+
+
+
+
+ 4050 #if defined(__cplusplus) && defined(__INTELLISENSE__)
+ 4051 #define VMA_IMPLEMENTATION
+
+
+ 4054 #ifdef VMA_IMPLEMENTATION
+ 4055 #undef VMA_IMPLEMENTATION
- 4057 #ifdef VMA_IMPLEMENTATION
- 4058 #undef VMA_IMPLEMENTATION
-
-
-
-
-
-
- 4065 #if VMA_RECORDING_ENABLED
-
-
- 4068 #include <windows.h>
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- 4088 #if !defined(VMA_STATIC_VULKAN_FUNCTIONS) && !defined(VK_NO_PROTOTYPES)
- 4089 #define VMA_STATIC_VULKAN_FUNCTIONS 1
-
-
-
-
-
-
-
-
- 4098 #if !defined(VMA_DYNAMIC_VULKAN_FUNCTIONS)
- 4099 #define VMA_DYNAMIC_VULKAN_FUNCTIONS 1
- 4100 #if defined(VK_NO_PROTOTYPES)
- 4101 extern PFN_vkGetInstanceProcAddr vkGetInstanceProcAddr;
- 4102 extern PFN_vkGetDeviceProcAddr vkGetDeviceProcAddr;
-
-
+
+
+
+
+
+ 4062 #if VMA_RECORDING_ENABLED
+
+
+ 4065 #include <windows.h>
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ 4085 #if !defined(VMA_STATIC_VULKAN_FUNCTIONS) && !defined(VK_NO_PROTOTYPES)
+ 4086 #define VMA_STATIC_VULKAN_FUNCTIONS 1
+
+
+
+
+
+
+
+
+ 4095 #if !defined(VMA_DYNAMIC_VULKAN_FUNCTIONS)
+ 4096 #define VMA_DYNAMIC_VULKAN_FUNCTIONS 1
+ 4097 #if defined(VK_NO_PROTOTYPES)
+ 4098 extern PFN_vkGetInstanceProcAddr vkGetInstanceProcAddr;
+ 4099 extern PFN_vkGetDeviceProcAddr vkGetDeviceProcAddr;
+
+
+
+
+
-
-
-
-
-
-
-
-
-
- 4115 #if VMA_USE_STL_CONTAINERS
- 4116 #define VMA_USE_STL_VECTOR 1
- 4117 #define VMA_USE_STL_UNORDERED_MAP 1
- 4118 #define VMA_USE_STL_LIST 1
-
-
- 4121 #ifndef VMA_USE_STL_SHARED_MUTEX
-
- 4123 #if __cplusplus >= 201703L
- 4124 #define VMA_USE_STL_SHARED_MUTEX 1
-
-
-
- 4128 #elif defined(_MSC_FULL_VER) && _MSC_FULL_VER >= 190023918 && __cplusplus == 199711L && _MSVC_LANG >= 201703L
- 4129 #define VMA_USE_STL_SHARED_MUTEX 1
-
- 4131 #define VMA_USE_STL_SHARED_MUTEX 0
-
-
-
-
-
-
-
- 4139 #if VMA_USE_STL_VECTOR
-
-
-
- 4143 #if VMA_USE_STL_UNORDERED_MAP
- 4144 #include <unordered_map>
-
-
- 4147 #if VMA_USE_STL_LIST
-
-
-
-
-
-
-
-
- 4156 #include <algorithm>
-
-
-
-
- 4161 #define VMA_NULL nullptr
-
-
- 4164 #if defined(__ANDROID_API__) && (__ANDROID_API__ < 16)
-
- 4166 static void* vma_aligned_alloc(
size_t alignment,
size_t size)
-
-
- 4169 if(alignment <
sizeof(
void*))
-
- 4171 alignment =
sizeof(
void*);
-
-
- 4174 return memalign(alignment, size);
-
- 4176 #elif defined(__APPLE__) || defined(__ANDROID__) || (defined(__linux__) && defined(__GLIBCXX__) && !defined(_GLIBCXX_HAVE_ALIGNED_ALLOC))
-
-
- 4179 #if defined(__APPLE__)
- 4180 #include <AvailabilityMacros.h>
-
-
- 4183 static void* vma_aligned_alloc(
size_t alignment,
size_t size)
-
- 4185 #if defined(__APPLE__) && (defined(MAC_OS_X_VERSION_10_16) || defined(__IPHONE_14_0))
- 4186 #if MAC_OS_X_VERSION_MAX_ALLOWED >= MAC_OS_X_VERSION_10_16 || __IPHONE_OS_VERSION_MAX_ALLOWED >= __IPHONE_14_0
-
-
-
-
-
-
- 4193 if (__builtin_available(macOS 10.15, iOS 13, *))
- 4194 return aligned_alloc(alignment, size);
-
-
+
+
+
+
+
+
+ 4112 #if VMA_USE_STL_CONTAINERS
+ 4113 #define VMA_USE_STL_VECTOR 1
+ 4114 #define VMA_USE_STL_UNORDERED_MAP 1
+ 4115 #define VMA_USE_STL_LIST 1
+
+
+ 4118 #ifndef VMA_USE_STL_SHARED_MUTEX
+
+ 4120 #if __cplusplus >= 201703L
+ 4121 #define VMA_USE_STL_SHARED_MUTEX 1
+
+
+
+ 4125 #elif defined(_MSC_FULL_VER) && _MSC_FULL_VER >= 190023918 && __cplusplus == 199711L && _MSVC_LANG >= 201703L
+ 4126 #define VMA_USE_STL_SHARED_MUTEX 1
+
+ 4128 #define VMA_USE_STL_SHARED_MUTEX 0
+
+
+
+
+
+
+
+ 4136 #if VMA_USE_STL_VECTOR
+
+
+
+ 4140 #if VMA_USE_STL_UNORDERED_MAP
+ 4141 #include <unordered_map>
+
+
+ 4144 #if VMA_USE_STL_LIST
+
+
+
+
+
+
+
+
+ 4153 #include <algorithm>
+
+
+
+
+ 4158 #define VMA_NULL nullptr
+
+
+ 4161 #if defined(__ANDROID_API__) && (__ANDROID_API__ < 16)
+
+ 4163 static void* vma_aligned_alloc(
size_t alignment,
size_t size)
+
+
+ 4166 if(alignment <
sizeof(
void*))
+
+ 4168 alignment =
sizeof(
void*);
+
+
+ 4171 return memalign(alignment, size);
+
+ 4173 #elif defined(__APPLE__) || defined(__ANDROID__) || (defined(__linux__) && defined(__GLIBCXX__) && !defined(_GLIBCXX_HAVE_ALIGNED_ALLOC))
+
+
+ 4176 #if defined(__APPLE__)
+ 4177 #include <AvailabilityMacros.h>
+
+
+ 4180 static void* vma_aligned_alloc(
size_t alignment,
size_t size)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
4198 if(alignment <
sizeof(
void*))
@@ -6312,10367 +6315,10366 @@ $(function() {
        m_Suballocations.size() - (size_t)m_FreeCount, // allocationCount
        m_FreeCount); // unusedRangeCount

    for(const auto& suballoc : m_Suballocations)
    {
        if(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE)
        {
            PrintDetailedMap_UnusedRange(json, suballoc.offset, suballoc.size);
        }
        else
        {
            PrintDetailedMap_Allocation(json, suballoc.offset, suballoc.hAllocation);
        }
    }

    PrintDetailedMap_End(json);
}

bool VmaBlockMetadata_Generic::CreateAllocationRequest(
    uint32_t currentFrameIndex,
    uint32_t frameInUseCount,
    VkDeviceSize bufferImageGranularity,
    VkDeviceSize allocSize,
    VkDeviceSize allocAlignment,
    bool upperAddress,
    VmaSuballocationType allocType,
    bool canMakeOtherLost,
    uint32_t strategy,
    VmaAllocationRequest* pAllocationRequest)
{
    VMA_ASSERT(allocSize > 0);
    VMA_ASSERT(!upperAddress);
    VMA_ASSERT(allocType != VMA_SUBALLOCATION_TYPE_FREE);
    VMA_ASSERT(pAllocationRequest != VMA_NULL);
    VMA_HEAVY_ASSERT(Validate());

    pAllocationRequest->type = VmaAllocationRequestType::Normal;

    // There is not enough total free space in this block to fulfill the request: Early return.
    if(canMakeOtherLost == false &&
        m_SumFreeSize < allocSize + 2 * VMA_DEBUG_MARGIN)
    {
        return false;
    }

    const size_t freeSuballocCount = m_FreeSuballocationsBySize.size();
    if(freeSuballocCount > 0)
    {
        if(strategy == VMA_ALLOCATION_CREATE_STRATEGY_BEST_FIT_BIT)
        {
            // Find first free suballocation with size not less than allocSize + 2 * VMA_DEBUG_MARGIN.
            VmaSuballocationList::iterator* const it = VmaBinaryFindFirstNotLess(
                m_FreeSuballocationsBySize.data(),
                m_FreeSuballocationsBySize.data() + freeSuballocCount,
                allocSize + 2 * VMA_DEBUG_MARGIN,
                VmaSuballocationItemSizeLess());
            size_t index = it - m_FreeSuballocationsBySize.data();
            for(; index < freeSuballocCount; ++index)
            {
                if(CheckAllocation(
                    currentFrameIndex,
                    frameInUseCount,
                    bufferImageGranularity,
                    allocSize,
                    allocAlignment,
                    allocType,
                    m_FreeSuballocationsBySize[index],
                    false, // canMakeOtherLost
                    &pAllocationRequest->offset,
                    &pAllocationRequest->itemsToMakeLostCount,
                    &pAllocationRequest->sumFreeSize,
                    &pAllocationRequest->sumItemSize))
                {
                    pAllocationRequest->item = m_FreeSuballocationsBySize[index];
                    return true;
                }
            }
        }
        else if(strategy == VMA_ALLOCATION_INTERNAL_STRATEGY_MIN_OFFSET)
        {
            for(VmaSuballocationList::iterator it = m_Suballocations.begin();
                it != m_Suballocations.end();
                ++it)
            {
                if(it->type == VMA_SUBALLOCATION_TYPE_FREE && CheckAllocation(
                    currentFrameIndex,
                    frameInUseCount,
                    bufferImageGranularity,
                    allocSize,
                    allocAlignment,
                    allocType,
                    it,
                    false, // canMakeOtherLost
                    &pAllocationRequest->offset,
                    &pAllocationRequest->itemsToMakeLostCount,
                    &pAllocationRequest->sumFreeSize,
                    &pAllocationRequest->sumItemSize))
                {
                    pAllocationRequest->item = it;
                    return true;
                }
            }
        }
        else // WORST_FIT, FIRST_FIT
        {
            // Search from the biggest suballocations down.
            for(size_t index = freeSuballocCount; index--; )
            {
                if(CheckAllocation(
                    currentFrameIndex,
                    frameInUseCount,
                    bufferImageGranularity,
                    allocSize,
                    allocAlignment,
                    allocType,
                    m_FreeSuballocationsBySize[index],
                    false, // canMakeOtherLost
                    &pAllocationRequest->offset,
                    &pAllocationRequest->itemsToMakeLostCount,
                    &pAllocationRequest->sumFreeSize,
                    &pAllocationRequest->sumItemSize))
                {
                    pAllocationRequest->item = m_FreeSuballocationsBySize[index];
                    return true;
                }
            }
        }
    }

    if(canMakeOtherLost)
    {
        // Brute-force search over all suballocations that are free or can become lost.
        bool found = false;
        VmaAllocationRequest tmpAllocRequest = {};
        tmpAllocRequest.type = VmaAllocationRequestType::Normal;
        for(VmaSuballocationList::iterator suballocIt = m_Suballocations.begin();
            suballocIt != m_Suballocations.end();
            ++suballocIt)
        {
            if(suballocIt->type == VMA_SUBALLOCATION_TYPE_FREE ||
                suballocIt->hAllocation->CanBecomeLost())
            {
                if(CheckAllocation(
                    currentFrameIndex,
                    frameInUseCount,
                    bufferImageGranularity,
                    allocSize,
                    allocAlignment,
                    allocType,
                    suballocIt,
                    canMakeOtherLost,
                    &tmpAllocRequest.offset,
                    &tmpAllocRequest.itemsToMakeLostCount,
                    &tmpAllocRequest.sumFreeSize,
                    &tmpAllocRequest.sumItemSize))
                {
                    if(strategy == VMA_ALLOCATION_CREATE_STRATEGY_FIRST_FIT_BIT)
                    {
                        *pAllocationRequest = tmpAllocRequest;
                        pAllocationRequest->item = suballocIt;
                        break;
                    }
                    if(!found || tmpAllocRequest.CalcCost() < pAllocationRequest->CalcCost())
                    {
                        *pAllocationRequest = tmpAllocRequest;
                        pAllocationRequest->item = suballocIt;
                        found = true;
                    }
                }
            }
        }

        return found;
    }

    return false;
}

bool VmaBlockMetadata_Generic::MakeRequestedAllocationsLost(
    uint32_t currentFrameIndex,
    uint32_t frameInUseCount,
    VmaAllocationRequest* pAllocationRequest)
{
    VMA_ASSERT(pAllocationRequest && pAllocationRequest->type == VmaAllocationRequestType::Normal);

    while(pAllocationRequest->itemsToMakeLostCount > 0)
    {
        if(pAllocationRequest->item->type == VMA_SUBALLOCATION_TYPE_FREE)
        {
            ++pAllocationRequest->item;
        }
        VMA_ASSERT(pAllocationRequest->item != m_Suballocations.end());
        VMA_ASSERT(pAllocationRequest->item->hAllocation != VK_NULL_HANDLE);
        VMA_ASSERT(pAllocationRequest->item->hAllocation->CanBecomeLost());
        if(pAllocationRequest->item->hAllocation->MakeLost(currentFrameIndex, frameInUseCount))
        {
            pAllocationRequest->item = FreeSuballocation(pAllocationRequest->item);
            --pAllocationRequest->itemsToMakeLostCount;
        }
        else
        {
            return false;
        }
    }

    VMA_HEAVY_ASSERT(Validate());
    VMA_ASSERT(pAllocationRequest->item != m_Suballocations.end());
    VMA_ASSERT(pAllocationRequest->item->type == VMA_SUBALLOCATION_TYPE_FREE);

    return true;
}

uint32_t VmaBlockMetadata_Generic::MakeAllocationsLost(uint32_t currentFrameIndex, uint32_t frameInUseCount)
{
    uint32_t lostAllocationCount = 0;
    for(VmaSuballocationList::iterator it = m_Suballocations.begin();
        it != m_Suballocations.end();
        ++it)
    {
        if(it->type != VMA_SUBALLOCATION_TYPE_FREE &&
            it->hAllocation->CanBecomeLost() &&
            it->hAllocation->MakeLost(currentFrameIndex, frameInUseCount))
        {
            it = FreeSuballocation(it);
            ++lostAllocationCount;
        }
    }
    return lostAllocationCount;
}

VkResult VmaBlockMetadata_Generic::CheckCorruption(const void* pBlockData)
{
    for(auto& suballoc : m_Suballocations)
    {
        if(suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
        {
            if(!VmaValidateMagicValue(pBlockData, suballoc.offset - VMA_DEBUG_MARGIN))
            {
                VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED BEFORE VALIDATED ALLOCATION!");
                return VK_ERROR_VALIDATION_FAILED_EXT;
            }
            if(!VmaValidateMagicValue(pBlockData, suballoc.offset + suballoc.size))
            {
                VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER VALIDATED ALLOCATION!");
                return VK_ERROR_VALIDATION_FAILED_EXT;
            }
        }
    }

    return VK_SUCCESS;
}

void VmaBlockMetadata_Generic::Alloc(
    const VmaAllocationRequest& request,
    VmaSuballocationType type,
    VkDeviceSize allocSize,
    VmaAllocation hAllocation)
{
    VMA_ASSERT(request.type == VmaAllocationRequestType::Normal);
    VMA_ASSERT(request.item != m_Suballocations.end());
    VmaSuballocation& suballoc = *request.item;
    // Given suballocation is a free block.
    VMA_ASSERT(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);
    // Given offset is inside this suballocation.
    VMA_ASSERT(request.offset >= suballoc.offset);
    const VkDeviceSize paddingBegin = request.offset - suballoc.offset;
    VMA_ASSERT(suballoc.size >= paddingBegin + allocSize);
    const VkDeviceSize paddingEnd = suballoc.size - paddingBegin - allocSize;

    // Unregister this free suballocation from m_FreeSuballocationsBySize and update
    // it to become used.
    UnregisterFreeSuballocation(request.item);

    suballoc.offset = request.offset;
    suballoc.size = allocSize;
    suballoc.type = type;
    suballoc.hAllocation = hAllocation;

    // If there are any free bytes remaining at the end, insert a new free suballocation after the current one.
    if(paddingEnd)
    {
        VmaSuballocation paddingSuballoc = {};
        paddingSuballoc.offset = request.offset + allocSize;
        paddingSuballoc.size = paddingEnd;
        paddingSuballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
        VmaSuballocationList::iterator next = request.item;
        ++next;
        const VmaSuballocationList::iterator paddingEndItem =
            m_Suballocations.insert(next, paddingSuballoc);
        RegisterFreeSuballocation(paddingEndItem);
    }

    // If there are any free bytes remaining at the beginning, insert a new free suballocation before the current one.
    if(paddingBegin)
    {
        VmaSuballocation paddingSuballoc = {};
        paddingSuballoc.offset = request.offset - paddingBegin;
        paddingSuballoc.size = paddingBegin;
        paddingSuballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
        const VmaSuballocationList::iterator paddingBeginItem =
            m_Suballocations.insert(request.item, paddingSuballoc);
        RegisterFreeSuballocation(paddingBeginItem);
    }

    // Update totals.
    m_FreeCount = m_FreeCount - 1;
    if(paddingBegin > 0)
    {
        ++m_FreeCount;
    }
    if(paddingEnd > 0)
    {
        ++m_FreeCount;
    }
    m_SumFreeSize -= allocSize;
}

void VmaBlockMetadata_Generic::Free(const VmaAllocation allocation)
{
    for(VmaSuballocationList::iterator suballocItem = m_Suballocations.begin();
        suballocItem != m_Suballocations.end();
        ++suballocItem)
    {
        VmaSuballocation& suballoc = *suballocItem;
        if(suballoc.hAllocation == allocation)
        {
            FreeSuballocation(suballocItem);
            VMA_HEAVY_ASSERT(Validate());
            return;
        }
    }
    VMA_ASSERT(0 && "Not found!");
}

void VmaBlockMetadata_Generic::FreeAtOffset(VkDeviceSize offset)
{
    for(VmaSuballocationList::iterator suballocItem = m_Suballocations.begin();
        suballocItem != m_Suballocations.end();
        ++suballocItem)
    {
        VmaSuballocation& suballoc = *suballocItem;
        if(suballoc.offset == offset)
        {
            FreeSuballocation(suballocItem);
            return;
        }
    }
    VMA_ASSERT(0 && "Not found!");
}

bool VmaBlockMetadata_Generic::ValidateFreeSuballocationList() const
{
    VkDeviceSize lastSize = 0;
    for(size_t i = 0, count = m_FreeSuballocationsBySize.size(); i < count; ++i)
    {
        const VmaSuballocationList::iterator it = m_FreeSuballocationsBySize[i];

        VMA_VALIDATE(it->type == VMA_SUBALLOCATION_TYPE_FREE);
        VMA_VALIDATE(it->size >= VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER);
        VMA_VALIDATE(it->size >= lastSize);
        lastSize = it->size;
    }
    return true;
}

bool VmaBlockMetadata_Generic::CheckAllocation(
    uint32_t currentFrameIndex,
    uint32_t frameInUseCount,
    VkDeviceSize bufferImageGranularity,
    VkDeviceSize allocSize,
    VkDeviceSize allocAlignment,
    VmaSuballocationType allocType,
    VmaSuballocationList::const_iterator suballocItem,
    bool canMakeOtherLost,
    VkDeviceSize* pOffset,
    size_t* itemsToMakeLostCount,
    VkDeviceSize* pSumFreeSize,
    VkDeviceSize* pSumItemSize) const
{
    VMA_ASSERT(allocSize > 0);
    VMA_ASSERT(allocType != VMA_SUBALLOCATION_TYPE_FREE);
    VMA_ASSERT(suballocItem != m_Suballocations.cend());
    VMA_ASSERT(pOffset != VMA_NULL);

    *itemsToMakeLostCount = 0;
    *pSumFreeSize = 0;
    *pSumItemSize = 0;

    if(canMakeOtherLost)
    {
        if(suballocItem->type == VMA_SUBALLOCATION_TYPE_FREE)
        {
            *pSumFreeSize = suballocItem->size;
        }
        else
        {
            if(suballocItem->hAllocation->CanBecomeLost() &&
                suballocItem->hAllocation->GetLastUseFrameIndex() + frameInUseCount < currentFrameIndex)
            {
                ++*itemsToMakeLostCount;
                *pSumItemSize = suballocItem->size;
            }
            else
            {
                return false;
            }
        }

        // Remaining size is too small for this request: Early return.
        if(GetSize() - suballocItem->offset < allocSize)
        {
            return false;
        }

        // Start from offset equal to beginning of this suballocation.
        *pOffset = suballocItem->offset;

        // Apply VMA_DEBUG_MARGIN at the beginning.
        if(VMA_DEBUG_MARGIN > 0)
        {
            *pOffset += VMA_DEBUG_MARGIN;
        }

        // Apply alignment.
        *pOffset = VmaAlignUp(*pOffset, allocAlignment);

        // Check previous suballocations for BufferImageGranularity conflicts.
        // Increase alignment if necessary.
        if(bufferImageGranularity > 1 && bufferImageGranularity != allocAlignment)
        {
            bool bufferImageGranularityConflict = false;
            VmaSuballocationList::const_iterator prevSuballocItem = suballocItem;
            while(prevSuballocItem != m_Suballocations.cbegin())
            {
                --prevSuballocItem;
                const VmaSuballocation& prevSuballoc = *prevSuballocItem;
                if(VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, *pOffset, bufferImageGranularity))
                {
                    if(VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
                    {
                        bufferImageGranularityConflict = true;
                        break;
                    }
                }
                else
                {
                    // Already on previous page.
                    break;
                }
            }
            if(bufferImageGranularityConflict)
            {
                *pOffset = VmaAlignUp(*pOffset, bufferImageGranularity);
            }
        }

        // If the final offset is past the end of this suballocation, this function
        // should be called again with another suballocation as the starting point.
        if(*pOffset >= suballocItem->offset + suballocItem->size)
        {
            return false;
        }

        // Calculate padding at the beginning based on current offset.
        const VkDeviceSize paddingBegin = *pOffset - suballocItem->offset;

        // Calculate required margin at the end.
        const VkDeviceSize requiredEndMargin = VMA_DEBUG_MARGIN;

        const VkDeviceSize totalSize = paddingBegin + allocSize + requiredEndMargin;
        // Another early return check.
        if(suballocItem->offset + totalSize > GetSize())
        {
            return false;
        }

        // Advance lastSuballocItem until desired size is reached.
        // Update itemsToMakeLostCount.
        VmaSuballocationList::const_iterator lastSuballocItem = suballocItem;
        if(totalSize > suballocItem->size)
        {
            VkDeviceSize remainingSize = totalSize - suballocItem->size;
            while(remainingSize > 0)
            {
                ++lastSuballocItem;
                if(lastSuballocItem == m_Suballocations.cend())
                {
                    return false;
                }
                if(lastSuballocItem->type == VMA_SUBALLOCATION_TYPE_FREE)
                {
                    *pSumFreeSize += lastSuballocItem->size;
                }
                else
                {
                    VMA_ASSERT(lastSuballocItem->hAllocation != VK_NULL_HANDLE);
                    if(lastSuballocItem->hAllocation->CanBecomeLost() &&
                        lastSuballocItem->hAllocation->GetLastUseFrameIndex() + frameInUseCount < currentFrameIndex)
                    {
                        ++*itemsToMakeLostCount;
                        *pSumItemSize += lastSuballocItem->size;
                    }
                    else
                    {
                        return false;
                    }
                }
                remainingSize = (lastSuballocItem->size < remainingSize) ?
                    remainingSize - lastSuballocItem->size : 0;
            }
        }

        // Check next suballocations for BufferImageGranularity conflicts.
        // If a conflict exists, more allocations must be marked lost, or the request fails.
        if(allocSize % bufferImageGranularity || *pOffset % bufferImageGranularity)
        {
            VmaSuballocationList::const_iterator nextSuballocItem = lastSuballocItem;
            ++nextSuballocItem;
            while(nextSuballocItem != m_Suballocations.cend())
            {
                const VmaSuballocation& nextSuballoc = *nextSuballocItem;
                if(VmaBlocksOnSamePage(*pOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
                {
                    if(VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
                    {
                        VMA_ASSERT(nextSuballoc.hAllocation != VK_NULL_HANDLE);
                        if(nextSuballoc.hAllocation->CanBecomeLost() &&
                            nextSuballoc.hAllocation->GetLastUseFrameIndex() + frameInUseCount < currentFrameIndex)
                        {
                            ++*itemsToMakeLostCount;
                        }
                        else
                        {
                            return false;
                        }
                    }
                }
                else
                {
                    // Already on next page.
                    break;
                }
                ++nextSuballocItem;
            }
        }
    }
    else
    {
        const VmaSuballocation& suballoc = *suballocItem;
        VMA_ASSERT(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);

        *pSumFreeSize = suballoc.size;

        // Size of this suballocation is too small for this request: Early return.
        if(suballoc.size < allocSize)
        {
            return false;
        }

        // Start from offset equal to beginning of this suballocation.
        *pOffset = suballoc.offset;

        // Apply VMA_DEBUG_MARGIN at the beginning.
        if(VMA_DEBUG_MARGIN > 0)
        {
            *pOffset += VMA_DEBUG_MARGIN;
        }

        // Apply alignment.
        *pOffset = VmaAlignUp(*pOffset, allocAlignment);

        // Check previous suballocations for BufferImageGranularity conflicts.
        // Increase alignment if necessary.
        if(bufferImageGranularity > 1 && bufferImageGranularity != allocAlignment)
        {
            bool bufferImageGranularityConflict = false;
            VmaSuballocationList::const_iterator prevSuballocItem = suballocItem;
            while(prevSuballocItem != m_Suballocations.cbegin())
            {
                --prevSuballocItem;
                const VmaSuballocation& prevSuballoc = *prevSuballocItem;
                if(VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, *pOffset, bufferImageGranularity))
                {
                    if(VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
                    {
                        bufferImageGranularityConflict = true;
                        break;
                    }
                }
                else
                {
                    // Already on previous page.
                    break;
                }
            }
            if(bufferImageGranularityConflict)
            {
                *pOffset = VmaAlignUp(*pOffset, bufferImageGranularity);
            }
        }

        // Calculate padding at the beginning based on current offset.
        const VkDeviceSize paddingBegin = *pOffset - suballoc.offset;

        // Calculate required margin at the end.
        const VkDeviceSize requiredEndMargin = VMA_DEBUG_MARGIN;

        // Fail if requested size plus margin before and after is bigger than size of this suballocation.
        if(paddingBegin + allocSize + requiredEndMargin > suballoc.size)
        {
            return false;
        }

        // Check next suballocations for BufferImageGranularity conflicts.
        // If a conflict exists, the allocation cannot be made here.
        if(allocSize % bufferImageGranularity || *pOffset % bufferImageGranularity)
        {
            VmaSuballocationList::const_iterator nextSuballocItem = suballocItem;
            ++nextSuballocItem;
            while(nextSuballocItem != m_Suballocations.cend())
            {
                const VmaSuballocation& nextSuballoc = *nextSuballocItem;
                if(VmaBlocksOnSamePage(*pOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
                {
                    if(VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
                    {
                        return false;
                    }
                }
                else
                {
                    // Already on next page.
                    break;
                }
                ++nextSuballocItem;
            }
        }
    }

    // All tests passed: Success. pOffset is already filled.
    return true;
}

void VmaBlockMetadata_Generic::MergeFreeWithNext(VmaSuballocationList::iterator item)
{
    VMA_ASSERT(item != m_Suballocations.end());
    VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);

    VmaSuballocationList::iterator nextItem = item;
    ++nextItem;
    VMA_ASSERT(nextItem != m_Suballocations.end());
    VMA_ASSERT(nextItem->type == VMA_SUBALLOCATION_TYPE_FREE);

    item->size += nextItem->size;
    --m_FreeCount;
    m_Suballocations.erase(nextItem);
}

VmaSuballocationList::iterator VmaBlockMetadata_Generic::FreeSuballocation(VmaSuballocationList::iterator suballocItem)
{
    // Change this suballocation to be marked as free.
    VmaSuballocation& suballoc = *suballocItem;
    suballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
    suballoc.hAllocation = VK_NULL_HANDLE;

    // Update totals.
    ++m_FreeCount;
    m_SumFreeSize += suballoc.size;

    // Merge with previous and/or next suballocation if it's also free.
    bool mergeWithNext = false;
    bool mergeWithPrev = false;

    VmaSuballocationList::iterator nextItem = suballocItem;
    ++nextItem;
    if((nextItem != m_Suballocations.end()) && (nextItem->type == VMA_SUBALLOCATION_TYPE_FREE))
    {
        mergeWithNext = true;
    }

    VmaSuballocationList::iterator prevItem = suballocItem;
    if(suballocItem != m_Suballocations.begin())
    {
        --prevItem;
        if(prevItem->type == VMA_SUBALLOCATION_TYPE_FREE)
        {
            mergeWithPrev = true;
        }
    }

    if(mergeWithNext)
    {
        UnregisterFreeSuballocation(nextItem);
        MergeFreeWithNext(suballocItem);
    }

    if(mergeWithPrev)
    {
        UnregisterFreeSuballocation(prevItem);
        MergeFreeWithNext(prevItem);
        RegisterFreeSuballocation(prevItem);
        return prevItem;
    }
    else
    {
        RegisterFreeSuballocation(suballocItem);
        return suballocItem;
    }
}

void VmaBlockMetadata_Generic::RegisterFreeSuballocation(VmaSuballocationList::iterator item)
{
    VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);
    VMA_ASSERT(item->size > 0);

    // You may want to enable this validation at the beginning or at the end of
    // this function, depending on what you want to check.
    VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());

    if(item->size >= VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
    {
        if(m_FreeSuballocationsBySize.empty())
        {
            m_FreeSuballocationsBySize.push_back(item);
        }
        else
        {
            VmaVectorInsertSorted<VmaSuballocationItemSizeLess>(m_FreeSuballocationsBySize, item);
        }
    }
}
+ 9563 for(
const auto& suballoc : m_Suballocations)
+
+ 9565 if(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE)
+
+ 9567 PrintDetailedMap_UnusedRange(json, suballoc.offset, suballoc.size);
+
+
+
+ 9571 PrintDetailedMap_Allocation(json, suballoc.offset, suballoc.hAllocation);
+
+
+
+ 9575 PrintDetailedMap_End(json);
+
+
+
+
+ 9580 bool VmaBlockMetadata_Generic::CreateAllocationRequest(
+ 9581 uint32_t currentFrameIndex,
+ 9582 uint32_t frameInUseCount,
+ 9583 VkDeviceSize bufferImageGranularity,
+ 9584 VkDeviceSize allocSize,
+ 9585 VkDeviceSize allocAlignment,
+
+ 9587 VmaSuballocationType allocType,
+ 9588 bool canMakeOtherLost,
+
+ 9590 VmaAllocationRequest* pAllocationRequest)
+
+ 9592 VMA_ASSERT(allocSize > 0);
+ 9593 VMA_ASSERT(!upperAddress);
+ 9594 VMA_ASSERT(allocType != VMA_SUBALLOCATION_TYPE_FREE);
+ 9595 VMA_ASSERT(pAllocationRequest != VMA_NULL);
+ 9596 VMA_HEAVY_ASSERT(Validate());
+
+ 9598 pAllocationRequest->type = VmaAllocationRequestType::Normal;
+
+
+ 9601 if(canMakeOtherLost ==
false &&
+ 9602 m_SumFreeSize < allocSize + 2 * VMA_DEBUG_MARGIN)
+
+
+
+
+
+ 9608 const size_t freeSuballocCount = m_FreeSuballocationsBySize.size();
+ 9609 if(freeSuballocCount > 0)
+
+
+
+
+ 9614 VmaSuballocationList::iterator*
const it = VmaBinaryFindFirstNotLess(
+ 9615 m_FreeSuballocationsBySize.data(),
+ 9616 m_FreeSuballocationsBySize.data() + freeSuballocCount,
+ 9617 allocSize + 2 * VMA_DEBUG_MARGIN,
+ 9618 VmaSuballocationItemSizeLess());
+ 9619 size_t index = it - m_FreeSuballocationsBySize.data();
+ 9620 for(; index < freeSuballocCount; ++index)
+
+
+
+
+ 9625 bufferImageGranularity,
+
+
+
+ 9629 m_FreeSuballocationsBySize[index],
+
+ 9631 &pAllocationRequest->offset,
+ 9632 &pAllocationRequest->itemsToMakeLostCount,
+ 9633 &pAllocationRequest->sumFreeSize,
+ 9634 &pAllocationRequest->sumItemSize))
+
+ 9636 pAllocationRequest->item = m_FreeSuballocationsBySize[index];
+
+
+
+
+ 9641 else if(strategy == VMA_ALLOCATION_INTERNAL_STRATEGY_MIN_OFFSET)
+
+ 9643 for(VmaSuballocationList::iterator it = m_Suballocations.begin();
+ 9644 it != m_Suballocations.end();
+
+
+ 9647 if(it->type == VMA_SUBALLOCATION_TYPE_FREE && CheckAllocation(
+
+
+ 9650 bufferImageGranularity,
+
+
+
+
+
+ 9656 &pAllocationRequest->offset,
+ 9657 &pAllocationRequest->itemsToMakeLostCount,
+ 9658 &pAllocationRequest->sumFreeSize,
+ 9659 &pAllocationRequest->sumItemSize))
+
+ 9661 pAllocationRequest->item = it;
+
+
+
+
+
+
+
+ 9669 for(
size_t index = freeSuballocCount; index--; )
+
+
+
+
+ 9674 bufferImageGranularity,
+
+
+
+ 9678 m_FreeSuballocationsBySize[index],
+
+ 9680 &pAllocationRequest->offset,
+ 9681 &pAllocationRequest->itemsToMakeLostCount,
+ 9682 &pAllocationRequest->sumFreeSize,
+ 9683 &pAllocationRequest->sumItemSize))
+
+ 9685 pAllocationRequest->item = m_FreeSuballocationsBySize[index];
+
+
+
+
+
+
+ 9692 if(canMakeOtherLost)
+
+
+
+
+ 9697 VmaAllocationRequest tmpAllocRequest = {};
+ 9698 tmpAllocRequest.type = VmaAllocationRequestType::Normal;
+ 9699 for(VmaSuballocationList::iterator suballocIt = m_Suballocations.begin();
+ 9700 suballocIt != m_Suballocations.end();
+
+
+ 9703 if(suballocIt->type == VMA_SUBALLOCATION_TYPE_FREE ||
+ 9704 suballocIt->hAllocation->CanBecomeLost())
+
+
+
+
+ 9709 bufferImageGranularity,
+
+
+
+
+
+ 9715 &tmpAllocRequest.offset,
+ 9716 &tmpAllocRequest.itemsToMakeLostCount,
+ 9717 &tmpAllocRequest.sumFreeSize,
+ 9718 &tmpAllocRequest.sumItemSize))
+
+
+
+ 9722 *pAllocationRequest = tmpAllocRequest;
+ 9723 pAllocationRequest->item = suballocIt;
+
+
+ 9726 if(!found || tmpAllocRequest.CalcCost() < pAllocationRequest->CalcCost())
+
+ 9728 *pAllocationRequest = tmpAllocRequest;
+ 9729 pAllocationRequest->item = suballocIt;
+
+
+
+
+
+
+
+
+
+
+
+
+ 9742 bool VmaBlockMetadata_Generic::MakeRequestedAllocationsLost(
+ 9743 uint32_t currentFrameIndex,
+ 9744 uint32_t frameInUseCount,
+ 9745 VmaAllocationRequest* pAllocationRequest)
+
+ 9747 VMA_ASSERT(pAllocationRequest && pAllocationRequest->type == VmaAllocationRequestType::Normal);
+
+ 9749 while(pAllocationRequest->itemsToMakeLostCount > 0)
+
+ 9751 if(pAllocationRequest->item->type == VMA_SUBALLOCATION_TYPE_FREE)
+
+ 9753 ++pAllocationRequest->item;
+
+ 9755 VMA_ASSERT(pAllocationRequest->item != m_Suballocations.end());
+ 9756 VMA_ASSERT(pAllocationRequest->item->hAllocation != VK_NULL_HANDLE);
+ 9757 VMA_ASSERT(pAllocationRequest->item->hAllocation->CanBecomeLost());
+ 9758 if(pAllocationRequest->item->hAllocation->MakeLost(currentFrameIndex, frameInUseCount))
+
+ 9760 pAllocationRequest->item = FreeSuballocation(pAllocationRequest->item);
+ 9761 --pAllocationRequest->itemsToMakeLostCount;
+
+
+
+
+
+
+
+ 9769 VMA_HEAVY_ASSERT(Validate());
+ 9770 VMA_ASSERT(pAllocationRequest->item != m_Suballocations.end());
+ 9771 VMA_ASSERT(pAllocationRequest->item->type == VMA_SUBALLOCATION_TYPE_FREE);
+
+
+
+
+ 9776 uint32_t VmaBlockMetadata_Generic::MakeAllocationsLost(uint32_t currentFrameIndex, uint32_t frameInUseCount)
+
+ 9778 uint32_t lostAllocationCount = 0;
+ 9779 for(VmaSuballocationList::iterator it = m_Suballocations.begin();
+ 9780 it != m_Suballocations.end();
+
+
+ 9783 if(it->type != VMA_SUBALLOCATION_TYPE_FREE &&
+ 9784 it->hAllocation->CanBecomeLost() &&
+ 9785 it->hAllocation->MakeLost(currentFrameIndex, frameInUseCount))
+
+ 9787 it = FreeSuballocation(it);
+ 9788 ++lostAllocationCount;
+
+
+ 9791 return lostAllocationCount;
+
+
+ 9794 VkResult VmaBlockMetadata_Generic::CheckCorruption(
const void* pBlockData)
+
+ 9796 for(
auto& suballoc : m_Suballocations)
+
+ 9798 if(suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
+
+ 9800 if(!VmaValidateMagicValue(pBlockData, suballoc.offset - VMA_DEBUG_MARGIN))
+
+ 9802 VMA_ASSERT(0 &&
"MEMORY CORRUPTION DETECTED BEFORE VALIDATED ALLOCATION!");
+ 9803 return VK_ERROR_VALIDATION_FAILED_EXT;
+
+ 9805 if(!VmaValidateMagicValue(pBlockData, suballoc.offset + suballoc.size))
+
+ 9807 VMA_ASSERT(0 &&
"MEMORY CORRUPTION DETECTED AFTER VALIDATED ALLOCATION!");
+ 9808 return VK_ERROR_VALIDATION_FAILED_EXT;
+
+
+
+
+
+
+
+ 9816 void VmaBlockMetadata_Generic::Alloc(
+ 9817 const VmaAllocationRequest& request,
+ 9818 VmaSuballocationType type,
+ 9819 VkDeviceSize allocSize,
+
+
+ 9822 VMA_ASSERT(request.type == VmaAllocationRequestType::Normal);
+ 9823 VMA_ASSERT(request.item != m_Suballocations.end());
+ 9824 VmaSuballocation& suballoc = *request.item;
+
+ 9826 VMA_ASSERT(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);
+
+ 9828 VMA_ASSERT(request.offset >= suballoc.offset);
+ 9829 const VkDeviceSize paddingBegin = request.offset - suballoc.offset;
+ 9830 VMA_ASSERT(suballoc.size >= paddingBegin + allocSize);
+ 9831 const VkDeviceSize paddingEnd = suballoc.size - paddingBegin - allocSize;
+
+
+
+ 9835 UnregisterFreeSuballocation(request.item);
+
+ 9837 suballoc.offset = request.offset;
+ 9838 suballoc.size = allocSize;
+ 9839 suballoc.type = type;
+ 9840 suballoc.hAllocation = hAllocation;
+
+
+
+
+ 9845 VmaSuballocation paddingSuballoc = {};
+ 9846 paddingSuballoc.offset = request.offset + allocSize;
+ 9847 paddingSuballoc.size = paddingEnd;
+ 9848 paddingSuballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
+ 9849 VmaSuballocationList::iterator next = request.item;
+
+ 9851 const VmaSuballocationList::iterator paddingEndItem =
+ 9852 m_Suballocations.insert(next, paddingSuballoc);
+ 9853 RegisterFreeSuballocation(paddingEndItem);
+
+
+
+
+
+ 9859 VmaSuballocation paddingSuballoc = {};
+ 9860 paddingSuballoc.offset = request.offset - paddingBegin;
+ 9861 paddingSuballoc.size = paddingBegin;
+ 9862 paddingSuballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
+ 9863 const VmaSuballocationList::iterator paddingBeginItem =
+ 9864 m_Suballocations.insert(request.item, paddingSuballoc);
+ 9865 RegisterFreeSuballocation(paddingBeginItem);
+
+
+
+ 9869 m_FreeCount = m_FreeCount - 1;
+ 9870 if(paddingBegin > 0)
+
+
+
+
+
+
+
+ 9878 m_SumFreeSize -= allocSize;
+
+
+ 9881 void VmaBlockMetadata_Generic::Free(
const VmaAllocation allocation)
+
+ 9883 for(VmaSuballocationList::iterator suballocItem = m_Suballocations.begin();
+ 9884 suballocItem != m_Suballocations.end();
+
+
+ 9887 VmaSuballocation& suballoc = *suballocItem;
+ 9888 if(suballoc.hAllocation == allocation)
+
+ 9890 FreeSuballocation(suballocItem);
+ 9891 VMA_HEAVY_ASSERT(Validate());
+
+
+
+ 9895 VMA_ASSERT(0 &&
"Not found!");
+
+
+ 9898 void VmaBlockMetadata_Generic::FreeAtOffset(VkDeviceSize offset)
+
+ 9900 for(VmaSuballocationList::iterator suballocItem = m_Suballocations.begin();
+ 9901 suballocItem != m_Suballocations.end();
+
+
+ 9904 VmaSuballocation& suballoc = *suballocItem;
+ 9905 if(suballoc.offset == offset)
+
+ 9907 FreeSuballocation(suballocItem);
+
+
+
+ 9911 VMA_ASSERT(0 &&
"Not found!");
+
+
+ 9914 bool VmaBlockMetadata_Generic::ValidateFreeSuballocationList()
const
+
+ 9916 VkDeviceSize lastSize = 0;
+ 9917 for(
size_t i = 0, count = m_FreeSuballocationsBySize.size(); i < count; ++i)
+
+ 9919 const VmaSuballocationList::iterator it = m_FreeSuballocationsBySize[i];
+
+ 9921 VMA_VALIDATE(it->type == VMA_SUBALLOCATION_TYPE_FREE);
+ 9922 VMA_VALIDATE(it->size >= VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER);
+ 9923 VMA_VALIDATE(it->size >= lastSize);
+ 9924 lastSize = it->size;
+
+
+
+
+ 9929 bool VmaBlockMetadata_Generic::CheckAllocation(
+ 9930 uint32_t currentFrameIndex,
+ 9931 uint32_t frameInUseCount,
+ 9932 VkDeviceSize bufferImageGranularity,
+ 9933 VkDeviceSize allocSize,
+ 9934 VkDeviceSize allocAlignment,
+ 9935 VmaSuballocationType allocType,
+ 9936 VmaSuballocationList::const_iterator suballocItem,
+ 9937 bool canMakeOtherLost,
+ 9938 VkDeviceSize* pOffset,
+ 9939 size_t* itemsToMakeLostCount,
+ 9940 VkDeviceSize* pSumFreeSize,
+ 9941 VkDeviceSize* pSumItemSize)
const
+
+ 9943 VMA_ASSERT(allocSize > 0);
+ 9944 VMA_ASSERT(allocType != VMA_SUBALLOCATION_TYPE_FREE);
+ 9945 VMA_ASSERT(suballocItem != m_Suballocations.cend());
+ 9946 VMA_ASSERT(pOffset != VMA_NULL);
+
+ 9948 *itemsToMakeLostCount = 0;
+
+
+
+ 9952 if(canMakeOtherLost)
+
+ 9954 if(suballocItem->type == VMA_SUBALLOCATION_TYPE_FREE)
+
+ 9956 *pSumFreeSize = suballocItem->size;
+
+
+
+ 9960 if(suballocItem->hAllocation->CanBecomeLost() &&
+ 9961 suballocItem->hAllocation->GetLastUseFrameIndex() + frameInUseCount < currentFrameIndex)
+
+ 9963 ++*itemsToMakeLostCount;
+ 9964 *pSumItemSize = suballocItem->size;
+
+
+
+
+
+
+
+
+ 9973 if(GetSize() - suballocItem->offset < allocSize)
+
+
+
+
+
+ 9979 *pOffset = suballocItem->offset;
+
+
+ 9982 if(VMA_DEBUG_MARGIN > 0)
+
+ 9984 *pOffset += VMA_DEBUG_MARGIN;
+
+
+
+ 9988 *pOffset = VmaAlignUp(*pOffset, allocAlignment);
+
+
+
+ 9992 if(bufferImageGranularity > 1 && bufferImageGranularity != allocAlignment)
+
+ 9994 bool bufferImageGranularityConflict =
false;
+ 9995 VmaSuballocationList::const_iterator prevSuballocItem = suballocItem;
+ 9996 while(prevSuballocItem != m_Suballocations.cbegin())
+
+
+ 9999 const VmaSuballocation& prevSuballoc = *prevSuballocItem;
+10000 if(VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, *pOffset, bufferImageGranularity))
+
+10002 if(VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
+
+10004 bufferImageGranularityConflict =
true;
+
+
+
+
+
+
+
+10012 if(bufferImageGranularityConflict)
+
+10014 *pOffset = VmaAlignUp(*pOffset, bufferImageGranularity);
+
+
+
+
+
+10020 if(*pOffset >= suballocItem->offset + suballocItem->size)
+
+
+
+
+
+10026 const VkDeviceSize paddingBegin = *pOffset - suballocItem->offset;
+
+
+10029 const VkDeviceSize requiredEndMargin = VMA_DEBUG_MARGIN;
+
+10031 const VkDeviceSize totalSize = paddingBegin + allocSize + requiredEndMargin;
+
+10033 if(suballocItem->offset + totalSize > GetSize())
+
+
+
+
+
+
+10040 VmaSuballocationList::const_iterator lastSuballocItem = suballocItem;
+10041 if(totalSize > suballocItem->size)
+
+10043 VkDeviceSize remainingSize = totalSize - suballocItem->size;
+10044 while(remainingSize > 0)
+
+10046 ++lastSuballocItem;
+10047 if(lastSuballocItem == m_Suballocations.cend())
+
+
+
+10051 if(lastSuballocItem->type == VMA_SUBALLOCATION_TYPE_FREE)
+
+10053 *pSumFreeSize += lastSuballocItem->size;
+
+
+
+10057 VMA_ASSERT(lastSuballocItem->hAllocation != VK_NULL_HANDLE);
+10058 if(lastSuballocItem->hAllocation->CanBecomeLost() &&
+10059 lastSuballocItem->hAllocation->GetLastUseFrameIndex() + frameInUseCount < currentFrameIndex)
+
+10061 ++*itemsToMakeLostCount;
+10062 *pSumItemSize += lastSuballocItem->size;
+
+
+
+
+
+
+10069 remainingSize = (lastSuballocItem->size < remainingSize) ?
+10070 remainingSize - lastSuballocItem->size : 0;
+
+
+
+
+
+10076 if(allocSize % bufferImageGranularity || *pOffset % bufferImageGranularity)
+
+10078 VmaSuballocationList::const_iterator nextSuballocItem = lastSuballocItem;
+10079 ++nextSuballocItem;
            while(nextSuballocItem != m_Suballocations.cend())
            {
                const VmaSuballocation& nextSuballoc = *nextSuballocItem;
                if(VmaBlocksOnSamePage(*pOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
                {
                    if(VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
                    {
                        VMA_ASSERT(nextSuballoc.hAllocation != VK_NULL_HANDLE);
                        if(nextSuballoc.hAllocation->CanBecomeLost() &&
                            nextSuballoc.hAllocation->GetLastUseFrameIndex() + frameInUseCount < currentFrameIndex)
                        {
                            ++*itemsToMakeLostCount;
                        }
                        else
                        {
                            return false;
                        }
                    }
                }
                else
                {
                    // Already on next page.
                    break;
                }
                ++nextSuballocItem;
            }
        }
    }
    else
    {
        const VmaSuballocation& suballoc = *suballocItem;
        VMA_ASSERT(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);

        *pSumFreeSize = suballoc.size;

        // Size of this suballocation is too small for this request: Early return.
        if(suballoc.size < allocSize)
        {
            return false;
        }

        // Start from offset equal to beginning of this suballocation.
        *pOffset = suballoc.offset;

        // Apply VMA_DEBUG_MARGIN at the beginning.
        if(VMA_DEBUG_MARGIN > 0)
        {
            *pOffset += VMA_DEBUG_MARGIN;
        }

        // Apply alignment.
        *pOffset = VmaAlignUp(*pOffset, allocAlignment);

        // Check previous suballocations for BufferImageGranularity conflicts.
        // Make bigger alignment if necessary.
        if(bufferImageGranularity > 1 && bufferImageGranularity != allocAlignment)
        {
            bool bufferImageGranularityConflict = false;
            VmaSuballocationList::const_iterator prevSuballocItem = suballocItem;
            while(prevSuballocItem != m_Suballocations.cbegin())
            {
                --prevSuballocItem;
                const VmaSuballocation& prevSuballoc = *prevSuballocItem;
                if(VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, *pOffset, bufferImageGranularity))
                {
                    if(VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
                    {
                        bufferImageGranularityConflict = true;
                        break;
                    }
                }
                else
                    // Already on previous page.
                    break;
            }
            if(bufferImageGranularityConflict)
            {
                *pOffset = VmaAlignUp(*pOffset, bufferImageGranularity);
            }
        }

        // Calculate padding at the beginning based on current offset.
        const VkDeviceSize paddingBegin = *pOffset - suballoc.offset;

        // Calculate required margin at the end.
        const VkDeviceSize requiredEndMargin = VMA_DEBUG_MARGIN;

        // Fail if requested size plus margin before and after is bigger than size of this suballocation.
        if(paddingBegin + allocSize + requiredEndMargin > suballoc.size)
        {
            return false;
        }

        // Check next suballocations for BufferImageGranularity conflicts.
        // If conflict exists, allocation cannot be made here.
        if(allocSize % bufferImageGranularity || *pOffset % bufferImageGranularity)
        {
            VmaSuballocationList::const_iterator nextSuballocItem = suballocItem;
            ++nextSuballocItem;
            while(nextSuballocItem != m_Suballocations.cend())
            {
                const VmaSuballocation& nextSuballoc = *nextSuballocItem;
                if(VmaBlocksOnSamePage(*pOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
                {
                    if(VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
                    {
                        return false;
                    }
                }
                else
                {
                    // Already on next page.
                    break;
                }
                ++nextSuballocItem;
            }
        }
    }

    // All tests passed: Success. pOffset is already filled.
    return true;
}
void VmaBlockMetadata_Generic::MergeFreeWithNext(VmaSuballocationList::iterator item)
{
    VMA_ASSERT(item != m_Suballocations.end());
    VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);

    VmaSuballocationList::iterator nextItem = item;
    ++nextItem;
    VMA_ASSERT(nextItem != m_Suballocations.end());
    VMA_ASSERT(nextItem->type == VMA_SUBALLOCATION_TYPE_FREE);

    item->size += nextItem->size;
    --m_FreeCount;
    m_Suballocations.erase(nextItem);
}
VmaSuballocationList::iterator VmaBlockMetadata_Generic::FreeSuballocation(VmaSuballocationList::iterator suballocItem)
{
    // Change this suballocation to be marked as free.
    VmaSuballocation& suballoc = *suballocItem;
    suballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
    suballoc.hAllocation = VK_NULL_HANDLE;

    // Update totals.
    ++m_FreeCount;
    m_SumFreeSize += suballoc.size;

    // Merge with previous and/or next suballocation if it's also free.
    bool mergeWithNext = false;
    bool mergeWithPrev = false;

    VmaSuballocationList::iterator nextItem = suballocItem;
    ++nextItem;
    if((nextItem != m_Suballocations.end()) && (nextItem->type == VMA_SUBALLOCATION_TYPE_FREE))
    {
        mergeWithNext = true;
    }

    VmaSuballocationList::iterator prevItem = suballocItem;
    if(suballocItem != m_Suballocations.begin())
    {
        --prevItem;
        if(prevItem->type == VMA_SUBALLOCATION_TYPE_FREE)
        {
            mergeWithPrev = true;
        }
    }

    if(mergeWithNext)
    {
        UnregisterFreeSuballocation(nextItem);
        MergeFreeWithNext(suballocItem);
    }

    if(mergeWithPrev)
    {
        UnregisterFreeSuballocation(prevItem);
        MergeFreeWithNext(prevItem);
        RegisterFreeSuballocation(prevItem);
        return prevItem;
    }
    else
    {
        RegisterFreeSuballocation(suballocItem);
        return suballocItem;
    }
}
void VmaBlockMetadata_Generic::RegisterFreeSuballocation(VmaSuballocationList::iterator item)
{
    VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);
    VMA_ASSERT(item->size > 0);

    VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());

    if(item->size >= VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
    {
        if(m_FreeSuballocationsBySize.empty())
        {
            m_FreeSuballocationsBySize.push_back(item);
        }
        else
        {
            VmaVectorInsertSorted<VmaSuballocationItemSizeLess>(m_FreeSuballocationsBySize, item);
        }
    }
}
void VmaBlockMetadata_Generic::UnregisterFreeSuballocation(VmaSuballocationList::iterator item)
{
    VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);
    VMA_ASSERT(item->size > 0);

    VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());

    if(item->size >= VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
    {
        VmaSuballocationList::iterator* const it = VmaBinaryFindFirstNotLess(
            m_FreeSuballocationsBySize.data(),
            m_FreeSuballocationsBySize.data() + m_FreeSuballocationsBySize.size(),
            item,
            VmaSuballocationItemSizeLess());
        for(size_t index = it - m_FreeSuballocationsBySize.data();
            index < m_FreeSuballocationsBySize.size();
            ++index)
        {
            if(m_FreeSuballocationsBySize[index] == item)
            {
                VmaVectorRemove(m_FreeSuballocationsBySize, index);
                return;
            }
            VMA_ASSERT((m_FreeSuballocationsBySize[index]->size == item->size) && "Not found.");
        }
        VMA_ASSERT(0 && "Not found.");
    }
}
bool VmaBlockMetadata_Generic::IsBufferImageGranularityConflictPossible(
    VkDeviceSize bufferImageGranularity,
    VmaSuballocationType& inOutPrevSuballocType) const
{
    if(bufferImageGranularity == 1 || IsEmpty())
    {
        return false;
    }

    VkDeviceSize minAlignment = VK_WHOLE_SIZE;
    bool typeConflictFound = false;
    for(const auto& suballoc : m_Suballocations)
    {
        const VmaSuballocationType suballocType = suballoc.type;
        if(suballocType != VMA_SUBALLOCATION_TYPE_FREE)
        {
            minAlignment = VMA_MIN(minAlignment, suballoc.hAllocation->GetAlignment());
            if(VmaIsBufferImageGranularityConflict(inOutPrevSuballocType, suballocType))
            {
                typeConflictFound = true;
            }
            inOutPrevSuballocType = suballocType;
        }
    }

    return typeConflictFound || minAlignment >= bufferImageGranularity;
}

////////////////////////////////////////////////////////////////////////////////
// class VmaBlockMetadata_Linear
VmaBlockMetadata_Linear::VmaBlockMetadata_Linear(VmaAllocator hAllocator) :
    VmaBlockMetadata(hAllocator),
    m_SumFreeSize(0),
    m_Suballocations0(VmaStlAllocator<VmaSuballocation>(hAllocator->GetAllocationCallbacks())),
    m_Suballocations1(VmaStlAllocator<VmaSuballocation>(hAllocator->GetAllocationCallbacks())),
    m_1stVectorIndex(0),
    m_2ndVectorMode(SECOND_VECTOR_EMPTY),
    m_1stNullItemsBeginCount(0),
    m_1stNullItemsMiddleCount(0),
    m_2ndNullItemsCount(0)
{
}

VmaBlockMetadata_Linear::~VmaBlockMetadata_Linear()
{
}

void VmaBlockMetadata_Linear::Init(VkDeviceSize size)
{
    VmaBlockMetadata::Init(size);
    m_SumFreeSize = size;
}
bool VmaBlockMetadata_Linear::Validate() const
{
    const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();

    VMA_VALIDATE(suballocations2nd.empty() == (m_2ndVectorMode == SECOND_VECTOR_EMPTY));
    VMA_VALIDATE(!suballocations1st.empty() ||
        suballocations2nd.empty() ||
        m_2ndVectorMode != SECOND_VECTOR_RING_BUFFER);

    if(!suballocations1st.empty())
    {
        // Null item at the beginning should be accounted into m_1stNullItemsBeginCount.
        VMA_VALIDATE(suballocations1st[m_1stNullItemsBeginCount].hAllocation != VK_NULL_HANDLE);
        // Null item at the end should be just pop_back().
        VMA_VALIDATE(suballocations1st.back().hAllocation != VK_NULL_HANDLE);
    }
    if(!suballocations2nd.empty())
    {
        // Null item at the end should be just pop_back().
        VMA_VALIDATE(suballocations2nd.back().hAllocation != VK_NULL_HANDLE);
    }

    VMA_VALIDATE(m_1stNullItemsBeginCount + m_1stNullItemsMiddleCount <= suballocations1st.size());
    VMA_VALIDATE(m_2ndNullItemsCount <= suballocations2nd.size());

    VkDeviceSize sumUsedSize = 0;
    const size_t suballoc1stCount = suballocations1st.size();
    VkDeviceSize offset = VMA_DEBUG_MARGIN;

    if(m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
    {
        const size_t suballoc2ndCount = suballocations2nd.size();
        size_t nullItem2ndCount = 0;
        for(size_t i = 0; i < suballoc2ndCount; ++i)
        {
            const VmaSuballocation& suballoc = suballocations2nd[i];
            const bool currFree = (suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);

            VMA_VALIDATE(currFree == (suballoc.hAllocation == VK_NULL_HANDLE));
            VMA_VALIDATE(suballoc.offset >= offset);

            if(!currFree)
            {
                VMA_VALIDATE(suballoc.hAllocation->GetOffset() == suballoc.offset);
                VMA_VALIDATE(suballoc.hAllocation->GetSize() == suballoc.size);
                sumUsedSize += suballoc.size;
            }
            else
            {
                ++nullItem2ndCount;
            }

            offset = suballoc.offset + suballoc.size + VMA_DEBUG_MARGIN;
        }

        VMA_VALIDATE(nullItem2ndCount == m_2ndNullItemsCount);
    }

    for(size_t i = 0; i < m_1stNullItemsBeginCount; ++i)
    {
        const VmaSuballocation& suballoc = suballocations1st[i];
        VMA_VALIDATE(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE &&
            suballoc.hAllocation == VK_NULL_HANDLE);
    }

    size_t nullItem1stCount = m_1stNullItemsBeginCount;

    for(size_t i = m_1stNullItemsBeginCount; i < suballoc1stCount; ++i)
    {
        const VmaSuballocation& suballoc = suballocations1st[i];
        const bool currFree = (suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);

        VMA_VALIDATE(currFree == (suballoc.hAllocation == VK_NULL_HANDLE));
        VMA_VALIDATE(suballoc.offset >= offset);
        VMA_VALIDATE(i >= m_1stNullItemsBeginCount || currFree);

        if(!currFree)
        {
            VMA_VALIDATE(suballoc.hAllocation->GetOffset() == suballoc.offset);
            VMA_VALIDATE(suballoc.hAllocation->GetSize() == suballoc.size);
            sumUsedSize += suballoc.size;
        }
        else
        {
            ++nullItem1stCount;
        }

        offset = suballoc.offset + suballoc.size + VMA_DEBUG_MARGIN;
    }
    VMA_VALIDATE(nullItem1stCount == m_1stNullItemsBeginCount + m_1stNullItemsMiddleCount);

    if(m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
    {
        const size_t suballoc2ndCount = suballocations2nd.size();
        size_t nullItem2ndCount = 0;
        for(size_t i = suballoc2ndCount; i--; )
        {
            const VmaSuballocation& suballoc = suballocations2nd[i];
            const bool currFree = (suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);

            VMA_VALIDATE(currFree == (suballoc.hAllocation == VK_NULL_HANDLE));
            VMA_VALIDATE(suballoc.offset >= offset);

            if(!currFree)
            {
                VMA_VALIDATE(suballoc.hAllocation->GetOffset() == suballoc.offset);
                VMA_VALIDATE(suballoc.hAllocation->GetSize() == suballoc.size);
                sumUsedSize += suballoc.size;
            }
            else
            {
                ++nullItem2ndCount;
            }

            offset = suballoc.offset + suballoc.size + VMA_DEBUG_MARGIN;
        }

        VMA_VALIDATE(nullItem2ndCount == m_2ndNullItemsCount);
    }

    VMA_VALIDATE(offset <= GetSize());
    VMA_VALIDATE(m_SumFreeSize == GetSize() - sumUsedSize);

    return true;
}
size_t VmaBlockMetadata_Linear::GetAllocationCount() const
{
    return AccessSuballocations1st().size() - (m_1stNullItemsBeginCount + m_1stNullItemsMiddleCount) +
        AccessSuballocations2nd().size() - m_2ndNullItemsCount;
}
VkDeviceSize VmaBlockMetadata_Linear::GetUnusedRangeSizeMax() const
{
    const VkDeviceSize size = GetSize();

    /*
    We don't consider gaps inside allocation vectors with freed allocations because
    they are not suitable for reuse in linear allocator. We consider only space that
    is available for new allocations.
    */
    if(IsEmpty())
    {
        return size;
    }

    const SuballocationVectorType& suballocations1st = AccessSuballocations1st();

    switch(m_2ndVectorMode)
    {
    case SECOND_VECTOR_EMPTY:
        /*
        Available space is after end of 1st, as well as before beginning of 1st (which
        would make it a ring buffer).
        */
        {
            const size_t suballocations1stCount = suballocations1st.size();
            VMA_ASSERT(suballocations1stCount > m_1stNullItemsBeginCount);
            const VmaSuballocation& firstSuballoc = suballocations1st[m_1stNullItemsBeginCount];
            const VmaSuballocation& lastSuballoc = suballocations1st[suballocations1stCount - 1];
            return VMA_MAX(
                firstSuballoc.offset,
                size - (lastSuballoc.offset + lastSuballoc.size));
        }
        break;

    case SECOND_VECTOR_RING_BUFFER:
        /*
        Available space is only between end of 2nd and beginning of 1st.
        */
        {
            const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
            const VmaSuballocation& lastSuballoc2nd = suballocations2nd.back();
            const VmaSuballocation& firstSuballoc1st = suballocations1st[m_1stNullItemsBeginCount];
            return firstSuballoc1st.offset - (lastSuballoc2nd.offset + lastSuballoc2nd.size);
        }
        break;

    case SECOND_VECTOR_DOUBLE_STACK:
        /*
        Available space is only between end of 1st and top of 2nd.
        */
        {
            const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
            const VmaSuballocation& topSuballoc2nd = suballocations2nd.back();
            const VmaSuballocation& lastSuballoc1st = suballocations1st.back();
            return topSuballoc2nd.offset - (lastSuballoc1st.offset + lastSuballoc1st.size);
        }
        break;

    default:
        VMA_ASSERT(0);
        return 0;
    }
}
void VmaBlockMetadata_Linear::CalcAllocationStatInfo(VmaStatInfo& outInfo) const
{
    const VkDeviceSize size = GetSize();
    const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
    const size_t suballoc1stCount = suballocations1st.size();
    const size_t suballoc2ndCount = suballocations2nd.size();

    outInfo.blockCount = 1;
    outInfo.allocationCount = (uint32_t)GetAllocationCount();
    outInfo.unusedRangeCount = 0;
    outInfo.usedBytes = 0;
    outInfo.allocationSizeMin = UINT64_MAX;
    outInfo.allocationSizeMax = 0;
    outInfo.unusedRangeSizeMin = UINT64_MAX;
    outInfo.unusedRangeSizeMax = 0;

    VkDeviceSize lastOffset = 0;

    if(m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
    {
        const VkDeviceSize freeSpace2ndTo1stEnd = suballocations1st[m_1stNullItemsBeginCount].offset;
        size_t nextAlloc2ndIndex = 0;
        while(lastOffset < freeSpace2ndTo1stEnd)
        {
            // Find next non-null allocation or move nextAlloc2ndIndex to the end.
            while(nextAlloc2ndIndex < suballoc2ndCount &&
                suballocations2nd[nextAlloc2ndIndex].hAllocation == VK_NULL_HANDLE)
            {
                ++nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if(nextAlloc2ndIndex < suballoc2ndCount)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // 1. Process free space before this allocation.
                if(lastOffset < suballoc.offset)
                {
                    const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
                    ++outInfo.unusedRangeCount;
                    outInfo.unusedRangeSizeMin = VMA_MIN(outInfo.unusedRangeSizeMin, unusedRangeSize);
                    outInfo.unusedRangeSizeMax = VMA_MAX(outInfo.unusedRangeSizeMax, unusedRangeSize);
                }

                // 2. Process this allocation.
                outInfo.usedBytes += suballoc.size;
                outInfo.allocationSizeMin = VMA_MIN(outInfo.allocationSizeMin, suballoc.size);
                outInfo.allocationSizeMax = VMA_MAX(outInfo.allocationSizeMax, suballoc.size);

                // 3. Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                ++nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                // There is free space from lastOffset to freeSpace2ndTo1stEnd.
                if(lastOffset < freeSpace2ndTo1stEnd)
                {
                    const VkDeviceSize unusedRangeSize = freeSpace2ndTo1stEnd - lastOffset;
                    ++outInfo.unusedRangeCount;
                    outInfo.unusedRangeSizeMin = VMA_MIN(outInfo.unusedRangeSizeMin, unusedRangeSize);
                    outInfo.unusedRangeSizeMax = VMA_MAX(outInfo.unusedRangeSizeMax, unusedRangeSize);
                }

                // End of loop.
                lastOffset = freeSpace2ndTo1stEnd;
            }
        }
    }

    size_t nextAlloc1stIndex = m_1stNullItemsBeginCount;
    const VkDeviceSize freeSpace1stTo2ndEnd =
        m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK ? suballocations2nd.back().offset : size;
    while(lastOffset < freeSpace1stTo2ndEnd)
    {
        // Find next non-null allocation or move nextAlloc1stIndex to the end.
        while(nextAlloc1stIndex < suballoc1stCount &&
            suballocations1st[nextAlloc1stIndex].hAllocation == VK_NULL_HANDLE)
        {
            ++nextAlloc1stIndex;
        }

        // Found non-null allocation.
        if(nextAlloc1stIndex < suballoc1stCount)
        {
            const VmaSuballocation& suballoc = suballocations1st[nextAlloc1stIndex];

            // 1. Process free space before this allocation.
            if(lastOffset < suballoc.offset)
            {
                const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
                ++outInfo.unusedRangeCount;
                outInfo.unusedRangeSizeMin = VMA_MIN(outInfo.unusedRangeSizeMin, unusedRangeSize);
                outInfo.unusedRangeSizeMax = VMA_MAX(outInfo.unusedRangeSizeMax, unusedRangeSize);
            }

            // 2. Process this allocation.
            outInfo.usedBytes += suballoc.size;
            outInfo.allocationSizeMin = VMA_MIN(outInfo.allocationSizeMin, suballoc.size);
            outInfo.allocationSizeMax = VMA_MAX(outInfo.allocationSizeMax, suballoc.size);

            // 3. Prepare for next iteration.
            lastOffset = suballoc.offset + suballoc.size;
            ++nextAlloc1stIndex;
        }
        // We are at the end.
        else
        {
            // There is free space from lastOffset to freeSpace1stTo2ndEnd.
            if(lastOffset < freeSpace1stTo2ndEnd)
            {
                const VkDeviceSize unusedRangeSize = freeSpace1stTo2ndEnd - lastOffset;
                ++outInfo.unusedRangeCount;
                outInfo.unusedRangeSizeMin = VMA_MIN(outInfo.unusedRangeSizeMin, unusedRangeSize);
                outInfo.unusedRangeSizeMax = VMA_MAX(outInfo.unusedRangeSizeMax, unusedRangeSize);
            }

            // End of loop.
            lastOffset = freeSpace1stTo2ndEnd;
        }
    }

    if(m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
    {
        size_t nextAlloc2ndIndex = suballocations2nd.size() - 1;
        while(lastOffset < size)
        {
            // Find next non-null allocation or move nextAlloc2ndIndex to the end.
            while(nextAlloc2ndIndex != SIZE_MAX &&
                suballocations2nd[nextAlloc2ndIndex].hAllocation == VK_NULL_HANDLE)
            {
                --nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if(nextAlloc2ndIndex != SIZE_MAX)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // 1. Process free space before this allocation.
                if(lastOffset < suballoc.offset)
                {
                    const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
                    ++outInfo.unusedRangeCount;
                    outInfo.unusedRangeSizeMin = VMA_MIN(outInfo.unusedRangeSizeMin, unusedRangeSize);
                    outInfo.unusedRangeSizeMax = VMA_MAX(outInfo.unusedRangeSizeMax, unusedRangeSize);
                }

                // 2. Process this allocation.
                outInfo.usedBytes += suballoc.size;
                outInfo.allocationSizeMin = VMA_MIN(outInfo.allocationSizeMin, suballoc.size);
                outInfo.allocationSizeMax = VMA_MAX(outInfo.allocationSizeMax, suballoc.size);

                // 3. Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                --nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                // There is free space from lastOffset to size.
                if(lastOffset < size)
                {
                    const VkDeviceSize unusedRangeSize = size - lastOffset;
                    ++outInfo.unusedRangeCount;
                    outInfo.unusedRangeSizeMin = VMA_MIN(outInfo.unusedRangeSizeMin, unusedRangeSize);
                    outInfo.unusedRangeSizeMax = VMA_MAX(outInfo.unusedRangeSizeMax, unusedRangeSize);
                }

                // End of loop.
                lastOffset = size;
            }
        }
    }

    outInfo.unusedBytes = size - outInfo.usedBytes;
}
void VmaBlockMetadata_Linear::AddPoolStats(VmaPoolStats& inoutStats) const
{
    const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
    const VkDeviceSize size = GetSize();
    const size_t suballoc1stCount = suballocations1st.size();
    const size_t suballoc2ndCount = suballocations2nd.size();

    inoutStats.size += size;

    VkDeviceSize lastOffset = 0;

    if(m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
    {
        const VkDeviceSize freeSpace2ndTo1stEnd = suballocations1st[m_1stNullItemsBeginCount].offset;
        size_t nextAlloc2ndIndex = m_1stNullItemsBeginCount;
        while(lastOffset < freeSpace2ndTo1stEnd)
        {
            // Find next non-null allocation or move nextAlloc2ndIndex to the end.
            while(nextAlloc2ndIndex < suballoc2ndCount &&
                suballocations2nd[nextAlloc2ndIndex].hAllocation == VK_NULL_HANDLE)
            {
                ++nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if(nextAlloc2ndIndex < suballoc2ndCount)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // 1. Process free space before this allocation.
                if(lastOffset < suballoc.offset)
                {
                    const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
                    inoutStats.unusedSize += unusedRangeSize;
                    ++inoutStats.unusedRangeCount;
                    inoutStats.unusedRangeSizeMax = VMA_MAX(inoutStats.unusedRangeSizeMax, unusedRangeSize);
                }

                // 2. Process this allocation.
                ++inoutStats.allocationCount;

                // 3. Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                ++nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                if(lastOffset < freeSpace2ndTo1stEnd)
                {
                    // There is free space from lastOffset to freeSpace2ndTo1stEnd.
                    const VkDeviceSize unusedRangeSize = freeSpace2ndTo1stEnd - lastOffset;
                    inoutStats.unusedSize += unusedRangeSize;
                    ++inoutStats.unusedRangeCount;
                    inoutStats.unusedRangeSizeMax = VMA_MAX(inoutStats.unusedRangeSizeMax, unusedRangeSize);
                }

                // End of loop.
                lastOffset = freeSpace2ndTo1stEnd;
            }
        }
    }

    size_t nextAlloc1stIndex = m_1stNullItemsBeginCount;
    const VkDeviceSize freeSpace1stTo2ndEnd =
        m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK ? suballocations2nd.back().offset : size;
    while(lastOffset < freeSpace1stTo2ndEnd)
    {
        // Find next non-null allocation or move nextAlloc1stIndex to the end.
        while(nextAlloc1stIndex < suballoc1stCount &&
            suballocations1st[nextAlloc1stIndex].hAllocation == VK_NULL_HANDLE)
        {
            ++nextAlloc1stIndex;
        }

        // Found non-null allocation.
        if(nextAlloc1stIndex < suballoc1stCount)
        {
            const VmaSuballocation& suballoc = suballocations1st[nextAlloc1stIndex];

            // 1. Process free space before this allocation.
            if(lastOffset < suballoc.offset)
            {
                const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
                inoutStats.unusedSize += unusedRangeSize;
                ++inoutStats.unusedRangeCount;
                inoutStats.unusedRangeSizeMax = VMA_MAX(inoutStats.unusedRangeSizeMax, unusedRangeSize);
            }

            // 2. Process this allocation.
            ++inoutStats.allocationCount;

            // 3. Prepare for next iteration.
            lastOffset = suballoc.offset + suballoc.size;
            ++nextAlloc1stIndex;
        }
        // We are at the end.
        else
        {
            if(lastOffset < freeSpace1stTo2ndEnd)
            {
                // There is free space from lastOffset to freeSpace1stTo2ndEnd.
                const VkDeviceSize unusedRangeSize = freeSpace1stTo2ndEnd - lastOffset;
                inoutStats.unusedSize += unusedRangeSize;
                ++inoutStats.unusedRangeCount;
                inoutStats.unusedRangeSizeMax = VMA_MAX(inoutStats.unusedRangeSizeMax, unusedRangeSize);
            }

            // End of loop.
            lastOffset = freeSpace1stTo2ndEnd;
        }
    }

    if(m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
    {
        size_t nextAlloc2ndIndex = suballocations2nd.size() - 1;
        while(lastOffset < size)
        {
            // Find next non-null allocation or move nextAlloc2ndIndex to the end.
            while(nextAlloc2ndIndex != SIZE_MAX &&
                suballocations2nd[nextAlloc2ndIndex].hAllocation == VK_NULL_HANDLE)
            {
                --nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if(nextAlloc2ndIndex != SIZE_MAX)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // 1. Process free space before this allocation.
                if(lastOffset < suballoc.offset)
                {
                    const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
                    inoutStats.unusedSize += unusedRangeSize;
                    ++inoutStats.unusedRangeCount;
                    inoutStats.unusedRangeSizeMax = VMA_MAX(inoutStats.unusedRangeSizeMax, unusedRangeSize);
                }

                // 2. Process this allocation.
                ++inoutStats.allocationCount;

                // 3. Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                --nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                if(lastOffset < size)
                {
                    // There is free space from lastOffset to size.
                    const VkDeviceSize unusedRangeSize = size - lastOffset;
                    inoutStats.unusedSize += unusedRangeSize;
                    ++inoutStats.unusedRangeCount;
                    inoutStats.unusedRangeSizeMax = VMA_MAX(inoutStats.unusedRangeSizeMax, unusedRangeSize);
                }

                // End of loop.
                lastOffset = size;
            }
        }
    }
}
#if VMA_STATS_STRING_ENABLED
void VmaBlockMetadata_Linear::PrintDetailedMap(class VmaJsonWriter& json) const
{
    const VkDeviceSize size = GetSize();
    const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
    const size_t suballoc1stCount = suballocations1st.size();
    const size_t suballoc2ndCount = suballocations2nd.size();

    // FIRST PASS: count allocations and unused ranges.

    size_t unusedRangeCount = 0;
    VkDeviceSize usedBytes = 0;

    VkDeviceSize lastOffset = 0;

    size_t alloc2ndCount = 0;
    if(m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
    {
        const VkDeviceSize freeSpace2ndTo1stEnd = suballocations1st[m_1stNullItemsBeginCount].offset;
        size_t nextAlloc2ndIndex = 0;
        while(lastOffset < freeSpace2ndTo1stEnd)
        {
            // Find next non-null allocation or move nextAlloc2ndIndex to the end.
            while(nextAlloc2ndIndex < suballoc2ndCount &&
                suballocations2nd[nextAlloc2ndIndex].hAllocation == VK_NULL_HANDLE)
            {
                ++nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if(nextAlloc2ndIndex < suballoc2ndCount)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // 1. Free space before this allocation.
                if(lastOffset < suballoc.offset)
                {
                    ++unusedRangeCount;
                }

                // 2. This allocation.
                ++alloc2ndCount;
                usedBytes += suballoc.size;

                // 3. Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                ++nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                if(lastOffset < freeSpace2ndTo1stEnd)
                {
                    ++unusedRangeCount;
                }

                // End of loop.
                lastOffset = freeSpace2ndTo1stEnd;
            }
        }
    }

    size_t nextAlloc1stIndex = m_1stNullItemsBeginCount;
    size_t alloc1stCount = 0;
    const VkDeviceSize freeSpace1stTo2ndEnd =
        m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK ? suballocations2nd.back().offset : size;
    while(lastOffset < freeSpace1stTo2ndEnd)
    {
        // Find next non-null allocation or move nextAlloc1stIndex to the end.
        while(nextAlloc1stIndex < suballoc1stCount &&
            suballocations1st[nextAlloc1stIndex].hAllocation == VK_NULL_HANDLE)
        {
            ++nextAlloc1stIndex;
        }

        // Found non-null allocation.
        if(nextAlloc1stIndex < suballoc1stCount)
        {
            const VmaSuballocation& suballoc = suballocations1st[nextAlloc1stIndex];

            // 1. Free space before this allocation.
            if(lastOffset < suballoc.offset)
            {
                ++unusedRangeCount;
            }

            // 2. This allocation.
            ++alloc1stCount;
            usedBytes += suballoc.size;

            // 3. Prepare for next iteration.
            lastOffset = suballoc.offset + suballoc.size;
            ++nextAlloc1stIndex;
        }
        // We are at the end.
        else
        {
            if(lastOffset < size)
            {
                ++unusedRangeCount;
            }

            // End of loop.
            lastOffset = freeSpace1stTo2ndEnd;
        }
    }

    if(m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
    {
        size_t nextAlloc2ndIndex = suballocations2nd.size() - 1;
        while(lastOffset < size)
        {
            // Find next non-null allocation or move nextAlloc2ndIndex to the end.
            while(nextAlloc2ndIndex != SIZE_MAX &&
                suballocations2nd[nextAlloc2ndIndex].hAllocation == VK_NULL_HANDLE)
            {
                --nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if(nextAlloc2ndIndex != SIZE_MAX)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // 1. Free space before this allocation.
                if(lastOffset < suballoc.offset)
                {
                    ++unusedRangeCount;
                }

                // 2. This allocation.
                ++alloc2ndCount;
                usedBytes += suballoc.size;

                // 3. Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                --nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                if(lastOffset < size)
                {
                    ++unusedRangeCount;
                }

                // End of loop.
                lastOffset = size;
            }
        }
    }

    const VkDeviceSize unusedBytes = size - usedBytes;
    PrintDetailedMap_Begin(json, unusedBytes, alloc1stCount + alloc2ndCount, unusedRangeCount);

    // SECOND PASS: write the ranges out.

    lastOffset = 0;

    if(m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
    {
        const VkDeviceSize freeSpace2ndTo1stEnd = suballocations1st[m_1stNullItemsBeginCount].offset;
        size_t nextAlloc2ndIndex = 0;
        while(lastOffset < freeSpace2ndTo1stEnd)
        {
            // Find next non-null allocation or move nextAlloc2ndIndex to the end.
            while(nextAlloc2ndIndex < suballoc2ndCount &&
                suballocations2nd[nextAlloc2ndIndex].hAllocation == VK_NULL_HANDLE)
            {
                ++nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if(nextAlloc2ndIndex < suballoc2ndCount)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // 1. Free space before this allocation.
                if(lastOffset < suballoc.offset)
                {
                    const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
                    PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
                }

                // 2. This allocation.
                PrintDetailedMap_Allocation(json, suballoc.offset, suballoc.hAllocation);

                // 3. Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                ++nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                if(lastOffset < freeSpace2ndTo1stEnd)
                {
                    const VkDeviceSize unusedRangeSize = freeSpace2ndTo1stEnd - lastOffset;
                    PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
                }

                // End of loop.
                lastOffset = freeSpace2ndTo1stEnd;
            }
        }
    }

    nextAlloc1stIndex = m_1stNullItemsBeginCount;
    while(lastOffset < freeSpace1stTo2ndEnd)
    {
        // Find next non-null allocation or move nextAlloc1stIndex to the end.
        while(nextAlloc1stIndex < suballoc1stCount &&
            suballocations1st[nextAlloc1stIndex].hAllocation == VK_NULL_HANDLE)
        {
            ++nextAlloc1stIndex;
        }

        // Found non-null allocation.
        if(nextAlloc1stIndex < suballoc1stCount)
        {
            const VmaSuballocation& suballoc = suballocations1st[nextAlloc1stIndex];

            // 1. Free space before this allocation.
            if(lastOffset < suballoc.offset)
            {
                const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
                PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
            }

            // 2. This allocation.
            PrintDetailedMap_Allocation(json, suballoc.offset, suballoc.hAllocation);

            // 3. Prepare for next iteration.
            lastOffset = suballoc.offset + suballoc.size;
            ++nextAlloc1stIndex;
        }
        // We are at the end.
        else
        {
            if(lastOffset < freeSpace1stTo2ndEnd)
            {
                const VkDeviceSize unusedRangeSize = freeSpace1stTo2ndEnd - lastOffset;
                PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
            }

            // End of loop.
            lastOffset = freeSpace1stTo2ndEnd;
        }
    }

    if(m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
    {
        size_t nextAlloc2ndIndex = suballocations2nd.size() - 1;
        while(lastOffset < size)
        {
            // Find next non-null allocation or move nextAlloc2ndIndex to the end.
            while(nextAlloc2ndIndex != SIZE_MAX &&
                suballocations2nd[nextAlloc2ndIndex].hAllocation == VK_NULL_HANDLE)
            {
                --nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if(nextAlloc2ndIndex != SIZE_MAX)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // 1. Free space before this allocation.
                if(lastOffset < suballoc.offset)
                {
                    const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
                    PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
                }

                // 2. This allocation.
                PrintDetailedMap_Allocation(json, suballoc.offset, suballoc.hAllocation);

                // 3. Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                --nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                if(lastOffset < size)
                {
                    const VkDeviceSize unusedRangeSize = size - lastOffset;
                    PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
                }

                // End of loop.
                lastOffset = size;
            }
        }
    }

    PrintDetailedMap_End(json);
}
#endif // #if VMA_STATS_STRING_ENABLED
bool VmaBlockMetadata_Linear::CreateAllocationRequest(
    uint32_t currentFrameIndex,
    uint32_t frameInUseCount,
    VkDeviceSize bufferImageGranularity,
    VkDeviceSize allocSize,
    VkDeviceSize allocAlignment,
    bool upperAddress,
    VmaSuballocationType allocType,
    bool canMakeOtherLost,
    uint32_t strategy,
    VmaAllocationRequest* pAllocationRequest)
{
    VMA_ASSERT(allocSize > 0);
    VMA_ASSERT(allocType != VMA_SUBALLOCATION_TYPE_FREE);
    VMA_ASSERT(pAllocationRequest != VMA_NULL);
    VMA_HEAVY_ASSERT(Validate());
    return upperAddress ?
        CreateAllocationRequest_UpperAddress(
            currentFrameIndex, frameInUseCount, bufferImageGranularity,
            allocSize, allocAlignment, allocType, canMakeOtherLost, strategy, pAllocationRequest) :
        CreateAllocationRequest_LowerAddress(
            currentFrameIndex, frameInUseCount, bufferImageGranularity,
            allocSize, allocAlignment, allocType, canMakeOtherLost, strategy, pAllocationRequest);
}
bool VmaBlockMetadata_Linear::CreateAllocationRequest_UpperAddress(
    uint32_t currentFrameIndex,
    uint32_t frameInUseCount,
    VkDeviceSize bufferImageGranularity,
    VkDeviceSize allocSize,
    VkDeviceSize allocAlignment,
    VmaSuballocationType allocType,
    bool canMakeOtherLost,
    uint32_t strategy,
    VmaAllocationRequest* pAllocationRequest)
{
    const VkDeviceSize size = GetSize();
    SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();

    if(m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
    {
        VMA_ASSERT(0 && "Trying to use pool with linear algorithm as double stack, while it is already being used as ring buffer.");
        return false;
    }

    // Try to allocate before 2nd.back(), or end of block if 2nd.empty().
    if(allocSize > size)
    {
        return false;
    }
    VkDeviceSize resultBaseOffset = size - allocSize;
    if(!suballocations2nd.empty())
    {
        const VmaSuballocation& lastSuballoc = suballocations2nd.back();
        resultBaseOffset = lastSuballoc.offset - allocSize;
        if(allocSize > lastSuballoc.offset)
        {
            return false;
        }
    }

    // Start from offset equal to end of free space.
    VkDeviceSize resultOffset = resultBaseOffset;

    // Apply VMA_DEBUG_MARGIN at the end.
    if(VMA_DEBUG_MARGIN > 0)
    {
        if(resultOffset < VMA_DEBUG_MARGIN)
        {
            return false;
        }
        resultOffset -= VMA_DEBUG_MARGIN;
    }

    // Apply alignment.
    resultOffset = VmaAlignDown(resultOffset, allocAlignment);

    // Check next suballocations from 2nd for BufferImageGranularity conflicts.
    // Make bigger alignment if necessary.
    if(bufferImageGranularity > 1 && bufferImageGranularity != allocAlignment && !suballocations2nd.empty())
    {
        bool bufferImageGranularityConflict = false;
        for(size_t nextSuballocIndex = suballocations2nd.size(); nextSuballocIndex--; )
        {
            const VmaSuballocation& nextSuballoc = suballocations2nd[nextSuballocIndex];
            if(VmaBlocksOnSamePage(resultOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
            {
                if(VmaIsBufferImageGranularityConflict(nextSuballoc.type, allocType))
                {
                    bufferImageGranularityConflict = true;
                    break;
                }
            }
            else
                // Already on previous page.
                break;
        }
        if(bufferImageGranularityConflict)
        {
            resultOffset = VmaAlignDown(resultOffset, bufferImageGranularity);
        }
    }

    // There is enough free space.
    const VkDeviceSize endOf1st = !suballocations1st.empty() ?
        suballocations1st.back().offset + suballocations1st.back().size :
        0;
    if(endOf1st + VMA_DEBUG_MARGIN <= resultOffset)
    {
        // Check previous suballocations for BufferImageGranularity conflicts.
        // If conflict exists, allocation cannot be made here.
        if(bufferImageGranularity > 1)
        {
            for(size_t prevSuballocIndex = suballocations1st.size(); prevSuballocIndex--; )
            {
                const VmaSuballocation& prevSuballoc = suballocations1st[prevSuballocIndex];
                if(VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, resultOffset, bufferImageGranularity))
                {
                    if(VmaIsBufferImageGranularityConflict(allocType, prevSuballoc.type))
                    {
                        return false;
                    }
                }
                else
                {
                    // Already on next page.
                    break;
                }
            }
        }

        // All tests passed: Success.
        pAllocationRequest->offset = resultOffset;
        pAllocationRequest->sumFreeSize = resultBaseOffset + allocSize - endOf1st;
        pAllocationRequest->sumItemSize = 0;
        pAllocationRequest->itemsToMakeLostCount = 0;
        pAllocationRequest->type = VmaAllocationRequestType::UpperAddress;
        return true;
    }

    return false;
}
-11407 bool VmaBlockMetadata_Linear::CreateAllocationRequest_LowerAddress(
-11408 uint32_t currentFrameIndex,
-11409 uint32_t frameInUseCount,
-11410 VkDeviceSize bufferImageGranularity,
-11411 VkDeviceSize allocSize,
-11412 VkDeviceSize allocAlignment,
-11413 VmaSuballocationType allocType,
-11414 bool canMakeOtherLost,
-
-11416 VmaAllocationRequest* pAllocationRequest)
-
-11418 const VkDeviceSize size = GetSize();
-11419 SuballocationVectorType& suballocations1st = AccessSuballocations1st();
-11420 SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
-
-11422 if(m_2ndVectorMode == SECOND_VECTOR_EMPTY || m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
-
-
-
-11426 VkDeviceSize resultBaseOffset = 0;
-11427 if(!suballocations1st.empty())
-
-11429 const VmaSuballocation& lastSuballoc = suballocations1st.back();
-11430 resultBaseOffset = lastSuballoc.offset + lastSuballoc.size;
-
-
-
-11434 VkDeviceSize resultOffset = resultBaseOffset;
-
-
-11437 if(VMA_DEBUG_MARGIN > 0)
-
-11439 resultOffset += VMA_DEBUG_MARGIN;
-
-
-
-11443 resultOffset = VmaAlignUp(resultOffset, allocAlignment);
-
-
-
-11447 if(bufferImageGranularity > 1 && bufferImageGranularity != allocAlignment && !suballocations1st.empty())
-
-11449 bool bufferImageGranularityConflict =
false;
-11450 for(
size_t prevSuballocIndex = suballocations1st.size(); prevSuballocIndex--; )
-
-11452 const VmaSuballocation& prevSuballoc = suballocations1st[prevSuballocIndex];
-11453 if(VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, resultOffset, bufferImageGranularity))
-
-11455 if(VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
-
-11457 bufferImageGranularityConflict =
true;
-
-
-
-
-
-
-
-11465 if(bufferImageGranularityConflict)
-
-11467 resultOffset = VmaAlignUp(resultOffset, bufferImageGranularity);
-
-
-
-11471 const VkDeviceSize freeSpaceEnd = m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK ?
-11472 suballocations2nd.back().offset : size;
-
-
-11475 if(resultOffset + allocSize + VMA_DEBUG_MARGIN <= freeSpaceEnd)
-
-
-
-11479 if((allocSize % bufferImageGranularity || resultOffset % bufferImageGranularity) && m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
-
-11481 for(
size_t nextSuballocIndex = suballocations2nd.size(); nextSuballocIndex--; )
-
-11483 const VmaSuballocation& nextSuballoc = suballocations2nd[nextSuballocIndex];
-11484 if(VmaBlocksOnSamePage(resultOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
-
-11486 if(VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
-
-
-
-
-
-
-
-
-
-
-
-
-
-11500 pAllocationRequest->offset = resultOffset;
-11501 pAllocationRequest->sumFreeSize = freeSpaceEnd - resultBaseOffset;
-11502 pAllocationRequest->sumItemSize = 0;
-
-11504 pAllocationRequest->type = VmaAllocationRequestType::EndOf1st;
-11505 pAllocationRequest->itemsToMakeLostCount = 0;
-
-
-
-
-
-
// Wrap-around to end of 2nd vector. Try to allocate there, watching for the
// beginning of 1st vector as the end of free space.
if(m_2ndVectorMode == SECOND_VECTOR_EMPTY || m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
{
    VMA_ASSERT(!suballocations1st.empty());

    VkDeviceSize resultBaseOffset = 0;
    if(!suballocations2nd.empty())
    {
        const VmaSuballocation& lastSuballoc = suballocations2nd.back();
        resultBaseOffset = lastSuballoc.offset + lastSuballoc.size;
    }

    // Start from offset equal to beginning of free space.
    VkDeviceSize resultOffset = resultBaseOffset;

    // Apply VMA_DEBUG_MARGIN at the beginning.
    if(VMA_DEBUG_MARGIN > 0)
    {
        resultOffset += VMA_DEBUG_MARGIN;
    }

    // Apply alignment.
    resultOffset = VmaAlignUp(resultOffset, allocAlignment);

    // Check previous suballocations for BufferImageGranularity conflicts.
    // Increase alignment if necessary.
    if(bufferImageGranularity > 1 && bufferImageGranularity != allocAlignment && !suballocations2nd.empty())
    {
        bool bufferImageGranularityConflict = false;
        for(size_t prevSuballocIndex = suballocations2nd.size(); prevSuballocIndex--; )
        {
            const VmaSuballocation& prevSuballoc = suballocations2nd[prevSuballocIndex];
            if(VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, resultOffset, bufferImageGranularity))
            {
                if(VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
                {
                    bufferImageGranularityConflict = true;
                    break;
                }
            }
            else
            {
                // Already on previous page.
                break;
            }
        }
        if(bufferImageGranularityConflict)
        {
            resultOffset = VmaAlignUp(resultOffset, bufferImageGranularity);
        }
    }

    pAllocationRequest->itemsToMakeLostCount = 0;
    pAllocationRequest->sumItemSize = 0;
    size_t index1st = m_1stNullItemsBeginCount;

    if(canMakeOtherLost)
    {
        while(index1st < suballocations1st.size() &&
            resultOffset + allocSize + VMA_DEBUG_MARGIN > suballocations1st[index1st].offset)
        {
            // Next colliding allocation at the beginning of 1st vector found. Try to make it lost.
            const VmaSuballocation& suballoc = suballocations1st[index1st];
            if(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE)
            {
                // No problem.
            }
            else
            {
                VMA_ASSERT(suballoc.hAllocation != VK_NULL_HANDLE);
                if(suballoc.hAllocation->CanBecomeLost() &&
                    suballoc.hAllocation->GetLastUseFrameIndex() + frameInUseCount < currentFrameIndex)
                {
                    ++pAllocationRequest->itemsToMakeLostCount;
                    pAllocationRequest->sumItemSize += suballoc.size;
                }
                else
                {
                    return false;
                }
            }
            ++index1st;
        }

        // Check next suballocations for BufferImageGranularity conflicts.
        // If conflict exists, we must mark more allocations lost or fail.
        if(allocSize % bufferImageGranularity || resultOffset % bufferImageGranularity)
        {
            while(index1st < suballocations1st.size())
            {
                const VmaSuballocation& suballoc = suballocations1st[index1st];
                if(VmaBlocksOnSamePage(resultOffset, allocSize, suballoc.offset, bufferImageGranularity))
                {
                    if(suballoc.hAllocation != VK_NULL_HANDLE)
                    {
                        // Not checking actual VmaIsBufferImageGranularityConflict - mark it lost anyway.
                        if(suballoc.hAllocation->CanBecomeLost() &&
                            suballoc.hAllocation->GetLastUseFrameIndex() + frameInUseCount < currentFrameIndex)
                        {
                            ++pAllocationRequest->itemsToMakeLostCount;
                            pAllocationRequest->sumItemSize += suballoc.size;
                        }
                        else
                        {
                            return false;
                        }
                    }
                }
                else
                {
                    // Already on next page.
                    break;
                }
                ++index1st;
            }
        }

        // Special case: There is not enough room at the end for this allocation,
        // even after making all from the 1st lost.
        if(index1st == suballocations1st.size() &&
            resultOffset + allocSize + VMA_DEBUG_MARGIN > size)
        {
            // TODO: This is a known, unaddressed special case.
            VMA_DEBUG_LOG("Unsupported special case in custom pool with linear allocation algorithm used as ring buffer with allocations that can be lost.");
            return false;
        }
    }

    // There is enough free space at the end after alignment.
    if((index1st == suballocations1st.size() && resultOffset + allocSize + VMA_DEBUG_MARGIN <= size) ||
        (index1st < suballocations1st.size() && resultOffset + allocSize + VMA_DEBUG_MARGIN <= suballocations1st[index1st].offset))
    {
        // Check next suballocations for BufferImageGranularity conflicts.
        // If conflict exists, allocation cannot be made here.
        if(allocSize % bufferImageGranularity || resultOffset % bufferImageGranularity)
        {
            for(size_t nextSuballocIndex = index1st;
                nextSuballocIndex < suballocations1st.size();
                nextSuballocIndex++)
            {
                const VmaSuballocation& nextSuballoc = suballocations1st[nextSuballocIndex];
                if(VmaBlocksOnSamePage(resultOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
                {
                    if(VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
                    {
                        return false;
                    }
                }
                else
                {
                    // Already on next page.
                    break;
                }
            }
        }

        // All tests passed: Success.
        pAllocationRequest->offset = resultOffset;
        pAllocationRequest->sumFreeSize =
            (index1st < suballocations1st.size() ? suballocations1st[index1st].offset : size)
            - resultBaseOffset
            - pAllocationRequest->sumItemSize;
        pAllocationRequest->type = VmaAllocationRequestType::EndOf2nd;
        // pAllocationRequest->item, customData unused.
        return true;
    }
}

return false;
}
bool VmaBlockMetadata_Linear::MakeRequestedAllocationsLost(
    uint32_t currentFrameIndex,
    uint32_t frameInUseCount,
    VmaAllocationRequest* pAllocationRequest)
{
    if(pAllocationRequest->itemsToMakeLostCount == 0)
    {
        return true;
    }

    VMA_ASSERT(m_2ndVectorMode == SECOND_VECTOR_EMPTY || m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER);

    // We always start from 1st.
    SuballocationVectorType* suballocations = &AccessSuballocations1st();
    size_t index = m_1stNullItemsBeginCount;
    size_t madeLostCount = 0;
    while(madeLostCount < pAllocationRequest->itemsToMakeLostCount)
    {
        if(index == suballocations->size())
        {
            index = 0;
            // If we get to the end of 1st, we wrap around to beginning of 2nd.
            if(m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
            {
                suballocations = &AccessSuballocations2nd();
            }
            // else: m_2ndVectorMode == SECOND_VECTOR_EMPTY:
            // suballocations continues pointing at AccessSuballocations1st().
            VMA_ASSERT(!suballocations->empty());
        }
        VmaSuballocation& suballoc = (*suballocations)[index];
        if(suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
        {
            VMA_ASSERT(suballoc.hAllocation != VK_NULL_HANDLE);
            VMA_ASSERT(suballoc.hAllocation->CanBecomeLost());
            if(suballoc.hAllocation->MakeLost(currentFrameIndex, frameInUseCount))
            {
                suballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
                suballoc.hAllocation = VK_NULL_HANDLE;
                m_SumFreeSize += suballoc.size;
                if(suballocations == &AccessSuballocations1st())
                {
                    ++m_1stNullItemsMiddleCount;
                }
                else
                {
                    ++m_2ndNullItemsCount;
                }
                ++madeLostCount;
            }
            else
            {
                return false;
            }
        }
        ++index;
    }

    CleanupAfterFree();
    //VMA_HEAVY_ASSERT(Validate()); // Already called by CleanupAfterFree().

    return true;
}
uint32_t VmaBlockMetadata_Linear::MakeAllocationsLost(uint32_t currentFrameIndex, uint32_t frameInUseCount)
{
    uint32_t lostAllocationCount = 0;

    SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    for(size_t i = m_1stNullItemsBeginCount, count = suballocations1st.size(); i < count; ++i)
    {
        VmaSuballocation& suballoc = suballocations1st[i];
        if(suballoc.type != VMA_SUBALLOCATION_TYPE_FREE &&
            suballoc.hAllocation->CanBecomeLost() &&
            suballoc.hAllocation->MakeLost(currentFrameIndex, frameInUseCount))
        {
            suballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
            suballoc.hAllocation = VK_NULL_HANDLE;
            ++m_1stNullItemsMiddleCount;
            m_SumFreeSize += suballoc.size;
            ++lostAllocationCount;
        }
    }

    SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
    for(size_t i = 0, count = suballocations2nd.size(); i < count; ++i)
    {
        VmaSuballocation& suballoc = suballocations2nd[i];
        if(suballoc.type != VMA_SUBALLOCATION_TYPE_FREE &&
            suballoc.hAllocation->CanBecomeLost() &&
            suballoc.hAllocation->MakeLost(currentFrameIndex, frameInUseCount))
        {
            suballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
            suballoc.hAllocation = VK_NULL_HANDLE;
            ++m_2ndNullItemsCount;
            m_SumFreeSize += suballoc.size;
            ++lostAllocationCount;
        }
    }

    if(lostAllocationCount)
    {
        CleanupAfterFree();
    }

    return lostAllocationCount;
}
VkResult VmaBlockMetadata_Linear::CheckCorruption(const void* pBlockData)
{
    SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    for(size_t i = m_1stNullItemsBeginCount, count = suballocations1st.size(); i < count; ++i)
    {
        const VmaSuballocation& suballoc = suballocations1st[i];
        if(suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
        {
            if(!VmaValidateMagicValue(pBlockData, suballoc.offset - VMA_DEBUG_MARGIN))
            {
                VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED BEFORE VALIDATED ALLOCATION!");
                return VK_ERROR_VALIDATION_FAILED_EXT;
            }
            if(!VmaValidateMagicValue(pBlockData, suballoc.offset + suballoc.size))
            {
                VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER VALIDATED ALLOCATION!");
                return VK_ERROR_VALIDATION_FAILED_EXT;
            }
        }
    }

    SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
    for(size_t i = 0, count = suballocations2nd.size(); i < count; ++i)
    {
        const VmaSuballocation& suballoc = suballocations2nd[i];
        if(suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
        {
            if(!VmaValidateMagicValue(pBlockData, suballoc.offset - VMA_DEBUG_MARGIN))
            {
                VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED BEFORE VALIDATED ALLOCATION!");
                return VK_ERROR_VALIDATION_FAILED_EXT;
            }
            if(!VmaValidateMagicValue(pBlockData, suballoc.offset + suballoc.size))
            {
                VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER VALIDATED ALLOCATION!");
                return VK_ERROR_VALIDATION_FAILED_EXT;
            }
        }
    }

    return VK_SUCCESS;
}
void VmaBlockMetadata_Linear::Alloc(
    const VmaAllocationRequest& request,
    VmaSuballocationType type,
    VkDeviceSize allocSize,
    VmaAllocation hAllocation)
{
    const VmaSuballocation newSuballoc = { request.offset, allocSize, hAllocation, type };

    switch(request.type)
    {
    case VmaAllocationRequestType::UpperAddress:
        {
            VMA_ASSERT(m_2ndVectorMode != SECOND_VECTOR_RING_BUFFER &&
                "CRITICAL ERROR: Trying to use linear allocator as double stack while it was already used as ring buffer.");
            SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
            suballocations2nd.push_back(newSuballoc);
            m_2ndVectorMode = SECOND_VECTOR_DOUBLE_STACK;
        }
        break;
    case VmaAllocationRequestType::EndOf1st:
        {
            SuballocationVectorType& suballocations1st = AccessSuballocations1st();

            VMA_ASSERT(suballocations1st.empty() ||
                request.offset >= suballocations1st.back().offset + suballocations1st.back().size);
            // Check if it fits before the end of the block.
            VMA_ASSERT(request.offset + allocSize <= GetSize());

            suballocations1st.push_back(newSuballoc);
        }
        break;
    case VmaAllocationRequestType::EndOf2nd:
        {
            SuballocationVectorType& suballocations1st = AccessSuballocations1st();
            // New allocation at the end of 2-part ring buffer, so before first allocation from 1st vector.
            VMA_ASSERT(!suballocations1st.empty() &&
                request.offset + allocSize <= suballocations1st[m_1stNullItemsBeginCount].offset);
            SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();

            switch(m_2ndVectorMode)
            {
            case SECOND_VECTOR_EMPTY:
                // First allocation from second part ring buffer.
                VMA_ASSERT(suballocations2nd.empty());
                m_2ndVectorMode = SECOND_VECTOR_RING_BUFFER;
                break;
            case SECOND_VECTOR_RING_BUFFER:
                // 2-part ring buffer is already started.
                VMA_ASSERT(!suballocations2nd.empty());
                break;
            case SECOND_VECTOR_DOUBLE_STACK:
                VMA_ASSERT(0 && "CRITICAL ERROR: Trying to use linear allocator as ring buffer while it was already used as double stack.");
                break;
            default:
                VMA_ASSERT(0);
            }

            suballocations2nd.push_back(newSuballoc);
        }
        break;
    default:
        VMA_ASSERT(0 && "CRITICAL INTERNAL ERROR.");
    }

    m_SumFreeSize -= newSuballoc.size;
}
void VmaBlockMetadata_Linear::Free(const VmaAllocation allocation)
{
    FreeAtOffset(allocation->GetOffset());
}
void VmaBlockMetadata_Linear::FreeAtOffset(VkDeviceSize offset)
{
    SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();

    if(!suballocations1st.empty())
    {
        // First allocation: Mark it as next empty at the beginning.
        VmaSuballocation& firstSuballoc = suballocations1st[m_1stNullItemsBeginCount];
        if(firstSuballoc.offset == offset)
        {
            firstSuballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
            firstSuballoc.hAllocation = VK_NULL_HANDLE;
            m_SumFreeSize += firstSuballoc.size;
            ++m_1stNullItemsBeginCount;
            CleanupAfterFree();
            return;
        }
    }

    // Last allocation in 2-part ring buffer or top of upper stack (same logic).
    if(m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER ||
        m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
    {
        VmaSuballocation& lastSuballoc = suballocations2nd.back();
        if(lastSuballoc.offset == offset)
        {
            m_SumFreeSize += lastSuballoc.size;
            suballocations2nd.pop_back();
            CleanupAfterFree();
            return;
        }
    }
    // Last allocation in 1st vector.
    else if(m_2ndVectorMode == SECOND_VECTOR_EMPTY)
    {
        VmaSuballocation& lastSuballoc = suballocations1st.back();
        if(lastSuballoc.offset == offset)
        {
            m_SumFreeSize += lastSuballoc.size;
            suballocations1st.pop_back();
            CleanupAfterFree();
            return;
        }
    }

    // Item from the middle of 1st vector.
    {
        VmaSuballocation refSuballoc;
        refSuballoc.offset = offset;
        // Rest of members stays uninitialized intentionally for better performance.
        SuballocationVectorType::iterator it = VmaBinaryFindSorted(
            suballocations1st.begin() + m_1stNullItemsBeginCount,
            suballocations1st.end(),
            refSuballoc,
            VmaSuballocationOffsetLess());
        if(it != suballocations1st.end())
        {
            it->type = VMA_SUBALLOCATION_TYPE_FREE;
            it->hAllocation = VK_NULL_HANDLE;
            ++m_1stNullItemsMiddleCount;
            m_SumFreeSize += it->size;
            CleanupAfterFree();
            return;
        }
    }

    if(m_2ndVectorMode != SECOND_VECTOR_EMPTY)
    {
        // Item from the middle of 2nd vector.
        VmaSuballocation refSuballoc;
        refSuballoc.offset = offset;
        // Rest of members stays uninitialized intentionally for better performance.
        SuballocationVectorType::iterator it = m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER ?
            VmaBinaryFindSorted(suballocations2nd.begin(), suballocations2nd.end(), refSuballoc, VmaSuballocationOffsetLess()) :
            VmaBinaryFindSorted(suballocations2nd.begin(), suballocations2nd.end(), refSuballoc, VmaSuballocationOffsetGreater());
        if(it != suballocations2nd.end())
        {
            it->type = VMA_SUBALLOCATION_TYPE_FREE;
            it->hAllocation = VK_NULL_HANDLE;
            ++m_2ndNullItemsCount;
            m_SumFreeSize += it->size;
            CleanupAfterFree();
            return;
        }
    }

    VMA_ASSERT(0 && "Allocation to free not found in linear allocator!");
}
bool VmaBlockMetadata_Linear::ShouldCompact1st() const
{
    const size_t nullItemCount = m_1stNullItemsBeginCount + m_1stNullItemsMiddleCount;
    const size_t suballocCount = AccessSuballocations1st().size();
    return suballocCount > 32 && nullItemCount * 2 >= (suballocCount - nullItemCount) * 3;
}
void VmaBlockMetadata_Linear::CleanupAfterFree()
{
    SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();

    if(IsEmpty())
    {
        suballocations1st.clear();
        suballocations2nd.clear();
        m_1stNullItemsBeginCount = 0;
        m_1stNullItemsMiddleCount = 0;
        m_2ndNullItemsCount = 0;
        m_2ndVectorMode = SECOND_VECTOR_EMPTY;
    }
    else
    {
        const size_t suballoc1stCount = suballocations1st.size();
        const size_t nullItem1stCount = m_1stNullItemsBeginCount + m_1stNullItemsMiddleCount;
        VMA_ASSERT(nullItem1stCount <= suballoc1stCount);

        // Find more null items at the beginning of 1st vector.
        while(m_1stNullItemsBeginCount < suballoc1stCount &&
            suballocations1st[m_1stNullItemsBeginCount].hAllocation == VK_NULL_HANDLE)
        {
            ++m_1stNullItemsBeginCount;
            --m_1stNullItemsMiddleCount;
        }

        // Find more null items at the end of 1st vector.
        while(m_1stNullItemsMiddleCount > 0 &&
            suballocations1st.back().hAllocation == VK_NULL_HANDLE)
        {
            --m_1stNullItemsMiddleCount;
            suballocations1st.pop_back();
        }

        // Find more null items at the end of 2nd vector.
        while(m_2ndNullItemsCount > 0 &&
            suballocations2nd.back().hAllocation == VK_NULL_HANDLE)
        {
            --m_2ndNullItemsCount;
            suballocations2nd.pop_back();
        }

        // Find more null items at the beginning of 2nd vector.
        while(m_2ndNullItemsCount > 0 &&
            suballocations2nd[0].hAllocation == VK_NULL_HANDLE)
        {
            --m_2ndNullItemsCount;
            VmaVectorRemove(suballocations2nd, 0);
        }

        if(ShouldCompact1st())
        {
            const size_t nonNullItemCount = suballoc1stCount - nullItem1stCount;
            size_t srcIndex = m_1stNullItemsBeginCount;
            for(size_t dstIndex = 0; dstIndex < nonNullItemCount; ++dstIndex)
            {
                while(suballocations1st[srcIndex].hAllocation == VK_NULL_HANDLE)
                {
                    ++srcIndex;
                }
                if(dstIndex != srcIndex)
                {
                    suballocations1st[dstIndex] = suballocations1st[srcIndex];
                }
                ++srcIndex;
            }
            suballocations1st.resize(nonNullItemCount);
            m_1stNullItemsBeginCount = 0;
            m_1stNullItemsMiddleCount = 0;
        }

        // 2nd vector became empty.
        if(suballocations2nd.empty())
        {
            m_2ndVectorMode = SECOND_VECTOR_EMPTY;
        }

        // 1st vector became empty.
        if(suballocations1st.size() - m_1stNullItemsBeginCount == 0)
        {
            suballocations1st.clear();
            m_1stNullItemsBeginCount = 0;

            if(!suballocations2nd.empty() && m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
            {
                // Swap 1st with 2nd. Now 2nd is empty.
                m_2ndVectorMode = SECOND_VECTOR_EMPTY;
                m_1stNullItemsMiddleCount = m_2ndNullItemsCount;
                while(m_1stNullItemsBeginCount < suballocations2nd.size() &&
                    suballocations2nd[m_1stNullItemsBeginCount].hAllocation == VK_NULL_HANDLE)
                {
                    ++m_1stNullItemsBeginCount;
                    --m_1stNullItemsMiddleCount;
                }
                m_2ndNullItemsCount = 0;
                m_1stVectorIndex ^= 1;
            }
        }
    }

    VMA_HEAVY_ASSERT(Validate());
}
void VmaBlockMetadata_Generic::UnregisterFreeSuballocation(VmaSuballocationList::iterator item)
{
    VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);
    VMA_ASSERT(item->size > 0);

    // You may want to enable this validation at the beginning or at the end of
    // this function, depending on what do you want to check.
    VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());

    if(item->size >= VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
    {
        VmaSuballocationList::iterator* const it = VmaBinaryFindFirstNotLess(
            m_FreeSuballocationsBySize.data(),
            m_FreeSuballocationsBySize.data() + m_FreeSuballocationsBySize.size(),
            item,
            VmaSuballocationItemSizeLess());
        for(size_t index = it - m_FreeSuballocationsBySize.data();
            index < m_FreeSuballocationsBySize.size();
            ++index)
        {
            if(m_FreeSuballocationsBySize[index] == item)
            {
                VmaVectorRemove(m_FreeSuballocationsBySize, index);
                return;
            }
            VMA_ASSERT((m_FreeSuballocationsBySize[index]->size == item->size) && "Not found.");
        }
        VMA_ASSERT(0 && "Not found.");
    }

    //VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
}
bool VmaBlockMetadata_Generic::IsBufferImageGranularityConflictPossible(
    VkDeviceSize bufferImageGranularity,
    VmaSuballocationType& inOutPrevSuballocType) const
{
    if(bufferImageGranularity == 1 || IsEmpty())
    {
        return false;
    }

    VkDeviceSize minAlignment = VK_WHOLE_SIZE;
    bool typeConflictFound = false;
    for(const auto& suballoc : m_Suballocations)
    {
        const VmaSuballocationType suballocType = suballoc.type;
        if(suballocType != VMA_SUBALLOCATION_TYPE_FREE)
        {
            minAlignment = VMA_MIN(minAlignment, suballoc.hAllocation->GetAlignment());
            if(VmaIsBufferImageGranularityConflict(inOutPrevSuballocType, suballocType))
            {
                typeConflictFound = true;
            }
            inOutPrevSuballocType = suballocType;
        }
    }

    return typeConflictFound || minAlignment >= bufferImageGranularity;
}
////////////////////////////////////////////////////////////////////////////////
// class VmaBlockMetadata_Linear

VmaBlockMetadata_Linear::VmaBlockMetadata_Linear(VmaAllocator hAllocator) :
    VmaBlockMetadata(hAllocator),
    m_SumFreeSize(0),
    m_Suballocations0(VmaStlAllocator<VmaSuballocation>(hAllocator->GetAllocationCallbacks())),
    m_Suballocations1(VmaStlAllocator<VmaSuballocation>(hAllocator->GetAllocationCallbacks())),
    m_1stVectorIndex(0),
    m_2ndVectorMode(SECOND_VECTOR_EMPTY),
    m_1stNullItemsBeginCount(0),
    m_1stNullItemsMiddleCount(0),
    m_2ndNullItemsCount(0)
{
}

VmaBlockMetadata_Linear::~VmaBlockMetadata_Linear()
{
}

void VmaBlockMetadata_Linear::Init(VkDeviceSize size)
{
    VmaBlockMetadata::Init(size);
    m_SumFreeSize = size;
}
bool VmaBlockMetadata_Linear::Validate() const
{
    const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();

    VMA_VALIDATE(suballocations2nd.empty() == (m_2ndVectorMode == SECOND_VECTOR_EMPTY));
    VMA_VALIDATE(!suballocations1st.empty() ||
        suballocations2nd.empty() ||
        m_2ndVectorMode != SECOND_VECTOR_RING_BUFFER);

    if(!suballocations1st.empty())
    {
        // Null item at the beginning should be accounted into m_1stNullItemsBeginCount.
        VMA_VALIDATE(suballocations1st[m_1stNullItemsBeginCount].hAllocation != VK_NULL_HANDLE);
        // Null item at the end should be just pop_back().
        VMA_VALIDATE(suballocations1st.back().hAllocation != VK_NULL_HANDLE);
    }
    if(!suballocations2nd.empty())
    {
        // Null item at the end should be just pop_back().
        VMA_VALIDATE(suballocations2nd.back().hAllocation != VK_NULL_HANDLE);
    }

    VMA_VALIDATE(m_1stNullItemsBeginCount + m_1stNullItemsMiddleCount <= suballocations1st.size());
    VMA_VALIDATE(m_2ndNullItemsCount <= suballocations2nd.size());

    VkDeviceSize sumUsedSize = 0;
    const size_t suballoc1stCount = suballocations1st.size();
    VkDeviceSize offset = VMA_DEBUG_MARGIN;

    if(m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
    {
        const size_t suballoc2ndCount = suballocations2nd.size();
        size_t nullItem2ndCount = 0;
        for(size_t i = 0; i < suballoc2ndCount; ++i)
        {
            const VmaSuballocation& suballoc = suballocations2nd[i];
            const bool currFree = (suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);

            VMA_VALIDATE(currFree == (suballoc.hAllocation == VK_NULL_HANDLE));
            VMA_VALIDATE(suballoc.offset >= offset);

            if(!currFree)
            {
                VMA_VALIDATE(suballoc.hAllocation->GetOffset() == suballoc.offset);
                VMA_VALIDATE(suballoc.hAllocation->GetSize() == suballoc.size);
                sumUsedSize += suballoc.size;
            }
            else
            {
                ++nullItem2ndCount;
            }

            offset = suballoc.offset + suballoc.size + VMA_DEBUG_MARGIN;
        }

        VMA_VALIDATE(nullItem2ndCount == m_2ndNullItemsCount);
    }

    for(size_t i = 0; i < m_1stNullItemsBeginCount; ++i)
    {
        const VmaSuballocation& suballoc = suballocations1st[i];
        VMA_VALIDATE(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE &&
            suballoc.hAllocation == VK_NULL_HANDLE);
    }

    size_t nullItem1stCount = m_1stNullItemsBeginCount;

    for(size_t i = m_1stNullItemsBeginCount; i < suballoc1stCount; ++i)
    {
        const VmaSuballocation& suballoc = suballocations1st[i];
        const bool currFree = (suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);

        VMA_VALIDATE(currFree == (suballoc.hAllocation == VK_NULL_HANDLE));
        VMA_VALIDATE(suballoc.offset >= offset);
        VMA_VALIDATE(i >= m_1stNullItemsBeginCount || currFree);

        if(!currFree)
        {
            VMA_VALIDATE(suballoc.hAllocation->GetOffset() == suballoc.offset);
            VMA_VALIDATE(suballoc.hAllocation->GetSize() == suballoc.size);
            sumUsedSize += suballoc.size;
        }
        else
        {
            ++nullItem1stCount;
        }

        offset = suballoc.offset + suballoc.size + VMA_DEBUG_MARGIN;
    }
    VMA_VALIDATE(nullItem1stCount == m_1stNullItemsBeginCount + m_1stNullItemsMiddleCount);

    if(m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
    {
        const size_t suballoc2ndCount = suballocations2nd.size();
        size_t nullItem2ndCount = 0;
        for(size_t i = suballoc2ndCount; i--; )
        {
            const VmaSuballocation& suballoc = suballocations2nd[i];
            const bool currFree = (suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);

            VMA_VALIDATE(currFree == (suballoc.hAllocation == VK_NULL_HANDLE));
            VMA_VALIDATE(suballoc.offset >= offset);

            if(!currFree)
            {
                VMA_VALIDATE(suballoc.hAllocation->GetOffset() == suballoc.offset);
                VMA_VALIDATE(suballoc.hAllocation->GetSize() == suballoc.size);
                sumUsedSize += suballoc.size;
            }
            else
            {
                ++nullItem2ndCount;
            }

            offset = suballoc.offset + suballoc.size + VMA_DEBUG_MARGIN;
        }

        VMA_VALIDATE(nullItem2ndCount == m_2ndNullItemsCount);
    }

    VMA_VALIDATE(offset <= GetSize());
    VMA_VALIDATE(m_SumFreeSize == GetSize() - sumUsedSize);

    return true;
}
size_t VmaBlockMetadata_Linear::GetAllocationCount() const
{
    return AccessSuballocations1st().size() - (m_1stNullItemsBeginCount + m_1stNullItemsMiddleCount) +
        AccessSuballocations2nd().size() - m_2ndNullItemsCount;
}
VkDeviceSize VmaBlockMetadata_Linear::GetUnusedRangeSizeMax() const
{
    const VkDeviceSize size = GetSize();

    /*
    We don't consider gaps inside allocation vectors with freed allocations because
    they are not suitable for reuse in linear allocator. We consider only space that
    is available for new allocations.
    */
    if(IsEmpty())
    {
        return size;
    }

    const SuballocationVectorType& suballocations1st = AccessSuballocations1st();

    switch(m_2ndVectorMode)
    {
    case SECOND_VECTOR_EMPTY:
        /*
        Available space is only after end of 1st, as well as before first suballocation.
        */
        {
            const size_t suballocations1stCount = suballocations1st.size();
            VMA_ASSERT(suballocations1stCount > m_1stNullItemsBeginCount);
            const VmaSuballocation& firstSuballoc = suballocations1st[m_1stNullItemsBeginCount];
            const VmaSuballocation& lastSuballoc = suballocations1st[suballocations1stCount - 1];
            return VMA_MAX(
                firstSuballoc.offset,
                size - (lastSuballoc.offset + lastSuballoc.size));
        }
        break;

    case SECOND_VECTOR_RING_BUFFER:
        /*
        Available space is only between end of 2nd and beginning of 1st.
        */
        {
            const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
            const VmaSuballocation& lastSuballoc2nd = suballocations2nd.back();
            const VmaSuballocation& firstSuballoc1st = suballocations1st[m_1stNullItemsBeginCount];
            return firstSuballoc1st.offset - (lastSuballoc2nd.offset + lastSuballoc2nd.size);
        }
        break;

    case SECOND_VECTOR_DOUBLE_STACK:
        /*
        Available space is only between end of 1st and top of 2nd.
        */
        {
            const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
            const VmaSuballocation& topSuballoc2nd = suballocations2nd.back();
            const VmaSuballocation& lastSuballoc1st = suballocations1st.back();
            return topSuballoc2nd.offset - (lastSuballoc1st.offset + lastSuballoc1st.size);
        }
        break;

    default:
        VMA_ASSERT(0);
        return 0;
    }
}
void VmaBlockMetadata_Linear::CalcAllocationStatInfo(VmaStatInfo& outInfo) const
{
    const VkDeviceSize size = GetSize();
    const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
    const size_t suballoc1stCount = suballocations1st.size();
    const size_t suballoc2ndCount = suballocations2nd.size();

    outInfo.blockCount = 1;
    outInfo.allocationCount = (uint32_t)GetAllocationCount();
    outInfo.unusedRangeCount = 0;
    outInfo.usedBytes = 0;
    outInfo.unusedBytes = 0;
    outInfo.allocationSizeMin = UINT64_MAX;
    outInfo.allocationSizeMax = 0;
    outInfo.unusedRangeSizeMin = UINT64_MAX;
    outInfo.unusedRangeSizeMax = 0;

    VkDeviceSize lastOffset = 0;

    if(m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
    {
        const VkDeviceSize freeSpace2ndTo1stEnd = suballocations1st[m_1stNullItemsBeginCount].offset;
        size_t nextAlloc2ndIndex = 0;
        while(lastOffset < freeSpace2ndTo1stEnd)
        {
            // Find next non-null allocation or move nextAlloc2ndIndex to the end.
            while(nextAlloc2ndIndex < suballoc2ndCount &&
                suballocations2nd[nextAlloc2ndIndex].hAllocation == VK_NULL_HANDLE)
            {
                ++nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if(nextAlloc2ndIndex < suballoc2ndCount)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // 1. Process free space before this allocation.
                if(lastOffset < suballoc.offset)
                {
                    // There is free space from lastOffset to suballoc.offset.
                    const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
                    ++outInfo.unusedRangeCount;
                    outInfo.unusedBytes += unusedRangeSize;
                    outInfo.unusedRangeSizeMin = VMA_MIN(outInfo.unusedRangeSizeMin, unusedRangeSize);
                    outInfo.unusedRangeSizeMax = VMA_MAX(outInfo.unusedRangeSizeMax, unusedRangeSize);
                }

                // 2. Process this allocation.
                outInfo.usedBytes += suballoc.size;
                outInfo.allocationSizeMin = VMA_MIN(outInfo.allocationSizeMin, suballoc.size);
                outInfo.allocationSizeMax = VMA_MAX(outInfo.allocationSizeMax, suballoc.size);

                // 3. Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                ++nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                if(lastOffset < freeSpace2ndTo1stEnd)
                {
                    // There is free space from lastOffset to freeSpace2ndTo1stEnd.
                    const VkDeviceSize unusedRangeSize = freeSpace2ndTo1stEnd - lastOffset;
                    ++outInfo.unusedRangeCount;
                    outInfo.unusedBytes += unusedRangeSize;
                    outInfo.unusedRangeSizeMin = VMA_MIN(outInfo.unusedRangeSizeMin, unusedRangeSize);
                    outInfo.unusedRangeSizeMax = VMA_MAX(outInfo.unusedRangeSizeMax, unusedRangeSize);
                }

                // End of loop.
                lastOffset = freeSpace2ndTo1stEnd;
            }
        }
    }

    size_t nextAlloc1stIndex = m_1stNullItemsBeginCount;
    const VkDeviceSize freeSpace1stTo2ndEnd =
        m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK ? suballocations2nd.back().offset : size;
    while(lastOffset < freeSpace1stTo2ndEnd)
    {
        // Find next non-null allocation or move nextAlloc1stIndex to the end.
        while(nextAlloc1stIndex < suballoc1stCount &&
            suballocations1st[nextAlloc1stIndex].hAllocation == VK_NULL_HANDLE)
        {
            ++nextAlloc1stIndex;
        }

        // Found non-null allocation.
        if(nextAlloc1stIndex < suballoc1stCount)
        {
            const VmaSuballocation& suballoc = suballocations1st[nextAlloc1stIndex];

            // 1. Process free space before this allocation.
            if(lastOffset < suballoc.offset)
            {
                // There is free space from lastOffset to suballoc.offset.
                const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
                ++outInfo.unusedRangeCount;
                outInfo.unusedBytes += unusedRangeSize;
                outInfo.unusedRangeSizeMin = VMA_MIN(outInfo.unusedRangeSizeMin, unusedRangeSize);
                outInfo.unusedRangeSizeMax = VMA_MAX(outInfo.unusedRangeSizeMax, unusedRangeSize);
            }

            // 2. Process this allocation.
            outInfo.usedBytes += suballoc.size;
            outInfo.allocationSizeMin = VMA_MIN(outInfo.allocationSizeMin, suballoc.size);
            outInfo.allocationSizeMax = VMA_MAX(outInfo.allocationSizeMax, suballoc.size);

            // 3. Prepare for next iteration.
            lastOffset = suballoc.offset + suballoc.size;
            ++nextAlloc1stIndex;
        }
        // We are at the end.
        else
        {
            if(lastOffset < freeSpace1stTo2ndEnd)
            {
                // There is free space from lastOffset to freeSpace1stTo2ndEnd.
                const VkDeviceSize unusedRangeSize = freeSpace1stTo2ndEnd - lastOffset;
                ++outInfo.unusedRangeCount;
                outInfo.unusedBytes += unusedRangeSize;
                outInfo.unusedRangeSizeMin = VMA_MIN(outInfo.unusedRangeSizeMin, unusedRangeSize);
                outInfo.unusedRangeSizeMax = VMA_MAX(outInfo.unusedRangeSizeMax, unusedRangeSize);
            }

            // End of loop.
            lastOffset = freeSpace1stTo2ndEnd;
        }
    }

    if(m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
    {
        size_t nextAlloc2ndIndex = suballocations2nd.size() - 1;
        while(lastOffset < size)
        {
            // Find next non-null allocation or move nextAlloc2ndIndex to the end.
            while(nextAlloc2ndIndex != SIZE_MAX &&
                suballocations2nd[nextAlloc2ndIndex].hAllocation == VK_NULL_HANDLE)
            {
                --nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if(nextAlloc2ndIndex != SIZE_MAX)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // 1. Process free space before this allocation.
                if(lastOffset < suballoc.offset)
                {
                    // There is free space from lastOffset to suballoc.offset.
                    const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
                    ++outInfo.unusedRangeCount;
                    outInfo.unusedBytes += unusedRangeSize;
                    outInfo.unusedRangeSizeMin = VMA_MIN(outInfo.unusedRangeSizeMin, unusedRangeSize);
                    outInfo.unusedRangeSizeMax = VMA_MAX(outInfo.unusedRangeSizeMax, unusedRangeSize);
                }

                // 2. Process this allocation.
                outInfo.usedBytes += suballoc.size;
                outInfo.allocationSizeMin = VMA_MIN(outInfo.allocationSizeMin, suballoc.size);
                outInfo.allocationSizeMax = VMA_MAX(outInfo.allocationSizeMax, suballoc.size);

                // 3. Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                --nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                if(lastOffset < size)
                {
                    // There is free space from lastOffset to size.
                    const VkDeviceSize unusedRangeSize = size - lastOffset;
                    ++outInfo.unusedRangeCount;
                    outInfo.unusedBytes += unusedRangeSize;
                    outInfo.unusedRangeSizeMin = VMA_MIN(outInfo.unusedRangeSizeMin, unusedRangeSize);
                    outInfo.unusedRangeSizeMax = VMA_MAX(outInfo.unusedRangeSizeMax, unusedRangeSize);
                }

                // End of loop.
                lastOffset = size;
            }
        }
    }
}
void VmaBlockMetadata_Linear::AddPoolStats(VmaPoolStats& inoutStats) const
{
    const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
    const VkDeviceSize size = GetSize();
    const size_t suballoc1stCount = suballocations1st.size();
    const size_t suballoc2ndCount = suballocations2nd.size();

    inoutStats.size += size;

    VkDeviceSize lastOffset = 0;

    if(m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
    {
        const VkDeviceSize freeSpace2ndTo1stEnd = suballocations1st[m_1stNullItemsBeginCount].offset;
        size_t nextAlloc2ndIndex = m_1stNullItemsBeginCount;
        while(lastOffset < freeSpace2ndTo1stEnd)
        {
            // Find next non-null allocation or move nextAlloc2ndIndex to the end.
            while(nextAlloc2ndIndex < suballoc2ndCount &&
                suballocations2nd[nextAlloc2ndIndex].hAllocation == VK_NULL_HANDLE)
            {
                ++nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if(nextAlloc2ndIndex < suballoc2ndCount)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // 1. Process free space before this allocation.
                if(lastOffset < suballoc.offset)
                {
                    // There is free space from lastOffset to suballoc.offset.
                    const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
                    inoutStats.unusedSize += unusedRangeSize;
                    ++inoutStats.unusedRangeCount;
                    inoutStats.unusedRangeSizeMax = VMA_MAX(inoutStats.unusedRangeSizeMax, unusedRangeSize);
                }

                // 2. Process this allocation.
                ++inoutStats.allocationCount;

                // 3. Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                ++nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                if(lastOffset < freeSpace2ndTo1stEnd)
                {
                    // There is free space from lastOffset to freeSpace2ndTo1stEnd.
                    const VkDeviceSize unusedRangeSize = freeSpace2ndTo1stEnd - lastOffset;
                    inoutStats.unusedSize += unusedRangeSize;
                    ++inoutStats.unusedRangeCount;
                    inoutStats.unusedRangeSizeMax = VMA_MAX(inoutStats.unusedRangeSizeMax, unusedRangeSize);
                }

                // End of loop.
                lastOffset = freeSpace2ndTo1stEnd;
            }
        }
    }

    size_t nextAlloc1stIndex = m_1stNullItemsBeginCount;
    const VkDeviceSize freeSpace1stTo2ndEnd =
        m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK ? suballocations2nd.back().offset : size;
    while(lastOffset < freeSpace1stTo2ndEnd)
    {
        // Find next non-null allocation or move nextAlloc1stIndex to the end.
        while(nextAlloc1stIndex < suballoc1stCount &&
            suballocations1st[nextAlloc1stIndex].hAllocation == VK_NULL_HANDLE)
        {
            ++nextAlloc1stIndex;
        }

        // Found non-null allocation.
        if(nextAlloc1stIndex < suballoc1stCount)
        {
            const VmaSuballocation& suballoc = suballocations1st[nextAlloc1stIndex];

            // 1. Process free space before this allocation.
            if(lastOffset < suballoc.offset)
            {
                // There is free space from lastOffset to suballoc.offset.
                const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
                inoutStats.unusedSize += unusedRangeSize;
                ++inoutStats.unusedRangeCount;
                inoutStats.unusedRangeSizeMax = VMA_MAX(inoutStats.unusedRangeSizeMax, unusedRangeSize);
            }

            // 2. Process this allocation.
            ++inoutStats.allocationCount;

            // 3. Prepare for next iteration.
            lastOffset = suballoc.offset + suballoc.size;
            ++nextAlloc1stIndex;
        }
        // We are at the end.
        else
        {
            if(lastOffset < freeSpace1stTo2ndEnd)
            {
                // There is free space from lastOffset to freeSpace1stTo2ndEnd.
                const VkDeviceSize unusedRangeSize = freeSpace1stTo2ndEnd - lastOffset;
                inoutStats.unusedSize += unusedRangeSize;
                ++inoutStats.unusedRangeCount;
                inoutStats.unusedRangeSizeMax = VMA_MAX(inoutStats.unusedRangeSizeMax, unusedRangeSize);
            }

            // End of loop.
            lastOffset = freeSpace1stTo2ndEnd;
        }
    }

    if(m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
    {
        size_t nextAlloc2ndIndex = suballocations2nd.size() - 1;
        while(lastOffset < size)
        {
            // Find next non-null allocation or move nextAlloc2ndIndex to the end.
            while(nextAlloc2ndIndex != SIZE_MAX &&
                suballocations2nd[nextAlloc2ndIndex].hAllocation == VK_NULL_HANDLE)
            {
                --nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if(nextAlloc2ndIndex != SIZE_MAX)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // 1. Process free space before this allocation.
                if(lastOffset < suballoc.offset)
                {
                    // There is free space from lastOffset to suballoc.offset.
                    const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
                    inoutStats.unusedSize += unusedRangeSize;
                    ++inoutStats.unusedRangeCount;
                    inoutStats.unusedRangeSizeMax = VMA_MAX(inoutStats.unusedRangeSizeMax, unusedRangeSize);
                }

                // 2. Process this allocation.
                ++inoutStats.allocationCount;

                // 3. Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                --nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                if(lastOffset < size)
                {
                    // There is free space from lastOffset to size.
                    const VkDeviceSize unusedRangeSize = size - lastOffset;
                    inoutStats.unusedSize += unusedRangeSize;
                    ++inoutStats.unusedRangeCount;
                    inoutStats.unusedRangeSizeMax = VMA_MAX(inoutStats.unusedRangeSizeMax, unusedRangeSize);
                }

                // End of loop.
                lastOffset = size;
            }
        }
    }
}
#if VMA_STATS_STRING_ENABLED
void VmaBlockMetadata_Linear::PrintDetailedMap(class VmaJsonWriter& json) const
{
    const VkDeviceSize size = GetSize();
    const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
    const size_t suballoc1stCount = suballocations1st.size();
    const size_t suballoc2ndCount = suballocations2nd.size();

    // FIRST PASS

    size_t unusedRangeCount = 0;
    VkDeviceSize usedBytes = 0;

    VkDeviceSize lastOffset = 0;

    size_t alloc2ndCount = 0;
    if(m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
    {
        const VkDeviceSize freeSpace2ndTo1stEnd = suballocations1st[m_1stNullItemsBeginCount].offset;
        size_t nextAlloc2ndIndex = 0;
        while(lastOffset < freeSpace2ndTo1stEnd)
        {
            // Find next non-null allocation or move nextAlloc2ndIndex to the end.
            while(nextAlloc2ndIndex < suballoc2ndCount &&
                suballocations2nd[nextAlloc2ndIndex].hAllocation == VK_NULL_HANDLE)
            {
                ++nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if(nextAlloc2ndIndex < suballoc2ndCount)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // 1. Process free space before this allocation.
                if(lastOffset < suballoc.offset)
                {
                    // There is free space from lastOffset to suballoc.offset.
                    ++unusedRangeCount;
                }

                // 2. Process this allocation.
                ++alloc2ndCount;
                usedBytes += suballoc.size;

                // 3. Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                ++nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                if(lastOffset < freeSpace2ndTo1stEnd)
                {
                    // There is free space from lastOffset to freeSpace2ndTo1stEnd.
                    ++unusedRangeCount;
                }

                // End of loop.
                lastOffset = freeSpace2ndTo1stEnd;
            }
        }
    }

    size_t nextAlloc1stIndex = m_1stNullItemsBeginCount;
    size_t alloc1stCount = 0;
    const VkDeviceSize freeSpace1stTo2ndEnd =
        m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK ? suballocations2nd.back().offset : size;
    while(lastOffset < freeSpace1stTo2ndEnd)
    {
        // Find next non-null allocation or move nextAlloc1stIndex to the end.
        while(nextAlloc1stIndex < suballoc1stCount &&
            suballocations1st[nextAlloc1stIndex].hAllocation == VK_NULL_HANDLE)
        {
            ++nextAlloc1stIndex;
        }

        // Found non-null allocation.
        if(nextAlloc1stIndex < suballoc1stCount)
        {
            const VmaSuballocation& suballoc = suballocations1st[nextAlloc1stIndex];

            // 1. Process free space before this allocation.
            if(lastOffset < suballoc.offset)
            {
                // There is free space from lastOffset to suballoc.offset.
                ++unusedRangeCount;
            }

            // 2. Process this allocation.
            ++alloc1stCount;
            usedBytes += suballoc.size;

            // 3. Prepare for next iteration.
            lastOffset = suballoc.offset + suballoc.size;
            ++nextAlloc1stIndex;
        }
        // We are at the end.
        else
        {
            if(lastOffset < size)
            {
                // There is free space from lastOffset to freeSpace1stTo2ndEnd.
                ++unusedRangeCount;
            }

            // End of loop.
            lastOffset = freeSpace1stTo2ndEnd;
        }
    }

    if(m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
    {
        size_t nextAlloc2ndIndex = suballocations2nd.size() - 1;
        while(lastOffset < size)
        {
            // Find next non-null allocation or move nextAlloc2ndIndex to the end.
            while(nextAlloc2ndIndex != SIZE_MAX &&
                suballocations2nd[nextAlloc2ndIndex].hAllocation == VK_NULL_HANDLE)
            {
                --nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if(nextAlloc2ndIndex != SIZE_MAX)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // 1. Process free space before this allocation.
                if(lastOffset < suballoc.offset)
                {
                    // There is free space from lastOffset to suballoc.offset.
                    ++unusedRangeCount;
                }

                // 2. Process this allocation.
                ++alloc2ndCount;
                usedBytes += suballoc.size;

                // 3. Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                --nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                if(lastOffset < size)
                {
                    // There is free space from lastOffset to size.
                    ++unusedRangeCount;
                }

                // End of loop.
                lastOffset = size;
            }
        }
    }

    const VkDeviceSize unusedBytes = size - usedBytes;
    PrintDetailedMap_Begin(json, unusedBytes, alloc1stCount + alloc2ndCount, unusedRangeCount);

    // SECOND PASS
    lastOffset = 0;

    if(m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
    {
        const VkDeviceSize freeSpace2ndTo1stEnd = suballocations1st[m_1stNullItemsBeginCount].offset;
        size_t nextAlloc2ndIndex = 0;
        while(lastOffset < freeSpace2ndTo1stEnd)
        {
            // Find next non-null allocation or move nextAlloc2ndIndex to the end.
            while(nextAlloc2ndIndex < suballoc2ndCount &&
                suballocations2nd[nextAlloc2ndIndex].hAllocation == VK_NULL_HANDLE)
            {
                ++nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if(nextAlloc2ndIndex < suballoc2ndCount)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // 1. Process free space before this allocation.
                if(lastOffset < suballoc.offset)
                {
                    // There is free space from lastOffset to suballoc.offset.
                    const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
                    PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
                }

                // 2. Process this allocation.
                PrintDetailedMap_Allocation(json, suballoc.offset, suballoc.hAllocation);
+
+
+11143 lastOffset = suballoc.offset + suballoc.size;
+11144 ++nextAlloc2ndIndex;
+
+
+
+
+11149 if(lastOffset < freeSpace2ndTo1stEnd)
+
+
+11152 const VkDeviceSize unusedRangeSize = freeSpace2ndTo1stEnd - lastOffset;
+11153 PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
+
+
+
+11157 lastOffset = freeSpace2ndTo1stEnd;
+
+
+
+
+11162 nextAlloc1stIndex = m_1stNullItemsBeginCount;
+11163 while(lastOffset < freeSpace1stTo2ndEnd)
+
+
+11166 while(nextAlloc1stIndex < suballoc1stCount &&
+11167 suballocations1st[nextAlloc1stIndex].hAllocation == VK_NULL_HANDLE)
+
+11169 ++nextAlloc1stIndex;
+
+
+
+11173 if(nextAlloc1stIndex < suballoc1stCount)
+
+11175 const VmaSuballocation& suballoc = suballocations1st[nextAlloc1stIndex];
+
+
+11178 if(lastOffset < suballoc.offset)
+
+
+11181 const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
+11182 PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
+
+
+
+
+11187 PrintDetailedMap_Allocation(json, suballoc.offset, suballoc.hAllocation);
+
+
+11190 lastOffset = suballoc.offset + suballoc.size;
+11191 ++nextAlloc1stIndex;
+
+
+
+
+11196 if(lastOffset < freeSpace1stTo2ndEnd)
+
+
+11199 const VkDeviceSize unusedRangeSize = freeSpace1stTo2ndEnd - lastOffset;
+11200 PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
+
+
+
+11204 lastOffset = freeSpace1stTo2ndEnd;
+
+
+
+11208 if(m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
+
+11210 size_t nextAlloc2ndIndex = suballocations2nd.size() - 1;
+11211 while(lastOffset < size)
+
+
+11214 while(nextAlloc2ndIndex != SIZE_MAX &&
+11215 suballocations2nd[nextAlloc2ndIndex].hAllocation == VK_NULL_HANDLE)
+
+11217 --nextAlloc2ndIndex;
+
+
+
+11221 if(nextAlloc2ndIndex != SIZE_MAX)
+
+11223 const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];
+
+
+11226 if(lastOffset < suballoc.offset)
+
+
+11229 const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
+11230 PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
+
+
+
+
+11235 PrintDetailedMap_Allocation(json, suballoc.offset, suballoc.hAllocation);
+
+
+11238 lastOffset = suballoc.offset + suballoc.size;
+11239 --nextAlloc2ndIndex;
+
+
+
+
+11244 if(lastOffset < size)
+
+
+11247 const VkDeviceSize unusedRangeSize = size - lastOffset;
+11248 PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
+
+
+
+
+
+
+
+
+11257 PrintDetailedMap_End(json);
+
+
+
+11261 bool VmaBlockMetadata_Linear::CreateAllocationRequest(
+11262 uint32_t currentFrameIndex,
+11263 uint32_t frameInUseCount,
+11264 VkDeviceSize bufferImageGranularity,
+11265 VkDeviceSize allocSize,
+11266 VkDeviceSize allocAlignment,
+
+11268 VmaSuballocationType allocType,
+11269 bool canMakeOtherLost,
+
+11271 VmaAllocationRequest* pAllocationRequest)
+
+11273 VMA_ASSERT(allocSize > 0);
+11274 VMA_ASSERT(allocType != VMA_SUBALLOCATION_TYPE_FREE);
+11275 VMA_ASSERT(pAllocationRequest != VMA_NULL);
+11276 VMA_HEAVY_ASSERT(Validate());
+11277 return upperAddress ?
+11278 CreateAllocationRequest_UpperAddress(
+11279 currentFrameIndex, frameInUseCount, bufferImageGranularity,
+11280 allocSize, allocAlignment, allocType, canMakeOtherLost, strategy, pAllocationRequest) :
+11281 CreateAllocationRequest_LowerAddress(
+11282 currentFrameIndex, frameInUseCount, bufferImageGranularity,
+11283 allocSize, allocAlignment, allocType, canMakeOtherLost, strategy, pAllocationRequest);
+
+
+11286 bool VmaBlockMetadata_Linear::CreateAllocationRequest_UpperAddress(
+11287 uint32_t currentFrameIndex,
+11288 uint32_t frameInUseCount,
+11289 VkDeviceSize bufferImageGranularity,
+11290 VkDeviceSize allocSize,
+11291 VkDeviceSize allocAlignment,
+11292 VmaSuballocationType allocType,
+11293 bool canMakeOtherLost,
+
+11295 VmaAllocationRequest* pAllocationRequest)
+
+11297 const VkDeviceSize size = GetSize();
+11298 SuballocationVectorType& suballocations1st = AccessSuballocations1st();
+11299 SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
+
+11301 if(m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
+
+11303 VMA_ASSERT(0 && "Trying to use pool with linear algorithm as double stack, while it is already being used as ring buffer.");
+
+
+
+
+11308 if(allocSize > size)
+
+
+
+11312 VkDeviceSize resultBaseOffset = size - allocSize;
+11313 if(!suballocations2nd.empty())
+
+11315 const VmaSuballocation& lastSuballoc = suballocations2nd.back();
+11316 resultBaseOffset = lastSuballoc.offset - allocSize;
+11317 if(allocSize > lastSuballoc.offset)
+
+
+
+
+
+
+11324 VkDeviceSize resultOffset = resultBaseOffset;
+
+
+11327 if(VMA_DEBUG_MARGIN > 0)
+
+11329 if(resultOffset < VMA_DEBUG_MARGIN)
+
+
+
+11333 resultOffset -= VMA_DEBUG_MARGIN;
+
+
+
+11337 resultOffset = VmaAlignDown(resultOffset, allocAlignment);
+
+
+
+11341 if(bufferImageGranularity > 1 && bufferImageGranularity != allocAlignment && !suballocations2nd.empty())
+
+11343 bool bufferImageGranularityConflict = false;
+11344 for(size_t nextSuballocIndex = suballocations2nd.size(); nextSuballocIndex--; )
+
+11346 const VmaSuballocation& nextSuballoc = suballocations2nd[nextSuballocIndex];
+11347 if(VmaBlocksOnSamePage(resultOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
+
+11349 if(VmaIsBufferImageGranularityConflict(nextSuballoc.type, allocType))
+
+11351 bufferImageGranularityConflict = true;
+
+
+
+
+
+
+
+11359 if(bufferImageGranularityConflict)
+
+11361 resultOffset = VmaAlignDown(resultOffset, bufferImageGranularity);
+
+
+
+
+11366 const VkDeviceSize endOf1st = !suballocations1st.empty() ?
+11367 suballocations1st.back().offset + suballocations1st.back().size :
+11368 0;
+11369 if(endOf1st + VMA_DEBUG_MARGIN <= resultOffset)
+
+
+
+11373 if(bufferImageGranularity > 1)
+
+11375 for(size_t prevSuballocIndex = suballocations1st.size(); prevSuballocIndex--; )
+
+11377 const VmaSuballocation& prevSuballoc = suballocations1st[prevSuballocIndex];
+11378 if(VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, resultOffset, bufferImageGranularity))
+
+11380 if(VmaIsBufferImageGranularityConflict(allocType, prevSuballoc.type))
+
+
+
+
+
+
+
+
+
+
+
+
+
+11394 pAllocationRequest->offset = resultOffset;
+11395 pAllocationRequest->sumFreeSize = resultBaseOffset + allocSize - endOf1st;
+11396 pAllocationRequest->sumItemSize = 0;
+
+11398 pAllocationRequest->itemsToMakeLostCount = 0;
+11399 pAllocationRequest->type = VmaAllocationRequestType::UpperAddress;
+
+
+
+
+
+
+11406 bool VmaBlockMetadata_Linear::CreateAllocationRequest_LowerAddress(
+11407 uint32_t currentFrameIndex,
+11408 uint32_t frameInUseCount,
+11409 VkDeviceSize bufferImageGranularity,
+11410 VkDeviceSize allocSize,
+11411 VkDeviceSize allocAlignment,
+11412 VmaSuballocationType allocType,
+11413 bool canMakeOtherLost,
+
+11415 VmaAllocationRequest* pAllocationRequest)
+
+11417 const VkDeviceSize size = GetSize();
+11418 SuballocationVectorType& suballocations1st = AccessSuballocations1st();
+11419 SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
+
+11421 if(m_2ndVectorMode == SECOND_VECTOR_EMPTY || m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
+
+
+
+11425 VkDeviceSize resultBaseOffset = 0;
+11426 if(!suballocations1st.empty())
+
+11428 const VmaSuballocation& lastSuballoc = suballocations1st.back();
+11429 resultBaseOffset = lastSuballoc.offset + lastSuballoc.size;
+
+
+
+11433 VkDeviceSize resultOffset = resultBaseOffset;
+
+
+11436 if(VMA_DEBUG_MARGIN > 0)
+
+11438 resultOffset += VMA_DEBUG_MARGIN;
+
+
+
+11442 resultOffset = VmaAlignUp(resultOffset, allocAlignment);
+
+
+
+11446 if(bufferImageGranularity > 1 && bufferImageGranularity != allocAlignment && !suballocations1st.empty())
+
+11448 bool bufferImageGranularityConflict = false;
+11449 for(size_t prevSuballocIndex = suballocations1st.size(); prevSuballocIndex--; )
+
+11451 const VmaSuballocation& prevSuballoc = suballocations1st[prevSuballocIndex];
+11452 if(VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, resultOffset, bufferImageGranularity))
+
+11454 if(VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
+
+11456 bufferImageGranularityConflict = true;
+
+
+
+
+
+
+
+11464 if(bufferImageGranularityConflict)
+
+11466 resultOffset = VmaAlignUp(resultOffset, bufferImageGranularity);
+
+
+
+11470 const VkDeviceSize freeSpaceEnd = m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK ?
+11471 suballocations2nd.back().offset : size;
+
+
+11474 if(resultOffset + allocSize + VMA_DEBUG_MARGIN <= freeSpaceEnd)
+
+
+
+11478 if((allocSize % bufferImageGranularity || resultOffset % bufferImageGranularity) && m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
+
+11480 for(size_t nextSuballocIndex = suballocations2nd.size(); nextSuballocIndex--; )
+
+11482 const VmaSuballocation& nextSuballoc = suballocations2nd[nextSuballocIndex];
+11483 if(VmaBlocksOnSamePage(resultOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
+
+11485 if(VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
+
+
+
+
+
+
+
+
+
+
+
+
+
+11499 pAllocationRequest->offset = resultOffset;
+11500 pAllocationRequest->sumFreeSize = freeSpaceEnd - resultBaseOffset;
+11501 pAllocationRequest->sumItemSize = 0;
+
+11503 pAllocationRequest->type = VmaAllocationRequestType::EndOf1st;
+11504 pAllocationRequest->itemsToMakeLostCount = 0;
+
+
+
+
+
+
+11511 if(m_2ndVectorMode == SECOND_VECTOR_EMPTY || m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
+
+11513 VMA_ASSERT(!suballocations1st.empty());
+
+11515 VkDeviceSize resultBaseOffset = 0;
+11516 if(!suballocations2nd.empty())
+
+11518 const VmaSuballocation& lastSuballoc = suballocations2nd.back();
+11519 resultBaseOffset = lastSuballoc.offset + lastSuballoc.size;
+
+
+
+11523 VkDeviceSize resultOffset = resultBaseOffset;
+
+
+11526 if(VMA_DEBUG_MARGIN > 0)
+
+11528 resultOffset += VMA_DEBUG_MARGIN;
+
+
+
+11532 resultOffset = VmaAlignUp(resultOffset, allocAlignment);
+
+
+
+11536 if(bufferImageGranularity > 1 && bufferImageGranularity != allocAlignment && !suballocations2nd.empty())
+
+11538 bool bufferImageGranularityConflict = false;
+11539 for(size_t prevSuballocIndex = suballocations2nd.size(); prevSuballocIndex--; )
+
+11541 const VmaSuballocation& prevSuballoc = suballocations2nd[prevSuballocIndex];
+11542 if(VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, resultOffset, bufferImageGranularity))
+
+11544 if(VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
+
+11546 bufferImageGranularityConflict = true;
+
+
+
+
+
+
+
+11554 if(bufferImageGranularityConflict)
+
+11556 resultOffset = VmaAlignUp(resultOffset, bufferImageGranularity);
+
+
+
+11560 pAllocationRequest->itemsToMakeLostCount = 0;
+11561 pAllocationRequest->sumItemSize = 0;
+11562 size_t index1st = m_1stNullItemsBeginCount;
+
+11564 if(canMakeOtherLost)
+
+11566 while(index1st < suballocations1st.size() &&
+11567 resultOffset + allocSize + VMA_DEBUG_MARGIN > suballocations1st[index1st].offset)
+
+
+11570 const VmaSuballocation& suballoc = suballocations1st[index1st];
+11571 if(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE)
+
+
+
+
+
+11577 VMA_ASSERT(suballoc.hAllocation != VK_NULL_HANDLE);
+11578 if(suballoc.hAllocation->CanBecomeLost() &&
+11579 suballoc.hAllocation->GetLastUseFrameIndex() + frameInUseCount < currentFrameIndex)
+
+11581 ++pAllocationRequest->itemsToMakeLostCount;
+11582 pAllocationRequest->sumItemSize += suballoc.size;
+
+
+
+
+
+
+
+
+
+
+
+11594 if(allocSize % bufferImageGranularity || resultOffset % bufferImageGranularity)
+
+11596 while(index1st < suballocations1st.size())
+
+11598 const VmaSuballocation& suballoc = suballocations1st[index1st];
+11599 if(VmaBlocksOnSamePage(resultOffset, allocSize, suballoc.offset, bufferImageGranularity))
+
+11601 if(suballoc.hAllocation != VK_NULL_HANDLE)
+
+
+11604 if(suballoc.hAllocation->CanBecomeLost() &&
+11605 suballoc.hAllocation->GetLastUseFrameIndex() + frameInUseCount < currentFrameIndex)
+
+11607 ++pAllocationRequest->itemsToMakeLostCount;
+11608 pAllocationRequest->sumItemSize += suballoc.size;
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+11626 if(index1st == suballocations1st.size() &&
+11627 resultOffset + allocSize + VMA_DEBUG_MARGIN > size)
+
+
+11630 VMA_DEBUG_LOG("Unsupported special case in custom pool with linear allocation algorithm used as ring buffer with allocations that can be lost.");
+
+
+
+
+11635 if((index1st == suballocations1st.size() && resultOffset + allocSize + VMA_DEBUG_MARGIN <= size) ||
+11636 (index1st < suballocations1st.size() && resultOffset + allocSize + VMA_DEBUG_MARGIN <= suballocations1st[index1st].offset))
+
+
+
+11640 if(allocSize % bufferImageGranularity || resultOffset % bufferImageGranularity)
+
+11642 for(size_t nextSuballocIndex = index1st;
+11643 nextSuballocIndex < suballocations1st.size();
+11644 nextSuballocIndex++)
+
+11646 const VmaSuballocation& nextSuballoc = suballocations1st[nextSuballocIndex];
+11647 if(VmaBlocksOnSamePage(resultOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
+
+11649 if(VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
+
+
+
+
+
+
+
+
+
+
+
+
+
+11663 pAllocationRequest->offset = resultOffset;
+11664 pAllocationRequest->sumFreeSize =
+11665 (index1st < suballocations1st.size() ? suballocations1st[index1st].offset : size)
+
+11667 - pAllocationRequest->sumItemSize;
+11668 pAllocationRequest->type = VmaAllocationRequestType::EndOf2nd;
+
+
+
+
+
+
+
+
+11677 bool VmaBlockMetadata_Linear::MakeRequestedAllocationsLost(
+11678 uint32_t currentFrameIndex,
+11679 uint32_t frameInUseCount,
+11680 VmaAllocationRequest* pAllocationRequest)
+
+11682 if(pAllocationRequest->itemsToMakeLostCount == 0)
+
+
+
+
+11687 VMA_ASSERT(m_2ndVectorMode == SECOND_VECTOR_EMPTY || m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER);
+
+
+11690 SuballocationVectorType* suballocations = &AccessSuballocations1st();
+11691 size_t index = m_1stNullItemsBeginCount;
+11692 size_t madeLostCount = 0;
+11693 while(madeLostCount < pAllocationRequest->itemsToMakeLostCount)
+
+11695 if(index == suballocations->size())
+
+
+
+11699 if(m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
+
+11701 suballocations = &AccessSuballocations2nd();
+
+
+
+11705 VMA_ASSERT(!suballocations->empty());
+
+11707 VmaSuballocation& suballoc = (*suballocations)[index];
+11708 if(suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
+
+11710 VMA_ASSERT(suballoc.hAllocation != VK_NULL_HANDLE);
+11711 VMA_ASSERT(suballoc.hAllocation->CanBecomeLost());
+11712 if(suballoc.hAllocation->MakeLost(currentFrameIndex, frameInUseCount))
+
+11714 suballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
+11715 suballoc.hAllocation = VK_NULL_HANDLE;
+11716 m_SumFreeSize += suballoc.size;
+11717 if(suballocations == &AccessSuballocations1st())
+
+11719 ++m_1stNullItemsMiddleCount;
+
+
+
+11723 ++m_2ndNullItemsCount;
+
+
+
+
+
+
+
+
+
+
+
+11735 CleanupAfterFree();
+
+
+
+
+
+11741 uint32_t VmaBlockMetadata_Linear::MakeAllocationsLost(uint32_t currentFrameIndex, uint32_t frameInUseCount)
+
+11743 uint32_t lostAllocationCount = 0;
+
+11745 SuballocationVectorType& suballocations1st = AccessSuballocations1st();
+11746 for(size_t i = m_1stNullItemsBeginCount, count = suballocations1st.size(); i < count; ++i)
+
+11748 VmaSuballocation& suballoc = suballocations1st[i];
+11749 if(suballoc.type != VMA_SUBALLOCATION_TYPE_FREE &&
+11750 suballoc.hAllocation->CanBecomeLost() &&
+11751 suballoc.hAllocation->MakeLost(currentFrameIndex, frameInUseCount))
+
+11753 suballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
+11754 suballoc.hAllocation = VK_NULL_HANDLE;
+11755 ++m_1stNullItemsMiddleCount;
+11756 m_SumFreeSize += suballoc.size;
+11757 ++lostAllocationCount;
+
+
+
+11761 SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
+11762 for(size_t i = 0, count = suballocations2nd.size(); i < count; ++i)
+
+11764 VmaSuballocation& suballoc = suballocations2nd[i];
+11765 if(suballoc.type != VMA_SUBALLOCATION_TYPE_FREE &&
+11766 suballoc.hAllocation->CanBecomeLost() &&
+11767 suballoc.hAllocation->MakeLost(currentFrameIndex, frameInUseCount))
+
+11769 suballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
+11770 suballoc.hAllocation = VK_NULL_HANDLE;
+11771 ++m_2ndNullItemsCount;
+11772 m_SumFreeSize += suballoc.size;
+11773 ++lostAllocationCount;
+
+
+
+11777 if(lostAllocationCount)
+
+11779 CleanupAfterFree();
+
+
+11782 return lostAllocationCount;
+
+
+11785 VkResult VmaBlockMetadata_Linear::CheckCorruption(const void* pBlockData)
+
+11787 SuballocationVectorType& suballocations1st = AccessSuballocations1st();
+11788 for(size_t i = m_1stNullItemsBeginCount, count = suballocations1st.size(); i < count; ++i)
+
+11790 const VmaSuballocation& suballoc = suballocations1st[i];
+11791 if(suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
+
+11793 if(!VmaValidateMagicValue(pBlockData, suballoc.offset - VMA_DEBUG_MARGIN))
+
+11795 VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED BEFORE VALIDATED ALLOCATION!");
+11796 return VK_ERROR_VALIDATION_FAILED_EXT;
+
+11798 if(!VmaValidateMagicValue(pBlockData, suballoc.offset + suballoc.size))
+
+11800 VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER VALIDATED ALLOCATION!");
+11801 return VK_ERROR_VALIDATION_FAILED_EXT;
+
+
+
+
+11806 SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
+11807 for(size_t i = 0, count = suballocations2nd.size(); i < count; ++i)
+
+11809 const VmaSuballocation& suballoc = suballocations2nd[i];
+11810 if(suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
+
+11812 if(!VmaValidateMagicValue(pBlockData, suballoc.offset - VMA_DEBUG_MARGIN))
+
+11814 VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED BEFORE VALIDATED ALLOCATION!");
+11815 return VK_ERROR_VALIDATION_FAILED_EXT;
+
+11817 if(!VmaValidateMagicValue(pBlockData, suballoc.offset + suballoc.size))
+
+11819 VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER VALIDATED ALLOCATION!");
+11820 return VK_ERROR_VALIDATION_FAILED_EXT;
+
+
+
+
+
+
+
+11828 void VmaBlockMetadata_Linear::Alloc(
+11829 const VmaAllocationRequest& request,
+11830 VmaSuballocationType type,
+11831 VkDeviceSize allocSize,
+
+
+11834 const VmaSuballocation newSuballoc = { request.offset, allocSize, hAllocation, type };
+
+11836 switch(request.type)
+
+11838 case VmaAllocationRequestType::UpperAddress:
+
+11840 VMA_ASSERT(m_2ndVectorMode != SECOND_VECTOR_RING_BUFFER &&
+11841 "CRITICAL ERROR: Trying to use linear allocator as double stack while it was already used as ring buffer.");
+11842 SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
+11843 suballocations2nd.push_back(newSuballoc);
+11844 m_2ndVectorMode = SECOND_VECTOR_DOUBLE_STACK;
+
+
+11847 case VmaAllocationRequestType::EndOf1st:
+
+11849 SuballocationVectorType& suballocations1st = AccessSuballocations1st();
+
+11851 VMA_ASSERT(suballocations1st.empty() ||
+11852 request.offset >= suballocations1st.back().offset + suballocations1st.back().size);
+
+11854 VMA_ASSERT(request.offset + allocSize <= GetSize());
+
+11856 suballocations1st.push_back(newSuballoc);
+
+
+11859 case VmaAllocationRequestType::EndOf2nd:
+
+11861 SuballocationVectorType& suballocations1st = AccessSuballocations1st();
+
+11863 VMA_ASSERT(!suballocations1st.empty() &&
+11864 request.offset + allocSize <= suballocations1st[m_1stNullItemsBeginCount].offset);
+11865 SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
+
+11867 switch(m_2ndVectorMode)
+
+11869 case SECOND_VECTOR_EMPTY:
+
+11871 VMA_ASSERT(suballocations2nd.empty());
+11872 m_2ndVectorMode = SECOND_VECTOR_RING_BUFFER;
+
+11874 case SECOND_VECTOR_RING_BUFFER:
+
+11876 VMA_ASSERT(!suballocations2nd.empty());
+
+11878 case SECOND_VECTOR_DOUBLE_STACK:
+11879 VMA_ASSERT(0 && "CRITICAL ERROR: Trying to use linear allocator as ring buffer while it was already used as double stack.");
+
+
+
+
+
+11885 suballocations2nd.push_back(newSuballoc);
+
+
+
+11889 VMA_ASSERT(0 && "CRITICAL INTERNAL ERROR.");
+
+
+11892 m_SumFreeSize -= newSuballoc.size;
+
+
+11895 void VmaBlockMetadata_Linear::Free(const VmaAllocation allocation)
+
+11897 FreeAtOffset(allocation->GetOffset());
+
+
+11900 void VmaBlockMetadata_Linear::FreeAtOffset(VkDeviceSize offset)
+
+11902 SuballocationVectorType& suballocations1st = AccessSuballocations1st();
+11903 SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
+
+11905 if(!suballocations1st.empty())
+
+
+11908 VmaSuballocation& firstSuballoc = suballocations1st[m_1stNullItemsBeginCount];
+11909 if(firstSuballoc.offset == offset)
+
+11911 firstSuballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
+11912 firstSuballoc.hAllocation = VK_NULL_HANDLE;
+11913 m_SumFreeSize += firstSuballoc.size;
+11914 ++m_1stNullItemsBeginCount;
+11915 CleanupAfterFree();
+
+
+
+
+
+11921 if(m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER ||
+11922 m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
+
+11924 VmaSuballocation& lastSuballoc = suballocations2nd.back();
+11925 if(lastSuballoc.offset == offset)
+
+11927 m_SumFreeSize += lastSuballoc.size;
+11928 suballocations2nd.pop_back();
+11929 CleanupAfterFree();
+
+
+
+
+11934 else if(m_2ndVectorMode == SECOND_VECTOR_EMPTY)
+
+11936 VmaSuballocation& lastSuballoc = suballocations1st.back();
+11937 if(lastSuballoc.offset == offset)
+
+11939 m_SumFreeSize += lastSuballoc.size;
+11940 suballocations1st.pop_back();
+11941 CleanupAfterFree();
+
+
+
+
+
+
+11948 VmaSuballocation refSuballoc;
+11949 refSuballoc.offset = offset;
+
+11951 SuballocationVectorType::iterator it = VmaBinaryFindSorted(
+11952 suballocations1st.begin() + m_1stNullItemsBeginCount,
+11953 suballocations1st.end(),
+
+11955 VmaSuballocationOffsetLess());
+11956 if(it != suballocations1st.end())
+
+11958 it->type = VMA_SUBALLOCATION_TYPE_FREE;
+11959 it->hAllocation = VK_NULL_HANDLE;
+11960 ++m_1stNullItemsMiddleCount;
+11961 m_SumFreeSize += it->size;
+11962 CleanupAfterFree();
+
+
+
+
+11967 if(m_2ndVectorMode != SECOND_VECTOR_EMPTY)
+
+
+11970 VmaSuballocation refSuballoc;
+11971 refSuballoc.offset = offset;
+
+11973 SuballocationVectorType::iterator it = m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER ?
+11974 VmaBinaryFindSorted(suballocations2nd.begin(), suballocations2nd.end(), refSuballoc, VmaSuballocationOffsetLess()) :
+11975 VmaBinaryFindSorted(suballocations2nd.begin(), suballocations2nd.end(), refSuballoc, VmaSuballocationOffsetGreater());
+11976 if(it != suballocations2nd.end())
+
+11978 it->type = VMA_SUBALLOCATION_TYPE_FREE;
+11979 it->hAllocation = VK_NULL_HANDLE;
+11980 ++m_2ndNullItemsCount;
+11981 m_SumFreeSize += it->size;
+11982 CleanupAfterFree();
+
+
+
+
+11987 VMA_ASSERT(0 && "Allocation to free not found in linear allocator!");
+
+
+11990 bool VmaBlockMetadata_Linear::ShouldCompact1st() const
+
+11992 const size_t nullItemCount = m_1stNullItemsBeginCount + m_1stNullItemsMiddleCount;
+11993 const size_t suballocCount = AccessSuballocations1st().size();
+11994 return suballocCount > 32 && nullItemCount * 2 >= (suballocCount - nullItemCount) * 3;
+
+
+11997 void VmaBlockMetadata_Linear::CleanupAfterFree()
+
+11999 SuballocationVectorType& suballocations1st = AccessSuballocations1st();
+12000 SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
+
+
+
+12004 suballocations1st.clear();
+12005 suballocations2nd.clear();
+12006 m_1stNullItemsBeginCount = 0;
+12007 m_1stNullItemsMiddleCount = 0;
+12008 m_2ndNullItemsCount = 0;
+12009 m_2ndVectorMode = SECOND_VECTOR_EMPTY;
+
+
+
+12013 const size_t suballoc1stCount = suballocations1st.size();
+12014 const size_t nullItem1stCount = m_1stNullItemsBeginCount + m_1stNullItemsMiddleCount;
+12015 VMA_ASSERT(nullItem1stCount <= suballoc1stCount);
+
+
+12018 while(m_1stNullItemsBeginCount < suballoc1stCount &&
+12019 suballocations1st[m_1stNullItemsBeginCount].hAllocation == VK_NULL_HANDLE)
+
+12021 ++m_1stNullItemsBeginCount;
+12022 --m_1stNullItemsMiddleCount;
+
+
+
+12026 while(m_1stNullItemsMiddleCount > 0 &&
+12027 suballocations1st.back().hAllocation == VK_NULL_HANDLE)
+
+12029 --m_1stNullItemsMiddleCount;
+12030 suballocations1st.pop_back();
+
+
+
+12034 while(m_2ndNullItemsCount > 0 &&
+12035 suballocations2nd.back().hAllocation == VK_NULL_HANDLE)
+
+12037 --m_2ndNullItemsCount;
+12038 suballocations2nd.pop_back();
+
+
+
+12042 while(m_2ndNullItemsCount > 0 &&
+12043 suballocations2nd[0].hAllocation == VK_NULL_HANDLE)
+
+12045 --m_2ndNullItemsCount;
+12046 VmaVectorRemove(suballocations2nd, 0);
+
+
+12049 if(ShouldCompact1st())
+
+12051 const size_t nonNullItemCount = suballoc1stCount - nullItem1stCount;
+12052 size_t srcIndex = m_1stNullItemsBeginCount;
+12053 for(size_t dstIndex = 0; dstIndex < nonNullItemCount; ++dstIndex)
+
+12055 while(suballocations1st[srcIndex].hAllocation == VK_NULL_HANDLE)
+
+
+
+12059 if(dstIndex != srcIndex)
+
+12061 suballocations1st[dstIndex] = suballocations1st[srcIndex];
+
+
+
+12065 suballocations1st.resize(nonNullItemCount);
+12066 m_1stNullItemsBeginCount = 0;
+12067 m_1stNullItemsMiddleCount = 0;
+
+
+
+12071 if(suballocations2nd.empty())
+
+12073 m_2ndVectorMode = SECOND_VECTOR_EMPTY;
+
+
+
+12077 if(suballocations1st.size() - m_1stNullItemsBeginCount == 0)
+
+12079 suballocations1st.clear();
+12080 m_1stNullItemsBeginCount = 0;
+
+12082 if(!suballocations2nd.empty() && m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
+
+
+12085 m_2ndVectorMode = SECOND_VECTOR_EMPTY;
+12086 m_1stNullItemsMiddleCount = m_2ndNullItemsCount;
+12087 while(m_1stNullItemsBeginCount < suballocations2nd.size() &&
+12088 suballocations2nd[m_1stNullItemsBeginCount].hAllocation == VK_NULL_HANDLE)
+
+12090 ++m_1stNullItemsBeginCount;
+12091 --m_1stNullItemsMiddleCount;
+
+12093 m_2ndNullItemsCount = 0;
+12094 m_1stVectorIndex ^= 1;
+
+
+
+
+12099 VMA_HEAVY_ASSERT(Validate());
+
+
-
-
-
-12107 VmaBlockMetadata_Buddy::VmaBlockMetadata_Buddy(VmaAllocator hAllocator) :
-12108 VmaBlockMetadata(hAllocator),
-
-12110 m_AllocationCount(0),
-
-
-
-12114 memset(m_FreeList, 0, sizeof(m_FreeList));
-
-
-12117 VmaBlockMetadata_Buddy::~VmaBlockMetadata_Buddy()
-
-12119 DeleteNode(m_Root);
-
-
-12122 void VmaBlockMetadata_Buddy::Init(VkDeviceSize size)
-
-12124 VmaBlockMetadata::Init(size);
-
-12126 m_UsableSize = VmaPrevPow2(size);
-12127 m_SumFreeSize = m_UsableSize;
-
-
-
-12131 while(m_LevelCount < MAX_LEVELS &&
-12132 LevelToNodeSize(m_LevelCount) >= MIN_NODE_SIZE)
-
-
-
-
-12137 Node* rootNode = vma_new(GetAllocationCallbacks(), Node)();
-12138 rootNode->offset = 0;
-12139 rootNode->type = Node::TYPE_FREE;
-12140 rootNode->parent = VMA_NULL;
-12141 rootNode->buddy = VMA_NULL;
-
-
-12144 AddToFreeListFront(0, rootNode);
-
-
-12147 bool VmaBlockMetadata_Buddy::Validate() const
-
-
-12150 ValidationContext ctx;
-12151 if(!ValidateNode(ctx, VMA_NULL, m_Root, 0, LevelToNodeSize(0)))
-
-12153 VMA_VALIDATE(false && "ValidateNode failed.");
-
-12155 VMA_VALIDATE(m_AllocationCount == ctx.calculatedAllocationCount);
-12156 VMA_VALIDATE(m_SumFreeSize == ctx.calculatedSumFreeSize);
-
-
-12159 for(uint32_t level = 0; level < m_LevelCount; ++level)
-
-12161 VMA_VALIDATE(m_FreeList[level].front == VMA_NULL ||
-12162 m_FreeList[level].front->free.prev == VMA_NULL);
-
-12164 for(Node* node = m_FreeList[level].front;
-
-12166 node = node->free.next)
-
-12168 VMA_VALIDATE(node->type == Node::TYPE_FREE);
-
-12170 if(node->free.next == VMA_NULL)
-
-12172 VMA_VALIDATE(m_FreeList[level].back == node);
-
-
-
-12176 VMA_VALIDATE(node->free.next->free.prev == node);
-
-
-
-
-
-12182 for(uint32_t level = m_LevelCount; level < MAX_LEVELS; ++level)
-
-12184 VMA_VALIDATE(m_FreeList[level].front == VMA_NULL && m_FreeList[level].back == VMA_NULL);
-
-
-
-
-
-12190 VkDeviceSize VmaBlockMetadata_Buddy::GetUnusedRangeSizeMax() const
-
-12192 for(uint32_t level = 0; level < m_LevelCount; ++level)
-
-12194 if(m_FreeList[level].front != VMA_NULL)
-
-12196 return LevelToNodeSize(level);
-
-
-
-
-
-12202 void VmaBlockMetadata_Buddy::CalcAllocationStatInfo(VmaStatInfo& outInfo) const
-
-12204 const VkDeviceSize unusableSize = GetUnusableSize();
-
-
-
-
-
-
-
-
-
-
-12215 CalcAllocationStatInfoNode(outInfo, m_Root, LevelToNodeSize(0));
-
-12217 if(unusableSize > 0)
-
-
-
-
-
-
-
-
-12226 void VmaBlockMetadata_Buddy::AddPoolStats(VmaPoolStats& inoutStats) const
-
-12228 const VkDeviceSize unusableSize = GetUnusableSize();
-
-12230 inoutStats.size += GetSize();
-12231 inoutStats.unusedSize += m_SumFreeSize + unusableSize;
-
-
-
-
-12236 if(unusableSize > 0)
-
-
-
-
-
-
-12243 #if VMA_STATS_STRING_ENABLED
-
-12245 void VmaBlockMetadata_Buddy::PrintDetailedMap(class VmaJsonWriter& json) const
-
-
-
-12249 CalcAllocationStatInfo(stat);
-
-12251 PrintDetailedMap_Begin(
-
-
-
-
-
-12257 PrintDetailedMapNode(json, m_Root, LevelToNodeSize(0));
-
-12259 const VkDeviceSize unusableSize = GetUnusableSize();
-12260 if(unusableSize > 0)
-
-12262 PrintDetailedMap_UnusedRange(json,
-
-
-
-
-12267 PrintDetailedMap_End(json);
-
-
-
-
-12272 bool VmaBlockMetadata_Buddy::CreateAllocationRequest(
-12273 uint32_t currentFrameIndex,
-12274 uint32_t frameInUseCount,
-12275 VkDeviceSize bufferImageGranularity,
-12276 VkDeviceSize allocSize,
-12277 VkDeviceSize allocAlignment,
-
-12279 VmaSuballocationType allocType,
-12280 bool canMakeOtherLost,
-
-12282 VmaAllocationRequest* pAllocationRequest)
-
-12284 VMA_ASSERT(!upperAddress && "VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT can be used only with linear algorithm.");
-
-
-
-12288 if(allocType == VMA_SUBALLOCATION_TYPE_UNKNOWN ||
-12289 allocType == VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN ||
-12290 allocType == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL)
-
-12292 allocAlignment = VMA_MAX(allocAlignment, bufferImageGranularity);
-12293 allocSize = VMA_MAX(allocSize, bufferImageGranularity);
-
-
-12296 if(allocSize > m_UsableSize)
-
-
-
-
-12301 const uint32_t targetLevel = AllocSizeToLevel(allocSize);
-12302 for(uint32_t level = targetLevel + 1; level--; )
-
-12304 for(Node* freeNode = m_FreeList[level].front;
-12305 freeNode != VMA_NULL;
-12306 freeNode = freeNode->free.next)
-
-12308 if(freeNode->offset % allocAlignment == 0)
-
-12310 pAllocationRequest->type = VmaAllocationRequestType::Normal;
-12311 pAllocationRequest->offset = freeNode->offset;
-12312 pAllocationRequest->sumFreeSize = LevelToNodeSize(level);
-12313 pAllocationRequest->sumItemSize = 0;
-12314 pAllocationRequest->itemsToMakeLostCount = 0;
-12315 pAllocationRequest->customData = (
void*)(uintptr_t)level;
-
-
-
-
-
-
-
-
-12324 bool VmaBlockMetadata_Buddy::MakeRequestedAllocationsLost(
-12325 uint32_t currentFrameIndex,
-12326 uint32_t frameInUseCount,
-12327 VmaAllocationRequest* pAllocationRequest)
-
-
-
-
-
-12333 return pAllocationRequest->itemsToMakeLostCount == 0;
-
-
-12336 uint32_t VmaBlockMetadata_Buddy::MakeAllocationsLost(uint32_t currentFrameIndex, uint32_t frameInUseCount)
-
-
-
-
-
-
-
-
-12345 void VmaBlockMetadata_Buddy::Alloc(
-12346 const VmaAllocationRequest& request,
-12347 VmaSuballocationType type,
-12348 VkDeviceSize allocSize,
-
-
-12351 VMA_ASSERT(request.type == VmaAllocationRequestType::Normal);
-
-12353 const uint32_t targetLevel = AllocSizeToLevel(allocSize);
-12354 uint32_t currLevel = (uint32_t)(uintptr_t)request.customData;
-
-12356 Node* currNode = m_FreeList[currLevel].front;
-12357 VMA_ASSERT(currNode != VMA_NULL && currNode->type == Node::TYPE_FREE);
-12358 while(currNode->offset != request.offset)
-
-12360 currNode = currNode->free.next;
-12361 VMA_ASSERT(currNode != VMA_NULL && currNode->type == Node::TYPE_FREE);
-
-
-
-12365 while(currLevel < targetLevel)
-
-
-
-12369 RemoveFromFreeList(currLevel, currNode);
-
-12371 const uint32_t childrenLevel = currLevel + 1;
-
-
-12374 Node* leftChild = vma_new(GetAllocationCallbacks(), Node)();
-12375 Node* rightChild = vma_new(GetAllocationCallbacks(), Node)();
-
-12377 leftChild->offset = currNode->offset;
-12378 leftChild->type = Node::TYPE_FREE;
-12379 leftChild->parent = currNode;
-12380 leftChild->buddy = rightChild;
-
-12382 rightChild->offset = currNode->offset + LevelToNodeSize(childrenLevel);
-12383 rightChild->type = Node::TYPE_FREE;
-12384 rightChild->parent = currNode;
-12385 rightChild->buddy = leftChild;
-
-
-12388 currNode->type = Node::TYPE_SPLIT;
-12389 currNode->split.leftChild = leftChild;
-
-
-12392 AddToFreeListFront(childrenLevel, rightChild);
-12393 AddToFreeListFront(childrenLevel, leftChild);
-
-
-
-
-12398 currNode = m_FreeList[currLevel].front;
-
-
-
-
-
-
-
-
-12407 VMA_ASSERT(currLevel == targetLevel &&
-12408 currNode != VMA_NULL &&
-12409 currNode->type == Node::TYPE_FREE);
-12410 RemoveFromFreeList(currLevel, currNode);
-
-
-12413 currNode->type = Node::TYPE_ALLOCATION;
-12414 currNode->allocation.alloc = hAllocation;
-
-12416 ++m_AllocationCount;
-
-12418 m_SumFreeSize -= allocSize;
-
-
-12421 void VmaBlockMetadata_Buddy::DeleteNode(Node* node)
-
-12423 if(node->type == Node::TYPE_SPLIT)
-
-12425 DeleteNode(node->split.leftChild->buddy);
-12426 DeleteNode(node->split.leftChild);
-
-
-12429 vma_delete(GetAllocationCallbacks(), node);
-
-
-12432 bool VmaBlockMetadata_Buddy::ValidateNode(ValidationContext& ctx,
const Node* parent,
const Node* curr, uint32_t level, VkDeviceSize levelNodeSize)
const
-
-12434 VMA_VALIDATE(level < m_LevelCount);
-12435 VMA_VALIDATE(curr->parent == parent);
-12436 VMA_VALIDATE((curr->buddy == VMA_NULL) == (parent == VMA_NULL));
-12437 VMA_VALIDATE(curr->buddy == VMA_NULL || curr->buddy->buddy == curr);
-
-
-12440 case Node::TYPE_FREE:
-
-12442 ctx.calculatedSumFreeSize += levelNodeSize;
-12443 ++ctx.calculatedFreeCount;
-
-12445 case Node::TYPE_ALLOCATION:
-12446 ++ctx.calculatedAllocationCount;
-12447 ctx.calculatedSumFreeSize += levelNodeSize - curr->allocation.alloc->GetSize();
-12448 VMA_VALIDATE(curr->allocation.alloc != VK_NULL_HANDLE);
-
-12450 case Node::TYPE_SPLIT:
-
-12452 const uint32_t childrenLevel = level + 1;
-12453 const VkDeviceSize childrenLevelNodeSize = levelNodeSize / 2;
-12454 const Node*
const leftChild = curr->split.leftChild;
-12455 VMA_VALIDATE(leftChild != VMA_NULL);
-12456 VMA_VALIDATE(leftChild->offset == curr->offset);
-12457 if(!ValidateNode(ctx, curr, leftChild, childrenLevel, childrenLevelNodeSize))
-
-12459 VMA_VALIDATE(
false &&
"ValidateNode for left child failed.");
-
-12461 const Node*
const rightChild = leftChild->buddy;
-12462 VMA_VALIDATE(rightChild->offset == curr->offset + childrenLevelNodeSize);
-12463 if(!ValidateNode(ctx, curr, rightChild, childrenLevel, childrenLevelNodeSize))
-
-12465 VMA_VALIDATE(
false &&
"ValidateNode for right child failed.");
-
-
-
-
-
-
-
-
-
-
-12476 uint32_t VmaBlockMetadata_Buddy::AllocSizeToLevel(VkDeviceSize allocSize)
const
-
-
-12479 uint32_t level = 0;
-12480 VkDeviceSize currLevelNodeSize = m_UsableSize;
-12481 VkDeviceSize nextLevelNodeSize = currLevelNodeSize >> 1;
-12482 while(allocSize <= nextLevelNodeSize && level + 1 < m_LevelCount)
-
-
-12485 currLevelNodeSize = nextLevelNodeSize;
-12486 nextLevelNodeSize = currLevelNodeSize >> 1;
-
-
-
-
-12491 void VmaBlockMetadata_Buddy::FreeAtOffset(
VmaAllocation alloc, VkDeviceSize offset)
-
-
-12494 Node* node = m_Root;
-12495 VkDeviceSize nodeOffset = 0;
-12496 uint32_t level = 0;
-12497 VkDeviceSize levelNodeSize = LevelToNodeSize(0);
-12498 while(node->type == Node::TYPE_SPLIT)
-
-12500 const VkDeviceSize nextLevelSize = levelNodeSize >> 1;
-12501 if(offset < nodeOffset + nextLevelSize)
-
-12503 node = node->split.leftChild;
-
-
-
-12507 node = node->split.leftChild->buddy;
-12508 nodeOffset += nextLevelSize;
-
-
-12511 levelNodeSize = nextLevelSize;
-
-
-12514 VMA_ASSERT(node != VMA_NULL && node->type == Node::TYPE_ALLOCATION);
-12515 VMA_ASSERT(alloc == VK_NULL_HANDLE || node->allocation.alloc == alloc);
-
-
-12518 --m_AllocationCount;
-12519 m_SumFreeSize += alloc->GetSize();
-
-12521 node->type = Node::TYPE_FREE;
-
-
-12524 while(level > 0 && node->buddy->type == Node::TYPE_FREE)
-
-12526 RemoveFromFreeList(level, node->buddy);
-12527 Node*
const parent = node->parent;
-
-12529 vma_delete(GetAllocationCallbacks(), node->buddy);
-12530 vma_delete(GetAllocationCallbacks(), node);
-12531 parent->type = Node::TYPE_FREE;
-
-
-
-
-
-
-
-12539 AddToFreeListFront(level, node);
-
-
-12542 void VmaBlockMetadata_Buddy::CalcAllocationStatInfoNode(
VmaStatInfo& outInfo,
const Node* node, VkDeviceSize levelNodeSize)
const
-
-
-
-12546 case Node::TYPE_FREE:
-
-
-
-
-
-12552 case Node::TYPE_ALLOCATION:
-
-12554 const VkDeviceSize allocSize = node->allocation.alloc->GetSize();
-
-
-
-
-
-12560 const VkDeviceSize unusedRangeSize = levelNodeSize - allocSize;
-12561 if(unusedRangeSize > 0)
-
-
-
-
-
-
-
-
-12570 case Node::TYPE_SPLIT:
-
-12572 const VkDeviceSize childrenNodeSize = levelNodeSize / 2;
-12573 const Node*
const leftChild = node->split.leftChild;
-12574 CalcAllocationStatInfoNode(outInfo, leftChild, childrenNodeSize);
-12575 const Node*
const rightChild = leftChild->buddy;
-12576 CalcAllocationStatInfoNode(outInfo, rightChild, childrenNodeSize);
-
-
-
-
-
-
-
-12584 void VmaBlockMetadata_Buddy::AddToFreeListFront(uint32_t level, Node* node)
-
-12586 VMA_ASSERT(node->type == Node::TYPE_FREE);
-
-
-12589 Node*
const frontNode = m_FreeList[level].front;
-12590 if(frontNode == VMA_NULL)
-
-12592 VMA_ASSERT(m_FreeList[level].back == VMA_NULL);
-12593 node->free.prev = node->free.next = VMA_NULL;
-12594 m_FreeList[level].front = m_FreeList[level].back = node;
-
-
-
-12598 VMA_ASSERT(frontNode->free.prev == VMA_NULL);
-12599 node->free.prev = VMA_NULL;
-12600 node->free.next = frontNode;
-12601 frontNode->free.prev = node;
-12602 m_FreeList[level].front = node;
-
-
-
-12606 void VmaBlockMetadata_Buddy::RemoveFromFreeList(uint32_t level, Node* node)
-
-12608 VMA_ASSERT(m_FreeList[level].front != VMA_NULL);
-
-
-12611 if(node->free.prev == VMA_NULL)
-
-12613 VMA_ASSERT(m_FreeList[level].front == node);
-12614 m_FreeList[level].front = node->free.next;
-
-
-
-12618 Node*
const prevFreeNode = node->free.prev;
-12619 VMA_ASSERT(prevFreeNode->free.next == node);
-12620 prevFreeNode->free.next = node->free.next;
-
-
-
-12624 if(node->free.next == VMA_NULL)
-
-12626 VMA_ASSERT(m_FreeList[level].back == node);
-12627 m_FreeList[level].back = node->free.prev;
-
-
-
-12631 Node*
const nextFreeNode = node->free.next;
-12632 VMA_ASSERT(nextFreeNode->free.prev == node);
-12633 nextFreeNode->free.prev = node->free.prev;
-
-
-
-12637 #if VMA_STATS_STRING_ENABLED
-12638 void VmaBlockMetadata_Buddy::PrintDetailedMapNode(
class VmaJsonWriter& json,
const Node* node, VkDeviceSize levelNodeSize)
const
-
-
-
-12642 case Node::TYPE_FREE:
-12643 PrintDetailedMap_UnusedRange(json, node->offset, levelNodeSize);
-
-12645 case Node::TYPE_ALLOCATION:
-
-12647 PrintDetailedMap_Allocation(json, node->offset, node->allocation.alloc);
-12648 const VkDeviceSize allocSize = node->allocation.alloc->GetSize();
-12649 if(allocSize < levelNodeSize)
-
-12651 PrintDetailedMap_UnusedRange(json, node->offset + allocSize, levelNodeSize - allocSize);
-
-
-
-12655 case Node::TYPE_SPLIT:
-
-12657 const VkDeviceSize childrenNodeSize = levelNodeSize / 2;
-12658 const Node*
const leftChild = node->split.leftChild;
-12659 PrintDetailedMapNode(json, leftChild, childrenNodeSize);
-12660 const Node*
const rightChild = leftChild->buddy;
-12661 PrintDetailedMapNode(json, rightChild, childrenNodeSize);
-
-
-
-
-
-
-
////////////////////////////////////////////////////////////////////////////////
// class VmaBlockMetadata_Buddy

VmaBlockMetadata_Buddy::VmaBlockMetadata_Buddy(VmaAllocator hAllocator) :
    VmaBlockMetadata(hAllocator),
    m_Root(VMA_NULL),
    m_AllocationCount(0),
    m_FreeCount(1),
    m_SumFreeSize(0)
{
    memset(m_FreeList, 0, sizeof(m_FreeList));
}

VmaBlockMetadata_Buddy::~VmaBlockMetadata_Buddy()
{
    DeleteNode(m_Root);
}

void VmaBlockMetadata_Buddy::Init(VkDeviceSize size)
{
    VmaBlockMetadata::Init(size);

    m_UsableSize = VmaPrevPow2(size);
    m_SumFreeSize = m_UsableSize;

    // Calculate m_LevelCount.
    m_LevelCount = 1;
    while(m_LevelCount < MAX_LEVELS &&
        LevelToNodeSize(m_LevelCount) >= MIN_NODE_SIZE)
    {
        ++m_LevelCount;
    }

    Node* rootNode = vma_new(GetAllocationCallbacks(), Node)();
    rootNode->offset = 0;
    rootNode->type = Node::TYPE_FREE;
    rootNode->parent = VMA_NULL;
    rootNode->buddy = VMA_NULL;

    m_Root = rootNode;
    AddToFreeListFront(0, rootNode);
}

bool VmaBlockMetadata_Buddy::Validate() const
{
    // Validate tree.
    ValidationContext ctx;
    if(!ValidateNode(ctx, VMA_NULL, m_Root, 0, LevelToNodeSize(0)))
    {
        VMA_VALIDATE(false && "ValidateNode failed.");
    }
    VMA_VALIDATE(m_AllocationCount == ctx.calculatedAllocationCount);
    VMA_VALIDATE(m_SumFreeSize == ctx.calculatedSumFreeSize);

    // Validate free node lists.
    for(uint32_t level = 0; level < m_LevelCount; ++level)
    {
        VMA_VALIDATE(m_FreeList[level].front == VMA_NULL ||
            m_FreeList[level].front->free.prev == VMA_NULL);

        for(Node* node = m_FreeList[level].front;
            node != VMA_NULL;
            node = node->free.next)
        {
            VMA_VALIDATE(node->type == Node::TYPE_FREE);

            if(node->free.next == VMA_NULL)
            {
                VMA_VALIDATE(m_FreeList[level].back == node);
            }
            else
            {
                VMA_VALIDATE(node->free.next->free.prev == node);
            }
        }
    }

    // Validate that free lists at levels beyond m_LevelCount are empty.
    for(uint32_t level = m_LevelCount; level < MAX_LEVELS; ++level)
    {
        VMA_VALIDATE(m_FreeList[level].front == VMA_NULL && m_FreeList[level].back == VMA_NULL);
    }

    return true;
}

VkDeviceSize VmaBlockMetadata_Buddy::GetUnusedRangeSizeMax() const
{
    for(uint32_t level = 0; level < m_LevelCount; ++level)
    {
        if(m_FreeList[level].front != VMA_NULL)
        {
            return LevelToNodeSize(level);
        }
    }
    return 0;
}

void VmaBlockMetadata_Buddy::CalcAllocationStatInfo(VmaStatInfo& outInfo) const
{
    const VkDeviceSize unusableSize = GetUnusableSize();

    outInfo.blockCount = 1;

    outInfo.allocationCount = (uint32_t)m_AllocationCount;
    outInfo.unusedRangeCount = 0;
    outInfo.usedBytes = 0;
    outInfo.unusedBytes = 0;

    outInfo.allocationSizeMin = UINT64_MAX;
    outInfo.allocationSizeMax = 0;
    outInfo.unusedRangeSizeMin = UINT64_MAX;
    outInfo.unusedRangeSizeMax = 0;

    CalcAllocationStatInfoNode(outInfo, m_Root, LevelToNodeSize(0));

    if(unusableSize > 0)
    {
        ++outInfo.unusedRangeCount;
        outInfo.unusedBytes += unusableSize;
        outInfo.unusedRangeSizeMax = VMA_MAX(outInfo.unusedRangeSizeMax, unusableSize);
        outInfo.unusedRangeSizeMin = VMA_MIN(outInfo.unusedRangeSizeMin, unusableSize);
    }
}

void VmaBlockMetadata_Buddy::AddPoolStats(VmaPoolStats& inoutStats) const
{
    const VkDeviceSize unusableSize = GetUnusableSize();

    inoutStats.size += GetSize();
    inoutStats.unusedSize += m_SumFreeSize + unusableSize;
    inoutStats.allocationCount += m_AllocationCount;
    inoutStats.unusedRangeCount += m_FreeCount;
    inoutStats.unusedRangeSizeMax = VMA_MAX(inoutStats.unusedRangeSizeMax, GetUnusedRangeSizeMax());

    if(unusableSize > 0)
    {
        ++inoutStats.unusedRangeCount;
        // Not updating unusedRangeSizeMax with unusableSize because this space is not available for allocations.
    }
}

#if VMA_STATS_STRING_ENABLED

void VmaBlockMetadata_Buddy::PrintDetailedMap(class VmaJsonWriter& json) const
{
    VmaStatInfo stat;
    CalcAllocationStatInfo(stat);

    PrintDetailedMap_Begin(
        json,
        stat.unusedBytes,
        stat.allocationCount,
        stat.unusedRangeCount);

    PrintDetailedMapNode(json, m_Root, LevelToNodeSize(0));

    const VkDeviceSize unusableSize = GetUnusableSize();
    if(unusableSize > 0)
    {
        PrintDetailedMap_UnusedRange(json,
            m_UsableSize, // offset
            unusableSize); // size
    }

    PrintDetailedMap_End(json);
}

#endif // #if VMA_STATS_STRING_ENABLED

bool VmaBlockMetadata_Buddy::CreateAllocationRequest(
    uint32_t currentFrameIndex,
    uint32_t frameInUseCount,
    VkDeviceSize bufferImageGranularity,
    VkDeviceSize allocSize,
    VkDeviceSize allocAlignment,
    bool upperAddress,
    VmaSuballocationType allocType,
    bool canMakeOtherLost,
    uint32_t strategy,
    VmaAllocationRequest* pAllocationRequest)
{
    VMA_ASSERT(!upperAddress && "VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT can be used only with linear algorithm.");

    // Simple way to respect bufferImageGranularity:
    // whenever the allocation might be an OPTIMAL image, round up its size and alignment.
    if(allocType == VMA_SUBALLOCATION_TYPE_UNKNOWN ||
        allocType == VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN ||
        allocType == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL)
    {
        allocAlignment = VMA_MAX(allocAlignment, bufferImageGranularity);
        allocSize = VMA_MAX(allocSize, bufferImageGranularity);
    }

    if(allocSize > m_UsableSize)
    {
        return false;
    }

    const uint32_t targetLevel = AllocSizeToLevel(allocSize);
    for(uint32_t level = targetLevel + 1; level--; )
    {
        for(Node* freeNode = m_FreeList[level].front;
            freeNode != VMA_NULL;
            freeNode = freeNode->free.next)
        {
            if(freeNode->offset % allocAlignment == 0)
            {
                pAllocationRequest->type = VmaAllocationRequestType::Normal;
                pAllocationRequest->offset = freeNode->offset;
                pAllocationRequest->sumFreeSize = LevelToNodeSize(level);
                pAllocationRequest->sumItemSize = 0;
                pAllocationRequest->itemsToMakeLostCount = 0;
                pAllocationRequest->customData = (void*)(uintptr_t)level;
                return true;
            }
        }
    }

    return false;
}

bool VmaBlockMetadata_Buddy::MakeRequestedAllocationsLost(
    uint32_t currentFrameIndex,
    uint32_t frameInUseCount,
    VmaAllocationRequest* pAllocationRequest)
{
    /*
    Lost allocations are not supported in buddy allocator at the moment.
    Support might be added in the future.
    */
    return pAllocationRequest->itemsToMakeLostCount == 0;
}

uint32_t VmaBlockMetadata_Buddy::MakeAllocationsLost(uint32_t currentFrameIndex, uint32_t frameInUseCount)
{
    /*
    Lost allocations are not supported in buddy allocator at the moment.
    Support might be added in the future.
    */
    return 0;
}

void VmaBlockMetadata_Buddy::Alloc(
    const VmaAllocationRequest& request,
    VmaSuballocationType type,
    VkDeviceSize allocSize,
    VmaAllocation hAllocation)
{
    VMA_ASSERT(request.type == VmaAllocationRequestType::Normal);

    const uint32_t targetLevel = AllocSizeToLevel(allocSize);
    uint32_t currLevel = (uint32_t)(uintptr_t)request.customData;

    Node* currNode = m_FreeList[currLevel].front;
    VMA_ASSERT(currNode != VMA_NULL && currNode->type == Node::TYPE_FREE);
    while(currNode->offset != request.offset)
    {
        currNode = currNode->free.next;
        VMA_ASSERT(currNode != VMA_NULL && currNode->type == Node::TYPE_FREE);
    }

    // Go down, splitting free nodes.
    while(currLevel < targetLevel)
    {
        // currNode is already the first free node at currLevel.
        // Remove it from the list of free nodes at this level.
        RemoveFromFreeList(currLevel, currNode);

        const uint32_t childrenLevel = currLevel + 1;

        // Create two free sub-nodes.
        Node* leftChild = vma_new(GetAllocationCallbacks(), Node)();
        Node* rightChild = vma_new(GetAllocationCallbacks(), Node)();

        leftChild->offset = currNode->offset;
        leftChild->type = Node::TYPE_FREE;
        leftChild->parent = currNode;
        leftChild->buddy = rightChild;

        rightChild->offset = currNode->offset + LevelToNodeSize(childrenLevel);
        rightChild->type = Node::TYPE_FREE;
        rightChild->parent = currNode;
        rightChild->buddy = leftChild;

        // Convert current node to split type.
        currNode->type = Node::TYPE_SPLIT;
        currNode->split.leftChild = leftChild;

        // Add child nodes to free list. Order is important!
        AddToFreeListFront(childrenLevel, rightChild);
        AddToFreeListFront(childrenLevel, leftChild);

        ++m_FreeCount;
        ++currLevel;
        currNode = m_FreeList[currLevel].front;

        /*
        We can be sure that currNode, as left child of node previously split,
        also fulfills the alignment requirement.
        */
    }

    // Remove from free list.
    VMA_ASSERT(currLevel == targetLevel &&
        currNode != VMA_NULL &&
        currNode->type == Node::TYPE_FREE);
    RemoveFromFreeList(currLevel, currNode);

    // Convert to allocation node.
    currNode->type = Node::TYPE_ALLOCATION;
    currNode->allocation.alloc = hAllocation;

    ++m_AllocationCount;
    --m_FreeCount;
    m_SumFreeSize -= allocSize;
}

void VmaBlockMetadata_Buddy::DeleteNode(Node* node)
{
    if(node->type == Node::TYPE_SPLIT)
    {
        DeleteNode(node->split.leftChild->buddy);
        DeleteNode(node->split.leftChild);
    }

    vma_delete(GetAllocationCallbacks(), node);
}

bool VmaBlockMetadata_Buddy::ValidateNode(ValidationContext& ctx, const Node* parent, const Node* curr, uint32_t level, VkDeviceSize levelNodeSize) const
{
    VMA_VALIDATE(level < m_LevelCount);
    VMA_VALIDATE(curr->parent == parent);
    VMA_VALIDATE((curr->buddy == VMA_NULL) == (parent == VMA_NULL));
    VMA_VALIDATE(curr->buddy == VMA_NULL || curr->buddy->buddy == curr);
    switch(curr->type)
    {
    case Node::TYPE_FREE:
        // curr->free.prev, next are validated separately.
        ctx.calculatedSumFreeSize += levelNodeSize;
        ++ctx.calculatedFreeCount;
        break;
    case Node::TYPE_ALLOCATION:
        ++ctx.calculatedAllocationCount;
        ctx.calculatedSumFreeSize += levelNodeSize - curr->allocation.alloc->GetSize();
        VMA_VALIDATE(curr->allocation.alloc != VK_NULL_HANDLE);
        break;
    case Node::TYPE_SPLIT:
        {
            const uint32_t childrenLevel = level + 1;
            const VkDeviceSize childrenLevelNodeSize = levelNodeSize / 2;
            const Node* const leftChild = curr->split.leftChild;
            VMA_VALIDATE(leftChild != VMA_NULL);
            VMA_VALIDATE(leftChild->offset == curr->offset);
            if(!ValidateNode(ctx, curr, leftChild, childrenLevel, childrenLevelNodeSize))
            {
                VMA_VALIDATE(false && "ValidateNode for left child failed.");
            }
            const Node* const rightChild = leftChild->buddy;
            VMA_VALIDATE(rightChild->offset == curr->offset + childrenLevelNodeSize);
            if(!ValidateNode(ctx, curr, rightChild, childrenLevel, childrenLevelNodeSize))
            {
                VMA_VALIDATE(false && "ValidateNode for right child failed.");
            }
        }
        break;
    default:
        return false;
    }

    return true;
}

uint32_t VmaBlockMetadata_Buddy::AllocSizeToLevel(VkDeviceSize allocSize) const
{
    // Find the deepest level whose node size still fits allocSize.
    uint32_t level = 0;
    VkDeviceSize currLevelNodeSize = m_UsableSize;
    VkDeviceSize nextLevelNodeSize = currLevelNodeSize >> 1;
    while(allocSize <= nextLevelNodeSize && level + 1 < m_LevelCount)
    {
        ++level;
        currLevelNodeSize = nextLevelNodeSize;
        nextLevelNodeSize = currLevelNodeSize >> 1;
    }
    return level;
}

void VmaBlockMetadata_Buddy::FreeAtOffset(VmaAllocation alloc, VkDeviceSize offset)
{
    // Find node and level.
    Node* node = m_Root;
    VkDeviceSize nodeOffset = 0;
    uint32_t level = 0;
    VkDeviceSize levelNodeSize = LevelToNodeSize(0);
    while(node->type == Node::TYPE_SPLIT)
    {
        const VkDeviceSize nextLevelSize = levelNodeSize >> 1;
        if(offset < nodeOffset + nextLevelSize)
        {
            node = node->split.leftChild;
        }
        else
        {
            node = node->split.leftChild->buddy;
            nodeOffset += nextLevelSize;
        }
        ++level;
        levelNodeSize = nextLevelSize;
    }

    VMA_ASSERT(node != VMA_NULL && node->type == Node::TYPE_ALLOCATION);
    VMA_ASSERT(alloc == VK_NULL_HANDLE || node->allocation.alloc == alloc);

    ++m_FreeCount;
    --m_AllocationCount;
    m_SumFreeSize += alloc->GetSize();

    node->type = Node::TYPE_FREE;

    // Join free nodes until original node becomes root or cannot be joined anymore.
    while(level > 0 && node->buddy->type == Node::TYPE_FREE)
    {
        RemoveFromFreeList(level, node->buddy);
        Node* const parent = node->parent;

        vma_delete(GetAllocationCallbacks(), node->buddy);
        vma_delete(GetAllocationCallbacks(), node);
        parent->type = Node::TYPE_FREE;

        node = parent;
        --level;
        --m_FreeCount; // 2 nodes removed, 1 node added.
    }

    AddToFreeListFront(level, node);
}

void VmaBlockMetadata_Buddy::CalcAllocationStatInfoNode(VmaStatInfo& outInfo, const Node* node, VkDeviceSize levelNodeSize) const
{
    switch(node->type)
    {
    case Node::TYPE_FREE:
        ++outInfo.unusedRangeCount;
        outInfo.unusedBytes += levelNodeSize;
        outInfo.unusedRangeSizeMax = VMA_MAX(outInfo.unusedRangeSizeMax, levelNodeSize);
        outInfo.unusedRangeSizeMin = VMA_MIN(outInfo.unusedRangeSizeMin, levelNodeSize);
        break;
    case Node::TYPE_ALLOCATION:
        {
            const VkDeviceSize allocSize = node->allocation.alloc->GetSize();
            outInfo.usedBytes += allocSize;
            outInfo.allocationSizeMax = VMA_MAX(outInfo.allocationSizeMax, allocSize);
            outInfo.allocationSizeMin = VMA_MIN(outInfo.allocationSizeMin, allocSize);

            const VkDeviceSize unusedRangeSize = levelNodeSize - allocSize;
            if(unusedRangeSize > 0)
            {
                ++outInfo.unusedRangeCount;
                outInfo.unusedBytes += unusedRangeSize;
                outInfo.unusedRangeSizeMax = VMA_MAX(outInfo.unusedRangeSizeMax, unusedRangeSize);
                outInfo.unusedRangeSizeMin = VMA_MIN(outInfo.unusedRangeSizeMin, unusedRangeSize);
            }
        }
        break;
    case Node::TYPE_SPLIT:
        {
            const VkDeviceSize childrenNodeSize = levelNodeSize / 2;
            const Node* const leftChild = node->split.leftChild;
            CalcAllocationStatInfoNode(outInfo, leftChild, childrenNodeSize);
            const Node* const rightChild = leftChild->buddy;
            CalcAllocationStatInfoNode(outInfo, rightChild, childrenNodeSize);
        }
        break;
    default:
        VMA_ASSERT(0);
    }
}

void VmaBlockMetadata_Buddy::AddToFreeListFront(uint32_t level, Node* node)
{
    VMA_ASSERT(node->type == Node::TYPE_FREE);

    // List is empty.
    Node* const frontNode = m_FreeList[level].front;
    if(frontNode == VMA_NULL)
    {
        VMA_ASSERT(m_FreeList[level].back == VMA_NULL);
        node->free.prev = node->free.next = VMA_NULL;
        m_FreeList[level].front = m_FreeList[level].back = node;
    }
    else
    {
        VMA_ASSERT(frontNode->free.prev == VMA_NULL);
        node->free.prev = VMA_NULL;
        node->free.next = frontNode;
        frontNode->free.prev = node;
        m_FreeList[level].front = node;
    }
}

void VmaBlockMetadata_Buddy::RemoveFromFreeList(uint32_t level, Node* node)
{
    VMA_ASSERT(m_FreeList[level].front != VMA_NULL);

    // It is at the front.
    if(node->free.prev == VMA_NULL)
    {
        VMA_ASSERT(m_FreeList[level].front == node);
        m_FreeList[level].front = node->free.next;
    }
    else
    {
        Node* const prevFreeNode = node->free.prev;
        VMA_ASSERT(prevFreeNode->free.next == node);
        prevFreeNode->free.next = node->free.next;
    }

    // It is at the back.
    if(node->free.next == VMA_NULL)
    {
        VMA_ASSERT(m_FreeList[level].back == node);
        m_FreeList[level].back = node->free.prev;
    }
    else
    {
        Node* const nextFreeNode = node->free.next;
        VMA_ASSERT(nextFreeNode->free.prev == node);
        nextFreeNode->free.prev = node->free.prev;
    }
}

#if VMA_STATS_STRING_ENABLED
void VmaBlockMetadata_Buddy::PrintDetailedMapNode(class VmaJsonWriter& json, const Node* node, VkDeviceSize levelNodeSize) const
{
    switch(node->type)
    {
    case Node::TYPE_FREE:
        PrintDetailedMap_UnusedRange(json, node->offset, levelNodeSize);
        break;
    case Node::TYPE_ALLOCATION:
        {
            PrintDetailedMap_Allocation(json, node->offset, node->allocation.alloc);
            const VkDeviceSize allocSize = node->allocation.alloc->GetSize();
            if(allocSize < levelNodeSize)
            {
                PrintDetailedMap_UnusedRange(json, node->offset + allocSize, levelNodeSize - allocSize);
            }
        }
        break;
    case Node::TYPE_SPLIT:
        {
            const VkDeviceSize childrenNodeSize = levelNodeSize / 2;
            const Node* const leftChild = node->split.leftChild;
            PrintDetailedMapNode(json, leftChild, childrenNodeSize);
            const Node* const rightChild = leftChild->buddy;
            PrintDetailedMapNode(json, rightChild, childrenNodeSize);
        }
        break;
    default:
        VMA_ASSERT(0);
    }
}
#endif // #if VMA_STATS_STRING_ENABLED
-
-
-
-12674 VmaDeviceMemoryBlock::VmaDeviceMemoryBlock(
VmaAllocator hAllocator) :
-12675 m_pMetadata(VMA_NULL),
-12676 m_MemoryTypeIndex(UINT32_MAX),
-
-12678 m_hMemory(VK_NULL_HANDLE),
-
-12680 m_pMappedData(VMA_NULL)
-
-
-
-12684 void VmaDeviceMemoryBlock::Init(
-
-
-12687 uint32_t newMemoryTypeIndex,
-12688 VkDeviceMemory newMemory,
-12689 VkDeviceSize newSize,
-
-12691 uint32_t algorithm)
-
-12693 VMA_ASSERT(m_hMemory == VK_NULL_HANDLE);
-
-12695 m_hParentPool = hParentPool;
-12696 m_MemoryTypeIndex = newMemoryTypeIndex;
-
-12698 m_hMemory = newMemory;
-
-
-
-
-12703 m_pMetadata = vma_new(hAllocator, VmaBlockMetadata_Linear)(hAllocator);
-
-
-12706 m_pMetadata = vma_new(hAllocator, VmaBlockMetadata_Buddy)(hAllocator);
-
-
-
-
-
-12712 m_pMetadata = vma_new(hAllocator, VmaBlockMetadata_Generic)(hAllocator);
-
-12714 m_pMetadata->Init(newSize);
-
-
-12717 void VmaDeviceMemoryBlock::Destroy(
VmaAllocator allocator)
-
-
-
-12721 VMA_ASSERT(m_pMetadata->IsEmpty() &&
"Some allocations were not freed before destruction of this memory block!");
-
-12723 VMA_ASSERT(m_hMemory != VK_NULL_HANDLE);
-12724 allocator->FreeVulkanMemory(m_MemoryTypeIndex, m_pMetadata->GetSize(), m_hMemory);
-12725 m_hMemory = VK_NULL_HANDLE;
-
-12727 vma_delete(allocator, m_pMetadata);
-12728 m_pMetadata = VMA_NULL;
-
-
-12731 bool VmaDeviceMemoryBlock::Validate()
const
-
-12733 VMA_VALIDATE((m_hMemory != VK_NULL_HANDLE) &&
-12734 (m_pMetadata->GetSize() != 0));
-
-12736 return m_pMetadata->Validate();
-
-
-12739 VkResult VmaDeviceMemoryBlock::CheckCorruption(
VmaAllocator hAllocator)
-
-12741 void* pData =
nullptr;
-12742 VkResult res = Map(hAllocator, 1, &pData);
-12743 if(res != VK_SUCCESS)
-
-
-
-
-12748 res = m_pMetadata->CheckCorruption(pData);
-
-12750 Unmap(hAllocator, 1);
-
-
-
-
-12755 VkResult VmaDeviceMemoryBlock::Map(
VmaAllocator hAllocator, uint32_t count,
void** ppData)
-
-
-
-
-
-
-12762 VmaMutexLock lock(m_Mutex, hAllocator->m_UseMutex);
-12763 if(m_MapCount != 0)
-
-12765 m_MapCount += count;
-12766 VMA_ASSERT(m_pMappedData != VMA_NULL);
-12767 if(ppData != VMA_NULL)
-
-12769 *ppData = m_pMappedData;
-
-
-
-
-
-12775 VkResult result = (*hAllocator->GetVulkanFunctions().vkMapMemory)(
-12776 hAllocator->m_hDevice,
-
-
-
-
-
-12782 if(result == VK_SUCCESS)
-
-12784 if(ppData != VMA_NULL)
-
-12786 *ppData = m_pMappedData;
-
-12788 m_MapCount = count;
-
-
-
-
-
-12794 void VmaDeviceMemoryBlock::Unmap(
VmaAllocator hAllocator, uint32_t count)
-
-
-
-
-
-
-12801 VmaMutexLock lock(m_Mutex, hAllocator->m_UseMutex);
-12802 if(m_MapCount >= count)
-
-12804 m_MapCount -= count;
-12805 if(m_MapCount == 0)
-
-12807 m_pMappedData = VMA_NULL;
-12808 (*hAllocator->GetVulkanFunctions().vkUnmapMemory)(hAllocator->m_hDevice, m_hMemory);
-
-
-
-
-12813 VMA_ASSERT(0 &&
"VkDeviceMemory block is being unmapped while it was not previously mapped.");
-
-
-
-12817 VkResult VmaDeviceMemoryBlock::WriteMagicValueAroundAllocation(
VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize)
-
-12819 VMA_ASSERT(VMA_DEBUG_MARGIN > 0 && VMA_DEBUG_MARGIN % 4 == 0 && VMA_DEBUG_DETECT_CORRUPTION);
-12820 VMA_ASSERT(allocOffset >= VMA_DEBUG_MARGIN);
-
-
-12823 VkResult res = Map(hAllocator, 1, &pData);
-12824 if(res != VK_SUCCESS)
-
-
-
-
-12829 VmaWriteMagicValue(pData, allocOffset - VMA_DEBUG_MARGIN);
-12830 VmaWriteMagicValue(pData, allocOffset + allocSize);
-
-12832 Unmap(hAllocator, 1);
-
-
-
-
-12837 VkResult VmaDeviceMemoryBlock::ValidateMagicValueAroundAllocation(
VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize)
-
-12839 VMA_ASSERT(VMA_DEBUG_MARGIN > 0 && VMA_DEBUG_MARGIN % 4 == 0 && VMA_DEBUG_DETECT_CORRUPTION);
-12840 VMA_ASSERT(allocOffset >= VMA_DEBUG_MARGIN);
-
-
-12843 VkResult res = Map(hAllocator, 1, &pData);
-12844 if(res != VK_SUCCESS)
-
-
-
-
-12849 if(!VmaValidateMagicValue(pData, allocOffset - VMA_DEBUG_MARGIN))
-
-12851 VMA_ASSERT(0 &&
"MEMORY CORRUPTION DETECTED BEFORE FREED ALLOCATION!");
-
-12853 else if(!VmaValidateMagicValue(pData, allocOffset + allocSize))
-
-12855 VMA_ASSERT(0 &&
"MEMORY CORRUPTION DETECTED AFTER FREED ALLOCATION!");
-
-
-12858 Unmap(hAllocator, 1);
-
-
-
-
-12863 VkResult VmaDeviceMemoryBlock::BindBufferMemory(
-
-
-12866 VkDeviceSize allocationLocalOffset,
-
-
-
-12870 VMA_ASSERT(hAllocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_BLOCK &&
-12871 hAllocation->GetBlock() ==
this);
-12872 VMA_ASSERT(allocationLocalOffset < hAllocation->GetSize() &&
-12873 "Invalid allocationLocalOffset. Did you forget that this offset is relative to the beginning of the allocation, not the whole memory block?");
-12874 const VkDeviceSize memoryOffset = hAllocation->GetOffset() + allocationLocalOffset;
-
-12876 VmaMutexLock lock(m_Mutex, hAllocator->m_UseMutex);
-12877 return hAllocator->BindVulkanBuffer(m_hMemory, memoryOffset, hBuffer, pNext);
-
-
-12880 VkResult VmaDeviceMemoryBlock::BindImageMemory(
-
-
-12883 VkDeviceSize allocationLocalOffset,
-
-
-
-12887 VMA_ASSERT(hAllocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_BLOCK &&
-12888 hAllocation->GetBlock() ==
this);
-12889 VMA_ASSERT(allocationLocalOffset < hAllocation->GetSize() &&
-12890 "Invalid allocationLocalOffset. Did you forget that this offset is relative to the beginning of the allocation, not the whole memory block?");
-12891 const VkDeviceSize memoryOffset = hAllocation->GetOffset() + allocationLocalOffset;
-
-12893 VmaMutexLock lock(m_Mutex, hAllocator->m_UseMutex);
-12894 return hAllocator->BindVulkanImage(m_hMemory, memoryOffset, hImage, pNext);
-
-
-
-
-12899 memset(&outInfo, 0,
sizeof(outInfo));
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-12918 static void VmaPostprocessCalcStatInfo(
VmaStatInfo& inoutInfo)
-
-
-
-
-
-
-
-12926 VmaPool_T::VmaPool_T(
-
-
-12929 VkDeviceSize preferredBlockSize) :
-
-
-
-12933 createInfo.memoryTypeIndex,
-12934 createInfo.blockSize != 0 ? createInfo.blockSize : preferredBlockSize,
-12935 createInfo.minBlockCount,
-12936 createInfo.maxBlockCount,
-
-12938 createInfo.frameInUseCount,
-12939 createInfo.blockSize != 0,
-
-12941 createInfo.priority,
-12942 VMA_MAX(hAllocator->GetMemoryTypeMinAlignment(createInfo.memoryTypeIndex), createInfo.minAllocationAlignment),
-12943 createInfo.pMemoryAllocateNext),
-
-
-
-
-
-12949 VmaPool_T::~VmaPool_T()
-
-12951 VMA_ASSERT(m_PrevPool == VMA_NULL && m_NextPool == VMA_NULL);
-
-
-12954 void VmaPool_T::SetName(
const char* pName)
-
-12956 const VkAllocationCallbacks* allocs = m_BlockVector.GetAllocator()->GetAllocationCallbacks();
-12957 VmaFreeString(allocs, m_Name);
-
-12959 if(pName != VMA_NULL)
-
-12961 m_Name = VmaCreateStringCopy(allocs, pName);
-
-
-
-
-
-
-
-12969 #if VMA_STATS_STRING_ENABLED
-
-
-
-12973 VmaBlockVector::VmaBlockVector(
-
-
-12976 uint32_t memoryTypeIndex,
-12977 VkDeviceSize preferredBlockSize,
-12978 size_t minBlockCount,
-12979 size_t maxBlockCount,
-12980 VkDeviceSize bufferImageGranularity,
-12981 uint32_t frameInUseCount,
-12982 bool explicitBlockSize,
-12983 uint32_t algorithm,
-
-12985 VkDeviceSize minAllocationAlignment,
-12986 void* pMemoryAllocateNext) :
-12987 m_hAllocator(hAllocator),
-12988 m_hParentPool(hParentPool),
-12989 m_MemoryTypeIndex(memoryTypeIndex),
-12990 m_PreferredBlockSize(preferredBlockSize),
-12991 m_MinBlockCount(minBlockCount),
-12992 m_MaxBlockCount(maxBlockCount),
-12993 m_BufferImageGranularity(bufferImageGranularity),
-12994 m_FrameInUseCount(frameInUseCount),
-12995 m_ExplicitBlockSize(explicitBlockSize),
-12996 m_Algorithm(algorithm),
-12997 m_Priority(priority),
-12998 m_MinAllocationAlignment(minAllocationAlignment),
-12999 m_pMemoryAllocateNext(pMemoryAllocateNext),
-13000 m_HasEmptyBlock(false),
-13001 m_Blocks(VmaStlAllocator<VmaDeviceMemoryBlock*>(hAllocator->GetAllocationCallbacks())),
-
-
-
-
-13006 VmaBlockVector::~VmaBlockVector()
-
-13008 for(
size_t i = m_Blocks.size(); i--; )
-
-13010 m_Blocks[i]->Destroy(m_hAllocator);
-13011 vma_delete(m_hAllocator, m_Blocks[i]);
-
-
-
-13015 VkResult VmaBlockVector::CreateMinBlocks()
-
-13017 for(
size_t i = 0; i < m_MinBlockCount; ++i)
-
-13019 VkResult res = CreateBlock(m_PreferredBlockSize, VMA_NULL);
-13020 if(res != VK_SUCCESS)
-
-
-
-
-
-
-
-13028 void VmaBlockVector::GetPoolStats(
VmaPoolStats* pStats)
-
-13030 VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);
-
-13032 const size_t blockCount = m_Blocks.size();
-
-
-
-
-
-
-
-
-13041 for(uint32_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
-
-13043 const VmaDeviceMemoryBlock*
const pBlock = m_Blocks[blockIndex];
-13044 VMA_ASSERT(pBlock);
-13045 VMA_HEAVY_ASSERT(pBlock->Validate());
-13046 pBlock->m_pMetadata->AddPoolStats(*pStats);
-
-
-
-13050 bool VmaBlockVector::IsEmpty()
-
-13052 VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);
-13053 return m_Blocks.empty();
-
-
-13056 bool VmaBlockVector::IsCorruptionDetectionEnabled()
const
-
-13058 const uint32_t requiredMemFlags = VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
-13059 return (VMA_DEBUG_DETECT_CORRUPTION != 0) &&
-13060 (VMA_DEBUG_MARGIN > 0) &&
-
-13062 (m_hAllocator->m_MemProps.memoryTypes[m_MemoryTypeIndex].propertyFlags & requiredMemFlags) == requiredMemFlags;
-
-
static const uint32_t VMA_ALLOCATION_TRY_COUNT = 32;

VkResult VmaBlockVector::Allocate(
    uint32_t currentFrameIndex,
    VkDeviceSize size,
    VkDeviceSize alignment,
    const VmaAllocationCreateInfo& createInfo,
    VmaSuballocationType suballocType,
    size_t allocationCount,
    VmaAllocation* pAllocations)
{
    size_t allocIndex;
    VkResult res = VK_SUCCESS;

    alignment = VMA_MAX(alignment, m_MinAllocationAlignment);

    if(IsCorruptionDetectionEnabled())
    {
        size = VmaAlignUp<VkDeviceSize>(size, sizeof(VMA_CORRUPTION_DETECTION_MAGIC_VALUE));
        alignment = VmaAlignUp<VkDeviceSize>(alignment, sizeof(VMA_CORRUPTION_DETECTION_MAGIC_VALUE));
    }

    {
        VmaMutexLockWrite lock(m_Mutex, m_hAllocator->m_UseMutex);
        for(allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
        {
            res = AllocatePage(
                currentFrameIndex,
                size,
                alignment,
                createInfo,
                suballocType,
                pAllocations + allocIndex);
            if(res != VK_SUCCESS)
            {
                break;
            }
        }
    }

    if(res != VK_SUCCESS)
    {
        // Free all already created allocations.
        const uint32_t heapIndex = m_hAllocator->MemoryTypeIndexToHeapIndex(m_MemoryTypeIndex);
        while(allocIndex--)
        {
            VmaAllocation_T* const alloc = pAllocations[allocIndex];
            const VkDeviceSize allocSize = alloc->GetSize();
            Free(alloc);
            m_hAllocator->m_Budget.RemoveAllocation(heapIndex, allocSize);
        }
        memset(pAllocations, 0, sizeof(VmaAllocation) * allocationCount);
    }

    return res;
}
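The multi-page `Allocate` above is all-or-nothing: if any page fails, every page already created in the same call is freed in reverse order and the output array is zeroed. A minimal standalone sketch of that rollback pattern, using a hypothetical `FakePool` (not part of the VMA API) in place of the real block vector:

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// Hypothetical resource pool that can run out of capacity, standing in for
// the real VmaBlockVector to illustrate the rollback in Allocate().
struct FakePool
{
    int capacity = 0;
    int used = 0;
    // Returns a nonzero handle on success, 0 on failure.
    int AllocatePage() { return used < capacity ? (++used, used) : 0; }
    void Free(int /*handle*/) { --used; }
};

// Allocates `count` pages; on partial failure frees the pages created so far
// (last to first) and zeroes the output array, mirroring the VMA code path.
static bool AllocateAll(FakePool& pool, int* handles, size_t count)
{
    size_t i = 0;
    bool ok = true;
    for(; i < count; ++i)
    {
        handles[i] = pool.AllocatePage();
        if(handles[i] == 0) { ok = false; break; }
    }
    if(!ok)
    {
        while(i--) // free already created allocations, in reverse order
            pool.Free(handles[i]);
        std::memset(handles, 0, sizeof(int) * count);
    }
    return ok;
}
```

Freeing in reverse order and only up to the failing index is the same `while(allocIndex--)` idiom used above.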
VkResult VmaBlockVector::AllocatePage(
    uint32_t currentFrameIndex,
    VkDeviceSize size,
    VkDeviceSize alignment,
    const VmaAllocationCreateInfo& createInfo,
    VmaSuballocationType suballocType,
    VmaAllocation* pAllocation)
{
    const bool isUpperAddress = (createInfo.flags & VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT) != 0;
    bool canMakeOtherLost = (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT) != 0;
    const bool mapped = (createInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0;
    const bool isUserDataString = (createInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0;

    VkDeviceSize freeMemory;
    {
        const uint32_t heapIndex = m_hAllocator->MemoryTypeIndexToHeapIndex(m_MemoryTypeIndex);
        VmaBudget heapBudget = {};
        m_hAllocator->GetBudget(&heapBudget, heapIndex, 1);
        freeMemory = (heapBudget.usage < heapBudget.budget) ? (heapBudget.budget - heapBudget.usage) : 0;
    }

    const bool canFallbackToDedicated = !IsCustomPool();
    const bool canCreateNewBlock =
        ((createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) == 0) &&
        (m_Blocks.size() < m_MaxBlockCount) &&
        (freeMemory >= size || !canFallbackToDedicated);
    uint32_t strategy = createInfo.flags & VMA_ALLOCATION_CREATE_STRATEGY_MASK;

    // If linearAlgorithm is used, canMakeOtherLost is available only when used as ring buffer.
    // Which in turn is available only when maxBlockCount = 1.
    if(m_Algorithm == VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT && m_MaxBlockCount > 1)
    {
        canMakeOtherLost = false;
    }

    // Upper address can only be used with linear allocator and within single memory block.
    if(isUpperAddress &&
        (m_Algorithm != VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT || m_MaxBlockCount > 1))
    {
        return VK_ERROR_FEATURE_NOT_PRESENT;
    }

    // Validate strategy.
    switch(strategy)
    {
    case 0:
        strategy = VMA_ALLOCATION_CREATE_STRATEGY_BEST_FIT_BIT;
        break;
    case VMA_ALLOCATION_CREATE_STRATEGY_BEST_FIT_BIT:
    case VMA_ALLOCATION_CREATE_STRATEGY_WORST_FIT_BIT:
    case VMA_ALLOCATION_CREATE_STRATEGY_FIRST_FIT_BIT:
        break;
    default:
        return VK_ERROR_FEATURE_NOT_PRESENT;
    }

    // Early reject: requested allocation size is larger that maximum block size for this block vector.
    if(size + 2 * VMA_DEBUG_MARGIN > m_PreferredBlockSize)
    {
        return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    }

    /*
    Under certain condition, this whole section can be skipped for optimization, so
    we move on directly to trying to allocate with canMakeOtherLost. That's the case
    e.g. for custom pools with linear algorithm.
    */
    if(!canMakeOtherLost || canCreateNewBlock)
    {
        // 1. Search existing allocations. Try to allocate without making other allocations lost.
        VmaAllocationCreateFlags allocFlagsCopy = createInfo.flags;
        allocFlagsCopy &= ~VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT;

        if(m_Algorithm == VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT)
        {
            // Use only last block.
            if(!m_Blocks.empty())
            {
                VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks.back();
                VMA_ASSERT(pCurrBlock);
                VkResult res = AllocateFromBlock(
                    pCurrBlock,
                    currentFrameIndex,
                    size,
                    alignment,
                    allocFlagsCopy,
                    createInfo.pUserData,
                    suballocType,
                    strategy,
                    pAllocation);
                if(res == VK_SUCCESS)
                {
                    VMA_DEBUG_LOG("    Returned from last block #%u", pCurrBlock->GetId());
                    return VK_SUCCESS;
                }
            }
        }
        else
        {
            if(strategy == VMA_ALLOCATION_CREATE_STRATEGY_BEST_FIT_BIT)
            {
                // Forward order in m_Blocks - prefer blocks with smallest amount of free space.
                for(size_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex )
                {
                    VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
                    VMA_ASSERT(pCurrBlock);
                    VkResult res = AllocateFromBlock(
                        pCurrBlock,
                        currentFrameIndex,
                        size,
                        alignment,
                        allocFlagsCopy,
                        createInfo.pUserData,
                        suballocType,
                        strategy,
                        pAllocation);
                    if(res == VK_SUCCESS)
                    {
                        VMA_DEBUG_LOG("    Returned from existing block #%u", pCurrBlock->GetId());
                        return VK_SUCCESS;
                    }
                }
            }
            else // WORST_FIT, FIRST_FIT
            {
                // Backward order in m_Blocks - prefer blocks with largest amount of free space.
                for(size_t blockIndex = m_Blocks.size(); blockIndex--; )
                {
                    VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
                    VMA_ASSERT(pCurrBlock);
                    VkResult res = AllocateFromBlock(
                        pCurrBlock,
                        currentFrameIndex,
                        size,
                        alignment,
                        allocFlagsCopy,
                        createInfo.pUserData,
                        suballocType,
                        strategy,
                        pAllocation);
                    if(res == VK_SUCCESS)
                    {
                        VMA_DEBUG_LOG("    Returned from existing block #%u", pCurrBlock->GetId());
                        return VK_SUCCESS;
                    }
                }
            }
        }

        // 2. Try to create new block.
        if(canCreateNewBlock)
        {
            // Calculate optimal size for new block.
            VkDeviceSize newBlockSize = m_PreferredBlockSize;
            uint32_t newBlockSizeShift = 0;
            const uint32_t NEW_BLOCK_SIZE_SHIFT_MAX = 3;

            if(!m_ExplicitBlockSize)
            {
                // Allocate 1/8, 1/4, 1/2 as first blocks.
                const VkDeviceSize maxExistingBlockSize = CalcMaxBlockSize();
                for(uint32_t i = 0; i < NEW_BLOCK_SIZE_SHIFT_MAX; ++i)
                {
                    const VkDeviceSize smallerNewBlockSize = newBlockSize / 2;
                    if(smallerNewBlockSize > maxExistingBlockSize && smallerNewBlockSize >= size * 2)
                    {
                        newBlockSize = smallerNewBlockSize;
                        ++newBlockSizeShift;
                    }
                    else
                    {
                        break;
                    }
                }
            }

            size_t newBlockIndex = 0;
            VkResult res = (newBlockSize <= freeMemory || !canFallbackToDedicated) ?
                CreateBlock(newBlockSize, &newBlockIndex) : VK_ERROR_OUT_OF_DEVICE_MEMORY;
            // Allocation of this size failed? Try 1/2, 1/4, 1/8 of m_PreferredBlockSize.
            if(!m_ExplicitBlockSize)
            {
                while(res < 0 && newBlockSizeShift < NEW_BLOCK_SIZE_SHIFT_MAX)
                {
                    const VkDeviceSize smallerNewBlockSize = newBlockSize / 2;
                    if(smallerNewBlockSize >= size)
                    {
                        newBlockSize = smallerNewBlockSize;
                        ++newBlockSizeShift;
                        res = (newBlockSize <= freeMemory || !canFallbackToDedicated) ?
                            CreateBlock(newBlockSize, &newBlockIndex) : VK_ERROR_OUT_OF_DEVICE_MEMORY;
                    }
                    else
                    {
                        break;
                    }
                }
            }

            if(res == VK_SUCCESS)
            {
                VmaDeviceMemoryBlock* const pBlock = m_Blocks[newBlockIndex];
                VMA_ASSERT(pBlock->m_pMetadata->GetSize() >= size);

                res = AllocateFromBlock(
                    pBlock,
                    currentFrameIndex,
                    size,
                    alignment,
                    allocFlagsCopy,
                    createInfo.pUserData,
                    suballocType,
                    strategy,
                    pAllocation);
                if(res == VK_SUCCESS)
                {
                    VMA_DEBUG_LOG("    Created new block #%u Size=%llu", pBlock->GetId(), newBlockSize);
                    return VK_SUCCESS;
                }
                else
                {
                    // Allocation from new block failed, possibly due to VMA_DEBUG_MARGIN or alignment.
                    return VK_ERROR_OUT_OF_DEVICE_MEMORY;
                }
            }
        }
    }

    // 3. Try to allocate from existing blocks with making other allocations lost.
    if(canMakeOtherLost)
    {
        uint32_t tryIndex = 0;
        for(; tryIndex < VMA_ALLOCATION_TRY_COUNT; ++tryIndex)
        {
            VmaDeviceMemoryBlock* pBestRequestBlock = VMA_NULL;
            VmaAllocationRequest bestRequest = {};
            VkDeviceSize bestRequestCost = VK_WHOLE_SIZE;

            // 1. Search existing allocations.
            if(strategy == VMA_ALLOCATION_CREATE_STRATEGY_BEST_FIT_BIT)
            {
                // Forward order in m_Blocks - prefer blocks with smallest amount of free space.
                for(size_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex )
                {
                    VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
                    VMA_ASSERT(pCurrBlock);
                    VmaAllocationRequest currRequest = {};
                    if(pCurrBlock->m_pMetadata->CreateAllocationRequest(
                        currentFrameIndex,
                        m_FrameInUseCount,
                        m_BufferImageGranularity,
                        size,
                        alignment,
                        isUpperAddress,
                        suballocType,
                        canMakeOtherLost,
                        strategy,
                        &currRequest))
                    {
                        const VkDeviceSize currRequestCost = currRequest.CalcCost();
                        if(pBestRequestBlock == VMA_NULL ||
                            currRequestCost < bestRequestCost)
                        {
                            pBestRequestBlock = pCurrBlock;
                            bestRequest = currRequest;
                            bestRequestCost = currRequestCost;

                            if(bestRequestCost == 0)
                            {
                                break;
                            }
                        }
                    }
                }
            }
            else // WORST_FIT, FIRST_FIT
            {
                // Backward order in m_Blocks - prefer blocks with largest amount of free space.
                for(size_t blockIndex = m_Blocks.size(); blockIndex--; )
                {
                    VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
                    VMA_ASSERT(pCurrBlock);
                    VmaAllocationRequest currRequest = {};
                    if(pCurrBlock->m_pMetadata->CreateAllocationRequest(
                        currentFrameIndex,
                        m_FrameInUseCount,
                        m_BufferImageGranularity,
                        size,
                        alignment,
                        isUpperAddress,
                        suballocType,
                        canMakeOtherLost,
                        strategy,
                        &currRequest))
                    {
                        const VkDeviceSize currRequestCost = currRequest.CalcCost();
                        if(pBestRequestBlock == VMA_NULL ||
                            currRequestCost < bestRequestCost ||
                            strategy == VMA_ALLOCATION_CREATE_STRATEGY_FIRST_FIT_BIT)
                        {
                            pBestRequestBlock = pCurrBlock;
                            bestRequest = currRequest;
                            bestRequestCost = currRequestCost;

                            if(bestRequestCost == 0 ||
                                strategy == VMA_ALLOCATION_CREATE_STRATEGY_FIRST_FIT_BIT)
                            {
                                break;
                            }
                        }
                    }
                }
            }

            if(pBestRequestBlock != VMA_NULL)
            {
                if(mapped)
                {
                    VkResult res = pBestRequestBlock->Map(m_hAllocator, 1, VMA_NULL);
                    if(res != VK_SUCCESS)
                    {
                        return res;
                    }
                }

                if(pBestRequestBlock->m_pMetadata->MakeRequestedAllocationsLost(
                    currentFrameIndex,
                    m_FrameInUseCount,
                    &bestRequest))
                {
                    // Allocate from this block.
                    *pAllocation = m_hAllocator->m_AllocationObjectAllocator.Allocate(currentFrameIndex, isUserDataString);
                    pBestRequestBlock->m_pMetadata->Alloc(bestRequest, suballocType, size, *pAllocation);
                    UpdateHasEmptyBlock();
                    (*pAllocation)->InitBlockAllocation(
                        pBestRequestBlock,
                        bestRequest.offset,
                        alignment,
                        size,
                        m_MemoryTypeIndex,
                        suballocType,
                        mapped,
                        (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0);
                    VMA_HEAVY_ASSERT(pBestRequestBlock->Validate());
                    VMA_DEBUG_LOG("    Returned from existing block");
                    (*pAllocation)->SetUserData(m_hAllocator, createInfo.pUserData);
                    m_hAllocator->m_Budget.AddAllocation(m_hAllocator->MemoryTypeIndexToHeapIndex(m_MemoryTypeIndex), size);
                    if(VMA_DEBUG_INITIALIZE_ALLOCATIONS)
                    {
                        m_hAllocator->FillAllocation(*pAllocation, VMA_ALLOCATION_FILL_PATTERN_CREATED);
                    }
                    if(IsCorruptionDetectionEnabled())
                    {
                        VkResult res = pBestRequestBlock->WriteMagicValueAroundAllocation(m_hAllocator, bestRequest.offset, size);
                        VMA_ASSERT(res == VK_SUCCESS && "Couldn't map block memory to write magic value.");
                    }
                    return VK_SUCCESS;
                }
                // else: Some allocations must have been touched while we are here. Next try.
            }
            else
            {
                // Could not find place in any of the blocks - break outer loop.
                break;
            }
        } /* for(uint32_t tryIndex = 0; tryIndex < VMA_ALLOCATION_TRY_COUNT; ++tryIndex) */
        // Maximum number of tries exceeded - a very unlikely situation.
        if(tryIndex == VMA_ALLOCATION_TRY_COUNT)
        {
            return VK_ERROR_TOO_MANY_OBJECTS;
        }
    }

    return VK_ERROR_OUT_OF_DEVICE_MEMORY;
}
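When no explicit block size is set, `AllocatePage` starts new blocks small: it halves the preferred block size up to `NEW_BLOCK_SIZE_SHIFT_MAX` (3) times, as long as the halved size still exceeds every existing block and fits the requested allocation twice over, so small pools begin at 1/8 of the preferred size and grow toward it. A standalone sketch of just that heuristic:

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the "start small" block-size heuristic from AllocatePage():
// halve the preferred size up to 3 times while the smaller size still
// exceeds every existing block and holds the request at least twice.
static uint64_t ChooseNewBlockSize(
    uint64_t preferredBlockSize,
    uint64_t maxExistingBlockSize,
    uint64_t allocSize)
{
    const uint32_t NEW_BLOCK_SIZE_SHIFT_MAX = 3;
    uint64_t newBlockSize = preferredBlockSize;
    for(uint32_t i = 0; i < NEW_BLOCK_SIZE_SHIFT_MAX; ++i)
    {
        const uint64_t smallerNewBlockSize = newBlockSize / 2;
        if(smallerNewBlockSize > maxExistingBlockSize && smallerNewBlockSize >= allocSize * 2)
            newBlockSize = smallerNewBlockSize;
        else
            break;
    }
    return newBlockSize;
}
```

The same shift counter is reused later in the function to retry `CreateBlock` with progressively smaller sizes when the device reports out-of-memory.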
void VmaBlockVector::Free(
    const VmaAllocation hAllocation)
{
    VmaDeviceMemoryBlock* pBlockToDelete = VMA_NULL;

    bool budgetExceeded = false;
    {
        const uint32_t heapIndex = m_hAllocator->MemoryTypeIndexToHeapIndex(m_MemoryTypeIndex);
        VmaBudget heapBudget = {};
        m_hAllocator->GetBudget(&heapBudget, heapIndex, 1);
        budgetExceeded = heapBudget.usage >= heapBudget.budget;
    }

    // Scope for lock.
    {
        VmaMutexLockWrite lock(m_Mutex, m_hAllocator->m_UseMutex);

        VmaDeviceMemoryBlock* pBlock = hAllocation->GetBlock();

        if(IsCorruptionDetectionEnabled())
        {
            VkResult res = pBlock->ValidateMagicValueAroundAllocation(m_hAllocator, hAllocation->GetOffset(), hAllocation->GetSize());
            VMA_ASSERT(res == VK_SUCCESS && "Couldn't map block memory to validate magic value.");
        }

        if(hAllocation->IsPersistentMap())
        {
            pBlock->Unmap(m_hAllocator, 1);
        }

        pBlock->m_pMetadata->Free(hAllocation);
        VMA_HEAVY_ASSERT(pBlock->Validate());

        VMA_DEBUG_LOG("  Freed from MemoryTypeIndex=%u", m_MemoryTypeIndex);

        const bool canDeleteBlock = m_Blocks.size() > m_MinBlockCount;
        // pBlock became empty after this deallocation.
        if(pBlock->m_pMetadata->IsEmpty())
        {
            // Already has empty block. We don't want to have two, so delete this one.
            if((m_HasEmptyBlock || budgetExceeded) && canDeleteBlock)
            {
                pBlockToDelete = pBlock;
                Remove(pBlock);
            }
            // else: We now have an empty block - leave it.
        }
        // pBlock didn't become empty, but we have another empty block - find and free that one.
        // (This is optional, heuristics.)
        else if(m_HasEmptyBlock && canDeleteBlock)
        {
            VmaDeviceMemoryBlock* pLastBlock = m_Blocks.back();
            if(pLastBlock->m_pMetadata->IsEmpty())
            {
                pBlockToDelete = pLastBlock;
                m_Blocks.pop_back();
            }
        }

        UpdateHasEmptyBlock();
        IncrementallySortBlocks();
    }

    // Destruction of a memory block. Deferred until this point, outside of mutex
    // lock, for performance reason.
    if(pBlockToDelete != VMA_NULL)
    {
        VMA_DEBUG_LOG("    Deleted empty block");
        pBlockToDelete->Destroy(m_hAllocator);
        vma_delete(m_hAllocator, pBlockToDelete);
    }
}
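`Free` deliberately keeps at most one empty block alive as a cache for future allocations: an empty block is destroyed only when another empty block already exists or the heap budget is exceeded, and never while it would drop the pool below its minimum block count. The decision, pulled out into a standalone sketch (the function name is illustrative, not VMA API):

```cpp
#include <cassert>
#include <cstddef>

// Sketch of the empty-block retention policy in VmaBlockVector::Free().
static bool ShouldDeleteEmptyBlock(
    bool hasOtherEmptyBlock,
    bool budgetExceeded,
    size_t blockCount,
    size_t minBlockCount)
{
    const bool canDeleteBlock = blockCount > minBlockCount;
    return (hasOtherEmptyBlock || budgetExceeded) && canDeleteBlock;
}
```

Keeping one empty block avoids a `vkFreeMemory`/`vkAllocateMemory` round trip when the application frees and reallocates at the same rate, while the budget check makes sure the cache is dropped under memory pressure.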
VkDeviceSize VmaBlockVector::CalcMaxBlockSize() const
{
    VkDeviceSize result = 0;
    for(size_t i = m_Blocks.size(); i--; )
    {
        result = VMA_MAX(result, m_Blocks[i]->m_pMetadata->GetSize());
        if(result >= m_PreferredBlockSize)
        {
            break;
        }
    }
    return result;
}
void VmaBlockVector::Remove(VmaDeviceMemoryBlock* pBlock)
{
    for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    {
        if(m_Blocks[blockIndex] == pBlock)
        {
            VmaVectorRemove(m_Blocks, blockIndex);
            return;
        }
    }
    VMA_ASSERT(0);
}
void VmaBlockVector::IncrementallySortBlocks()
{
    if(m_Algorithm != VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT)
    {
        // Bubble sort only until first swap.
        for(size_t i = 1; i < m_Blocks.size(); ++i)
        {
            if(m_Blocks[i - 1]->m_pMetadata->GetSumFreeSize() > m_Blocks[i]->m_pMetadata->GetSumFreeSize())
            {
                VMA_SWAP(m_Blocks[i - 1], m_Blocks[i]);
                return;
            }
        }
    }
}
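`IncrementallySortBlocks` performs at most one adjacent swap per call. Because each `Free` changes the free size of only one block, calling it after every free keeps `m_Blocks` approximately sorted by free space (smallest free space first, which is what the best-fit search wants) without ever paying for a full sort. The same idea with plain integers:

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Sketch of IncrementallySortBlocks(): one bubble-sort pass that stops at
// the first out-of-order pair, amortizing sorting cost across many calls.
static void IncrementallySort(std::vector<int>& v)
{
    for(size_t i = 1; i < v.size(); ++i)
    {
        if(v[i - 1] > v[i])
        {
            std::swap(v[i - 1], v[i]);
            return; // at most one swap per call
        }
    }
}
```

If the vector drifts several positions out of order, each subsequent call fixes one more inversion, so it converges back to sorted order over a few frees.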
VkResult VmaBlockVector::AllocateFromBlock(
    VmaDeviceMemoryBlock* pBlock,
    uint32_t currentFrameIndex,
    VkDeviceSize size,
    VkDeviceSize alignment,
    VmaAllocationCreateFlags allocFlags,
    void* pUserData,
    VmaSuballocationType suballocType,
    uint32_t strategy,
    VmaAllocation* pAllocation)
{
    VMA_ASSERT((allocFlags & VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT) == 0);
    const bool isUpperAddress = (allocFlags & VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT) != 0;
    const bool mapped = (allocFlags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0;
    const bool isUserDataString = (allocFlags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0;

    VmaAllocationRequest currRequest = {};
    if(pBlock->m_pMetadata->CreateAllocationRequest(
        currentFrameIndex,
        m_FrameInUseCount,
        m_BufferImageGranularity,
        size,
        alignment,
        isUpperAddress,
        suballocType,
        false, // canMakeOtherLost
        strategy,
        &currRequest))
    {
        // Allocate from pCurrBlock.
        VMA_ASSERT(currRequest.itemsToMakeLostCount == 0);

        if(mapped)
        {
            VkResult res = pBlock->Map(m_hAllocator, 1, VMA_NULL);
            if(res != VK_SUCCESS)
            {
                return res;
            }
        }

        *pAllocation = m_hAllocator->m_AllocationObjectAllocator.Allocate(currentFrameIndex, isUserDataString);
        pBlock->m_pMetadata->Alloc(currRequest, suballocType, size, *pAllocation);
        UpdateHasEmptyBlock();
        (*pAllocation)->InitBlockAllocation(
            pBlock,
            currRequest.offset,
            alignment,
            size,
            m_MemoryTypeIndex,
            suballocType,
            mapped,
            (allocFlags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0);
        VMA_HEAVY_ASSERT(pBlock->Validate());
        (*pAllocation)->SetUserData(m_hAllocator, pUserData);
        m_hAllocator->m_Budget.AddAllocation(m_hAllocator->MemoryTypeIndexToHeapIndex(m_MemoryTypeIndex), size);
        if(VMA_DEBUG_INITIALIZE_ALLOCATIONS)
        {
            m_hAllocator->FillAllocation(*pAllocation, VMA_ALLOCATION_FILL_PATTERN_CREATED);
        }
        if(IsCorruptionDetectionEnabled())
        {
            VkResult res = pBlock->WriteMagicValueAroundAllocation(m_hAllocator, currRequest.offset, size);
            VMA_ASSERT(res == VK_SUCCESS && "Couldn't map block memory to write magic value.");
        }
        return VK_SUCCESS;
    }
    return VK_ERROR_OUT_OF_DEVICE_MEMORY;
}
VkResult VmaBlockVector::CreateBlock(VkDeviceSize blockSize, size_t* pNewBlockIndex)
{
    VkMemoryAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
    allocInfo.pNext = m_pMemoryAllocateNext;
    allocInfo.memoryTypeIndex = m_MemoryTypeIndex;
    allocInfo.allocationSize = blockSize;

#if VMA_BUFFER_DEVICE_ADDRESS
    // Every standalone block can potentially contain a buffer with VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT - always enable the feature.
    VkMemoryAllocateFlagsInfoKHR allocFlagsInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_FLAGS_INFO_KHR };
    if(m_hAllocator->m_UseKhrBufferDeviceAddress)
    {
        allocFlagsInfo.flags = VK_MEMORY_ALLOCATE_DEVICE_ADDRESS_BIT_KHR;
        VmaPnextChainPushFront(&allocInfo, &allocFlagsInfo);
    }
#endif // #if VMA_BUFFER_DEVICE_ADDRESS

#if VMA_MEMORY_PRIORITY
    VkMemoryPriorityAllocateInfoEXT priorityInfo = { VK_STRUCTURE_TYPE_MEMORY_PRIORITY_ALLOCATE_INFO_EXT };
    if(m_hAllocator->m_UseExtMemoryPriority)
    {
        priorityInfo.priority = m_Priority;
        VmaPnextChainPushFront(&allocInfo, &priorityInfo);
    }
#endif // #if VMA_MEMORY_PRIORITY

#if VMA_EXTERNAL_MEMORY
    // Attach VkExportMemoryAllocateInfoKHR if necessary.
    VkExportMemoryAllocateInfoKHR exportMemoryAllocInfo = { VK_STRUCTURE_TYPE_EXPORT_MEMORY_ALLOCATE_INFO_KHR };
    exportMemoryAllocInfo.handleTypes = m_hAllocator->GetExternalMemoryHandleTypeFlags(m_MemoryTypeIndex);
    if(exportMemoryAllocInfo.handleTypes != 0)
    {
        VmaPnextChainPushFront(&allocInfo, &exportMemoryAllocInfo);
    }
#endif // #if VMA_EXTERNAL_MEMORY

    VkDeviceMemory mem = VK_NULL_HANDLE;
    VkResult res = m_hAllocator->AllocateVulkanMemory(&allocInfo, &mem);
    if(res < 0)
    {
        return res;
    }

    // New VkDeviceMemory successfully created.

    // Create new Allocation for it.
    VmaDeviceMemoryBlock* const pBlock = vma_new(m_hAllocator, VmaDeviceMemoryBlock)(m_hAllocator);
    pBlock->Init(
        m_hAllocator,
        m_hParentPool,
        m_MemoryTypeIndex,
        mem,
        allocInfo.allocationSize,
        m_NextBlockId++,
        m_Algorithm);

    m_Blocks.push_back(pBlock);
    if(pNewBlockIndex != VMA_NULL)
    {
        *pNewBlockIndex = m_Blocks.size() - 1;
    }

    return VK_SUCCESS;
}
void VmaBlockVector::ApplyDefragmentationMovesCpu(
    class VmaBlockVectorDefragmentationContext* pDefragCtx,
    const VmaVector< VmaDefragmentationMove, VmaStlAllocator<VmaDefragmentationMove> >& moves)
{
    const size_t blockCount = m_Blocks.size();
    const bool isNonCoherent = m_hAllocator->IsMemoryTypeNonCoherent(m_MemoryTypeIndex);

    enum BLOCK_FLAG
    {
        BLOCK_FLAG_USED = 0x00000001,
        BLOCK_FLAG_MAPPED_FOR_DEFRAGMENTATION = 0x00000002,
    };

    struct BlockInfo
    {
        uint32_t flags;
        void* pMappedData;
    };
    VmaVector< BlockInfo, VmaStlAllocator<BlockInfo> >
        blockInfo(blockCount, BlockInfo(), VmaStlAllocator<BlockInfo>(m_hAllocator->GetAllocationCallbacks()));
    memset(blockInfo.data(), 0, blockCount * sizeof(BlockInfo));

    // Go over all moves. Mark blocks that are used with BLOCK_FLAG_USED.
    const size_t moveCount = moves.size();
    for(size_t moveIndex = 0; moveIndex < moveCount; ++moveIndex)
    {
        const VmaDefragmentationMove& move = moves[moveIndex];
        blockInfo[move.srcBlockIndex].flags |= BLOCK_FLAG_USED;
        blockInfo[move.dstBlockIndex].flags |= BLOCK_FLAG_USED;
    }

    VMA_ASSERT(pDefragCtx->res == VK_SUCCESS);

    // Go over all blocks. Get mapped pointer or map if necessary.
    for(size_t blockIndex = 0; pDefragCtx->res == VK_SUCCESS && blockIndex < blockCount; ++blockIndex)
    {
        BlockInfo& currBlockInfo = blockInfo[blockIndex];
        VmaDeviceMemoryBlock* pBlock = m_Blocks[blockIndex];
        if((currBlockInfo.flags & BLOCK_FLAG_USED) != 0)
        {
            currBlockInfo.pMappedData = pBlock->GetMappedData();
            // It is not originally mapped - map it.
            if(currBlockInfo.pMappedData == VMA_NULL)
            {
                pDefragCtx->res = pBlock->Map(m_hAllocator, 1, &currBlockInfo.pMappedData);
                if(pDefragCtx->res == VK_SUCCESS)
                {
                    currBlockInfo.flags |= BLOCK_FLAG_MAPPED_FOR_DEFRAGMENTATION;
                }
            }
        }
    }

    // Go over all moves. Do actual data transfer.
    if(pDefragCtx->res == VK_SUCCESS)
    {
        const VkDeviceSize nonCoherentAtomSize = m_hAllocator->m_PhysicalDeviceProperties.limits.nonCoherentAtomSize;
        VkMappedMemoryRange memRange = { VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE };

        for(size_t moveIndex = 0; moveIndex < moveCount; ++moveIndex)
        {
            const VmaDefragmentationMove& move = moves[moveIndex];

            const BlockInfo& srcBlockInfo = blockInfo[move.srcBlockIndex];
            const BlockInfo& dstBlockInfo = blockInfo[move.dstBlockIndex];

            VMA_ASSERT(srcBlockInfo.pMappedData && dstBlockInfo.pMappedData);

            // Invalidate source.
            if(isNonCoherent)
            {
                VmaDeviceMemoryBlock* const pSrcBlock = m_Blocks[move.srcBlockIndex];
                memRange.memory = pSrcBlock->GetDeviceMemory();
                memRange.offset = VmaAlignDown(move.srcOffset, nonCoherentAtomSize);
                memRange.size = VMA_MIN(
                    VmaAlignUp(move.size + (move.srcOffset - memRange.offset), nonCoherentAtomSize),
                    pSrcBlock->m_pMetadata->GetSize() - memRange.offset);
                (*m_hAllocator->GetVulkanFunctions().vkInvalidateMappedMemoryRanges)(m_hAllocator->m_hDevice, 1, &memRange);
            }

            // THE PLACE WHERE ACTUAL DATA COPY HAPPENS.
            memmove(
                reinterpret_cast<char*>(dstBlockInfo.pMappedData) + move.dstOffset,
                reinterpret_cast<char*>(srcBlockInfo.pMappedData) + move.srcOffset,
                static_cast<size_t>(move.size));

            if(IsCorruptionDetectionEnabled())
            {
                VmaWriteMagicValue(dstBlockInfo.pMappedData, move.dstOffset - VMA_DEBUG_MARGIN);
                VmaWriteMagicValue(dstBlockInfo.pMappedData, move.dstOffset + move.size);
            }

            // Flush destination.
            if(isNonCoherent)
            {
                VmaDeviceMemoryBlock* const pDstBlock = m_Blocks[move.dstBlockIndex];
                memRange.memory = pDstBlock->GetDeviceMemory();
                memRange.offset = VmaAlignDown(move.dstOffset, nonCoherentAtomSize);
                memRange.size = VMA_MIN(
                    VmaAlignUp(move.size + (move.dstOffset - memRange.offset), nonCoherentAtomSize),
                    pDstBlock->m_pMetadata->GetSize() - memRange.offset);
                (*m_hAllocator->GetVulkanFunctions().vkFlushMappedMemoryRanges)(m_hAllocator->m_hDevice, 1, &memRange);
            }
        }
    }

    // Go over all blocks in reverse order. Unmap those that were mapped just for defragmentation.
    // Regardless of pCtx->res == VK_SUCCESS.
    for(size_t blockIndex = blockCount; blockIndex--; )
    {
        const BlockInfo& currBlockInfo = blockInfo[blockIndex];
        if((currBlockInfo.flags & BLOCK_FLAG_MAPPED_FOR_DEFRAGMENTATION) != 0)
        {
            VmaDeviceMemoryBlock* pBlock = m_Blocks[blockIndex];
            pBlock->Unmap(m_hAllocator, 1);
        }
    }
}
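For non-coherent memory types, the invalidate/flush range around the `memmove` above must be expanded to `nonCoherentAtomSize` boundaries (start aligned down, size aligned up) and then clamped so it does not run past the end of the `VkDeviceMemory` block. That computation, extracted into a standalone sketch:

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the VkMappedMemoryRange computation used around the memmove in
// ApplyDefragmentationMovesCpu(): align the start down and the size up to
// nonCoherentAtomSize, then clamp to the end of the memory block.
struct Range { uint64_t offset; uint64_t size; };

static Range MakeNonCoherentRange(
    uint64_t moveOffset, uint64_t moveSize,
    uint64_t nonCoherentAtomSize, uint64_t blockSize)
{
    Range r;
    r.offset = moveOffset / nonCoherentAtomSize * nonCoherentAtomSize;  // VmaAlignDown
    const uint64_t rawSize = moveSize + (moveOffset - r.offset);
    const uint64_t aligned = (rawSize + nonCoherentAtomSize - 1)
        / nonCoherentAtomSize * nonCoherentAtomSize;                    // VmaAlignUp
    const uint64_t maxSize = blockSize - r.offset;
    r.size = aligned < maxSize ? aligned : maxSize;                     // VMA_MIN clamp
    return r;
}
```

The clamp matters near the end of a block: the Vulkan spec requires the range either to be a multiple of `nonCoherentAtomSize` or to reach the end of the allocation, and the `VMA_MIN` against the remaining block size satisfies the second case.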
void VmaBlockVector::ApplyDefragmentationMovesGpu(
    class VmaBlockVectorDefragmentationContext* pDefragCtx,
    VmaVector< VmaDefragmentationMove, VmaStlAllocator<VmaDefragmentationMove> >& moves,
    VkCommandBuffer commandBuffer)
{
    const size_t blockCount = m_Blocks.size();

    pDefragCtx->blockContexts.resize(blockCount);
    memset(pDefragCtx->blockContexts.data(), 0, blockCount * sizeof(VmaBlockDefragmentationContext));

    // Go over all moves. Mark blocks that are used with BLOCK_FLAG_USED.
    const size_t moveCount = moves.size();
    for(size_t moveIndex = 0; moveIndex < moveCount; ++moveIndex)
    {
        const VmaDefragmentationMove& move = moves[moveIndex];

        pDefragCtx->blockContexts[move.srcBlockIndex].flags |= VmaBlockDefragmentationContext::BLOCK_FLAG_USED;
        pDefragCtx->blockContexts[move.dstBlockIndex].flags |= VmaBlockDefragmentationContext::BLOCK_FLAG_USED;
    }

    VMA_ASSERT(pDefragCtx->res == VK_SUCCESS);

    // Go over all blocks. Create and bind buffer for whole block if necessary.
    {
        VkBufferCreateInfo bufCreateInfo;
        VmaFillGpuDefragmentationBufferCreateInfo(bufCreateInfo);

        for(size_t blockIndex = 0; pDefragCtx->res == VK_SUCCESS && blockIndex < blockCount; ++blockIndex)
        {
            VmaBlockDefragmentationContext& currBlockCtx = pDefragCtx->blockContexts[blockIndex];
            VmaDeviceMemoryBlock* pBlock = m_Blocks[blockIndex];
            if((currBlockCtx.flags & VmaBlockDefragmentationContext::BLOCK_FLAG_USED) != 0)
            {
                bufCreateInfo.size = pBlock->m_pMetadata->GetSize();
                pDefragCtx->res = (*m_hAllocator->GetVulkanFunctions().vkCreateBuffer)(
                    m_hAllocator->m_hDevice, &bufCreateInfo, m_hAllocator->GetAllocationCallbacks(), &currBlockCtx.hBuffer);
                if(pDefragCtx->res == VK_SUCCESS)
                {
                    pDefragCtx->res = (*m_hAllocator->GetVulkanFunctions().vkBindBufferMemory)(
                        m_hAllocator->m_hDevice, currBlockCtx.hBuffer, pBlock->GetDeviceMemory(), 0);
                }
            }
        }
    }

    // Go over all moves. Post data transfer commands to command buffer.
    if(pDefragCtx->res == VK_SUCCESS)
    {
        for(size_t moveIndex = 0; moveIndex < moveCount; ++moveIndex)
        {
            const VmaDefragmentationMove& move = moves[moveIndex];

            const VmaBlockDefragmentationContext& srcBlockCtx = pDefragCtx->blockContexts[move.srcBlockIndex];
            const VmaBlockDefragmentationContext& dstBlockCtx = pDefragCtx->blockContexts[move.dstBlockIndex];

            VMA_ASSERT(srcBlockCtx.hBuffer && dstBlockCtx.hBuffer);

            VkBufferCopy region = {
                move.srcOffset,
                move.dstOffset,
                move.size };
            (*m_hAllocator->GetVulkanFunctions().vkCmdCopyBuffer)(
                commandBuffer, srcBlockCtx.hBuffer, dstBlockCtx.hBuffer, 1, &region);
        }
    }

    // Save buffers to defrag context for later destruction.
    if(pDefragCtx->res == VK_SUCCESS && moveCount > 0)
    {
        pDefragCtx->res = VK_NOT_READY;
    }
}

void VmaBlockVector::FreeEmptyBlocks(VmaDefragmentationStats* pDefragmentationStats)
{
    for(size_t blockIndex = m_Blocks.size(); blockIndex--; )
    {
        VmaDeviceMemoryBlock* pBlock = m_Blocks[blockIndex];
        if(pBlock->m_pMetadata->IsEmpty())
        {
            if(m_Blocks.size() > m_MinBlockCount)
            {
                if(pDefragmentationStats != VMA_NULL)
                {
                    ++pDefragmentationStats->deviceMemoryBlocksFreed;
                    pDefragmentationStats->bytesFreed += pBlock->m_pMetadata->GetSize();
                }

                VmaVectorRemove(m_Blocks, blockIndex);
                pBlock->Destroy(m_hAllocator);
                vma_delete(m_hAllocator, pBlock);
            }
            else
            {
                break;
            }
        }
    }
    UpdateHasEmptyBlock();
}
void VmaBlockVector::UpdateHasEmptyBlock()
{
    m_HasEmptyBlock = false;
    for(size_t index = 0, count = m_Blocks.size(); index < count; ++index)
    {
        VmaDeviceMemoryBlock* const pBlock = m_Blocks[index];
        if(pBlock->m_pMetadata->IsEmpty())
        {
            m_HasEmptyBlock = true;
            break;
        }
    }
}
#if VMA_STATS_STRING_ENABLED

void VmaBlockVector::PrintDetailedMap(class VmaJsonWriter& json)
{
    VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);

    json.BeginObject();

    if(IsCustomPool())
    {
        const char* poolName = m_hParentPool->GetName();
        if(poolName != VMA_NULL && poolName[0] != '\0')
        {
            json.WriteString("Name");
            json.WriteString(poolName);
        }

        json.WriteString("MemoryTypeIndex");
        json.WriteNumber(m_MemoryTypeIndex);

        json.WriteString("BlockSize");
        json.WriteNumber(m_PreferredBlockSize);

        json.WriteString("BlockCount");
        json.BeginObject(true);
        if(m_MinBlockCount > 0)
        {
            json.WriteString("Min");
            json.WriteNumber((uint64_t)m_MinBlockCount);
        }
        if(m_MaxBlockCount < SIZE_MAX)
        {
            json.WriteString("Max");
            json.WriteNumber((uint64_t)m_MaxBlockCount);
        }
        json.WriteString("Cur");
        json.WriteNumber((uint64_t)m_Blocks.size());
        json.EndObject();

        if(m_FrameInUseCount > 0)
        {
            json.WriteString("FrameInUseCount");
            json.WriteNumber(m_FrameInUseCount);
        }

        if(m_Algorithm != 0)
        {
            json.WriteString("Algorithm");
            json.WriteString(VmaAlgorithmToStr(m_Algorithm));
        }
    }
    else
    {
        json.WriteString("PreferredBlockSize");
        json.WriteNumber(m_PreferredBlockSize);
    }

    json.WriteString("Blocks");
    json.BeginObject();
    for(size_t i = 0; i < m_Blocks.size(); ++i)
    {
        json.BeginString();
        json.ContinueString(m_Blocks[i]->GetId());
        json.EndString();

        m_Blocks[i]->m_pMetadata->PrintDetailedMap(json);
    }
    json.EndObject();

    json.EndObject();
}

#endif // #if VMA_STATS_STRING_ENABLED
void VmaBlockVector::Defragment(
    class VmaBlockVectorDefragmentationContext* pCtx,
    VmaDefragmentationStats* pStats, VmaDefragmentationFlags flags,
    VkDeviceSize& maxCpuBytesToMove, uint32_t& maxCpuAllocationsToMove,
    VkDeviceSize& maxGpuBytesToMove, uint32_t& maxGpuAllocationsToMove,
    VkCommandBuffer commandBuffer)
{
    pCtx->res = VK_SUCCESS;

    const VkMemoryPropertyFlags memPropFlags =
        m_hAllocator->m_MemProps.memoryTypes[m_MemoryTypeIndex].propertyFlags;
    const bool isHostVisible = (memPropFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0;

    const bool canDefragmentOnCpu = maxCpuBytesToMove > 0 && maxCpuAllocationsToMove > 0 &&
        isHostVisible;
    const bool canDefragmentOnGpu = maxGpuBytesToMove > 0 && maxGpuAllocationsToMove > 0 &&
        !IsCorruptionDetectionEnabled() &&
        ((1u << m_MemoryTypeIndex) & m_hAllocator->GetGpuDefragmentationMemoryTypeBits()) != 0;

    // There are options to defragment this memory type.
    if(canDefragmentOnCpu || canDefragmentOnGpu)
    {
        bool defragmentOnGpu;
        // There is only one option to defragment this memory type.
        if(canDefragmentOnGpu != canDefragmentOnCpu)
        {
            defragmentOnGpu = canDefragmentOnGpu;
        }
        // Both options are available: Heuristics to choose the best one.
        else
        {
            defragmentOnGpu = (memPropFlags & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT) != 0 ||
                m_hAllocator->IsIntegratedGpu();
        }

        bool overlappingMoveSupported = !defragmentOnGpu;

        if(m_hAllocator->m_UseMutex)
        {
            if(flags & VMA_DEFRAGMENTATION_FLAG_INCREMENTAL)
            {
                if(!m_Mutex.TryLockWrite())
                {
                    pCtx->res = VK_ERROR_INITIALIZATION_FAILED;
                    return;
                }
            }
            else
            {
                m_Mutex.LockWrite();
                pCtx->mutexLocked = true;
            }
        }

        pCtx->Begin(overlappingMoveSupported, flags);

        // Defragment.

        const VkDeviceSize maxBytesToMove = defragmentOnGpu ? maxGpuBytesToMove : maxCpuBytesToMove;
        const uint32_t maxAllocationsToMove = defragmentOnGpu ? maxGpuAllocationsToMove : maxCpuAllocationsToMove;
        pCtx->res = pCtx->GetAlgorithm()->Defragment(pCtx->defragmentationMoves, maxBytesToMove, maxAllocationsToMove, flags);

        // Accumulate statistics.
        if(pStats != VMA_NULL)
        {
            const VkDeviceSize bytesMoved = pCtx->GetAlgorithm()->GetBytesMoved();
            const uint32_t allocationsMoved = pCtx->GetAlgorithm()->GetAllocationsMoved();
            pStats->bytesMoved += bytesMoved;
            pStats->allocationsMoved += allocationsMoved;
            VMA_ASSERT(bytesMoved <= maxBytesToMove);
            VMA_ASSERT(allocationsMoved <= maxAllocationsToMove);
            if(defragmentOnGpu)
            {
                maxGpuBytesToMove -= bytesMoved;
                maxGpuAllocationsToMove -= allocationsMoved;
            }
            else
            {
                maxCpuBytesToMove -= bytesMoved;
                maxCpuAllocationsToMove -= allocationsMoved;
            }
        }

        if(flags & VMA_DEFRAGMENTATION_FLAG_INCREMENTAL)
        {
            if(m_hAllocator->m_UseMutex)
                m_Mutex.UnlockWrite();

            if(pCtx->res >= VK_SUCCESS && !pCtx->defragmentationMoves.empty())
                pCtx->res = VK_NOT_READY;

            return;
        }

        if(pCtx->res >= VK_SUCCESS)
        {
            if(defragmentOnGpu)
            {
                ApplyDefragmentationMovesGpu(pCtx, pCtx->defragmentationMoves, commandBuffer);
            }
            else
            {
                ApplyDefragmentationMovesCpu(pCtx, pCtx->defragmentationMoves);
            }
        }
    }
}
void VmaBlockVector::DefragmentationEnd(
    class VmaBlockVectorDefragmentationContext* pCtx,
    uint32_t flags,
    VmaDefragmentationStats* pStats)
{
    if(flags & VMA_DEFRAGMENTATION_FLAG_INCREMENTAL && m_hAllocator->m_UseMutex)
    {
        VMA_ASSERT(pCtx->mutexLocked == false);

        // Incremental defragmentation doesn't hold the lock, so when we enter here
        // we don't actually have any lock protecting us. Since we mutate state here,
        // we have to take the lock out now.
        m_Mutex.LockWrite();
        pCtx->mutexLocked = true;
    }

    // If the mutex isn't locked we didn't do any work and there is nothing to delete.
    if(pCtx->mutexLocked || !m_hAllocator->m_UseMutex)
    {
        // Destroy buffers.
        for(size_t blockIndex = pCtx->blockContexts.size(); blockIndex--;)
        {
            VmaBlockDefragmentationContext &blockCtx = pCtx->blockContexts[blockIndex];
            if(blockCtx.hBuffer)
            {
                (*m_hAllocator->GetVulkanFunctions().vkDestroyBuffer)(m_hAllocator->m_hDevice, blockCtx.hBuffer, m_hAllocator->GetAllocationCallbacks());
            }
        }

        if(pCtx->res >= VK_SUCCESS)
        {
            FreeEmptyBlocks(pStats);
        }
    }

    if(pCtx->mutexLocked)
    {
        VMA_ASSERT(m_hAllocator->m_UseMutex);
        m_Mutex.UnlockWrite();
    }
}
uint32_t VmaBlockVector::ProcessDefragmentations(
    class VmaBlockVectorDefragmentationContext *pCtx,
    VmaDefragmentationPassMoveInfo* pMove, uint32_t maxMoves)
{
    VmaMutexLockWrite lock(m_Mutex, m_hAllocator->m_UseMutex);

    const uint32_t moveCount = VMA_MIN(uint32_t(pCtx->defragmentationMoves.size()) - pCtx->defragmentationMovesProcessed, maxMoves);

    for(uint32_t i = 0; i < moveCount; ++ i)
    {
        VmaDefragmentationMove& move = pCtx->defragmentationMoves[pCtx->defragmentationMovesProcessed + i];

        pMove->allocation = move.hAllocation;
        pMove->memory = move.pDstBlock->GetDeviceMemory();
        pMove->offset = move.dstOffset;

        ++ pMove;
    }

    pCtx->defragmentationMovesProcessed += moveCount;

    return moveCount;
}
void VmaBlockVector::CommitDefragmentations(
    class VmaBlockVectorDefragmentationContext *pCtx,
    VmaDefragmentationStats* pStats)
{
    VmaMutexLockWrite lock(m_Mutex, m_hAllocator->m_UseMutex);

    for(uint32_t i = pCtx->defragmentationMovesCommitted; i < pCtx->defragmentationMovesProcessed; ++ i)
    {
        const VmaDefragmentationMove &move = pCtx->defragmentationMoves[i];

        move.pSrcBlock->m_pMetadata->FreeAtOffset(move.srcOffset);
        move.hAllocation->ChangeBlockAllocation(m_hAllocator, move.pDstBlock, move.dstOffset);
    }

    pCtx->defragmentationMovesCommitted = pCtx->defragmentationMovesProcessed;
    FreeEmptyBlocks(pStats);
}
size_t VmaBlockVector::CalcAllocationCount() const
{
    size_t result = 0;
    for(size_t i = 0; i < m_Blocks.size(); ++i)
    {
        result += m_Blocks[i]->m_pMetadata->GetAllocationCount();
    }
    return result;
}
bool VmaBlockVector::IsBufferImageGranularityConflictPossible() const
{
    if(m_BufferImageGranularity == 1)
    {
        return false;
    }

    VmaSuballocationType lastSuballocType = VMA_SUBALLOCATION_TYPE_FREE;
    for(size_t i = 0, count = m_Blocks.size(); i < count; ++i)
    {
        VmaDeviceMemoryBlock* const pBlock = m_Blocks[i];
        VMA_ASSERT(m_Algorithm == 0);
        VmaBlockMetadata_Generic* const pMetadata = (VmaBlockMetadata_Generic*)pBlock->m_pMetadata;
        if(pMetadata->IsBufferImageGranularityConflictPossible(m_BufferImageGranularity, lastSuballocType))
        {
            return true;
        }
    }
    return false;
}
void VmaBlockVector::MakePoolAllocationsLost(
    uint32_t currentFrameIndex,
    size_t* pLostAllocationCount)
{
    VmaMutexLockWrite lock(m_Mutex, m_hAllocator->m_UseMutex);
    size_t lostAllocationCount = 0;
    for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    {
        VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
        VMA_ASSERT(pBlock);
        lostAllocationCount += pBlock->m_pMetadata->MakeAllocationsLost(currentFrameIndex, m_FrameInUseCount);
    }
    if(pLostAllocationCount != VMA_NULL)
    {
        *pLostAllocationCount = lostAllocationCount;
    }
}
VkResult VmaBlockVector::CheckCorruption()
{
    if(!IsCorruptionDetectionEnabled())
    {
        return VK_ERROR_FEATURE_NOT_PRESENT;
    }

    VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);
    for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    {
        VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
        VMA_ASSERT(pBlock);
        VkResult res = pBlock->CheckCorruption(m_hAllocator);
        if(res != VK_SUCCESS)
        {
            return res;
        }
    }
    return VK_SUCCESS;
}
void VmaBlockVector::AddStats(VmaStats* pStats)
{
    const uint32_t memTypeIndex = m_MemoryTypeIndex;
    const uint32_t memHeapIndex = m_hAllocator->MemoryTypeIndexToHeapIndex(memTypeIndex);

    VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);

    for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    {
        const VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
        VMA_ASSERT(pBlock);
        VMA_HEAVY_ASSERT(pBlock->Validate());
        VmaStatInfo allocationStatInfo;
        pBlock->m_pMetadata->CalcAllocationStatInfo(allocationStatInfo);
        VmaAddStatInfo(pStats->total, allocationStatInfo);
        VmaAddStatInfo(pStats->memoryType[memTypeIndex], allocationStatInfo);
        VmaAddStatInfo(pStats->memoryHeap[memHeapIndex], allocationStatInfo);
    }
}
VmaDefragmentationAlgorithm_Generic::VmaDefragmentationAlgorithm_Generic(
    VmaAllocator hAllocator,
    VmaBlockVector* pBlockVector,
    uint32_t currentFrameIndex,
    bool overlappingMoveSupported) :
    VmaDefragmentationAlgorithm(hAllocator, pBlockVector, currentFrameIndex),
    m_AllocationCount(0),
    m_AllAllocations(false),
    m_BytesMoved(0),
    m_AllocationsMoved(0),
    m_Blocks(VmaStlAllocator<BlockInfo*>(hAllocator->GetAllocationCallbacks()))
{
    // Create block info for each block.
    const size_t blockCount = m_pBlockVector->m_Blocks.size();
    for(size_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
    {
        BlockInfo* pBlockInfo = vma_new(m_hAllocator, BlockInfo)(m_hAllocator->GetAllocationCallbacks());
        pBlockInfo->m_OriginalBlockIndex = blockIndex;
        pBlockInfo->m_pBlock = m_pBlockVector->m_Blocks[blockIndex];
        m_Blocks.push_back(pBlockInfo);
    }

    // Sort them by m_pBlock pointer value.
    VMA_SORT(m_Blocks.begin(), m_Blocks.end(), BlockPointerLess());
}

VmaDefragmentationAlgorithm_Generic::~VmaDefragmentationAlgorithm_Generic()
{
    for(size_t i = m_Blocks.size(); i--; )
    {
        vma_delete(m_hAllocator, m_Blocks[i]);
    }
}
void VmaDefragmentationAlgorithm_Generic::AddAllocation(VmaAllocation hAlloc, VkBool32* pChanged)
{
    // Now as we are inside VmaBlockVector::m_Mutex, we can make final check if this allocation was not lost.
    if(hAlloc->GetLastUseFrameIndex() != VMA_FRAME_INDEX_LOST)
    {
        VmaDeviceMemoryBlock* pBlock = hAlloc->GetBlock();
        BlockInfoVector::iterator it = VmaBinaryFindFirstNotLess(m_Blocks.begin(), m_Blocks.end(), pBlock, BlockPointerLess());
        if(it != m_Blocks.end() && (*it)->m_pBlock == pBlock)
        {
            AllocationInfo allocInfo = AllocationInfo(hAlloc, pChanged);
            (*it)->m_Allocations.push_back(allocInfo);
        }
        else
        {
            VMA_ASSERT(0);
        }

        ++m_AllocationCount;
    }
}
VkResult VmaDefragmentationAlgorithm_Generic::DefragmentRound(
    VmaVector< VmaDefragmentationMove, VmaStlAllocator<VmaDefragmentationMove> >& moves,
    VkDeviceSize maxBytesToMove,
    uint32_t maxAllocationsToMove,
    bool freeOldAllocations)
{
    if(m_Blocks.empty())
    {
        return VK_SUCCESS;
    }

    // This is a choice based on research.
    uint32_t strategy = VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT;

    size_t srcBlockMinIndex = 0;

    size_t srcBlockIndex = m_Blocks.size() - 1;
    size_t srcAllocIndex = SIZE_MAX;
    for(;;)
    {
        // 1. Find next allocation to move.
        // 1.1. Start from last to first m_Blocks - they are sorted from most "destination" to most "source".
        // 1.2. Then start from last to first m_Allocations.
        while(srcAllocIndex >= m_Blocks[srcBlockIndex]->m_Allocations.size())
        {
            if(m_Blocks[srcBlockIndex]->m_Allocations.empty())
            {
                // Finished: no more allocations to process.
                if(srcBlockIndex == srcBlockMinIndex)
                {
                    return VK_SUCCESS;
                }
                else
                {
                    --srcBlockIndex;
                    srcAllocIndex = SIZE_MAX;
                }
            }
            else
            {
                srcAllocIndex = m_Blocks[srcBlockIndex]->m_Allocations.size() - 1;
            }
        }

        BlockInfo* pSrcBlockInfo = m_Blocks[srcBlockIndex];
        AllocationInfo& allocInfo = pSrcBlockInfo->m_Allocations[srcAllocIndex];

        const VkDeviceSize size = allocInfo.m_hAllocation->GetSize();
        const VkDeviceSize srcOffset = allocInfo.m_hAllocation->GetOffset();
        const VkDeviceSize alignment = allocInfo.m_hAllocation->GetAlignment();
        const VmaSuballocationType suballocType = allocInfo.m_hAllocation->GetSuballocationType();

        // 2. Try to find new place for this allocation in preceding or current block.
        for(size_t dstBlockIndex = 0; dstBlockIndex <= srcBlockIndex; ++dstBlockIndex)
        {
            BlockInfo* pDstBlockInfo = m_Blocks[dstBlockIndex];
            VmaAllocationRequest dstAllocRequest;
            if(pDstBlockInfo->m_pBlock->m_pMetadata->CreateAllocationRequest(
                m_CurrentFrameIndex,
                m_pBlockVector->GetFrameInUseCount(),
                m_pBlockVector->GetBufferImageGranularity(),
                size,
                alignment,
                false, // upperAddress
                suballocType,
                false, // canMakeOtherLost
                strategy,
                &dstAllocRequest) &&
            MoveMakesSense(
                dstBlockIndex, dstAllocRequest.offset, srcBlockIndex, srcOffset))
            {
                VMA_ASSERT(dstAllocRequest.itemsToMakeLostCount == 0);

                // Reached limit on number of allocations or bytes to move.
                if((m_AllocationsMoved + 1 > maxAllocationsToMove) ||
                    (m_BytesMoved + size > maxBytesToMove))
                {
                    return VK_SUCCESS;
                }

                VmaDefragmentationMove move = {};
                move.srcBlockIndex = pSrcBlockInfo->m_OriginalBlockIndex;
                move.dstBlockIndex = pDstBlockInfo->m_OriginalBlockIndex;
                move.srcOffset = srcOffset;
                move.dstOffset = dstAllocRequest.offset;
                move.size = size;
                move.hAllocation = allocInfo.m_hAllocation;
                move.pSrcBlock = pSrcBlockInfo->m_pBlock;
                move.pDstBlock = pDstBlockInfo->m_pBlock;

                moves.push_back(move);

                pDstBlockInfo->m_pBlock->m_pMetadata->Alloc(
                    dstAllocRequest,
                    suballocType,
                    size,
                    allocInfo.m_hAllocation);

                if(freeOldAllocations)
                {
                    pSrcBlockInfo->m_pBlock->m_pMetadata->FreeAtOffset(srcOffset);
                    allocInfo.m_hAllocation->ChangeBlockAllocation(m_hAllocator, pDstBlockInfo->m_pBlock, dstAllocRequest.offset);
                }

                if(allocInfo.m_pChanged != VMA_NULL)
                {
                    *allocInfo.m_pChanged = VK_TRUE;
                }

                ++m_AllocationsMoved;
                m_BytesMoved += size;

                VmaVectorRemove(pSrcBlockInfo->m_Allocations, srcAllocIndex);

                break;
            }
        }

        // If not processed, this allocInfo remains in pBlockInfo->m_Allocations for next round.

        if(srcAllocIndex > 0)
        {
            --srcAllocIndex;
        }
        else
        {
            if(srcBlockIndex > 0)
            {
                --srcBlockIndex;
                srcAllocIndex = SIZE_MAX;
            }
            else
            {
                return VK_SUCCESS;
            }
        }
    }
}
size_t VmaDefragmentationAlgorithm_Generic::CalcBlocksWithNonMovableCount() const
{
    size_t result = 0;
    for(size_t i = 0; i < m_Blocks.size(); ++i)
    {
        if(m_Blocks[i]->m_HasNonMovableAllocations)
        {
            ++result;
        }
    }
    return result;
}
VkResult VmaDefragmentationAlgorithm_Generic::Defragment(
    VmaVector< VmaDefragmentationMove, VmaStlAllocator<VmaDefragmentationMove> >& moves,
    VkDeviceSize maxBytesToMove,
    uint32_t maxAllocationsToMove,
    VmaDefragmentationFlags flags)
{
    if(!m_AllAllocations && m_AllocationCount == 0)
    {
        return VK_SUCCESS;
    }

    const size_t blockCount = m_Blocks.size();
    for(size_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
    {
        BlockInfo* pBlockInfo = m_Blocks[blockIndex];

        if(m_AllAllocations)
        {
            VmaBlockMetadata_Generic* pMetadata = (VmaBlockMetadata_Generic*)pBlockInfo->m_pBlock->m_pMetadata;
            for(VmaSuballocationList::const_iterator it = pMetadata->m_Suballocations.begin();
                it != pMetadata->m_Suballocations.end();
                ++it)
            {
                if(it->type != VMA_SUBALLOCATION_TYPE_FREE)
                {
                    AllocationInfo allocInfo = AllocationInfo(it->hAllocation, VMA_NULL);
                    pBlockInfo->m_Allocations.push_back(allocInfo);
                }
            }
        }

        pBlockInfo->CalcHasNonMovableAllocations();

        // This is a choice based on research.
        pBlockInfo->SortAllocationsByOffsetDescending();
    }

    // Sort m_Blocks this time by the main criterium, from most "destination" to most "source" blocks.
    VMA_SORT(m_Blocks.begin(), m_Blocks.end(), BlockInfoCompareMoveDestination());

    // This is a choice based on research.
    const uint32_t roundCount = 2;

    // Execute defragmentation rounds (the main part).
    VkResult result = VK_SUCCESS;
    for(uint32_t round = 0; (round < roundCount) && (result == VK_SUCCESS); ++round)
    {
        result = DefragmentRound(moves, maxBytesToMove, maxAllocationsToMove, !(flags & VMA_DEFRAGMENTATION_FLAG_INCREMENTAL));
    }

    return result;
}
bool VmaDefragmentationAlgorithm_Generic::MoveMakesSense(
    size_t dstBlockIndex, VkDeviceSize dstOffset,
    size_t srcBlockIndex, VkDeviceSize srcOffset)
{
    if(dstBlockIndex < srcBlockIndex)
    {
        return true;
    }
    if(dstBlockIndex > srcBlockIndex)
    {
        return false;
    }
    if(dstOffset < srcOffset)
    {
        return true;
    }
    return false;
}

////////////////////////////////////////////////////////////////////////////////
// VmaDefragmentationAlgorithm_Fast
VmaDefragmentationAlgorithm_Fast::VmaDefragmentationAlgorithm_Fast(
    VmaAllocator hAllocator,
    VmaBlockVector* pBlockVector,
    uint32_t currentFrameIndex,
    bool overlappingMoveSupported) :
    VmaDefragmentationAlgorithm(hAllocator, pBlockVector, currentFrameIndex),
    m_OverlappingMoveSupported(overlappingMoveSupported),
    m_AllocationCount(0),
    m_AllAllocations(false),
    m_BytesMoved(0),
    m_AllocationsMoved(0),
    m_BlockInfos(VmaStlAllocator<BlockInfo>(hAllocator->GetAllocationCallbacks()))
{
    VMA_ASSERT(VMA_DEBUG_MARGIN == 0);
}

VmaDefragmentationAlgorithm_Fast::~VmaDefragmentationAlgorithm_Fast()
{
}
VkResult VmaDefragmentationAlgorithm_Fast::Defragment(
    VmaVector< VmaDefragmentationMove, VmaStlAllocator<VmaDefragmentationMove> >& moves,
    VkDeviceSize maxBytesToMove,
    uint32_t maxAllocationsToMove,
    VmaDefragmentationFlags flags)
{
    VMA_ASSERT(m_AllAllocations || m_pBlockVector->CalcAllocationCount() == m_AllocationCount);

    const size_t blockCount = m_pBlockVector->GetBlockCount();
    if(blockCount == 0 || maxBytesToMove == 0 || maxAllocationsToMove == 0)
    {
        return VK_SUCCESS;
    }

    PreprocessMetadata();

    // Sort blocks in order from most destination.

    m_BlockInfos.resize(blockCount);
    for(size_t i = 0; i < blockCount; ++i)
    {
        m_BlockInfos[i].origBlockIndex = i;
    }

    VMA_SORT(m_BlockInfos.begin(), m_BlockInfos.end(), [this](const BlockInfo& lhs, const BlockInfo& rhs) -> bool {
        return m_pBlockVector->GetBlock(lhs.origBlockIndex)->m_pMetadata->GetSumFreeSize() <
            m_pBlockVector->GetBlock(rhs.origBlockIndex)->m_pMetadata->GetSumFreeSize();
    });

    // THE MAIN ALGORITHM

    FreeSpaceDatabase freeSpaceDb;

    size_t dstBlockInfoIndex = 0;
    size_t dstOrigBlockIndex = m_BlockInfos[dstBlockInfoIndex].origBlockIndex;
    VmaDeviceMemoryBlock* pDstBlock = m_pBlockVector->GetBlock(dstOrigBlockIndex);
    VmaBlockMetadata_Generic* pDstMetadata = (VmaBlockMetadata_Generic*)pDstBlock->m_pMetadata;
    VkDeviceSize dstBlockSize = pDstMetadata->GetSize();
    VkDeviceSize dstOffset = 0;

    bool end = false;
    for(size_t srcBlockInfoIndex = 0; !end && srcBlockInfoIndex < blockCount; ++srcBlockInfoIndex)
    {
        const size_t srcOrigBlockIndex = m_BlockInfos[srcBlockInfoIndex].origBlockIndex;
        VmaDeviceMemoryBlock* const pSrcBlock = m_pBlockVector->GetBlock(srcOrigBlockIndex);
        VmaBlockMetadata_Generic* const pSrcMetadata = (VmaBlockMetadata_Generic*)pSrcBlock->m_pMetadata;
        for(VmaSuballocationList::iterator srcSuballocIt = pSrcMetadata->m_Suballocations.begin();
            !end && srcSuballocIt != pSrcMetadata->m_Suballocations.end(); )
        {
            VmaAllocation_T* const pAlloc = srcSuballocIt->hAllocation;
            const VkDeviceSize srcAllocAlignment = pAlloc->GetAlignment();
            const VkDeviceSize srcAllocSize = srcSuballocIt->size;
            if(m_AllocationsMoved == maxAllocationsToMove ||
                m_BytesMoved + srcAllocSize > maxBytesToMove)
            {
                end = true;
                break;
            }
            const VkDeviceSize srcAllocOffset = srcSuballocIt->offset;

            VmaDefragmentationMove move = {};
            // Try to place it in one of free spaces from the database.
            size_t freeSpaceInfoIndex;
            VkDeviceSize dstAllocOffset;
            if(freeSpaceDb.Fetch(srcAllocAlignment, srcAllocSize,
                freeSpaceInfoIndex, dstAllocOffset))
            {
                size_t freeSpaceOrigBlockIndex = m_BlockInfos[freeSpaceInfoIndex].origBlockIndex;
                VmaDeviceMemoryBlock* pFreeSpaceBlock = m_pBlockVector->GetBlock(freeSpaceOrigBlockIndex);
                VmaBlockMetadata_Generic* pFreeSpaceMetadata = (VmaBlockMetadata_Generic*)pFreeSpaceBlock->m_pMetadata;

                // Same block
                if(freeSpaceInfoIndex == srcBlockInfoIndex)
                {
                    VMA_ASSERT(dstAllocOffset <= srcAllocOffset);

                    // MOVE OPTION 1: Move the allocation inside the same block by copying memory.

                    VmaSuballocation suballoc = *srcSuballocIt;
                    suballoc.offset = dstAllocOffset;
                    suballoc.hAllocation->ChangeOffset(dstAllocOffset);
                    m_BytesMoved += srcAllocSize;
                    ++m_AllocationsMoved;

                    VmaSuballocationList::iterator nextSuballocIt = srcSuballocIt;
                    ++nextSuballocIt;
                    pSrcMetadata->m_Suballocations.erase(srcSuballocIt);
                    srcSuballocIt = nextSuballocIt;

                    InsertSuballoc(pFreeSpaceMetadata, suballoc);

                    move.srcBlockIndex = srcOrigBlockIndex;
                    move.dstBlockIndex = freeSpaceOrigBlockIndex;
                    move.srcOffset = srcAllocOffset;
                    move.dstOffset = dstAllocOffset;
                    move.size = srcAllocSize;

                    moves.push_back(move);
                }
                // Different block
                else
                {
                    // MOVE OPTION 2: Move the allocation to a different block.

                    VMA_ASSERT(freeSpaceInfoIndex < srcBlockInfoIndex);

                    VmaSuballocation suballoc = *srcSuballocIt;
                    suballoc.offset = dstAllocOffset;
                    suballoc.hAllocation->ChangeBlockAllocation(m_hAllocator, pFreeSpaceBlock, dstAllocOffset);
                    m_BytesMoved += srcAllocSize;
                    ++m_AllocationsMoved;

                    VmaSuballocationList::iterator nextSuballocIt = srcSuballocIt;
                    ++nextSuballocIt;
                    pSrcMetadata->m_Suballocations.erase(srcSuballocIt);
                    srcSuballocIt = nextSuballocIt;

                    InsertSuballoc(pFreeSpaceMetadata, suballoc);

                    move.srcBlockIndex = srcOrigBlockIndex;
                    move.dstBlockIndex = freeSpaceOrigBlockIndex;
                    move.srcOffset = srcAllocOffset;
                    move.dstOffset = dstAllocOffset;
                    move.size = srcAllocSize;

                    moves.push_back(move);
                }
            }
            else
            {
                dstAllocOffset = VmaAlignUp(dstOffset, srcAllocAlignment);

                // If the allocation doesn't fit before the end of dstBlock, forward to next block.
                while(dstBlockInfoIndex < srcBlockInfoIndex &&
                    dstAllocOffset + srcAllocSize > dstBlockSize)
                {
                    // But before that, register remaining free space at the end of dst block.
                    freeSpaceDb.Register(dstBlockInfoIndex, dstOffset, dstBlockSize - dstOffset);

                    ++dstBlockInfoIndex;
                    dstOrigBlockIndex = m_BlockInfos[dstBlockInfoIndex].origBlockIndex;
                    pDstBlock = m_pBlockVector->GetBlock(dstOrigBlockIndex);
                    pDstMetadata = (VmaBlockMetadata_Generic*)pDstBlock->m_pMetadata;
                    dstBlockSize = pDstMetadata->GetSize();
                    dstOffset = 0;
                    dstAllocOffset = 0;
                }

                // Same block
                if(dstBlockInfoIndex == srcBlockInfoIndex)
                {
                    VMA_ASSERT(dstAllocOffset <= srcAllocOffset);

                    const bool overlap = dstAllocOffset + srcAllocSize > srcAllocOffset;

                    bool skipOver = overlap;
                    if(overlap && m_OverlappingMoveSupported && dstAllocOffset < srcAllocOffset)
                    {
                        // If destination and source place overlap, skip if it would move only < 1/64 of its size.
                        skipOver = (srcAllocOffset - dstAllocOffset) * 64 < srcAllocSize;
                    }

                    if(skipOver)
                    {
                        freeSpaceDb.Register(dstBlockInfoIndex, dstOffset, srcAllocOffset - dstOffset);

                        dstOffset = srcAllocOffset + srcAllocSize;
                        ++srcSuballocIt;
                    }
                    // MOVE OPTION 1: Move the allocation inside the same block by copying memory.
                    else
                    {
                        srcSuballocIt->offset = dstAllocOffset;
                        srcSuballocIt->hAllocation->ChangeOffset(dstAllocOffset);
                        dstOffset = dstAllocOffset + srcAllocSize;
                        m_BytesMoved += srcAllocSize;
                        ++m_AllocationsMoved;
                        ++srcSuballocIt;

                        move.srcBlockIndex = srcOrigBlockIndex;
                        move.dstBlockIndex = dstOrigBlockIndex;
                        move.srcOffset = srcAllocOffset;
                        move.dstOffset = dstAllocOffset;
                        move.size = srcAllocSize;

                        moves.push_back(move);
                    }
                }
                // Different block
                else
                {
                    // MOVE OPTION 2: Move the allocation to a different block.

                    VMA_ASSERT(dstBlockInfoIndex < srcBlockInfoIndex);
                    VMA_ASSERT(dstAllocOffset + srcAllocSize <= dstBlockSize);

                    VmaSuballocation suballoc = *srcSuballocIt;
                    suballoc.offset = dstAllocOffset;
                    suballoc.hAllocation->ChangeBlockAllocation(m_hAllocator, pDstBlock, dstAllocOffset);
                    dstOffset = dstAllocOffset + srcAllocSize;
                    m_BytesMoved += srcAllocSize;
                    ++m_AllocationsMoved;

                    VmaSuballocationList::iterator nextSuballocIt = srcSuballocIt;
                    ++nextSuballocIt;
                    pSrcMetadata->m_Suballocations.erase(srcSuballocIt);
                    srcSuballocIt = nextSuballocIt;

                    pDstMetadata->m_Suballocations.push_back(suballoc);

                    move.srcBlockIndex = srcOrigBlockIndex;
                    move.dstBlockIndex = dstOrigBlockIndex;
                    move.srcOffset = srcAllocOffset;
                    move.dstOffset = dstAllocOffset;
                    move.size = srcAllocSize;

                    moves.push_back(move);
                }
            }
        }
    }

    m_BlockInfos.clear();

    PostprocessMetadata();

    return VK_SUCCESS;
}
void VmaDefragmentationAlgorithm_Fast::PreprocessMetadata()
{
    const size_t blockCount = m_pBlockVector->GetBlockCount();
    for(size_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
    {
        VmaBlockMetadata_Generic* const pMetadata =
            (VmaBlockMetadata_Generic*)m_pBlockVector->GetBlock(blockIndex)->m_pMetadata;
        pMetadata->m_FreeCount = 0;
        pMetadata->m_SumFreeSize = pMetadata->GetSize();
        pMetadata->m_FreeSuballocationsBySize.clear();
        for(VmaSuballocationList::iterator it = pMetadata->m_Suballocations.begin();
            it != pMetadata->m_Suballocations.end(); )
        {
            if(it->type == VMA_SUBALLOCATION_TYPE_FREE)
            {
                VmaSuballocationList::iterator nextIt = it;
                ++nextIt;
                pMetadata->m_Suballocations.erase(it);
                it = nextIt;
            }
            else
            {
                ++it;
            }
        }
    }
}
void VmaDefragmentationAlgorithm_Fast::PostprocessMetadata()
{
    const size_t blockCount = m_pBlockVector->GetBlockCount();
    for(size_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
    {
        VmaBlockMetadata_Generic* const pMetadata =
            (VmaBlockMetadata_Generic*)m_pBlockVector->GetBlock(blockIndex)->m_pMetadata;
        const VkDeviceSize blockSize = pMetadata->GetSize();

        // No allocations in this block - entire area is free.
        if(pMetadata->m_Suballocations.empty())
        {
            pMetadata->m_FreeCount = 1;
            VmaSuballocation suballoc = {
                0, // offset
                blockSize, // size
                VMA_NULL, // hAllocation
                VMA_SUBALLOCATION_TYPE_FREE };
            pMetadata->m_Suballocations.push_back(suballoc);
            pMetadata->RegisterFreeSuballocation(pMetadata->m_Suballocations.begin());
        }
        // There are some allocations in this block.
        else
        {
            VkDeviceSize offset = 0;
            VmaSuballocationList::iterator it;
            for(it = pMetadata->m_Suballocations.begin();
                it != pMetadata->m_Suballocations.end();
                ++it)
            {
                VMA_ASSERT(it->type != VMA_SUBALLOCATION_TYPE_FREE);
                VMA_ASSERT(it->offset >= offset);

                // Need to insert preceding free space.
                if(it->offset > offset)
                {
                    ++pMetadata->m_FreeCount;
                    const VkDeviceSize freeSize = it->offset - offset;
                    VmaSuballocation suballoc = {
                        offset, // offset
                        freeSize, // size
                        VMA_NULL, // hAllocation
                        VMA_SUBALLOCATION_TYPE_FREE };
                    VmaSuballocationList::iterator precedingFreeIt = pMetadata->m_Suballocations.insert(it, suballoc);
                    if(freeSize >= VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
                    {
                        pMetadata->m_FreeSuballocationsBySize.push_back(precedingFreeIt);
                    }
                }

                pMetadata->m_SumFreeSize -= it->size;
                offset = it->offset + it->size;
            }

            // Need to insert trailing free space.
            if(offset < blockSize)
            {
                ++pMetadata->m_FreeCount;
                const VkDeviceSize freeSize = blockSize - offset;
                VmaSuballocation suballoc = {
                    offset, // offset
                    freeSize, // size
                    VMA_NULL, // hAllocation
                    VMA_SUBALLOCATION_TYPE_FREE };
                VMA_ASSERT(it == pMetadata->m_Suballocations.end());
                VmaSuballocationList::iterator trailingFreeIt = pMetadata->m_Suballocations.insert(it, suballoc);
                if(freeSize > VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
                {
                    pMetadata->m_FreeSuballocationsBySize.push_back(trailingFreeIt);
                }
            }

            VMA_SORT(
                pMetadata->m_FreeSuballocationsBySize.begin(),
                pMetadata->m_FreeSuballocationsBySize.end(),
                VmaSuballocationItemSizeLess());
        }

        VMA_HEAVY_ASSERT(pMetadata->Validate());
    }
}
void VmaDefragmentationAlgorithm_Fast::InsertSuballoc(VmaBlockMetadata_Generic* pMetadata, const VmaSuballocation& suballoc)
{
    // TODO: Optimize somehow. Remember iteration from the last insertion.
    VmaSuballocationList::iterator it = pMetadata->m_Suballocations.begin();
    while(it != pMetadata->m_Suballocations.end() &&
        it->offset < suballoc.offset)
    {
        ++it;
    }
    pMetadata->m_Suballocations.insert(it, suballoc);
}

////////////////////////////////////////////////////////////////////////////////
// VmaBlockVectorDefragmentationContext
VmaBlockVectorDefragmentationContext::VmaBlockVectorDefragmentationContext(
    VmaAllocator hAllocator,
    VmaPool hCustomPool,
    VmaBlockVector* pBlockVector,
    uint32_t currFrameIndex) :
    res(VK_SUCCESS),
    mutexLocked(false),
    blockContexts(VmaStlAllocator<VmaBlockDefragmentationContext>(hAllocator->GetAllocationCallbacks())),
    defragmentationMoves(VmaStlAllocator<VmaDefragmentationMove>(hAllocator->GetAllocationCallbacks())),
    defragmentationMovesProcessed(0),
    defragmentationMovesCommitted(0),
    hasDefragmentationPlan(0),
    m_hAllocator(hAllocator),
    m_hCustomPool(hCustomPool),
    m_pBlockVector(pBlockVector),
    m_CurrFrameIndex(currFrameIndex),
    m_pAlgorithm(VMA_NULL),
    m_Allocations(VmaStlAllocator<AllocInfo>(hAllocator->GetAllocationCallbacks())),
    m_AllAllocations(false)
{
}

VmaBlockVectorDefragmentationContext::~VmaBlockVectorDefragmentationContext()
{
    vma_delete(m_hAllocator, m_pAlgorithm);
}
void VmaBlockVectorDefragmentationContext::AddAllocation(VmaAllocation hAlloc, VkBool32* pChanged)
{
    AllocInfo info = { hAlloc, pChanged };
    m_Allocations.push_back(info);
}

void VmaBlockVectorDefragmentationContext::Begin(bool overlappingMoveSupported, VmaDefragmentationFlags flags)
{
    const bool allAllocations = m_AllAllocations ||
        m_Allocations.size() == m_pBlockVector->CalcAllocationCount();

    /********************************
    HERE IS THE CHOICE OF DEFRAGMENTATION ALGORITHM.
    ********************************/

    /*
    Fast algorithm is supported only when certain criteria are met:
    - VMA_DEBUG_MARGIN is 0.
    - All allocations in this block vector are movable.
    - There is no possibility of image/buffer granularity conflict.
    - The defragmentation is not incremental.
    */
    if(VMA_DEBUG_MARGIN == 0 &&
        allAllocations &&
        !m_pBlockVector->IsBufferImageGranularityConflictPossible() &&
        !(flags & VMA_DEFRAGMENTATION_FLAG_INCREMENTAL))
    {
        m_pAlgorithm = vma_new(m_hAllocator, VmaDefragmentationAlgorithm_Fast)(
            m_hAllocator, m_pBlockVector, m_CurrFrameIndex, overlappingMoveSupported);
    }
    else
    {
        m_pAlgorithm = vma_new(m_hAllocator, VmaDefragmentationAlgorithm_Generic)(
            m_hAllocator, m_pBlockVector, m_CurrFrameIndex, overlappingMoveSupported);
    }

    if(allAllocations)
    {
        m_pAlgorithm->AddAll();
    }
    else
    {
        for(size_t i = 0, count = m_Allocations.size(); i < count; ++i)
        {
            m_pAlgorithm->AddAllocation(m_Allocations[i].hAlloc, m_Allocations[i].pChanged);
        }
    }
}

////////////////////////////////////////////////////////////////////////////////
// VmaDefragmentationContext
VmaDefragmentationContext_T::VmaDefragmentationContext_T(
    VmaAllocator hAllocator,
    uint32_t currFrameIndex,
    uint32_t flags,
    VmaDefragmentationStats* pStats) :
    m_hAllocator(hAllocator),
    m_CurrFrameIndex(currFrameIndex),
    m_Flags(flags),
    m_pStats(pStats),
    m_CustomPoolContexts(VmaStlAllocator<VmaBlockVectorDefragmentationContext*>(hAllocator->GetAllocationCallbacks()))
{
    memset(m_DefaultPoolContexts, 0, sizeof(m_DefaultPoolContexts));
}

VmaDefragmentationContext_T::~VmaDefragmentationContext_T()
{
    for(size_t i = m_CustomPoolContexts.size(); i--; )
    {
        VmaBlockVectorDefragmentationContext* pBlockVectorCtx = m_CustomPoolContexts[i];
        pBlockVectorCtx->GetBlockVector()->DefragmentationEnd(pBlockVectorCtx, m_Flags, m_pStats);
        vma_delete(m_hAllocator, pBlockVectorCtx);
    }
    for(size_t i = m_hAllocator->m_MemProps.memoryTypeCount; i--; )
    {
        VmaBlockVectorDefragmentationContext* pBlockVectorCtx = m_DefaultPoolContexts[i];
        if(pBlockVectorCtx)
        {
            pBlockVectorCtx->GetBlockVector()->DefragmentationEnd(pBlockVectorCtx, m_Flags, m_pStats);
            vma_delete(m_hAllocator, pBlockVectorCtx);
        }
    }
}
void VmaDefragmentationContext_T::AddPools(uint32_t poolCount, const VmaPool* pPools)
{
    for(uint32_t poolIndex = 0; poolIndex < poolCount; ++poolIndex)
    {
        VmaPool pool = pPools[poolIndex];
        VMA_ASSERT(pool);
        // Pools with algorithm other than default are not defragmented.
        if(pool->m_BlockVector.GetAlgorithm() == 0)
        {
            VmaBlockVectorDefragmentationContext* pBlockVectorDefragCtx = VMA_NULL;

            for(size_t i = m_CustomPoolContexts.size(); i--; )
            {
                if(m_CustomPoolContexts[i]->GetCustomPool() == pool)
                {
                    pBlockVectorDefragCtx = m_CustomPoolContexts[i];
                    break;
                }
            }

            if(!pBlockVectorDefragCtx)
            {
                pBlockVectorDefragCtx = vma_new(m_hAllocator, VmaBlockVectorDefragmentationContext)(
                    m_hAllocator,
                    pool,
                    &pool->m_BlockVector,
                    m_CurrFrameIndex);
                m_CustomPoolContexts.push_back(pBlockVectorDefragCtx);
            }

            pBlockVectorDefragCtx->AddAll();
        }
    }
}
void VmaDefragmentationContext_T::AddAllocations(
    uint32_t allocationCount,
    const VmaAllocation* pAllocations,
    VkBool32* pAllocationsChanged)
{
    // Dispatch pAllocations among defragmentators. Create them when necessary.
    for(uint32_t allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
    {
        const VmaAllocation hAlloc = pAllocations[allocIndex];
        VMA_ASSERT(hAlloc);
        // DedicatedAlloc cannot be defragmented.
        if((hAlloc->GetType() == VmaAllocation_T::ALLOCATION_TYPE_BLOCK) &&
            // Lost allocation cannot be defragmented.
            (hAlloc->GetLastUseFrameIndex() != VMA_FRAME_INDEX_LOST))
        {
            VmaBlockVectorDefragmentationContext* pBlockVectorDefragCtx = VMA_NULL;

            const VmaPool hAllocPool = hAlloc->GetBlock()->GetParentPool();
            // This allocation belongs to custom pool.
            if(hAllocPool != VK_NULL_HANDLE)
            {
                // Pools with algorithm other than default are not defragmented.
                if(hAllocPool->m_BlockVector.GetAlgorithm() == 0)
                {
                    for(size_t i = m_CustomPoolContexts.size(); i--; )
                    {
                        if(m_CustomPoolContexts[i]->GetCustomPool() == hAllocPool)
                        {
                            pBlockVectorDefragCtx = m_CustomPoolContexts[i];
                            break;
                        }
                    }
                    if(!pBlockVectorDefragCtx)
                    {
                        pBlockVectorDefragCtx = vma_new(m_hAllocator, VmaBlockVectorDefragmentationContext)(
                            m_hAllocator,
                            hAllocPool,
                            &hAllocPool->m_BlockVector,
                            m_CurrFrameIndex);
                        m_CustomPoolContexts.push_back(pBlockVectorDefragCtx);
                    }
                }
            }
            // This allocation belongs to default pool.
            else
            {
                const uint32_t memTypeIndex = hAlloc->GetMemoryTypeIndex();
                pBlockVectorDefragCtx = m_DefaultPoolContexts[memTypeIndex];
                if(!pBlockVectorDefragCtx)
                {
                    pBlockVectorDefragCtx = vma_new(m_hAllocator, VmaBlockVectorDefragmentationContext)(
                        m_hAllocator,
                        VMA_NULL, // hCustomPool
                        m_hAllocator->m_pBlockVectors[memTypeIndex],
                        m_CurrFrameIndex);
                    m_DefaultPoolContexts[memTypeIndex] = pBlockVectorDefragCtx;
                }
            }

            if(pBlockVectorDefragCtx)
            {
                VkBool32* const pChanged = (pAllocationsChanged != VMA_NULL) ?
                    &pAllocationsChanged[allocIndex] : VMA_NULL;
                pBlockVectorDefragCtx->AddAllocation(hAlloc, pChanged);
            }
        }
    }
}
VkResult VmaDefragmentationContext_T::Defragment(
    VkDeviceSize maxCpuBytesToMove, uint32_t maxCpuAllocationsToMove,
    VkDeviceSize maxGpuBytesToMove, uint32_t maxGpuAllocationsToMove,
    VkCommandBuffer commandBuffer, VmaDefragmentationStats* pStats, VmaDefragmentationFlags flags)
{
    if(pStats)
    {
        memset(pStats, 0, sizeof(VmaDefragmentationStats));
    }

    if(flags & VMA_DEFRAGMENTATION_FLAG_INCREMENTAL)
    {
        // For incremental defragmentation, just remember how much can be moved.
        // The actual work happens in the defragmentation passes.
        m_MaxCpuBytesToMove = maxCpuBytesToMove;
        m_MaxCpuAllocationsToMove = maxCpuAllocationsToMove;

        m_MaxGpuBytesToMove = maxGpuBytesToMove;
        m_MaxGpuAllocationsToMove = maxGpuAllocationsToMove;

        if(m_MaxCpuBytesToMove == 0 && m_MaxCpuAllocationsToMove == 0 &&
            m_MaxGpuBytesToMove == 0 && m_MaxGpuAllocationsToMove == 0)
            return VK_SUCCESS;

        return VK_NOT_READY;
    }

    if(commandBuffer == VK_NULL_HANDLE)
    {
        maxGpuBytesToMove = 0;
        maxGpuAllocationsToMove = 0;
    }

    VkResult res = VK_SUCCESS;

    // Process default pools.
    for(uint32_t memTypeIndex = 0;
        memTypeIndex < m_hAllocator->GetMemoryTypeCount() && res >= VK_SUCCESS;
        ++memTypeIndex)
    {
        VmaBlockVectorDefragmentationContext* pBlockVectorCtx = m_DefaultPoolContexts[memTypeIndex];
        if(pBlockVectorCtx)
        {
            VMA_ASSERT(pBlockVectorCtx->GetBlockVector());
            pBlockVectorCtx->GetBlockVector()->Defragment(
                pBlockVectorCtx,
                pStats, flags,
                maxCpuBytesToMove, maxCpuAllocationsToMove,
                maxGpuBytesToMove, maxGpuAllocationsToMove,
                commandBuffer);
            if(pBlockVectorCtx->res != VK_SUCCESS)
            {
                res = pBlockVectorCtx->res;
            }
        }
    }

    // Process custom pools.
    for(size_t customCtxIndex = 0, customCtxCount = m_CustomPoolContexts.size();
        customCtxIndex < customCtxCount && res >= VK_SUCCESS;
        ++customCtxIndex)
    {
        VmaBlockVectorDefragmentationContext* pBlockVectorCtx = m_CustomPoolContexts[customCtxIndex];
        VMA_ASSERT(pBlockVectorCtx && pBlockVectorCtx->GetBlockVector());
        pBlockVectorCtx->GetBlockVector()->Defragment(
            pBlockVectorCtx,
            pStats, flags,
            maxCpuBytesToMove, maxCpuAllocationsToMove,
            maxGpuBytesToMove, maxGpuAllocationsToMove,
            commandBuffer);
        if(pBlockVectorCtx->res != VK_SUCCESS)
        {
            res = pBlockVectorCtx->res;
        }
    }

    return res;
}
VkResult VmaDefragmentationContext_T::DefragmentPassBegin(
    VmaDefragmentationPassInfo* pInfo)
{
    VMA_ASSERT(pInfo);

    VkResult res = VK_SUCCESS;

    VmaDefragmentationPassMoveInfo* pCurrentMove = pInfo->pMoves;
    uint32_t movesLeft = pInfo->moveCount;

    // Process default pools.
    for(uint32_t memTypeIndex = 0;
        memTypeIndex < m_hAllocator->GetMemoryTypeCount();
        ++memTypeIndex)
    {
        VmaBlockVectorDefragmentationContext* pBlockVectorCtx = m_DefaultPoolContexts[memTypeIndex];
        if(pBlockVectorCtx)
        {
            VMA_ASSERT(pBlockVectorCtx->GetBlockVector());

            if(!pBlockVectorCtx->hasDefragmentationPlan)
            {
                pBlockVectorCtx->GetBlockVector()->Defragment(
                    pBlockVectorCtx,
                    m_pStats, m_Flags,
                    m_MaxCpuBytesToMove, m_MaxCpuAllocationsToMove,
                    m_MaxGpuBytesToMove, m_MaxGpuAllocationsToMove,
                    VK_NULL_HANDLE);

                if(pBlockVectorCtx->res < VK_SUCCESS)
                    continue;

                pBlockVectorCtx->hasDefragmentationPlan = true;
            }

            const uint32_t processed = pBlockVectorCtx->GetBlockVector()->ProcessDefragmentations(
                pBlockVectorCtx,
                pCurrentMove, movesLeft);

            movesLeft -= processed;
            pCurrentMove += processed;
        }
    }

    // Process custom pools.
    for(size_t customCtxIndex = 0, customCtxCount = m_CustomPoolContexts.size();
        customCtxIndex < customCtxCount;
        ++customCtxIndex)
    {
        VmaBlockVectorDefragmentationContext* pBlockVectorCtx = m_CustomPoolContexts[customCtxIndex];
        VMA_ASSERT(pBlockVectorCtx && pBlockVectorCtx->GetBlockVector());

        if(!pBlockVectorCtx->hasDefragmentationPlan)
        {
            pBlockVectorCtx->GetBlockVector()->Defragment(
                pBlockVectorCtx,
                m_pStats, m_Flags,
                m_MaxCpuBytesToMove, m_MaxCpuAllocationsToMove,
                m_MaxGpuBytesToMove, m_MaxGpuAllocationsToMove,
                VK_NULL_HANDLE);

            if(pBlockVectorCtx->res < VK_SUCCESS)
                continue;

            pBlockVectorCtx->hasDefragmentationPlan = true;
        }

        const uint32_t processed = pBlockVectorCtx->GetBlockVector()->ProcessDefragmentations(
            pBlockVectorCtx,
            pCurrentMove, movesLeft);

        movesLeft -= processed;
        pCurrentMove += processed;
    }

    pInfo->moveCount = pInfo->moveCount - movesLeft;

    return res;
}
VkResult VmaDefragmentationContext_T::DefragmentPassEnd()
{
    VkResult res = VK_SUCCESS;

    // Process default pools.
    for(uint32_t memTypeIndex = 0;
        memTypeIndex < m_hAllocator->GetMemoryTypeCount();
        ++memTypeIndex)
    {
        VmaBlockVectorDefragmentationContext* pBlockVectorCtx = m_DefaultPoolContexts[memTypeIndex];
        if(pBlockVectorCtx)
        {
            VMA_ASSERT(pBlockVectorCtx->GetBlockVector());

            if(!pBlockVectorCtx->hasDefragmentationPlan)
            {
                res = VK_NOT_READY;
                continue;
            }

            pBlockVectorCtx->GetBlockVector()->CommitDefragmentations(
                pBlockVectorCtx, m_pStats);

            if(pBlockVectorCtx->defragmentationMoves.size() != pBlockVectorCtx->defragmentationMovesCommitted)
                res = VK_NOT_READY;
        }
    }

    // Process custom pools.
    for(size_t customCtxIndex = 0, customCtxCount = m_CustomPoolContexts.size();
        customCtxIndex < customCtxCount;
        ++customCtxIndex)
    {
        VmaBlockVectorDefragmentationContext* pBlockVectorCtx = m_CustomPoolContexts[customCtxIndex];
        VMA_ASSERT(pBlockVectorCtx && pBlockVectorCtx->GetBlockVector());

        if(!pBlockVectorCtx->hasDefragmentationPlan)
        {
            res = VK_NOT_READY;
            continue;
        }

        pBlockVectorCtx->GetBlockVector()->CommitDefragmentations(
            pBlockVectorCtx, m_pStats);

        if(pBlockVectorCtx->defragmentationMoves.size() != pBlockVectorCtx->defragmentationMovesCommitted)
            res = VK_NOT_READY;
    }

    return res;
}
#if VMA_RECORDING_ENABLED

VmaRecorder::VmaRecorder() :
    m_UseMutex(true),
    m_Flags(0),
    m_File(VMA_NULL),
    m_RecordingStartTime(std::chrono::high_resolution_clock::now())
{
}

VkResult VmaRecorder::Init(const VmaRecordSettings& settings, bool useMutex)
{
    m_UseMutex = useMutex;
    m_Flags = settings.flags;

#if defined(_WIN32)
    // Open file for writing.
    errno_t err = fopen_s(&m_File, settings.pFilePath, "wb");
    if(err != 0)
    {
        return VK_ERROR_INITIALIZATION_FAILED;
    }
#else
    // Open file for writing.
    m_File = fopen(settings.pFilePath, "wb");
    if(m_File == 0)
    {
        return VK_ERROR_INITIALIZATION_FAILED;
    }
#endif

    // Write header.
    fprintf(m_File, "%s\n", "Vulkan Memory Allocator,Calls recording");
    fprintf(m_File, "%s\n", "1,8");

    return VK_SUCCESS;
}

VmaRecorder::~VmaRecorder()
{
    if(m_File != VMA_NULL)
    {
        fclose(m_File);
    }
}
void VmaRecorder::RecordCreateAllocator(uint32_t frameIndex)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaCreateAllocator\n", callParams.threadId, callParams.time, frameIndex);
    Flush();
}

void VmaRecorder::RecordDestroyAllocator(uint32_t frameIndex)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaDestroyAllocator\n", callParams.threadId, callParams.time, frameIndex);
    Flush();
}
void VmaRecorder::RecordCreatePool(uint32_t frameIndex,
    const VmaPoolCreateInfo& createInfo,
    VmaPool pool)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaCreatePool,%u,%u,%llu,%llu,%llu,%u,%p\n", callParams.threadId, callParams.time, frameIndex,
        createInfo.memoryTypeIndex,
        createInfo.flags,
        createInfo.blockSize,
        (uint64_t)createInfo.minBlockCount,
        (uint64_t)createInfo.maxBlockCount,
        createInfo.frameInUseCount,
        pool);
    Flush();
}

void VmaRecorder::RecordDestroyPool(uint32_t frameIndex, VmaPool pool)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaDestroyPool,%p\n", callParams.threadId, callParams.time, frameIndex,
        pool);
    Flush();
}
void VmaRecorder::RecordAllocateMemory(uint32_t frameIndex,
    const VkMemoryRequirements& vkMemReq,
    const VmaAllocationCreateInfo& createInfo,
    VmaAllocation allocation)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    UserDataString userDataStr(createInfo.flags, createInfo.pUserData);
    fprintf(m_File, "%u,%.3f,%u,vmaAllocateMemory,%llu,%llu,%u,%u,%u,%u,%u,%u,%p,%p,%s\n", callParams.threadId, callParams.time, frameIndex,
        vkMemReq.size,
        vkMemReq.alignment,
        vkMemReq.memoryTypeBits,
        createInfo.flags,
        createInfo.usage,
        createInfo.requiredFlags,
        createInfo.preferredFlags,
        createInfo.memoryTypeBits,
        createInfo.pool,
        allocation,
        userDataStr.GetString());
    Flush();
}
void VmaRecorder::RecordAllocateMemoryPages(uint32_t frameIndex,
    const VkMemoryRequirements& vkMemReq,
    const VmaAllocationCreateInfo& createInfo,
    uint64_t allocationCount,
    const VmaAllocation* pAllocations)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    UserDataString userDataStr(createInfo.flags, createInfo.pUserData);
    fprintf(m_File, "%u,%.3f,%u,vmaAllocateMemoryPages,%llu,%llu,%u,%u,%u,%u,%u,%u,%p,", callParams.threadId, callParams.time, frameIndex,
        vkMemReq.size,
        vkMemReq.alignment,
        vkMemReq.memoryTypeBits,
        createInfo.flags,
        createInfo.usage,
        createInfo.requiredFlags,
        createInfo.preferredFlags,
        createInfo.memoryTypeBits,
        createInfo.pool);
    PrintPointerList(allocationCount, pAllocations);
    fprintf(m_File, ",%s\n", userDataStr.GetString());
    Flush();
}
void VmaRecorder::RecordAllocateMemoryForBuffer(uint32_t frameIndex,
    const VkMemoryRequirements& vkMemReq,
    bool requiresDedicatedAllocation,
    bool prefersDedicatedAllocation,
    const VmaAllocationCreateInfo& createInfo,
    VmaAllocation allocation)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    UserDataString userDataStr(createInfo.flags, createInfo.pUserData);
    fprintf(m_File, "%u,%.3f,%u,vmaAllocateMemoryForBuffer,%llu,%llu,%u,%u,%u,%u,%u,%u,%u,%u,%p,%p,%s\n", callParams.threadId, callParams.time, frameIndex,
        vkMemReq.size,
        vkMemReq.alignment,
        vkMemReq.memoryTypeBits,
        requiresDedicatedAllocation ? 1 : 0,
        prefersDedicatedAllocation ? 1 : 0,
        createInfo.flags,
        createInfo.usage,
        createInfo.requiredFlags,
        createInfo.preferredFlags,
        createInfo.memoryTypeBits,
        createInfo.pool,
        allocation,
        userDataStr.GetString());
    Flush();
}
void VmaRecorder::RecordAllocateMemoryForImage(uint32_t frameIndex,
    const VkMemoryRequirements& vkMemReq,
    bool requiresDedicatedAllocation,
    bool prefersDedicatedAllocation,
    const VmaAllocationCreateInfo& createInfo,
    VmaAllocation allocation)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    UserDataString userDataStr(createInfo.flags, createInfo.pUserData);
    fprintf(m_File, "%u,%.3f,%u,vmaAllocateMemoryForImage,%llu,%llu,%u,%u,%u,%u,%u,%u,%u,%u,%p,%p,%s\n", callParams.threadId, callParams.time, frameIndex,
        vkMemReq.size,
        vkMemReq.alignment,
        vkMemReq.memoryTypeBits,
        requiresDedicatedAllocation ? 1 : 0,
        prefersDedicatedAllocation ? 1 : 0,
        createInfo.flags,
        createInfo.usage,
        createInfo.requiredFlags,
        createInfo.preferredFlags,
        createInfo.memoryTypeBits,
        createInfo.pool,
        allocation,
        userDataStr.GetString());
    Flush();
}
void VmaRecorder::RecordFreeMemory(uint32_t frameIndex,
    VmaAllocation allocation)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaFreeMemory,%p\n", callParams.threadId, callParams.time, frameIndex,
        allocation);
    Flush();
}

void VmaRecorder::RecordFreeMemoryPages(uint32_t frameIndex,
    uint64_t allocationCount,
    const VmaAllocation* pAllocations)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaFreeMemoryPages,", callParams.threadId, callParams.time, frameIndex);
    PrintPointerList(allocationCount, pAllocations);
    fprintf(m_File, "\n");
    Flush();
}
void VmaRecorder::RecordSetAllocationUserData(uint32_t frameIndex,
    VmaAllocation allocation,
    const void* pUserData)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    UserDataString userDataStr(
        allocation->IsUserDataString() ? VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT : 0,
        pUserData);
    fprintf(m_File, "%u,%.3f,%u,vmaSetAllocationUserData,%p,%s\n", callParams.threadId, callParams.time, frameIndex,
        allocation,
        userDataStr.GetString());
    Flush();
}
void VmaRecorder::RecordCreateLostAllocation(uint32_t frameIndex,
    VmaAllocation allocation)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaCreateLostAllocation,%p\n", callParams.threadId, callParams.time, frameIndex,
        allocation);
    Flush();
}
void VmaRecorder::RecordMapMemory(uint32_t frameIndex,
    VmaAllocation allocation)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaMapMemory,%p\n", callParams.threadId, callParams.time, frameIndex,
        allocation);
    Flush();
}
void VmaRecorder::RecordUnmapMemory(uint32_t frameIndex,
    VmaAllocation allocation)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaUnmapMemory,%p\n", callParams.threadId, callParams.time, frameIndex,
        allocation);
    Flush();
}
void VmaRecorder::RecordFlushAllocation(uint32_t frameIndex,
    VmaAllocation allocation, VkDeviceSize offset, VkDeviceSize size)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaFlushAllocation,%p,%llu,%llu\n", callParams.threadId, callParams.time, frameIndex,
        allocation,
        offset,
        size);
    Flush();
}
void VmaRecorder::RecordInvalidateAllocation(uint32_t frameIndex,
    VmaAllocation allocation, VkDeviceSize offset, VkDeviceSize size)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaInvalidateAllocation,%p,%llu,%llu\n", callParams.threadId, callParams.time, frameIndex,
        allocation,
        offset,
        size);
    Flush();
}
void VmaRecorder::RecordCreateBuffer(uint32_t frameIndex,
    const VkBufferCreateInfo& bufCreateInfo,
    const VmaAllocationCreateInfo& allocCreateInfo,
    VmaAllocation allocation)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    UserDataString userDataStr(allocCreateInfo.flags, allocCreateInfo.pUserData);
    fprintf(m_File, "%u,%.3f,%u,vmaCreateBuffer,%u,%llu,%u,%u,%u,%u,%u,%u,%u,%p,%p,%s\n", callParams.threadId, callParams.time, frameIndex,
        bufCreateInfo.flags,
        bufCreateInfo.size,
        bufCreateInfo.usage,
        bufCreateInfo.sharingMode,
        allocCreateInfo.flags,
        allocCreateInfo.usage,
        allocCreateInfo.requiredFlags,
        allocCreateInfo.preferredFlags,
        allocCreateInfo.memoryTypeBits,
        allocCreateInfo.pool,
        allocation,
        userDataStr.GetString());
    Flush();
}
void VmaRecorder::RecordCreateImage(uint32_t frameIndex,
    const VkImageCreateInfo& imageCreateInfo,
    const VmaAllocationCreateInfo& allocCreateInfo,
    VmaAllocation allocation)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    UserDataString userDataStr(allocCreateInfo.flags, allocCreateInfo.pUserData);
    fprintf(m_File, "%u,%.3f,%u,vmaCreateImage,%u,%u,%u,%u,%u,%u,%u,%u,%u,%u,%u,%u,%u,%u,%u,%u,%u,%u,%p,%p,%s\n", callParams.threadId, callParams.time, frameIndex,
        imageCreateInfo.flags,
        imageCreateInfo.imageType,
        imageCreateInfo.format,
        imageCreateInfo.extent.width,
        imageCreateInfo.extent.height,
        imageCreateInfo.extent.depth,
        imageCreateInfo.mipLevels,
        imageCreateInfo.arrayLayers,
        imageCreateInfo.samples,
        imageCreateInfo.tiling,
        imageCreateInfo.usage,
        imageCreateInfo.sharingMode,
        imageCreateInfo.initialLayout,
        allocCreateInfo.flags,
        allocCreateInfo.usage,
        allocCreateInfo.requiredFlags,
        allocCreateInfo.preferredFlags,
        allocCreateInfo.memoryTypeBits,
        allocCreateInfo.pool,
        allocation,
        userDataStr.GetString());
    Flush();
}
void VmaRecorder::RecordDestroyBuffer(uint32_t frameIndex,
    VmaAllocation allocation)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaDestroyBuffer,%p\n", callParams.threadId, callParams.time, frameIndex,
        allocation);
    Flush();
}
void VmaRecorder::RecordDestroyImage(uint32_t frameIndex,
    VmaAllocation allocation)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaDestroyImage,%p\n", callParams.threadId, callParams.time, frameIndex,
        allocation);
    Flush();
}
void VmaRecorder::RecordTouchAllocation(uint32_t frameIndex,
    VmaAllocation allocation)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaTouchAllocation,%p\n", callParams.threadId, callParams.time, frameIndex,
        allocation);
    Flush();
}
void VmaRecorder::RecordGetAllocationInfo(uint32_t frameIndex,
    VmaAllocation allocation)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaGetAllocationInfo,%p\n", callParams.threadId, callParams.time, frameIndex,
        allocation);
    Flush();
}
void VmaRecorder::RecordMakePoolAllocationsLost(uint32_t frameIndex,
    VmaPool pool)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaMakePoolAllocationsLost,%p\n", callParams.threadId, callParams.time, frameIndex,
        pool);
    Flush();
}
void VmaRecorder::RecordDefragmentationBegin(uint32_t frameIndex,
    const VmaDefragmentationInfo2& info,
    VmaDefragmentationContext ctx)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaDefragmentationBegin,%u,", callParams.threadId, callParams.time, frameIndex,
        info.flags);
    PrintPointerList(info.allocationCount, info.pAllocations);
    fprintf(m_File, ",");
    PrintPointerList(info.poolCount, info.pPools);
    fprintf(m_File, ",%llu,%u,%llu,%u,%p,%p\n",
        info.maxCpuBytesToMove,
        info.maxCpuAllocationsToMove,
        info.maxGpuBytesToMove,
        info.maxGpuAllocationsToMove,
        info.commandBuffer,
        ctx);
    Flush();
}
void VmaRecorder::RecordDefragmentationEnd(uint32_t frameIndex,
    VmaDefragmentationContext ctx)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaDefragmentationEnd,%p\n", callParams.threadId, callParams.time, frameIndex,
        ctx);
    Flush();
}
void VmaRecorder::RecordSetPoolName(uint32_t frameIndex,
    VmaPool pool,
    const char* name)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaSetPoolName,%p,%s\n", callParams.threadId, callParams.time, frameIndex,
        pool, name != VMA_NULL ? name : "");
    Flush();
}
VmaRecorder::UserDataString::UserDataString(VmaAllocationCreateFlags allocFlags, const void* pUserData)
{
    if(pUserData != VMA_NULL)
    {
        if((allocFlags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0)
        {
            m_Str = (const char*)pUserData;
        }
        else
        {
            // If the copy-string flag is not set, print the pointer value itself.
            snprintf(m_PtrStr, 17, "%p", pUserData);
            m_Str = m_PtrStr;
        }
    }
    else
    {
        m_Str = "";
    }
}
void VmaRecorder::WriteConfiguration(
    const VkPhysicalDeviceProperties& devProps,
    const VkPhysicalDeviceMemoryProperties& memProps,
    uint32_t vulkanApiVersion,
    bool dedicatedAllocationExtensionEnabled,
    bool bindMemory2ExtensionEnabled,
    bool memoryBudgetExtensionEnabled,
    bool deviceCoherentMemoryExtensionEnabled)
{
    fprintf(m_File, "Config,Begin\n");

    fprintf(m_File, "VulkanApiVersion,%u,%u\n", VK_VERSION_MAJOR(vulkanApiVersion), VK_VERSION_MINOR(vulkanApiVersion));

    fprintf(m_File, "PhysicalDevice,apiVersion,%u\n", devProps.apiVersion);
    fprintf(m_File, "PhysicalDevice,driverVersion,%u\n", devProps.driverVersion);
    fprintf(m_File, "PhysicalDevice,vendorID,%u\n", devProps.vendorID);
    fprintf(m_File, "PhysicalDevice,deviceID,%u\n", devProps.deviceID);
    fprintf(m_File, "PhysicalDevice,deviceType,%u\n", devProps.deviceType);
    fprintf(m_File, "PhysicalDevice,deviceName,%s\n", devProps.deviceName);

    fprintf(m_File, "PhysicalDeviceLimits,maxMemoryAllocationCount,%u\n", devProps.limits.maxMemoryAllocationCount);
    fprintf(m_File, "PhysicalDeviceLimits,bufferImageGranularity,%llu\n", devProps.limits.bufferImageGranularity);
    fprintf(m_File, "PhysicalDeviceLimits,nonCoherentAtomSize,%llu\n", devProps.limits.nonCoherentAtomSize);

    fprintf(m_File, "PhysicalDeviceMemory,HeapCount,%u\n", memProps.memoryHeapCount);
    for(uint32_t i = 0; i < memProps.memoryHeapCount; ++i)
    {
        fprintf(m_File, "PhysicalDeviceMemory,Heap,%u,size,%llu\n", i, memProps.memoryHeaps[i].size);
        fprintf(m_File, "PhysicalDeviceMemory,Heap,%u,flags,%u\n", i, memProps.memoryHeaps[i].flags);
    }
    fprintf(m_File, "PhysicalDeviceMemory,TypeCount,%u\n", memProps.memoryTypeCount);
    for(uint32_t i = 0; i < memProps.memoryTypeCount; ++i)
    {
        fprintf(m_File, "PhysicalDeviceMemory,Type,%u,heapIndex,%u\n", i, memProps.memoryTypes[i].heapIndex);
        fprintf(m_File, "PhysicalDeviceMemory,Type,%u,propertyFlags,%u\n", i, memProps.memoryTypes[i].propertyFlags);
    }

    fprintf(m_File, "Extension,VK_KHR_dedicated_allocation,%u\n", dedicatedAllocationExtensionEnabled ? 1 : 0);
    fprintf(m_File, "Extension,VK_KHR_bind_memory2,%u\n", bindMemory2ExtensionEnabled ? 1 : 0);
    fprintf(m_File, "Extension,VK_EXT_memory_budget,%u\n", memoryBudgetExtensionEnabled ? 1 : 0);
    fprintf(m_File, "Extension,VK_AMD_device_coherent_memory,%u\n", deviceCoherentMemoryExtensionEnabled ? 1 : 0);

    fprintf(m_File, "Macro,VMA_DEBUG_ALWAYS_DEDICATED_MEMORY,%u\n", VMA_DEBUG_ALWAYS_DEDICATED_MEMORY ? 1 : 0);
    fprintf(m_File, "Macro,VMA_MIN_ALIGNMENT,%llu\n", (VkDeviceSize)VMA_MIN_ALIGNMENT);
    fprintf(m_File, "Macro,VMA_DEBUG_MARGIN,%llu\n", (VkDeviceSize)VMA_DEBUG_MARGIN);
    fprintf(m_File, "Macro,VMA_DEBUG_INITIALIZE_ALLOCATIONS,%u\n", VMA_DEBUG_INITIALIZE_ALLOCATIONS ? 1 : 0);
    fprintf(m_File, "Macro,VMA_DEBUG_DETECT_CORRUPTION,%u\n", VMA_DEBUG_DETECT_CORRUPTION ? 1 : 0);
    fprintf(m_File, "Macro,VMA_DEBUG_GLOBAL_MUTEX,%u\n", VMA_DEBUG_GLOBAL_MUTEX ? 1 : 0);
    fprintf(m_File, "Macro,VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY,%llu\n", (VkDeviceSize)VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY);
    fprintf(m_File, "Macro,VMA_SMALL_HEAP_MAX_SIZE,%llu\n", (VkDeviceSize)VMA_SMALL_HEAP_MAX_SIZE);
    fprintf(m_File, "Macro,VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE,%llu\n", (VkDeviceSize)VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE);

    fprintf(m_File, "Config,End\n");
}
void VmaRecorder::GetBasicParams(CallParams& outParams)
{
#if defined(_WIN32)
    outParams.threadId = GetCurrentThreadId();
#else
    // Use C++11 features to get the thread id and convert it to uint32_t.
    // There is room for optimization since stringstream is quite slow.
    std::thread::id thread_id = std::this_thread::get_id();
    std::stringstream thread_id_to_string_converter;
    thread_id_to_string_converter << thread_id;
    std::string thread_id_as_string = thread_id_to_string_converter.str();
    outParams.threadId = static_cast<uint32_t>(std::stoi(thread_id_as_string.c_str()));
#endif

    auto current_time = std::chrono::high_resolution_clock::now();

    outParams.time = std::chrono::duration<double, std::chrono::seconds::period>(current_time - m_RecordingStartTime).count();
}
void VmaRecorder::PrintPointerList(uint64_t count, const VmaAllocation* pItems)
{
    if(count)
    {
        fprintf(m_File, "%p", pItems[0]);
        for(uint64_t i = 1; i < count; ++i)
        {
            fprintf(m_File, " %p", pItems[i]);
        }
    }
}
void VmaRecorder::Flush()
{
    if((m_Flags & VMA_RECORD_FLUSH_AFTER_CALL_BIT) != 0)
    {
        fflush(m_File);
    }
}
VmaAllocationObjectAllocator::VmaAllocationObjectAllocator(const VkAllocationCallbacks* pAllocationCallbacks) :
    m_Allocator(pAllocationCallbacks, 1024)
{
}

template<typename... Types> VmaAllocation VmaAllocationObjectAllocator::Allocate(Types... args)
{
    VmaMutexLock mutexLock(m_Mutex);
    return m_Allocator.Alloc<Types...>(std::forward<Types>(args)...);
}

void VmaAllocationObjectAllocator::Free(VmaAllocation hAlloc)
{
    VmaMutexLock mutexLock(m_Mutex);
    m_Allocator.Free(hAlloc);
}
VmaAllocator_T::VmaAllocator_T(const VmaAllocatorCreateInfo* pCreateInfo) :
    m_UseMutex((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT) == 0),
    m_VulkanApiVersion(pCreateInfo->vulkanApiVersion != 0 ? pCreateInfo->vulkanApiVersion : VK_API_VERSION_1_0),
    m_UseKhrDedicatedAllocation((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT) != 0),
    m_UseKhrBindMemory2((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT) != 0),
    m_UseExtMemoryBudget((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT) != 0),
    m_UseAmdDeviceCoherentMemory((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_AMD_DEVICE_COHERENT_MEMORY_BIT) != 0),
    m_UseKhrBufferDeviceAddress((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT) != 0),
    m_UseExtMemoryPriority((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT) != 0),
    m_hDevice(pCreateInfo->device),
    m_hInstance(pCreateInfo->instance),
    m_AllocationCallbacksSpecified(pCreateInfo->pAllocationCallbacks != VMA_NULL),
    m_AllocationCallbacks(pCreateInfo->pAllocationCallbacks ?
        *pCreateInfo->pAllocationCallbacks : VmaEmptyAllocationCallbacks),
    m_AllocationObjectAllocator(&m_AllocationCallbacks),
    m_HeapSizeLimitMask(0),
    m_DeviceMemoryCount(0),
    m_PreferredLargeHeapBlockSize(0),
    m_PhysicalDevice(pCreateInfo->physicalDevice),
    m_CurrentFrameIndex(0),
    m_GpuDefragmentationMemoryTypeBits(UINT32_MAX),
    m_NextPoolId(0),
    m_GlobalMemoryTypeBits(UINT32_MAX)
#if VMA_RECORDING_ENABLED
    ,m_pRecorder(VMA_NULL)
#endif
{
    if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
    {
        m_UseKhrDedicatedAllocation = false;
        m_UseKhrBindMemory2 = false;
    }

    if(VMA_DEBUG_DETECT_CORRUPTION)
    {
        // Needs to be a multiple of sizeof(uint32_t) because the margin is written with a 32-bit magic value.
        VMA_ASSERT(VMA_DEBUG_MARGIN % sizeof(uint32_t) == 0);
    }

    VMA_ASSERT(pCreateInfo->physicalDevice && pCreateInfo->device && pCreateInfo->instance);
    if(m_VulkanApiVersion < VK_MAKE_VERSION(1, 1, 0))
    {
#if !(VMA_DEDICATED_ALLOCATION)
        if((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT) != 0)
        {
            VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT set but required extensions are disabled by preprocessor macros.");
        }
#endif
#if !(VMA_BIND_MEMORY2)
        if((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT) != 0)
        {
            VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT set but required extension is disabled by preprocessor macros.");
        }
#endif
    }
#if !(VMA_MEMORY_BUDGET)
    if(m_UseExtMemoryBudget)
    {
        VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT set but required extension is disabled by preprocessor macros.");
    }
#endif
#if !(VMA_BUFFER_DEVICE_ADDRESS)
    if(m_UseKhrBufferDeviceAddress)
    {
        VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT is set but required extension or Vulkan 1.2 is not available in your Vulkan header or its support in VMA has been disabled by a preprocessor macro.");
    }
#endif
#if VMA_VULKAN_VERSION < 1002000
    if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 2, 0))
    {
        VMA_ASSERT(0 && "vulkanApiVersion >= VK_API_VERSION_1_2 but required Vulkan version is disabled by preprocessor macros.");
    }
#endif
#if VMA_VULKAN_VERSION < 1001000
    if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
    {
        VMA_ASSERT(0 && "vulkanApiVersion >= VK_API_VERSION_1_1 but required Vulkan version is disabled by preprocessor macros.");
    }
#endif
#if !(VMA_MEMORY_PRIORITY)
    if(m_UseExtMemoryPriority)
    {
        VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT is set but required extension is not available in your Vulkan header or its support in VMA has been disabled by a preprocessor macro.");
    }
#endif
    memset(&m_DeviceMemoryCallbacks, 0, sizeof(m_DeviceMemoryCallbacks));
    memset(&m_PhysicalDeviceProperties, 0, sizeof(m_PhysicalDeviceProperties));
    memset(&m_MemProps, 0, sizeof(m_MemProps));

    memset(&m_pBlockVectors, 0, sizeof(m_pBlockVectors));
    memset(&m_VulkanFunctions, 0, sizeof(m_VulkanFunctions));

#if VMA_EXTERNAL_MEMORY
    memset(&m_TypeExternalMemoryHandleTypes, 0, sizeof(m_TypeExternalMemoryHandleTypes));
#endif // #if VMA_EXTERNAL_MEMORY

    if(pCreateInfo->pDeviceMemoryCallbacks != VMA_NULL)
    {
        m_DeviceMemoryCallbacks.pUserData = pCreateInfo->pDeviceMemoryCallbacks->pUserData;
        m_DeviceMemoryCallbacks.pfnAllocate = pCreateInfo->pDeviceMemoryCallbacks->pfnAllocate;
        m_DeviceMemoryCallbacks.pfnFree = pCreateInfo->pDeviceMemoryCallbacks->pfnFree;
    }

    ImportVulkanFunctions(pCreateInfo->pVulkanFunctions);

    (*m_VulkanFunctions.vkGetPhysicalDeviceProperties)(m_PhysicalDevice, &m_PhysicalDeviceProperties);
    (*m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties)(m_PhysicalDevice, &m_MemProps);

    VMA_ASSERT(VmaIsPow2(VMA_MIN_ALIGNMENT));
    VMA_ASSERT(VmaIsPow2(VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY));
    VMA_ASSERT(VmaIsPow2(m_PhysicalDeviceProperties.limits.bufferImageGranularity));
    VMA_ASSERT(VmaIsPow2(m_PhysicalDeviceProperties.limits.nonCoherentAtomSize));

    m_PreferredLargeHeapBlockSize = (pCreateInfo->preferredLargeHeapBlockSize != 0) ?
        pCreateInfo->preferredLargeHeapBlockSize : static_cast<VkDeviceSize>(VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE);

    m_GlobalMemoryTypeBits = CalculateGlobalMemoryTypeBits();

#if VMA_EXTERNAL_MEMORY
    if(pCreateInfo->pTypeExternalMemoryHandleTypes != VMA_NULL)
    {
        memcpy(m_TypeExternalMemoryHandleTypes, pCreateInfo->pTypeExternalMemoryHandleTypes,
            sizeof(VkExternalMemoryHandleTypeFlagsKHR) * GetMemoryTypeCount());
    }
#endif // #if VMA_EXTERNAL_MEMORY

    if(pCreateInfo->pHeapSizeLimit != VMA_NULL)
    {
        for(uint32_t heapIndex = 0; heapIndex < GetMemoryHeapCount(); ++heapIndex)
        {
            const VkDeviceSize limit = pCreateInfo->pHeapSizeLimit[heapIndex];
            if(limit != VK_WHOLE_SIZE)
            {
                m_HeapSizeLimitMask |= 1u << heapIndex;
                if(limit < m_MemProps.memoryHeaps[heapIndex].size)
                {
                    m_MemProps.memoryHeaps[heapIndex].size = limit;
                }
            }
        }
    }
    for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    {
        const VkDeviceSize preferredBlockSize = CalcPreferredBlockSize(memTypeIndex);

        m_pBlockVectors[memTypeIndex] = vma_new(this, VmaBlockVector)(
            this,
            VK_NULL_HANDLE, // hParentPool
            memTypeIndex,
            preferredBlockSize,
            0, // minBlockCount
            SIZE_MAX, // maxBlockCount
            GetBufferImageGranularity(),
            // ... (remaining constructor arguments elided in this listing)
            GetMemoryTypeMinAlignment(memTypeIndex),
            VMA_NULL); // pMemoryAllocateNext
        // No need to call CreateMinBlocks here, because minBlockCount is 0.
    }
}

VkResult VmaAllocator_T::Init(const VmaAllocatorCreateInfo* pCreateInfo)
{
    VkResult res = VK_SUCCESS;

    if(pCreateInfo->pRecordSettings != VMA_NULL &&
        !VmaStrIsEmpty(pCreateInfo->pRecordSettings->pFilePath))
    {
#if VMA_RECORDING_ENABLED
        m_pRecorder = vma_new(this, VmaRecorder)();
        res = m_pRecorder->Init(*pCreateInfo->pRecordSettings, m_UseMutex);
        if(res != VK_SUCCESS)
        {
            return res;
        }
        m_pRecorder->WriteConfiguration(
            m_PhysicalDeviceProperties,
            m_MemProps,
            m_VulkanApiVersion,
            m_UseKhrDedicatedAllocation,
            m_UseKhrBindMemory2,
            m_UseExtMemoryBudget,
            m_UseAmdDeviceCoherentMemory);
        m_pRecorder->RecordCreateAllocator(GetCurrentFrameIndex());
#else
        VMA_ASSERT(0 && "VmaAllocatorCreateInfo::pRecordSettings used, but not supported due to VMA_RECORDING_ENABLED not defined to 1.");
        return VK_ERROR_FEATURE_NOT_PRESENT;
#endif
    }

#if VMA_MEMORY_BUDGET
    if(m_UseExtMemoryBudget)
    {
        UpdateVulkanBudget();
    }
#endif // #if VMA_MEMORY_BUDGET

    return res;
}
VmaAllocator_T::~VmaAllocator_T()
{
#if VMA_RECORDING_ENABLED
    if(m_pRecorder != VMA_NULL)
    {
        m_pRecorder->RecordDestroyAllocator(GetCurrentFrameIndex());
        vma_delete(this, m_pRecorder);
    }
#endif

    VMA_ASSERT(m_Pools.IsEmpty());

    for(size_t memTypeIndex = GetMemoryTypeCount(); memTypeIndex--; )
    {
        if(!m_DedicatedAllocations[memTypeIndex].IsEmpty())
        {
            VMA_ASSERT(0 && "Unfreed dedicated allocations found.");
        }
        vma_delete(this, m_pBlockVectors[memTypeIndex]);
    }
}
void VmaAllocator_T::ImportVulkanFunctions(const VmaVulkanFunctions* pVulkanFunctions)
{
#if VMA_STATIC_VULKAN_FUNCTIONS == 1
    ImportVulkanFunctions_Static();
#endif

    if(pVulkanFunctions != VMA_NULL)
    {
        ImportVulkanFunctions_Custom(pVulkanFunctions);
    }

#if VMA_DYNAMIC_VULKAN_FUNCTIONS == 1
    ImportVulkanFunctions_Dynamic();
#endif

    ValidateVulkanFunctions();
}
#if VMA_STATIC_VULKAN_FUNCTIONS == 1

void VmaAllocator_T::ImportVulkanFunctions_Static()
{
    // Vulkan 1.0
    m_VulkanFunctions.vkGetPhysicalDeviceProperties = (PFN_vkGetPhysicalDeviceProperties)vkGetPhysicalDeviceProperties;
    m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties = (PFN_vkGetPhysicalDeviceMemoryProperties)vkGetPhysicalDeviceMemoryProperties;
    m_VulkanFunctions.vkAllocateMemory = (PFN_vkAllocateMemory)vkAllocateMemory;
    m_VulkanFunctions.vkFreeMemory = (PFN_vkFreeMemory)vkFreeMemory;
    m_VulkanFunctions.vkMapMemory = (PFN_vkMapMemory)vkMapMemory;
    m_VulkanFunctions.vkUnmapMemory = (PFN_vkUnmapMemory)vkUnmapMemory;
    m_VulkanFunctions.vkFlushMappedMemoryRanges = (PFN_vkFlushMappedMemoryRanges)vkFlushMappedMemoryRanges;
    m_VulkanFunctions.vkInvalidateMappedMemoryRanges = (PFN_vkInvalidateMappedMemoryRanges)vkInvalidateMappedMemoryRanges;
    m_VulkanFunctions.vkBindBufferMemory = (PFN_vkBindBufferMemory)vkBindBufferMemory;
    m_VulkanFunctions.vkBindImageMemory = (PFN_vkBindImageMemory)vkBindImageMemory;
    m_VulkanFunctions.vkGetBufferMemoryRequirements = (PFN_vkGetBufferMemoryRequirements)vkGetBufferMemoryRequirements;
    m_VulkanFunctions.vkGetImageMemoryRequirements = (PFN_vkGetImageMemoryRequirements)vkGetImageMemoryRequirements;
    m_VulkanFunctions.vkCreateBuffer = (PFN_vkCreateBuffer)vkCreateBuffer;
    m_VulkanFunctions.vkDestroyBuffer = (PFN_vkDestroyBuffer)vkDestroyBuffer;
    m_VulkanFunctions.vkCreateImage = (PFN_vkCreateImage)vkCreateImage;
    m_VulkanFunctions.vkDestroyImage = (PFN_vkDestroyImage)vkDestroyImage;
    m_VulkanFunctions.vkCmdCopyBuffer = (PFN_vkCmdCopyBuffer)vkCmdCopyBuffer;

    // Vulkan 1.1
#if VMA_VULKAN_VERSION >= 1001000
    if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
    {
        m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR = (PFN_vkGetBufferMemoryRequirements2)vkGetBufferMemoryRequirements2;
        m_VulkanFunctions.vkGetImageMemoryRequirements2KHR = (PFN_vkGetImageMemoryRequirements2)vkGetImageMemoryRequirements2;
        m_VulkanFunctions.vkBindBufferMemory2KHR = (PFN_vkBindBufferMemory2)vkBindBufferMemory2;
        m_VulkanFunctions.vkBindImageMemory2KHR = (PFN_vkBindImageMemory2)vkBindImageMemory2;
        m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties2KHR = (PFN_vkGetPhysicalDeviceMemoryProperties2)vkGetPhysicalDeviceMemoryProperties2;
    }
#endif
}

#endif // #if VMA_STATIC_VULKAN_FUNCTIONS == 1
void VmaAllocator_T::ImportVulkanFunctions_Custom(const VmaVulkanFunctions* pVulkanFunctions)
{
    VMA_ASSERT(pVulkanFunctions != VMA_NULL);

#define VMA_COPY_IF_NOT_NULL(funcName) \
    if(pVulkanFunctions->funcName != VMA_NULL) m_VulkanFunctions.funcName = pVulkanFunctions->funcName;

    VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceProperties);
    VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceMemoryProperties);
    VMA_COPY_IF_NOT_NULL(vkAllocateMemory);
    VMA_COPY_IF_NOT_NULL(vkFreeMemory);
    VMA_COPY_IF_NOT_NULL(vkMapMemory);
    VMA_COPY_IF_NOT_NULL(vkUnmapMemory);
    VMA_COPY_IF_NOT_NULL(vkFlushMappedMemoryRanges);
    VMA_COPY_IF_NOT_NULL(vkInvalidateMappedMemoryRanges);
    VMA_COPY_IF_NOT_NULL(vkBindBufferMemory);
    VMA_COPY_IF_NOT_NULL(vkBindImageMemory);
    VMA_COPY_IF_NOT_NULL(vkGetBufferMemoryRequirements);
    VMA_COPY_IF_NOT_NULL(vkGetImageMemoryRequirements);
    VMA_COPY_IF_NOT_NULL(vkCreateBuffer);
    VMA_COPY_IF_NOT_NULL(vkDestroyBuffer);
    VMA_COPY_IF_NOT_NULL(vkCreateImage);
    VMA_COPY_IF_NOT_NULL(vkDestroyImage);
    VMA_COPY_IF_NOT_NULL(vkCmdCopyBuffer);

#if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
    VMA_COPY_IF_NOT_NULL(vkGetBufferMemoryRequirements2KHR);
    VMA_COPY_IF_NOT_NULL(vkGetImageMemoryRequirements2KHR);
#endif

#if VMA_BIND_MEMORY2 || VMA_VULKAN_VERSION >= 1001000
    VMA_COPY_IF_NOT_NULL(vkBindBufferMemory2KHR);
    VMA_COPY_IF_NOT_NULL(vkBindImageMemory2KHR);
#endif

#if VMA_MEMORY_BUDGET
    VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceMemoryProperties2KHR);
#endif

#undef VMA_COPY_IF_NOT_NULL
}
#if VMA_DYNAMIC_VULKAN_FUNCTIONS == 1

void VmaAllocator_T::ImportVulkanFunctions_Dynamic()
{
#define VMA_FETCH_INSTANCE_FUNC(memberName, functionPointerType, functionNameString) \
    if(m_VulkanFunctions.memberName == VMA_NULL) \
        m_VulkanFunctions.memberName = \
            (functionPointerType)vkGetInstanceProcAddr(m_hInstance, functionNameString);
#define VMA_FETCH_DEVICE_FUNC(memberName, functionPointerType, functionNameString) \
    if(m_VulkanFunctions.memberName == VMA_NULL) \
        m_VulkanFunctions.memberName = \
            (functionPointerType)vkGetDeviceProcAddr(m_hDevice, functionNameString);

    VMA_FETCH_INSTANCE_FUNC(vkGetPhysicalDeviceProperties, PFN_vkGetPhysicalDeviceProperties, "vkGetPhysicalDeviceProperties");
    VMA_FETCH_INSTANCE_FUNC(vkGetPhysicalDeviceMemoryProperties, PFN_vkGetPhysicalDeviceMemoryProperties, "vkGetPhysicalDeviceMemoryProperties");
    VMA_FETCH_DEVICE_FUNC(vkAllocateMemory, PFN_vkAllocateMemory, "vkAllocateMemory");
    VMA_FETCH_DEVICE_FUNC(vkFreeMemory, PFN_vkFreeMemory, "vkFreeMemory");
    VMA_FETCH_DEVICE_FUNC(vkMapMemory, PFN_vkMapMemory, "vkMapMemory");
    VMA_FETCH_DEVICE_FUNC(vkUnmapMemory, PFN_vkUnmapMemory, "vkUnmapMemory");
    VMA_FETCH_DEVICE_FUNC(vkFlushMappedMemoryRanges, PFN_vkFlushMappedMemoryRanges, "vkFlushMappedMemoryRanges");
    VMA_FETCH_DEVICE_FUNC(vkInvalidateMappedMemoryRanges, PFN_vkInvalidateMappedMemoryRanges, "vkInvalidateMappedMemoryRanges");
    VMA_FETCH_DEVICE_FUNC(vkBindBufferMemory, PFN_vkBindBufferMemory, "vkBindBufferMemory");
    VMA_FETCH_DEVICE_FUNC(vkBindImageMemory, PFN_vkBindImageMemory, "vkBindImageMemory");
    VMA_FETCH_DEVICE_FUNC(vkGetBufferMemoryRequirements, PFN_vkGetBufferMemoryRequirements, "vkGetBufferMemoryRequirements");
    VMA_FETCH_DEVICE_FUNC(vkGetImageMemoryRequirements, PFN_vkGetImageMemoryRequirements, "vkGetImageMemoryRequirements");
    VMA_FETCH_DEVICE_FUNC(vkCreateBuffer, PFN_vkCreateBuffer, "vkCreateBuffer");
    VMA_FETCH_DEVICE_FUNC(vkDestroyBuffer, PFN_vkDestroyBuffer, "vkDestroyBuffer");
    VMA_FETCH_DEVICE_FUNC(vkCreateImage, PFN_vkCreateImage, "vkCreateImage");
    VMA_FETCH_DEVICE_FUNC(vkDestroyImage, PFN_vkDestroyImage, "vkDestroyImage");
    VMA_FETCH_DEVICE_FUNC(vkCmdCopyBuffer, PFN_vkCmdCopyBuffer, "vkCmdCopyBuffer");

#if VMA_VULKAN_VERSION >= 1001000
    if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
    {
        VMA_FETCH_DEVICE_FUNC(vkGetBufferMemoryRequirements2KHR, PFN_vkGetBufferMemoryRequirements2, "vkGetBufferMemoryRequirements2");
        VMA_FETCH_DEVICE_FUNC(vkGetImageMemoryRequirements2KHR, PFN_vkGetImageMemoryRequirements2, "vkGetImageMemoryRequirements2");
        VMA_FETCH_DEVICE_FUNC(vkBindBufferMemory2KHR, PFN_vkBindBufferMemory2, "vkBindBufferMemory2");
        VMA_FETCH_DEVICE_FUNC(vkBindImageMemory2KHR, PFN_vkBindImageMemory2, "vkBindImageMemory2");
        VMA_FETCH_INSTANCE_FUNC(vkGetPhysicalDeviceMemoryProperties2KHR, PFN_vkGetPhysicalDeviceMemoryProperties2, "vkGetPhysicalDeviceMemoryProperties2");
    }
#endif

#if VMA_DEDICATED_ALLOCATION
    if(m_UseKhrDedicatedAllocation)
    {
        VMA_FETCH_DEVICE_FUNC(vkGetBufferMemoryRequirements2KHR, PFN_vkGetBufferMemoryRequirements2KHR, "vkGetBufferMemoryRequirements2KHR");
        VMA_FETCH_DEVICE_FUNC(vkGetImageMemoryRequirements2KHR, PFN_vkGetImageMemoryRequirements2KHR, "vkGetImageMemoryRequirements2KHR");
    }
#endif

#if VMA_BIND_MEMORY2
    if(m_UseKhrBindMemory2)
    {
        VMA_FETCH_DEVICE_FUNC(vkBindBufferMemory2KHR, PFN_vkBindBufferMemory2KHR, "vkBindBufferMemory2KHR");
        VMA_FETCH_DEVICE_FUNC(vkBindImageMemory2KHR, PFN_vkBindImageMemory2KHR, "vkBindImageMemory2KHR");
    }
#endif // #if VMA_BIND_MEMORY2

#if VMA_MEMORY_BUDGET
    if(m_UseExtMemoryBudget)
    {
        VMA_FETCH_INSTANCE_FUNC(vkGetPhysicalDeviceMemoryProperties2KHR, PFN_vkGetPhysicalDeviceMemoryProperties2KHR, "vkGetPhysicalDeviceMemoryProperties2KHR");
    }
#endif // #if VMA_MEMORY_BUDGET

#undef VMA_FETCH_DEVICE_FUNC
#undef VMA_FETCH_INSTANCE_FUNC
}

#endif // #if VMA_DYNAMIC_VULKAN_FUNCTIONS == 1
void VmaAllocator_T::ValidateVulkanFunctions()
{
    VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceProperties != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkAllocateMemory != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkFreeMemory != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkMapMemory != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkUnmapMemory != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkFlushMappedMemoryRanges != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkInvalidateMappedMemoryRanges != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkBindBufferMemory != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkBindImageMemory != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkGetBufferMemoryRequirements != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkGetImageMemoryRequirements != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkCreateBuffer != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkDestroyBuffer != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkCreateImage != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkDestroyImage != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkCmdCopyBuffer != VMA_NULL);

#if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
    if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0) || m_UseKhrDedicatedAllocation)
    {
        VMA_ASSERT(m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR != VMA_NULL);
        VMA_ASSERT(m_VulkanFunctions.vkGetImageMemoryRequirements2KHR != VMA_NULL);
    }
#endif

#if VMA_BIND_MEMORY2 || VMA_VULKAN_VERSION >= 1001000
    if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0) || m_UseKhrBindMemory2)
    {
        VMA_ASSERT(m_VulkanFunctions.vkBindBufferMemory2KHR != VMA_NULL);
        VMA_ASSERT(m_VulkanFunctions.vkBindImageMemory2KHR != VMA_NULL);
    }
#endif

#if VMA_MEMORY_BUDGET || VMA_VULKAN_VERSION >= 1001000
    if(m_UseExtMemoryBudget || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
    {
        VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties2KHR != VMA_NULL);
    }
#endif
}
VkDeviceSize VmaAllocator_T::CalcPreferredBlockSize(uint32_t memTypeIndex)
{
    const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(memTypeIndex);
    const VkDeviceSize heapSize = m_MemProps.memoryHeaps[heapIndex].size;
    const bool isSmallHeap = heapSize <= VMA_SMALL_HEAP_MAX_SIZE;
    return VmaAlignUp(isSmallHeap ? (heapSize / 8) : m_PreferredLargeHeapBlockSize, (VkDeviceSize)32);
}
VkResult VmaAllocator_T::AllocateMemoryOfType(
    VkDeviceSize size,
    VkDeviceSize alignment,
    bool dedicatedAllocation,
    VkBuffer dedicatedBuffer,
    VkBufferUsageFlags dedicatedBufferUsage,
    VkImage dedicatedImage,
    const VmaAllocationCreateInfo& createInfo,
    uint32_t memTypeIndex,
    VmaSuballocationType suballocType,
    size_t allocationCount,
    VmaAllocation* pAllocations)
{
    VMA_ASSERT(pAllocations != VMA_NULL);
    VMA_DEBUG_LOG("  AllocateMemory: MemoryTypeIndex=%u, AllocationCount=%zu, Size=%llu", memTypeIndex, allocationCount, size);

    VmaAllocationCreateInfo finalCreateInfo = createInfo;

    // If the memory type is not HOST_VISIBLE, mapping it is impossible: clear the MAPPED flag.
    if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0 &&
        (m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
    {
        finalCreateInfo.flags &= ~VMA_ALLOCATION_CREATE_MAPPED_BIT;
    }
    // Lazily allocated memory should always be dedicated.
    if(finalCreateInfo.usage == VMA_MEMORY_USAGE_GPU_LAZILY_ALLOCATED)
    {
        finalCreateInfo.flags |= VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
    }

    VmaBlockVector* const blockVector = m_pBlockVectors[memTypeIndex];
    VMA_ASSERT(blockVector);

    const VkDeviceSize preferredBlockSize = blockVector->GetPreferredBlockSize();
    bool preferDedicatedMemory =
        VMA_DEBUG_ALWAYS_DEDICATED_MEMORY ||
        dedicatedAllocation ||
        // Heuristics: allocate dedicated memory if the requested size is greater than half of the preferred block size.
        size > preferredBlockSize / 2;

    if(preferDedicatedMemory &&
        (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) == 0 &&
        finalCreateInfo.pool == VK_NULL_HANDLE)
    {
        finalCreateInfo.flags |= VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
    }

    if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0)
    {
        if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
        {
            return VK_ERROR_OUT_OF_DEVICE_MEMORY;
        }
        else
        {
            return AllocateDedicatedMemory(
                size,
                suballocType,
                memTypeIndex,
                (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT) != 0,
                (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0,
                (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0,
                finalCreateInfo.pUserData,
                finalCreateInfo.priority,
                dedicatedBuffer,
                dedicatedBufferUsage,
                dedicatedImage,
                allocationCount,
                pAllocations);
        }
    }
    else
    {
        VkResult res = blockVector->Allocate(
            m_CurrentFrameIndex.load(),
            size,
            alignment,
            finalCreateInfo,
            suballocType,
            allocationCount,
            pAllocations);
        if(res == VK_SUCCESS)
        {
            return res;
        }

        // Suballocation from a block failed: try dedicated memory.
        if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
        {
            return VK_ERROR_OUT_OF_DEVICE_MEMORY;
        }

        // Protection against creating every allocation as dedicated, which would
        // quickly deplete maxMemoryAllocationCount: stop trying dedicated
        // allocations above 3/4 of the maximum allocation count.
        if(m_DeviceMemoryCount.load() > m_PhysicalDeviceProperties.limits.maxMemoryAllocationCount * 3 / 4)
        {
            return VK_ERROR_OUT_OF_DEVICE_MEMORY;
        }

        res = AllocateDedicatedMemory(
            size,
            suballocType,
            memTypeIndex,
            (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT) != 0,
            (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0,
            (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0,
            finalCreateInfo.pUserData,
            finalCreateInfo.priority,
            dedicatedBuffer,
            dedicatedBufferUsage,
            dedicatedImage,
            allocationCount,
            pAllocations);
        if(res == VK_SUCCESS)
        {
            // Succeeded as a dedicated allocation.
            VMA_DEBUG_LOG("    Allocated as DedicatedMemory");
            return VK_SUCCESS;
        }
        else
        {
            // Everything failed: return the error code.
            VMA_DEBUG_LOG("    vkAllocateMemory FAILED");
            return res;
        }
    }
}
VkResult VmaAllocator_T::AllocateDedicatedMemory(
    VkDeviceSize size,
    VmaSuballocationType suballocType,
    uint32_t memTypeIndex,
    bool withinBudget,
    bool map,
    bool isUserDataString,
    void* pUserData,
    float priority,
    VkBuffer dedicatedBuffer,
    VkBufferUsageFlags dedicatedBufferUsage,
    VkImage dedicatedImage,
    size_t allocationCount,
    VmaAllocation* pAllocations)
{
    VMA_ASSERT(allocationCount > 0 && pAllocations);

    if(withinBudget)
    {
        const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(memTypeIndex);
        VmaBudget heapBudget = {};
        GetBudget(&heapBudget, heapIndex, 1);
        if(heapBudget.usage + size * allocationCount > heapBudget.budget)
        {
            return VK_ERROR_OUT_OF_DEVICE_MEMORY;
        }
    }

    VkMemoryAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
    allocInfo.memoryTypeIndex = memTypeIndex;
    allocInfo.allocationSize = size;

#if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
    VkMemoryDedicatedAllocateInfoKHR dedicatedAllocInfo = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_ALLOCATE_INFO_KHR };
    if(m_UseKhrDedicatedAllocation || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
    {
        if(dedicatedBuffer != VK_NULL_HANDLE)
        {
            VMA_ASSERT(dedicatedImage == VK_NULL_HANDLE);
            dedicatedAllocInfo.buffer = dedicatedBuffer;
            VmaPnextChainPushFront(&allocInfo, &dedicatedAllocInfo);
        }
        else if(dedicatedImage != VK_NULL_HANDLE)
        {
            dedicatedAllocInfo.image = dedicatedImage;
            VmaPnextChainPushFront(&allocInfo, &dedicatedAllocInfo);
        }
    }
#endif // #if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000

#if VMA_BUFFER_DEVICE_ADDRESS
    VkMemoryAllocateFlagsInfoKHR allocFlagsInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_FLAGS_INFO_KHR };
    if(m_UseKhrBufferDeviceAddress)
    {
        bool canContainBufferWithDeviceAddress = true;
        if(dedicatedBuffer != VK_NULL_HANDLE)
        {
            canContainBufferWithDeviceAddress = dedicatedBufferUsage == UINT32_MAX || // Usage flags unknown
                (dedicatedBufferUsage & VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT_EXT) != 0;
        }
        else if(dedicatedImage != VK_NULL_HANDLE)
        {
            canContainBufferWithDeviceAddress = false;
        }
        if(canContainBufferWithDeviceAddress)
        {
            allocFlagsInfo.flags = VK_MEMORY_ALLOCATE_DEVICE_ADDRESS_BIT_KHR;
            VmaPnextChainPushFront(&allocInfo, &allocFlagsInfo);
        }
    }
#endif // #if VMA_BUFFER_DEVICE_ADDRESS

#if VMA_MEMORY_PRIORITY
    VkMemoryPriorityAllocateInfoEXT priorityInfo = { VK_STRUCTURE_TYPE_MEMORY_PRIORITY_ALLOCATE_INFO_EXT };
    if(m_UseExtMemoryPriority)
    {
        priorityInfo.priority = priority;
        VmaPnextChainPushFront(&allocInfo, &priorityInfo);
    }
#endif // #if VMA_MEMORY_PRIORITY

#if VMA_EXTERNAL_MEMORY
    // Attach VkExportMemoryAllocateInfoKHR if necessary.
    VkExportMemoryAllocateInfoKHR exportMemoryAllocInfo = { VK_STRUCTURE_TYPE_EXPORT_MEMORY_ALLOCATE_INFO_KHR };
    exportMemoryAllocInfo.handleTypes = GetExternalMemoryHandleTypeFlags(memTypeIndex);
    if(exportMemoryAllocInfo.handleTypes != 0)
    {
        VmaPnextChainPushFront(&allocInfo, &exportMemoryAllocInfo);
    }
#endif // #if VMA_EXTERNAL_MEMORY

    size_t allocIndex;
    VkResult res = VK_SUCCESS;
    for(allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
    {
        res = AllocateDedicatedMemoryPage(
            size,
            suballocType,
            memTypeIndex,
            allocInfo,
            map,
            isUserDataString,
            pUserData,
            pAllocations + allocIndex);
        if(res != VK_SUCCESS)
        {
            break;
        }
    }

    if(res == VK_SUCCESS)
    {
        // Register them in m_DedicatedAllocations.
        {
            VmaMutexLockWrite lock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
            DedicatedAllocationLinkedList& dedicatedAllocations = m_DedicatedAllocations[memTypeIndex];
            for(allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
            {
                dedicatedAllocations.PushBack(pAllocations[allocIndex]);
            }
        }

        VMA_DEBUG_LOG("    Allocated DedicatedMemory Count=%zu, MemoryTypeIndex=#%u", allocationCount, memTypeIndex);
    }
    else
    {
        // Free all already created allocations.
        while(allocIndex--)
        {
            VmaAllocation currAlloc = pAllocations[allocIndex];
            VkDeviceMemory hMemory = currAlloc->GetMemory();
            // There is no need to call vkUnmapMemory here, because the Vulkan spec allows
            // skipping vkUnmapMemory before vkFreeMemory.
            FreeVulkanMemory(memTypeIndex, currAlloc->GetSize(), hMemory);
            m_Budget.RemoveAllocation(MemoryTypeIndexToHeapIndex(memTypeIndex), currAlloc->GetSize());
            currAlloc->SetUserData(this, VMA_NULL);
            m_AllocationObjectAllocator.Free(currAlloc);
        }

        memset(pAllocations, 0, sizeof(VmaAllocation) * allocationCount);
    }

    return res;
}
VkResult VmaAllocator_T::AllocateDedicatedMemoryPage(
    VkDeviceSize size,
    VmaSuballocationType suballocType,
    uint32_t memTypeIndex,
    const VkMemoryAllocateInfo& allocInfo,
    bool map,
    bool isUserDataString,
    void* pUserData,
    VmaAllocation* pAllocation)
{
    VkDeviceMemory hMemory = VK_NULL_HANDLE;
    VkResult res = AllocateVulkanMemory(&allocInfo, &hMemory);
    if(res < 0)
    {
        VMA_DEBUG_LOG("    vkAllocateMemory FAILED");
        return res;
    }

    void* pMappedData = VMA_NULL;
    if(map)
    {
        res = (*m_VulkanFunctions.vkMapMemory)(
            m_hDevice,
            hMemory,
            0,
            VK_WHOLE_SIZE,
            0,
            &pMappedData);
        if(res < 0)
        {
            VMA_DEBUG_LOG("    vkMapMemory FAILED");
            FreeVulkanMemory(memTypeIndex, size, hMemory);
            return res;
        }
    }

    *pAllocation = m_AllocationObjectAllocator.Allocate(m_CurrentFrameIndex.load(), isUserDataString);
    (*pAllocation)->InitDedicatedAllocation(memTypeIndex, hMemory, suballocType, pMappedData, size);
    (*pAllocation)->SetUserData(this, pUserData);
    m_Budget.AddAllocation(MemoryTypeIndexToHeapIndex(memTypeIndex), size);
    if(VMA_DEBUG_INITIALIZE_ALLOCATIONS)
    {
        FillAllocation(*pAllocation, VMA_ALLOCATION_FILL_PATTERN_CREATED);
    }

    return VK_SUCCESS;
}
void VmaAllocator_T::GetBufferMemoryRequirements(
    VkBuffer hBuffer,
    VkMemoryRequirements& memReq,
    bool& requiresDedicatedAllocation,
    bool& prefersDedicatedAllocation) const
{
#if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
    if(m_UseKhrDedicatedAllocation || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
    {
        VkBufferMemoryRequirementsInfo2KHR memReqInfo = { VK_STRUCTURE_TYPE_BUFFER_MEMORY_REQUIREMENTS_INFO_2_KHR };
        memReqInfo.buffer = hBuffer;

        VkMemoryDedicatedRequirementsKHR memDedicatedReq = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_REQUIREMENTS_KHR };

        VkMemoryRequirements2KHR memReq2 = { VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2_KHR };
        VmaPnextChainPushFront(&memReq2, &memDedicatedReq);

        (*m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR)(m_hDevice, &memReqInfo, &memReq2);

        memReq = memReq2.memoryRequirements;
        requiresDedicatedAllocation = (memDedicatedReq.requiresDedicatedAllocation != VK_FALSE);
        prefersDedicatedAllocation  = (memDedicatedReq.prefersDedicatedAllocation != VK_FALSE);
    }
    else
#endif // #if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
    {
        (*m_VulkanFunctions.vkGetBufferMemoryRequirements)(m_hDevice, hBuffer, &memReq);
        requiresDedicatedAllocation = false;
        prefersDedicatedAllocation  = false;
    }
}
void VmaAllocator_T::GetImageMemoryRequirements(
    VkImage hImage,
    VkMemoryRequirements& memReq,
    bool& requiresDedicatedAllocation,
    bool& prefersDedicatedAllocation) const
{
#if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
    if(m_UseKhrDedicatedAllocation || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
    {
        VkImageMemoryRequirementsInfo2KHR memReqInfo = { VK_STRUCTURE_TYPE_IMAGE_MEMORY_REQUIREMENTS_INFO_2_KHR };
        memReqInfo.image = hImage;

        VkMemoryDedicatedRequirementsKHR memDedicatedReq = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_REQUIREMENTS_KHR };

        VkMemoryRequirements2KHR memReq2 = { VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2_KHR };
        VmaPnextChainPushFront(&memReq2, &memDedicatedReq);

        (*m_VulkanFunctions.vkGetImageMemoryRequirements2KHR)(m_hDevice, &memReqInfo, &memReq2);

        memReq = memReq2.memoryRequirements;
        requiresDedicatedAllocation = (memDedicatedReq.requiresDedicatedAllocation != VK_FALSE);
        prefersDedicatedAllocation  = (memDedicatedReq.prefersDedicatedAllocation != VK_FALSE);
    }
    else
#endif // #if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
    {
        (*m_VulkanFunctions.vkGetImageMemoryRequirements)(m_hDevice, hImage, &memReq);
        requiresDedicatedAllocation = false;
        prefersDedicatedAllocation  = false;
    }
}
VkResult VmaAllocator_T::AllocateMemory(
    const VkMemoryRequirements& vkMemReq,
    bool requiresDedicatedAllocation,
    bool prefersDedicatedAllocation,
    VkBuffer dedicatedBuffer,
    VkBufferUsageFlags dedicatedBufferUsage,
    VkImage dedicatedImage,
    const VmaAllocationCreateInfo& createInfo,
    VmaSuballocationType suballocType,
    size_t allocationCount,
    VmaAllocation* pAllocations)
{
    memset(pAllocations, 0, sizeof(VmaAllocation) * allocationCount);

    VMA_ASSERT(VmaIsPow2(vkMemReq.alignment));

    if(vkMemReq.size == 0)
    {
        return VK_ERROR_VALIDATION_FAILED_EXT;
    }
    if((createInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0 &&
        (createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
    {
        VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT together with VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT makes no sense.");
        return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    }
    if((createInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0 &&
        (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0)
    {
        VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_MAPPED_BIT together with VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT is invalid.");
        return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    }
    if(requiresDedicatedAllocation)
    {
        if((createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
        {
            VMA_ASSERT(0 && "VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT specified while dedicated allocation is required.");
            return VK_ERROR_OUT_OF_DEVICE_MEMORY;
        }
        if(createInfo.pool != VK_NULL_HANDLE)
        {
            VMA_ASSERT(0 && "Pool specified while dedicated allocation is required.");
            return VK_ERROR_OUT_OF_DEVICE_MEMORY;
        }
    }
    if((createInfo.pool != VK_NULL_HANDLE) &&
        ((createInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0))
    {
        VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT when pool != null is invalid.");
        return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    }

    if(createInfo.pool != VK_NULL_HANDLE)
    {
        VmaAllocationCreateInfo createInfoForPool = createInfo;
        // If the memory type is not HOST_VISIBLE, disable MAPPED.
        if((createInfoForPool.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0 &&
            (m_MemProps.memoryTypes[createInfo.pool->m_BlockVector.GetMemoryTypeIndex()].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
        {
            createInfoForPool.flags &= ~VMA_ALLOCATION_CREATE_MAPPED_BIT;
        }

        return createInfo.pool->m_BlockVector.Allocate(
            m_CurrentFrameIndex.load(),
            vkMemReq.size,
            vkMemReq.alignment,
            createInfoForPool,
            suballocType,
            allocationCount,
            pAllocations);
    }
    else
    {
        // Bit mask of memory Vulkan types acceptable for this allocation.
        uint32_t memoryTypeBits = vkMemReq.memoryTypeBits;
        uint32_t memTypeIndex = UINT32_MAX;
        VkResult res = vmaFindMemoryTypeIndex(this, memoryTypeBits, &createInfo, &memTypeIndex);
        if(res == VK_SUCCESS)
        {
            res = AllocateMemoryOfType(
                vkMemReq.size,
                vkMemReq.alignment,
                requiresDedicatedAllocation || prefersDedicatedAllocation,
                dedicatedBuffer,
                dedicatedBufferUsage,
                dedicatedImage,
                createInfo,
                memTypeIndex,
                suballocType,
                allocationCount,
                pAllocations);
            // Succeeded on first try.
            if(res == VK_SUCCESS)
            {
                return res;
            }
            // Allocation from this memory type failed. Try other compatible memory types.
            else
            {
                for(;;)
                {
                    // Remove old memTypeIndex from list of possibilities.
                    memoryTypeBits &= ~(1u << memTypeIndex);
                    // Find alternative memTypeIndex.
                    res = vmaFindMemoryTypeIndex(this, memoryTypeBits, &createInfo, &memTypeIndex);
                    if(res == VK_SUCCESS)
                    {
                        res = AllocateMemoryOfType(
                            vkMemReq.size,
                            vkMemReq.alignment,
                            requiresDedicatedAllocation || prefersDedicatedAllocation,
                            dedicatedBuffer,
                            dedicatedBufferUsage,
                            dedicatedImage,
                            createInfo,
                            memTypeIndex,
                            suballocType,
                            allocationCount,
                            pAllocations);
                        // Allocation from this alternative memory type succeeded.
                        if(res == VK_SUCCESS)
                        {
                            return res;
                        }
                        // else: Allocation from this memory type failed. Try next one - next loop iteration.
                    }
                    // No other matching memory type index could be found.
                    else
                    {
                        // Not returning res, which is VK_ERROR_FEATURE_NOT_PRESENT, because we already failed to allocate once.
                        return VK_ERROR_OUT_OF_DEVICE_MEMORY;
                    }
                }
            }
        }
        // Can't find any single memory type matching requirements. res is VK_ERROR_FEATURE_NOT_PRESENT.
        else
            return res;
    }
}
void VmaAllocator_T::FreeMemory(
    size_t allocationCount,
    const VmaAllocation* pAllocations)
{
    VMA_ASSERT(pAllocations);

    for(size_t allocIndex = allocationCount; allocIndex--; )
    {
        VmaAllocation allocation = pAllocations[allocIndex];

        if(allocation != VK_NULL_HANDLE)
        {
            if(TouchAllocation(allocation))
            {
                if(VMA_DEBUG_INITIALIZE_ALLOCATIONS)
                {
                    FillAllocation(allocation, VMA_ALLOCATION_FILL_PATTERN_DESTROYED);
                }

                switch(allocation->GetType())
                {
                case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
                    {
                        VmaBlockVector* pBlockVector = VMA_NULL;
                        VmaPool hPool = allocation->GetBlock()->GetParentPool();
                        if(hPool != VK_NULL_HANDLE)
                        {
                            pBlockVector = &hPool->m_BlockVector;
                        }
                        else
                        {
                            const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
                            pBlockVector = m_pBlockVectors[memTypeIndex];
                        }
                        pBlockVector->Free(allocation);
                    }
                    break;
                case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
                    FreeDedicatedMemory(allocation);
                    break;
                default:
                    VMA_ASSERT(0);
                }
            }

            m_Budget.RemoveAllocation(MemoryTypeIndexToHeapIndex(allocation->GetMemoryTypeIndex()), allocation->GetSize());
            allocation->SetUserData(this, VMA_NULL);
            m_AllocationObjectAllocator.Free(allocation);
        }
    }
}
void VmaAllocator_T::CalculateStats(VmaStats* pStats)
{
    // Initialize.
    InitStatInfo(pStats->total);
    for(size_t i = 0; i < VK_MAX_MEMORY_TYPES; ++i)
        InitStatInfo(pStats->memoryType[i]);
    for(size_t i = 0; i < VK_MAX_MEMORY_HEAPS; ++i)
        InitStatInfo(pStats->memoryHeap[i]);

    // Process default pools.
    for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    {
        VmaBlockVector* const pBlockVector = m_pBlockVectors[memTypeIndex];
        VMA_ASSERT(pBlockVector);
        pBlockVector->AddStats(pStats);
    }

    // Process custom pools.
    {
        VmaMutexLockRead lock(m_PoolsMutex, m_UseMutex);
        for(VmaPool pool = m_Pools.Front(); pool != VMA_NULL; pool = m_Pools.GetNext(pool))
        {
            pool->m_BlockVector.AddStats(pStats);
        }
    }

    // Process dedicated allocations.
    for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    {
        const uint32_t memHeapIndex = MemoryTypeIndexToHeapIndex(memTypeIndex);
        VmaMutexLockRead dedicatedAllocationsLock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
        DedicatedAllocationLinkedList& dedicatedAllocList = m_DedicatedAllocations[memTypeIndex];
        for(VmaAllocation alloc = dedicatedAllocList.Front();
            alloc != VMA_NULL; alloc = dedicatedAllocList.GetNext(alloc))
        {
            VmaStatInfo allocationStatInfo;
            alloc->DedicatedAllocCalcStatsInfo(allocationStatInfo);
            VmaAddStatInfo(pStats->total, allocationStatInfo);
            VmaAddStatInfo(pStats->memoryType[memTypeIndex], allocationStatInfo);
            VmaAddStatInfo(pStats->memoryHeap[memHeapIndex], allocationStatInfo);
        }
    }

    // Postprocess.
    VmaPostprocessCalcStatInfo(pStats->total);
    for(size_t i = 0; i < GetMemoryTypeCount(); ++i)
        VmaPostprocessCalcStatInfo(pStats->memoryType[i]);
    for(size_t i = 0; i < GetMemoryHeapCount(); ++i)
        VmaPostprocessCalcStatInfo(pStats->memoryHeap[i]);
}
void VmaAllocator_T::GetBudget(VmaBudget* outBudget, uint32_t firstHeap, uint32_t heapCount)
{
#if VMA_MEMORY_BUDGET
    if(m_UseExtMemoryBudget)
    {
        if(m_Budget.m_OperationsSinceBudgetFetch < 30)
        {
            VmaMutexLockRead lockRead(m_Budget.m_BudgetMutex, m_UseMutex);
            for(uint32_t i = 0; i < heapCount; ++i, ++outBudget)
            {
                const uint32_t heapIndex = firstHeap + i;

                outBudget->blockBytes = m_Budget.m_BlockBytes[heapIndex];

                if(m_Budget.m_VulkanUsage[heapIndex] + outBudget->blockBytes > m_Budget.m_BlockBytesAtBudgetFetch[heapIndex])
                {
                    outBudget->usage = m_Budget.m_VulkanUsage[heapIndex] +
                        outBudget->blockBytes - m_Budget.m_BlockBytesAtBudgetFetch[heapIndex];
                }
                else
                {
                    outBudget->usage = 0;
                }

                // Have to take MIN with heap size because explicit HeapSizeLimit is included in it.
                outBudget->budget = VMA_MIN(
                    m_Budget.m_VulkanBudget[heapIndex], m_MemProps.memoryHeaps[heapIndex].size);
            }
        }
        else
        {
            UpdateVulkanBudget(); // Outside of mutex lock.
            GetBudget(outBudget, firstHeap, heapCount); // Recursion.
        }
    }
    else
#endif
    {
        for(uint32_t i = 0; i < heapCount; ++i, ++outBudget)
        {
            const uint32_t heapIndex = firstHeap + i;

            outBudget->blockBytes = m_Budget.m_BlockBytes[heapIndex];

            outBudget->usage = outBudget->blockBytes;
            outBudget->budget = m_MemProps.memoryHeaps[heapIndex].size * 8 / 10; // 80% heuristics.
        }
    }
}
static const uint32_t VMA_VENDOR_ID_AMD = 4098;

VkResult VmaAllocator_T::DefragmentationBegin(
    const VmaDefragmentationInfo2& info,
    VmaDefragmentationStats* pStats,
    VmaDefragmentationContext* pContext)
{
    if(info.pAllocationsChanged != VMA_NULL)
    {
        memset(info.pAllocationsChanged, 0, info.allocationCount * sizeof(VkBool32));
    }

    *pContext = vma_new(this, VmaDefragmentationContext_T)(
        this, m_CurrentFrameIndex.load(), info.flags, pStats);

    (*pContext)->AddPools(info.poolCount, info.pPools);
    (*pContext)->AddAllocations(
        info.allocationCount, info.pAllocations, info.pAllocationsChanged);

    VkResult res = (*pContext)->Defragment(
        info.maxCpuBytesToMove, info.maxCpuAllocationsToMove,
        info.maxGpuBytesToMove, info.maxGpuAllocationsToMove,
        info.commandBuffer, pStats, info.flags);

    if(res != VK_NOT_READY)
    {
        vma_delete(this, *pContext);
        *pContext = VMA_NULL;
    }

    return res;
}

VkResult VmaAllocator_T::DefragmentationEnd(
    VmaDefragmentationContext context)
{
    vma_delete(this, context);
    return VK_SUCCESS;
}

VkResult VmaAllocator_T::DefragmentationPassBegin(
    VmaDefragmentationPassInfo* pInfo,
    VmaDefragmentationContext context)
{
    return context->DefragmentPassBegin(pInfo);
}

VkResult VmaAllocator_T::DefragmentationPassEnd(
    VmaDefragmentationContext context)
{
    return context->DefragmentPassEnd();
}
void VmaAllocator_T::GetAllocationInfo(VmaAllocation hAllocation, VmaAllocationInfo* pAllocationInfo)
{
    if(hAllocation->CanBecomeLost())
    {
        /*
        Warning: This is a carefully designed algorithm.
        Do not modify unless you really know what you're doing :)
        */
        const uint32_t localCurrFrameIndex = m_CurrentFrameIndex.load();
        uint32_t localLastUseFrameIndex = hAllocation->GetLastUseFrameIndex();
        for(;;)
        {
            if(localLastUseFrameIndex == VMA_FRAME_INDEX_LOST)
            {
                pAllocationInfo->memoryType = hAllocation->GetMemoryTypeIndex();
                pAllocationInfo->deviceMemory = VK_NULL_HANDLE;
                pAllocationInfo->offset = 0;
                pAllocationInfo->size = hAllocation->GetSize();
                pAllocationInfo->pMappedData = VMA_NULL;
                pAllocationInfo->pUserData = hAllocation->GetUserData();
                return;
            }
            else if(localLastUseFrameIndex == localCurrFrameIndex)
            {
                pAllocationInfo->memoryType = hAllocation->GetMemoryTypeIndex();
                pAllocationInfo->deviceMemory = hAllocation->GetMemory();
                pAllocationInfo->offset = hAllocation->GetOffset();
                pAllocationInfo->size = hAllocation->GetSize();
                pAllocationInfo->pMappedData = VMA_NULL;
                pAllocationInfo->pUserData = hAllocation->GetUserData();
                return;
            }
            else // Last use time earlier than current time.
            {
                if(hAllocation->CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, localCurrFrameIndex))
                {
                    localLastUseFrameIndex = localCurrFrameIndex;
                }
            }
        }
    }
    else
    {
#if VMA_STATS_STRING_ENABLED
        uint32_t localCurrFrameIndex = m_CurrentFrameIndex.load();
        uint32_t localLastUseFrameIndex = hAllocation->GetLastUseFrameIndex();
        for(;;)
        {
            VMA_ASSERT(localLastUseFrameIndex != VMA_FRAME_INDEX_LOST);
            if(localLastUseFrameIndex == localCurrFrameIndex)
            {
                break;
            }
            else // Last use time earlier than current time.
            {
                if(hAllocation->CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, localCurrFrameIndex))
                {
                    localLastUseFrameIndex = localCurrFrameIndex;
                }
            }
        }
#endif

        pAllocationInfo->memoryType = hAllocation->GetMemoryTypeIndex();
        pAllocationInfo->deviceMemory = hAllocation->GetMemory();
        pAllocationInfo->offset = hAllocation->GetOffset();
        pAllocationInfo->size = hAllocation->GetSize();
        pAllocationInfo->pMappedData = hAllocation->GetMappedData();
        pAllocationInfo->pUserData = hAllocation->GetUserData();
    }
}

bool VmaAllocator_T::TouchAllocation(VmaAllocation hAllocation)
{
    // This is a stripped-down version of VmaAllocator_T::GetAllocationInfo.
    if(hAllocation->CanBecomeLost())
    {
        uint32_t localCurrFrameIndex = m_CurrentFrameIndex.load();
        uint32_t localLastUseFrameIndex = hAllocation->GetLastUseFrameIndex();
        for(;;)
        {
            if(localLastUseFrameIndex == VMA_FRAME_INDEX_LOST)
            {
                return false;
            }
            else if(localLastUseFrameIndex == localCurrFrameIndex)
            {
                return true;
            }
            else // Last use time earlier than current time.
            {
                if(hAllocation->CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, localCurrFrameIndex))
                {
                    localLastUseFrameIndex = localCurrFrameIndex;
                }
            }
        }
    }
    else
    {
#if VMA_STATS_STRING_ENABLED
        uint32_t localCurrFrameIndex = m_CurrentFrameIndex.load();
        uint32_t localLastUseFrameIndex = hAllocation->GetLastUseFrameIndex();
        for(;;)
        {
            VMA_ASSERT(localLastUseFrameIndex != VMA_FRAME_INDEX_LOST);
            if(localLastUseFrameIndex == localCurrFrameIndex)
            {
                break;
            }
            else // Last use time earlier than current time.
            {
                if(hAllocation->CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, localCurrFrameIndex))
                {
                    localLastUseFrameIndex = localCurrFrameIndex;
                }
            }
        }
#endif

        return true;
    }
}
VkResult VmaAllocator_T::CreatePool(const VmaPoolCreateInfo* pCreateInfo, VmaPool* pPool)
{
    VMA_DEBUG_LOG("  CreatePool: MemoryTypeIndex=%u, flags=%u", pCreateInfo->memoryTypeIndex, pCreateInfo->flags);

    VmaPoolCreateInfo newCreateInfo = *pCreateInfo;

    if(newCreateInfo.maxBlockCount == 0)
    {
        newCreateInfo.maxBlockCount = SIZE_MAX;
    }
    if(newCreateInfo.minBlockCount > newCreateInfo.maxBlockCount)
    {
        return VK_ERROR_INITIALIZATION_FAILED;
    }
    // Memory type index out of range or forbidden.
    if(pCreateInfo->memoryTypeIndex >= GetMemoryTypeCount() ||
        ((1u << pCreateInfo->memoryTypeIndex) & m_GlobalMemoryTypeBits) == 0)
    {
        return VK_ERROR_FEATURE_NOT_PRESENT;
    }

    const VkDeviceSize preferredBlockSize = CalcPreferredBlockSize(newCreateInfo.memoryTypeIndex);

    *pPool = vma_new(this, VmaPool_T)(this, newCreateInfo, preferredBlockSize);

    VkResult res = (*pPool)->m_BlockVector.CreateMinBlocks();
    if(res != VK_SUCCESS)
    {
        vma_delete(this, *pPool);
        *pPool = VMA_NULL;
        return res;
    }

    // Add to m_Pools.
    {
        VmaMutexLockWrite lock(m_PoolsMutex, m_UseMutex);
        (*pPool)->SetId(m_NextPoolId++);
        m_Pools.PushBack(*pPool);
    }

    return VK_SUCCESS;
}

void VmaAllocator_T::DestroyPool(VmaPool pool)
{
    // Remove from m_Pools.
    {
        VmaMutexLockWrite lock(m_PoolsMutex, m_UseMutex);
        m_Pools.Remove(pool);
    }

    vma_delete(this, pool);
}
void VmaAllocator_T::GetPoolStats(VmaPool pool, VmaPoolStats* pPoolStats)
{
    pool->m_BlockVector.GetPoolStats(pPoolStats);
}

void VmaAllocator_T::SetCurrentFrameIndex(uint32_t frameIndex)
{
    m_CurrentFrameIndex.store(frameIndex);

#if VMA_MEMORY_BUDGET
    if(m_UseExtMemoryBudget)
    {
        UpdateVulkanBudget();
    }
#endif // #if VMA_MEMORY_BUDGET
}
void VmaAllocator_T::MakePoolAllocationsLost(
    VmaPool hPool,
    size_t* pLostAllocationCount)
{
    hPool->m_BlockVector.MakePoolAllocationsLost(
        m_CurrentFrameIndex.load(),
        pLostAllocationCount);
}

VkResult VmaAllocator_T::CheckPoolCorruption(VmaPool hPool)
{
    return hPool->m_BlockVector.CheckCorruption();
}

VkResult VmaAllocator_T::CheckCorruption(uint32_t memoryTypeBits)
{
    VkResult finalRes = VK_ERROR_FEATURE_NOT_PRESENT;

    // Process default pools.
    for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    {
        if(((1u << memTypeIndex) & memoryTypeBits) != 0)
        {
            VmaBlockVector* const pBlockVector = m_pBlockVectors[memTypeIndex];
            VMA_ASSERT(pBlockVector);
            VkResult localRes = pBlockVector->CheckCorruption();
            switch(localRes)
            {
            case VK_ERROR_FEATURE_NOT_PRESENT:
                break;
            case VK_SUCCESS:
                finalRes = VK_SUCCESS;
                break;
            default:
                return localRes;
            }
        }
    }

    // Process custom pools.
    {
        VmaMutexLockRead lock(m_PoolsMutex, m_UseMutex);
        for(VmaPool pool = m_Pools.Front(); pool != VMA_NULL; pool = m_Pools.GetNext(pool))
        {
            if(((1u << pool->m_BlockVector.GetMemoryTypeIndex()) & memoryTypeBits) != 0)
            {
                VkResult localRes = pool->m_BlockVector.CheckCorruption();
                switch(localRes)
                {
                case VK_ERROR_FEATURE_NOT_PRESENT:
                    break;
                case VK_SUCCESS:
                    finalRes = VK_SUCCESS;
                    break;
                default:
                    return localRes;
                }
            }
        }
    }

    return finalRes;
}

void VmaAllocator_T::CreateLostAllocation(VmaAllocation* pAllocation)
{
    *pAllocation = m_AllocationObjectAllocator.Allocate(VMA_FRAME_INDEX_LOST, false);
    (*pAllocation)->InitLost();
}
// An object that increments the given atomic but decrements it back in the destructor
// unless Commit() is called.
template<typename T>
struct AtomicTransactionalIncrement
{
public:
    typedef std::atomic<T> AtomicT;
    ~AtomicTransactionalIncrement()
    {
        if(m_Atomic)
            --(*m_Atomic);
    }
    T Increment(AtomicT* atomic)
    {
        m_Atomic = atomic;
        return m_Atomic->fetch_add(1);
    }
    void Commit()
    {
        m_Atomic = nullptr;
    }

private:
    AtomicT* m_Atomic = nullptr;
};

VkResult VmaAllocator_T::AllocateVulkanMemory(const VkMemoryAllocateInfo* pAllocateInfo, VkDeviceMemory* pMemory)
{
    AtomicTransactionalIncrement<uint32_t> deviceMemoryCountIncrement;
    const uint64_t prevDeviceMemoryCount = deviceMemoryCountIncrement.Increment(&m_DeviceMemoryCount);
#if VMA_DEBUG_DONT_EXCEED_MAX_MEMORY_ALLOCATION_COUNT
    if(prevDeviceMemoryCount >= m_PhysicalDeviceProperties.limits.maxMemoryAllocationCount)
    {
        return VK_ERROR_TOO_MANY_OBJECTS;
    }
#endif

    const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(pAllocateInfo->memoryTypeIndex);

    // HeapSizeLimit is in effect for this heap.
    if((m_HeapSizeLimitMask & (1u << heapIndex)) != 0)
    {
        const VkDeviceSize heapSize = m_MemProps.memoryHeaps[heapIndex].size;
        VkDeviceSize blockBytes = m_Budget.m_BlockBytes[heapIndex];
        for(;;)
        {
            const VkDeviceSize blockBytesAfterAllocation = blockBytes + pAllocateInfo->allocationSize;
            if(blockBytesAfterAllocation > heapSize)
            {
                return VK_ERROR_OUT_OF_DEVICE_MEMORY;
            }
            if(m_Budget.m_BlockBytes[heapIndex].compare_exchange_strong(blockBytes, blockBytesAfterAllocation))
            {
                break;
            }
        }
    }
    else
    {
        m_Budget.m_BlockBytes[heapIndex] += pAllocateInfo->allocationSize;
    }

    // VULKAN CALL vkAllocateMemory.
    VkResult res = (*m_VulkanFunctions.vkAllocateMemory)(m_hDevice, pAllocateInfo, GetAllocationCallbacks(), pMemory);

    if(res == VK_SUCCESS)
    {
#if VMA_MEMORY_BUDGET
        ++m_Budget.m_OperationsSinceBudgetFetch;
#endif

        // Informative callback.
        if(m_DeviceMemoryCallbacks.pfnAllocate != VMA_NULL)
        {
            (*m_DeviceMemoryCallbacks.pfnAllocate)(this, pAllocateInfo->memoryTypeIndex, *pMemory, pAllocateInfo->allocationSize, m_DeviceMemoryCallbacks.pUserData);
        }

        deviceMemoryCountIncrement.Commit();
    }
    else
    {
        m_Budget.m_BlockBytes[heapIndex] -= pAllocateInfo->allocationSize;
    }

    return res;
}
void VmaAllocator_T::FreeVulkanMemory(uint32_t memoryType, VkDeviceSize size, VkDeviceMemory hMemory)
{
    // Informative callback.
    if(m_DeviceMemoryCallbacks.pfnFree != VMA_NULL)
    {
        (*m_DeviceMemoryCallbacks.pfnFree)(this, memoryType, hMemory, size, m_DeviceMemoryCallbacks.pUserData);
    }

    // VULKAN CALL vkFreeMemory.
    (*m_VulkanFunctions.vkFreeMemory)(m_hDevice, hMemory, GetAllocationCallbacks());

    m_Budget.m_BlockBytes[MemoryTypeIndexToHeapIndex(memoryType)] -= size;

    --m_DeviceMemoryCount;
}
VkResult VmaAllocator_T::BindVulkanBuffer(
    VkDeviceMemory memory,
    VkDeviceSize memoryOffset,
    VkBuffer buffer,
    const void* pNext)
{
    if(pNext != VMA_NULL)
    {
#if VMA_VULKAN_VERSION >= 1001000 || VMA_BIND_MEMORY2
        if((m_UseKhrBindMemory2 || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0)) &&
            m_VulkanFunctions.vkBindBufferMemory2KHR != VMA_NULL)
        {
            VkBindBufferMemoryInfoKHR bindBufferMemoryInfo = { VK_STRUCTURE_TYPE_BIND_BUFFER_MEMORY_INFO_KHR };
            bindBufferMemoryInfo.pNext = pNext;
            bindBufferMemoryInfo.buffer = buffer;
            bindBufferMemoryInfo.memory = memory;
            bindBufferMemoryInfo.memoryOffset = memoryOffset;
            return (*m_VulkanFunctions.vkBindBufferMemory2KHR)(m_hDevice, 1, &bindBufferMemoryInfo);
        }
        else
#endif // #if VMA_VULKAN_VERSION >= 1001000 || VMA_BIND_MEMORY2
        {
            return VK_ERROR_EXTENSION_NOT_PRESENT;
        }
    }
    else
    {
        return (*m_VulkanFunctions.vkBindBufferMemory)(m_hDevice, buffer, memory, memoryOffset);
    }
}

VkResult VmaAllocator_T::BindVulkanImage(
    VkDeviceMemory memory,
    VkDeviceSize memoryOffset,
    VkImage image,
    const void* pNext)
{
    if(pNext != VMA_NULL)
    {
#if VMA_VULKAN_VERSION >= 1001000 || VMA_BIND_MEMORY2
        if((m_UseKhrBindMemory2 || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0)) &&
            m_VulkanFunctions.vkBindImageMemory2KHR != VMA_NULL)
        {
            VkBindImageMemoryInfoKHR bindBufferMemoryInfo = { VK_STRUCTURE_TYPE_BIND_IMAGE_MEMORY_INFO_KHR };
            bindBufferMemoryInfo.pNext = pNext;
            bindBufferMemoryInfo.image = image;
            bindBufferMemoryInfo.memory = memory;
            bindBufferMemoryInfo.memoryOffset = memoryOffset;
            return (*m_VulkanFunctions.vkBindImageMemory2KHR)(m_hDevice, 1, &bindBufferMemoryInfo);
        }
        else
#endif // #if VMA_VULKAN_VERSION >= 1001000 || VMA_BIND_MEMORY2
        {
            return VK_ERROR_EXTENSION_NOT_PRESENT;
        }
    }
    else
    {
        return (*m_VulkanFunctions.vkBindImageMemory)(m_hDevice, image, memory, memoryOffset);
    }
}
VkResult VmaAllocator_T::Map(VmaAllocation hAllocation, void** ppData)
{
    if(hAllocation->CanBecomeLost())
    {
        return VK_ERROR_MEMORY_MAP_FAILED;
    }

    switch(hAllocation->GetType())
    {
    case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
        {
            VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
            char *pBytes = VMA_NULL;
            VkResult res = pBlock->Map(this, 1, (void**)&pBytes);
            if(res == VK_SUCCESS)
            {
                *ppData = pBytes + (ptrdiff_t)hAllocation->GetOffset();
                hAllocation->BlockAllocMap();
            }
            return res;
        }
    case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
        return hAllocation->DedicatedAllocMap(this, ppData);
    default:
        VMA_ASSERT(0);
        return VK_ERROR_MEMORY_MAP_FAILED;
    }
}

void VmaAllocator_T::Unmap(VmaAllocation hAllocation)
{
    switch(hAllocation->GetType())
    {
    case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
        {
            VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
            hAllocation->BlockAllocUnmap();
            pBlock->Unmap(this, 1);
        }
        break;
    case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
        hAllocation->DedicatedAllocUnmap(this);
        break;
    default:
        VMA_ASSERT(0);
    }
}
VkResult VmaAllocator_T::BindBufferMemory(
    VmaAllocation hAllocation,
    VkDeviceSize allocationLocalOffset,
    VkBuffer hBuffer,
    const void* pNext)
{
    VkResult res = VK_SUCCESS;
    switch(hAllocation->GetType())
    {
    case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
        res = BindVulkanBuffer(hAllocation->GetMemory(), allocationLocalOffset, hBuffer, pNext);
        break;
    case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    {
        VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
        VMA_ASSERT(pBlock && "Binding buffer to allocation that doesn't belong to any block. Is the allocation lost?");
        res = pBlock->BindBufferMemory(this, hAllocation, allocationLocalOffset, hBuffer, pNext);
        break;
    }
    default:
        VMA_ASSERT(0);
    }
    return res;
}

VkResult VmaAllocator_T::BindImageMemory(
    VmaAllocation hAllocation,
    VkDeviceSize allocationLocalOffset,
    VkImage hImage,
    const void* pNext)
{
    VkResult res = VK_SUCCESS;
    switch(hAllocation->GetType())
    {
    case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
        res = BindVulkanImage(hAllocation->GetMemory(), allocationLocalOffset, hImage, pNext);
        break;
    case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    {
        VmaDeviceMemoryBlock* pBlock = hAllocation->GetBlock();
        VMA_ASSERT(pBlock && "Binding image to allocation that doesn't belong to any block. Is the allocation lost?");
        res = pBlock->BindImageMemory(this, hAllocation, allocationLocalOffset, hImage, pNext);
        break;
    }
    default:
        VMA_ASSERT(0);
    }
    return res;
}
VkResult VmaAllocator_T::FlushOrInvalidateAllocation(
    VmaAllocation hAllocation,
    VkDeviceSize offset, VkDeviceSize size,
    VMA_CACHE_OPERATION op)
{
    VkResult res = VK_SUCCESS;

    VkMappedMemoryRange memRange = {};
    if(GetFlushOrInvalidateRange(hAllocation, offset, size, memRange))
    {
        switch(op)
        {
        case VMA_CACHE_FLUSH:
            res = (*GetVulkanFunctions().vkFlushMappedMemoryRanges)(m_hDevice, 1, &memRange);
            break;
        case VMA_CACHE_INVALIDATE:
            res = (*GetVulkanFunctions().vkInvalidateMappedMemoryRanges)(m_hDevice, 1, &memRange);
            break;
        default:
            VMA_ASSERT(0);
        }
    }
    // else: Just ignore this call.
    return res;
}

VkResult VmaAllocator_T::FlushOrInvalidateAllocations(
    uint32_t allocationCount,
    VmaAllocation* allocations,
    const VkDeviceSize* offsets, const VkDeviceSize* sizes,
    VMA_CACHE_OPERATION op)
{
    typedef VmaStlAllocator<VkMappedMemoryRange> RangeAllocator;
    typedef VmaSmallVector<VkMappedMemoryRange, RangeAllocator, 16> RangeVector;
    RangeVector ranges = RangeVector(RangeAllocator(GetAllocationCallbacks()));

    for(uint32_t allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
    {
        const VmaAllocation alloc = allocations[allocIndex];
        const VkDeviceSize offset = offsets != VMA_NULL ? offsets[allocIndex] : 0;
        const VkDeviceSize size = sizes != VMA_NULL ? sizes[allocIndex] : VK_WHOLE_SIZE;
        VkMappedMemoryRange newRange;
        if(GetFlushOrInvalidateRange(alloc, offset, size, newRange))
        {
            ranges.push_back(newRange);
        }
    }

    VkResult res = VK_SUCCESS;
    if(!ranges.empty())
    {
        switch(op)
        {
        case VMA_CACHE_FLUSH:
            res = (*GetVulkanFunctions().vkFlushMappedMemoryRanges)(m_hDevice, (uint32_t)ranges.size(), ranges.data());
            break;
        case VMA_CACHE_INVALIDATE:
            res = (*GetVulkanFunctions().vkInvalidateMappedMemoryRanges)(m_hDevice, (uint32_t)ranges.size(), ranges.data());
            break;
        default:
            VMA_ASSERT(0);
        }
    }
    // else: Just ignore this call.
    return res;
}
void VmaAllocator_T::FreeDedicatedMemory(const VmaAllocation allocation)
{
    VMA_ASSERT(allocation && allocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_DEDICATED);

    const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
    {
        VmaMutexLockWrite lock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
        DedicatedAllocationLinkedList& dedicatedAllocations = m_DedicatedAllocations[memTypeIndex];
        dedicatedAllocations.Remove(allocation);
    }

    VkDeviceMemory hMemory = allocation->GetMemory();

    // There is no need to call vkUnmapMemory here, because the Vulkan spec allows
    // skipping vkUnmapMemory before vkFreeMemory.

    FreeVulkanMemory(memTypeIndex, allocation->GetSize(), hMemory);

    VMA_DEBUG_LOG("    Freed DedicatedMemory MemoryTypeIndex=%u", memTypeIndex);
}
uint32_t VmaAllocator_T::CalculateGpuDefragmentationMemoryTypeBits() const
{
    VkBufferCreateInfo dummyBufCreateInfo;
    VmaFillGpuDefragmentationBufferCreateInfo(dummyBufCreateInfo);

    uint32_t memoryTypeBits = 0;

    // Create buffer.
    VkBuffer buf = VK_NULL_HANDLE;
    VkResult res = (*GetVulkanFunctions().vkCreateBuffer)(
        m_hDevice, &dummyBufCreateInfo, GetAllocationCallbacks(), &buf);
    if(res == VK_SUCCESS)
    {
        // Query for supported memory types.
        VkMemoryRequirements memReq;
        (*GetVulkanFunctions().vkGetBufferMemoryRequirements)(m_hDevice, buf, &memReq);
        memoryTypeBits = memReq.memoryTypeBits;

        // Destroy buffer.
        (*GetVulkanFunctions().vkDestroyBuffer)(m_hDevice, buf, GetAllocationCallbacks());
    }

    return memoryTypeBits;
}

uint32_t VmaAllocator_T::CalculateGlobalMemoryTypeBits() const
{
    // Make sure memory information is already fetched.
    VMA_ASSERT(GetMemoryTypeCount() > 0);

    uint32_t memoryTypeBits = UINT32_MAX;

    if(!m_UseAmdDeviceCoherentMemory)
    {
        // Exclude memory types that have VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD.
        for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
        {
            if((m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD_COPY) != 0)
            {
                memoryTypeBits &= ~(1u << memTypeIndex);
            }
        }
    }

    return memoryTypeBits;
}
bool VmaAllocator_T::GetFlushOrInvalidateRange(
    VmaAllocation allocation,
    VkDeviceSize offset, VkDeviceSize size,
    VkMappedMemoryRange& outRange) const
{
    const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
    if(size > 0 && IsMemoryTypeNonCoherent(memTypeIndex))
    {
        const VkDeviceSize nonCoherentAtomSize = m_PhysicalDeviceProperties.limits.nonCoherentAtomSize;
        const VkDeviceSize allocationSize = allocation->GetSize();
        VMA_ASSERT(offset <= allocationSize);

        outRange.sType = VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE;
        outRange.pNext = VMA_NULL;
        outRange.memory = allocation->GetMemory();

        switch(allocation->GetType())
        {
        case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
            outRange.offset = VmaAlignDown(offset, nonCoherentAtomSize);
            if(size == VK_WHOLE_SIZE)
            {
                outRange.size = allocationSize - outRange.offset;
            }
            else
            {
                VMA_ASSERT(offset + size <= allocationSize);
                outRange.size = VMA_MIN(
                    VmaAlignUp(size + (offset - outRange.offset), nonCoherentAtomSize),
                    allocationSize - outRange.offset);
            }
            break;
        case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
        {
            // 1. Still within this allocation.
            outRange.offset = VmaAlignDown(offset, nonCoherentAtomSize);
            if(size == VK_WHOLE_SIZE)
            {
                size = allocationSize - offset;
            }
            else
            {
                VMA_ASSERT(offset + size <= allocationSize);
            }
            outRange.size = VmaAlignUp(size + (offset - outRange.offset), nonCoherentAtomSize);

            // 2. Adjust to whole block.
            const VkDeviceSize allocationOffset = allocation->GetOffset();
            VMA_ASSERT(allocationOffset % nonCoherentAtomSize == 0);
            const VkDeviceSize blockSize = allocation->GetBlock()->m_pMetadata->GetSize();
            outRange.offset += allocationOffset;
            outRange.size = VMA_MIN(outRange.size, blockSize - outRange.offset);

            break;
        }
        default:
            VMA_ASSERT(0);
        }
        return true;
    }
    return false;
}
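The alignment arithmetic used above can be sketched on its own. `AlignDown`, `AlignUp`, and `ExpandToAtoms` are illustrative helpers (not the library's `VmaAlignDown`/`VmaAlignUp`, though the math is the same division-based rounding), mirroring the dedicated-allocation case: round the offset down and the size up to `nonCoherentAtomSize` boundaries, clamped to the allocation size.

```cpp
#include <cassert>
#include <cstdint>

using VkDeviceSize = uint64_t;

// Round down / up to a multiple of alignment (works for any positive alignment).
VkDeviceSize AlignDown(VkDeviceSize val, VkDeviceSize alignment)
{
    return val / alignment * alignment;
}
VkDeviceSize AlignUp(VkDeviceSize val, VkDeviceSize alignment)
{
    return (val + alignment - 1) / alignment * alignment;
}

// Expand (offset, size) inside an allocation to atom boundaries,
// clamped so the range never exceeds the allocation.
void ExpandToAtoms(VkDeviceSize offset, VkDeviceSize size,
                   VkDeviceSize allocationSize, VkDeviceSize atom,
                   VkDeviceSize& outOffset, VkDeviceSize& outSize)
{
    outOffset = AlignDown(offset, atom);
    const VkDeviceSize aligned = AlignUp(size + (offset - outOffset), atom);
    const VkDeviceSize maxSize = allocationSize - outOffset;
    outSize = aligned < maxSize ? aligned : maxSize;
}
```

This is why flushing a 20-byte range can touch a larger region: the driver only accepts ranges aligned to `nonCoherentAtomSize`.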
#if VMA_MEMORY_BUDGET

void VmaAllocator_T::UpdateVulkanBudget()
{
    VMA_ASSERT(m_UseExtMemoryBudget);

    VkPhysicalDeviceMemoryProperties2KHR memProps = { VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MEMORY_PROPERTIES_2_KHR };

    VkPhysicalDeviceMemoryBudgetPropertiesEXT budgetProps = { VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MEMORY_BUDGET_PROPERTIES_EXT };
    VmaPnextChainPushFront(&memProps, &budgetProps);

    GetVulkanFunctions().vkGetPhysicalDeviceMemoryProperties2KHR(m_PhysicalDevice, &memProps);

    {
        VmaMutexLockWrite lockWrite(m_Budget.m_BudgetMutex, m_UseMutex);

        for(uint32_t heapIndex = 0; heapIndex < GetMemoryHeapCount(); ++heapIndex)
        {
            m_Budget.m_VulkanUsage[heapIndex] = budgetProps.heapUsage[heapIndex];
            m_Budget.m_VulkanBudget[heapIndex] = budgetProps.heapBudget[heapIndex];
            m_Budget.m_BlockBytesAtBudgetFetch[heapIndex] = m_Budget.m_BlockBytes[heapIndex].load();

            // Some bugged drivers return invalid budget values - fix them.
            if(m_Budget.m_VulkanBudget[heapIndex] == 0)
            {
                m_Budget.m_VulkanBudget[heapIndex] = m_MemProps.memoryHeaps[heapIndex].size * 8 / 10; // 80% heuristic.
            }
            else if(m_Budget.m_VulkanBudget[heapIndex] > m_MemProps.memoryHeaps[heapIndex].size)
            {
                m_Budget.m_VulkanBudget[heapIndex] = m_MemProps.memoryHeaps[heapIndex].size;
            }
            if(m_Budget.m_VulkanUsage[heapIndex] == 0 && m_Budget.m_BlockBytesAtBudgetFetch[heapIndex] > 0)
            {
                m_Budget.m_VulkanUsage[heapIndex] = m_Budget.m_BlockBytesAtBudgetFetch[heapIndex];
            }
        }
        m_Budget.m_OperationsSinceBudgetFetch = 0;
    }
}

#endif // #if VMA_MEMORY_BUDGET
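The driver-workaround logic in the loop reduces to a small pure function. A sketch under the stated assumptions — `SanitizeHeapBudget` is a hypothetical name, but the fallbacks match the code above: a zero budget becomes 80% of the heap size, and a budget larger than the heap is clamped to the heap size.

```cpp
#include <cassert>
#include <cstdint>

// Sanitize a per-heap budget value reported by the driver.
uint64_t SanitizeHeapBudget(uint64_t reportedBudget, uint64_t heapSize)
{
    if(reportedBudget == 0)
    {
        return heapSize * 8 / 10; // bugged driver reported 0: assume 80% of heap
    }
    if(reportedBudget > heapSize)
    {
        return heapSize; // clamp to physical heap size
    }
    return reportedBudget;
}
```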
void VmaAllocator_T::FillAllocation(const VmaAllocation hAllocation, uint8_t pattern)
{
    if(VMA_DEBUG_INITIALIZE_ALLOCATIONS &&
        !hAllocation->CanBecomeLost() &&
        (m_MemProps.memoryTypes[hAllocation->GetMemoryTypeIndex()].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
    {
        void* pData = VMA_NULL;
        VkResult res = Map(hAllocation, &pData);
        if(res == VK_SUCCESS)
        {
            memset(pData, (int)pattern, (size_t)hAllocation->GetSize());
            FlushOrInvalidateAllocation(hAllocation, 0, VK_WHOLE_SIZE, VMA_CACHE_FLUSH);
            Unmap(hAllocation);
        }
        else
        {
            VMA_ASSERT(0 && "VMA_DEBUG_INITIALIZE_ALLOCATIONS is enabled, but couldn't map memory to fill allocation.");
        }
    }
}
uint32_t VmaAllocator_T::GetGpuDefragmentationMemoryTypeBits()
{
    uint32_t memoryTypeBits = m_GpuDefragmentationMemoryTypeBits.load();
    if(memoryTypeBits == UINT32_MAX)
    {
        memoryTypeBits = CalculateGpuDefragmentationMemoryTypeBits();
        m_GpuDefragmentationMemoryTypeBits.store(memoryTypeBits);
    }
    return memoryTypeBits;
}
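The pattern here is a lazily computed, atomically cached value, with `UINT32_MAX` as the "not computed yet" sentinel. A sketch with a hypothetical `LazyBits` type: under contention the computation may run more than once, which is harmless because the result is deterministic and every store writes the same value.

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

struct LazyBits
{
    std::atomic<uint32_t> cached{UINT32_MAX}; // UINT32_MAX == not yet computed
    int computeCalls = 0;                     // instrumentation for this example only

    uint32_t Compute() { ++computeCalls; return 0x0000FFFFu; } // placeholder computation

    uint32_t Get()
    {
        uint32_t bits = cached.load();
        if(bits == UINT32_MAX)
        {
            bits = Compute();
            cached.store(bits); // racing stores all write the same value
        }
        return bits;
    }
};
```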
#if VMA_STATS_STRING_ENABLED

void VmaAllocator_T::PrintDetailedMap(VmaJsonWriter& json)
{
    bool dedicatedAllocationsStarted = false;
    for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    {
        VmaMutexLockRead dedicatedAllocationsLock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
        DedicatedAllocationLinkedList& dedicatedAllocList = m_DedicatedAllocations[memTypeIndex];
        if(!dedicatedAllocList.IsEmpty())
        {
            if(dedicatedAllocationsStarted == false)
            {
                dedicatedAllocationsStarted = true;
                json.WriteString("DedicatedAllocations");
                json.BeginObject();
            }

            json.BeginString("Type ");
            json.ContinueString(memTypeIndex);
            json.EndString();

            json.BeginArray();

            for(VmaAllocation alloc = dedicatedAllocList.Front();
                alloc != VMA_NULL; alloc = dedicatedAllocList.GetNext(alloc))
            {
                json.BeginObject(true);
                alloc->PrintParameters(json);
                json.EndObject();
            }

            json.EndArray();
        }
    }
    if(dedicatedAllocationsStarted)
    {
        json.EndObject();
    }

    {
        bool allocationsStarted = false;
        for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
        {
            if(m_pBlockVectors[memTypeIndex]->IsEmpty() == false)
            {
                if(allocationsStarted == false)
                {
                    allocationsStarted = true;
                    json.WriteString("DefaultPools");
                    json.BeginObject();
                }

                json.BeginString("Type ");
                json.ContinueString(memTypeIndex);
                json.EndString();

                m_pBlockVectors[memTypeIndex]->PrintDetailedMap(json);
            }
        }
        if(allocationsStarted)
        {
            json.EndObject();
        }
    }

    // Custom pools
    {
        VmaMutexLockRead lock(m_PoolsMutex, m_UseMutex);
        if(!m_Pools.IsEmpty())
        {
            json.WriteString("Pools");
            json.BeginObject();
            for(VmaPool pool = m_Pools.Front(); pool != VMA_NULL; pool = m_Pools.GetNext(pool))
            {
                json.BeginString();
                json.ContinueString(pool->GetId());
                json.EndString();

                pool->m_BlockVector.PrintDetailedMap(json);
            }
            json.EndObject();
        }
    }
}

#endif // #if VMA_STATS_STRING_ENABLED
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAllocator(
    const VmaAllocatorCreateInfo* pCreateInfo,
    VmaAllocator* pAllocator)
{
    VMA_ASSERT(pCreateInfo && pAllocator);

    VMA_DEBUG_LOG("vmaCreateAllocator");

    *pAllocator = vma_new(pCreateInfo->pAllocationCallbacks, VmaAllocator_T)(pCreateInfo);
    return (*pAllocator)->Init(pCreateInfo);
}
VMA_CALL_PRE void VMA_CALL_POST vmaDestroyAllocator(
    VmaAllocator allocator)
{
    if(allocator != VK_NULL_HANDLE)
    {
        VMA_DEBUG_LOG("vmaDestroyAllocator");
        VkAllocationCallbacks allocationCallbacks = allocator->m_AllocationCallbacks;
        vma_delete(&allocationCallbacks, allocator);
    }
}

VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocatorInfo(VmaAllocator allocator, VmaAllocatorInfo* pAllocatorInfo)
{
    VMA_ASSERT(allocator && pAllocatorInfo);
    pAllocatorInfo->instance = allocator->m_hInstance;
    pAllocatorInfo->physicalDevice = allocator->GetPhysicalDevice();
    pAllocatorInfo->device = allocator->m_hDevice;
}

VMA_CALL_PRE void VMA_CALL_POST vmaGetPhysicalDeviceProperties(
    VmaAllocator allocator,
    const VkPhysicalDeviceProperties **ppPhysicalDeviceProperties)
{
    VMA_ASSERT(allocator && ppPhysicalDeviceProperties);
    *ppPhysicalDeviceProperties = &allocator->m_PhysicalDeviceProperties;
}

VMA_CALL_PRE void VMA_CALL_POST vmaGetMemoryProperties(
    VmaAllocator allocator,
    const VkPhysicalDeviceMemoryProperties** ppPhysicalDeviceMemoryProperties)
{
    VMA_ASSERT(allocator && ppPhysicalDeviceMemoryProperties);
    *ppPhysicalDeviceMemoryProperties = &allocator->m_MemProps;
}

VMA_CALL_PRE void VMA_CALL_POST vmaGetMemoryTypeProperties(
    VmaAllocator allocator,
    uint32_t memoryTypeIndex,
    VkMemoryPropertyFlags* pFlags)
{
    VMA_ASSERT(allocator && pFlags);
    VMA_ASSERT(memoryTypeIndex < allocator->GetMemoryTypeCount());
    *pFlags = allocator->m_MemProps.memoryTypes[memoryTypeIndex].propertyFlags;
}
VMA_CALL_PRE void VMA_CALL_POST vmaSetCurrentFrameIndex(
    VmaAllocator allocator,
    uint32_t frameIndex)
{
    VMA_ASSERT(allocator);
    VMA_ASSERT(frameIndex != VMA_FRAME_INDEX_LOST);

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    allocator->SetCurrentFrameIndex(frameIndex);
}

VMA_CALL_PRE void VMA_CALL_POST vmaCalculateStats(
    VmaAllocator allocator,
    VmaStats* pStats)
{
    VMA_ASSERT(allocator && pStats);
    VMA_DEBUG_GLOBAL_MUTEX_LOCK
    allocator->CalculateStats(pStats);
}

VMA_CALL_PRE void VMA_CALL_POST vmaGetBudget(
    VmaAllocator allocator,
    VmaBudget* pBudget)
{
    VMA_ASSERT(allocator && pBudget);
    VMA_DEBUG_GLOBAL_MUTEX_LOCK
    allocator->GetBudget(pBudget, 0, allocator->GetMemoryHeapCount());
}
#if VMA_STATS_STRING_ENABLED

VMA_CALL_PRE void VMA_CALL_POST vmaBuildStatsString(
    VmaAllocator allocator,
    char** ppStatsString,
    VkBool32 detailedMap)
{
    VMA_ASSERT(allocator && ppStatsString);
    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    VmaStringBuilder sb(allocator);
    {
        VmaJsonWriter json(allocator->GetAllocationCallbacks(), sb);
        json.BeginObject();

        VmaBudget budget[VK_MAX_MEMORY_HEAPS];
        allocator->GetBudget(budget, 0, allocator->GetMemoryHeapCount());

        VmaStats stats;
        allocator->CalculateStats(&stats);

        json.WriteString("Total");
        VmaPrintStatInfo(json, stats.total);

        for(uint32_t heapIndex = 0; heapIndex < allocator->GetMemoryHeapCount(); ++heapIndex)
        {
            json.BeginString("Heap ");
            json.ContinueString(heapIndex);
            json.EndString();
            json.BeginObject();

            json.WriteString("Size");
            json.WriteNumber(allocator->m_MemProps.memoryHeaps[heapIndex].size);

            json.WriteString("Flags");
            json.BeginArray(true);
            if((allocator->m_MemProps.memoryHeaps[heapIndex].flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT) != 0)
            {
                json.WriteString("DEVICE_LOCAL");
            }
            json.EndArray();

            json.WriteString("Budget");
            json.BeginObject();
            {
                json.WriteString("BlockBytes");
                json.WriteNumber(budget[heapIndex].blockBytes);
                json.WriteString("AllocationBytes");
                json.WriteNumber(budget[heapIndex].allocationBytes);
                json.WriteString("Usage");
                json.WriteNumber(budget[heapIndex].usage);
                json.WriteString("Budget");
                json.WriteNumber(budget[heapIndex].budget);
            }
            json.EndObject();

            if(stats.memoryHeap[heapIndex].blockCount > 0)
            {
                json.WriteString("Stats");
                VmaPrintStatInfo(json, stats.memoryHeap[heapIndex]);
            }

            for(uint32_t typeIndex = 0; typeIndex < allocator->GetMemoryTypeCount(); ++typeIndex)
            {
                if(allocator->MemoryTypeIndexToHeapIndex(typeIndex) == heapIndex)
                {
                    json.BeginString("Type ");
                    json.ContinueString(typeIndex);
                    json.EndString();

                    json.BeginObject();

                    json.WriteString("Flags");
                    json.BeginArray(true);
                    VkMemoryPropertyFlags flags = allocator->m_MemProps.memoryTypes[typeIndex].propertyFlags;
                    if((flags & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT) != 0)
                    {
                        json.WriteString("DEVICE_LOCAL");
                    }
                    if((flags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
                    {
                        json.WriteString("HOST_VISIBLE");
                    }
                    if((flags & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT) != 0)
                    {
                        json.WriteString("HOST_COHERENT");
                    }
                    if((flags & VK_MEMORY_PROPERTY_HOST_CACHED_BIT) != 0)
                    {
                        json.WriteString("HOST_CACHED");
                    }
                    if((flags & VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT) != 0)
                    {
                        json.WriteString("LAZILY_ALLOCATED");
                    }
#if VMA_VULKAN_VERSION >= 1001000
                    if((flags & VK_MEMORY_PROPERTY_PROTECTED_BIT) != 0)
                    {
                        json.WriteString("PROTECTED");
                    }
#endif // #if VMA_VULKAN_VERSION >= 1001000
#if VK_AMD_device_coherent_memory
                    if((flags & VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD_COPY) != 0)
                    {
                        json.WriteString("DEVICE_COHERENT");
                    }
                    if((flags & VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD_COPY) != 0)
                    {
                        json.WriteString("DEVICE_UNCACHED");
                    }
#endif // #if VK_AMD_device_coherent_memory
                    json.EndArray();

                    if(stats.memoryType[typeIndex].blockCount > 0)
                    {
                        json.WriteString("Stats");
                        VmaPrintStatInfo(json, stats.memoryType[typeIndex]);
                    }

                    json.EndObject();
                }
            }

            json.EndObject();
        }
        if(detailedMap == VK_TRUE)
        {
            allocator->PrintDetailedMap(json);
        }

        json.EndObject();
    }

    const size_t len = sb.GetLength();
    char* const pChars = vma_new_array(allocator, char, len + 1);
    if(len > 0)
    {
        memcpy(pChars, sb.GetData(), len);
    }
    pChars[len] = '\0';
    *ppStatsString = pChars;
}
VMA_CALL_PRE void VMA_CALL_POST vmaFreeStatsString(
    VmaAllocator allocator,
    char* pStatsString)
{
    if(pStatsString != VMA_NULL)
    {
        VMA_ASSERT(allocator);
        size_t len = strlen(pStatsString);
        vma_delete_array(allocator, pStatsString, len + 1);
    }
}

#endif // #if VMA_STATS_STRING_ENABLED
VMA_CALL_PRE VkResult VMA_CALL_POST vmaFindMemoryTypeIndex(
    VmaAllocator allocator,
    uint32_t memoryTypeBits,
    const VmaAllocationCreateInfo* pAllocationCreateInfo,
    uint32_t* pMemoryTypeIndex)
{
    VMA_ASSERT(allocator != VK_NULL_HANDLE);
    VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
    VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);

    memoryTypeBits &= allocator->GetGlobalMemoryTypeBits();

    if(pAllocationCreateInfo->memoryTypeBits != 0)
    {
        memoryTypeBits &= pAllocationCreateInfo->memoryTypeBits;
    }

    uint32_t requiredFlags = pAllocationCreateInfo->requiredFlags;
    uint32_t preferredFlags = pAllocationCreateInfo->preferredFlags;
    uint32_t notPreferredFlags = 0;

    // Convert usage to requiredFlags and preferredFlags.
    switch(pAllocationCreateInfo->usage)
    {
    case VMA_MEMORY_USAGE_UNKNOWN:
        break;
    case VMA_MEMORY_USAGE_GPU_ONLY:
        if(!allocator->IsIntegratedGpu() || (preferredFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
        {
            preferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
        }
        break;
    case VMA_MEMORY_USAGE_CPU_ONLY:
        requiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
        break;
    case VMA_MEMORY_USAGE_CPU_TO_GPU:
        requiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
        if(!allocator->IsIntegratedGpu() || (preferredFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
        {
            preferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
        }
        break;
    case VMA_MEMORY_USAGE_GPU_TO_CPU:
        requiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
        preferredFlags |= VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
        break;
    case VMA_MEMORY_USAGE_CPU_COPY:
        notPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
        break;
    case VMA_MEMORY_USAGE_GPU_LAZILY_ALLOCATED:
        requiredFlags |= VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT;
        break;
    default:
        VMA_ASSERT(0);
        break;
    }

    // Avoid DEVICE_COHERENT unless explicitly requested.
    if(((pAllocationCreateInfo->requiredFlags | pAllocationCreateInfo->preferredFlags) &
        (VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD_COPY | VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD_COPY)) == 0)
    {
        notPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD_COPY;
    }

    *pMemoryTypeIndex = UINT32_MAX;
    uint32_t minCost = UINT32_MAX;
    for(uint32_t memTypeIndex = 0, memTypeBit = 1;
        memTypeIndex < allocator->GetMemoryTypeCount();
        ++memTypeIndex, memTypeBit <<= 1)
    {
        // This memory type is acceptable according to memoryTypeBits bitmask.
        if((memTypeBit & memoryTypeBits) != 0)
        {
            const VkMemoryPropertyFlags currFlags =
                allocator->m_MemProps.memoryTypes[memTypeIndex].propertyFlags;
            // This memory type contains requiredFlags.
            if((requiredFlags & ~currFlags) == 0)
            {
                // Calculate cost as number of bits from preferredFlags not present in this memory type.
                uint32_t currCost = VmaCountBitsSet(preferredFlags & ~currFlags) +
                    VmaCountBitsSet(currFlags & notPreferredFlags);
                // Remember memory type with lowest cost.
                if(currCost < minCost)
                {
                    *pMemoryTypeIndex = memTypeIndex;
                    if(currCost == 0)
                    {
                        return VK_SUCCESS;
                    }
                    minCost = currCost;
                }
            }
        }
    }
    return (*pMemoryTypeIndex != UINT32_MAX) ? VK_SUCCESS : VK_ERROR_FEATURE_NOT_PRESENT;
}
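The cost-based selection at the heart of vmaFindMemoryTypeIndex works without any Vulkan objects, so it can be sketched standalone. `PickMemoryType` and `CountBits` are hypothetical helpers, but the scoring is the same: a type must contain all required flags, and its cost is the number of missing preferred bits plus present not-preferred bits.

```cpp
#include <cassert>
#include <cstdint>

// Count set bits (stand-in for VmaCountBitsSet).
uint32_t CountBits(uint32_t v)
{
    uint32_t c = 0;
    for(; v; v &= v - 1) ++c;
    return c;
}

// Returns the lowest-cost acceptable memory type index, or UINT32_MAX if none.
uint32_t PickMemoryType(const uint32_t* typeFlags, uint32_t typeCount,
                        uint32_t allowedTypeBits,
                        uint32_t requiredFlags, uint32_t preferredFlags,
                        uint32_t notPreferredFlags)
{
    uint32_t best = UINT32_MAX;
    uint32_t minCost = UINT32_MAX;
    for(uint32_t i = 0, bit = 1; i < typeCount; ++i, bit <<= 1)
    {
        if((bit & allowedTypeBits) == 0)
            continue; // not allowed by the type bitmask
        const uint32_t curr = typeFlags[i];
        if((requiredFlags & ~curr) != 0)
            continue; // missing a required flag
        const uint32_t cost = CountBits(preferredFlags & ~curr) +
                              CountBits(curr & notPreferredFlags);
        if(cost < minCost)
        {
            best = i;
            if(cost == 0)
                return best; // perfect match, stop early
            minCost = cost;
        }
    }
    return best;
}
```

With flag values borrowed from Vulkan (DEVICE_LOCAL=0x1, HOST_VISIBLE=0x2, HOST_COHERENT=0x4, HOST_CACHED=0x8), requiring HOST_VISIBLE and preferring HOST_CACHED picks a cached host-visible type when one exists.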
VMA_CALL_PRE VkResult VMA_CALL_POST vmaFindMemoryTypeIndexForBufferInfo(
    VmaAllocator allocator,
    const VkBufferCreateInfo* pBufferCreateInfo,
    const VmaAllocationCreateInfo* pAllocationCreateInfo,
    uint32_t* pMemoryTypeIndex)
{
    VMA_ASSERT(allocator != VK_NULL_HANDLE);
    VMA_ASSERT(pBufferCreateInfo != VMA_NULL);
    VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
    VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);

    const VkDevice hDev = allocator->m_hDevice;
    VkBuffer hBuffer = VK_NULL_HANDLE;
    VkResult res = allocator->GetVulkanFunctions().vkCreateBuffer(
        hDev, pBufferCreateInfo, allocator->GetAllocationCallbacks(), &hBuffer);
    if(res == VK_SUCCESS)
    {
        VkMemoryRequirements memReq = {};
        allocator->GetVulkanFunctions().vkGetBufferMemoryRequirements(
            hDev, hBuffer, &memReq);

        res = vmaFindMemoryTypeIndex(
            allocator,
            memReq.memoryTypeBits,
            pAllocationCreateInfo,
            pMemoryTypeIndex);

        allocator->GetVulkanFunctions().vkDestroyBuffer(
            hDev, hBuffer, allocator->GetAllocationCallbacks());
    }
    return res;
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaFindMemoryTypeIndexForImageInfo(
    VmaAllocator allocator,
    const VkImageCreateInfo* pImageCreateInfo,
    const VmaAllocationCreateInfo* pAllocationCreateInfo,
    uint32_t* pMemoryTypeIndex)
{
    VMA_ASSERT(allocator != VK_NULL_HANDLE);
    VMA_ASSERT(pImageCreateInfo != VMA_NULL);
    VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
    VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);

    const VkDevice hDev = allocator->m_hDevice;
    VkImage hImage = VK_NULL_HANDLE;
    VkResult res = allocator->GetVulkanFunctions().vkCreateImage(
        hDev, pImageCreateInfo, allocator->GetAllocationCallbacks(), &hImage);
    if(res == VK_SUCCESS)
    {
        VkMemoryRequirements memReq = {};
        allocator->GetVulkanFunctions().vkGetImageMemoryRequirements(
            hDev, hImage, &memReq);

        res = vmaFindMemoryTypeIndex(
            allocator,
            memReq.memoryTypeBits,
            pAllocationCreateInfo,
            pMemoryTypeIndex);

        allocator->GetVulkanFunctions().vkDestroyImage(
            hDev, hImage, allocator->GetAllocationCallbacks());
    }
    return res;
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreatePool(
    VmaAllocator allocator,
    const VmaPoolCreateInfo* pCreateInfo,
    VmaPool* pPool)
{
    VMA_ASSERT(allocator && pCreateInfo && pPool);

    VMA_DEBUG_LOG("vmaCreatePool");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    VkResult res = allocator->CreatePool(pCreateInfo, pPool);

#if VMA_RECORDING_ENABLED
    if(allocator->GetRecorder() != VMA_NULL)
    {
        allocator->GetRecorder()->RecordCreatePool(allocator->GetCurrentFrameIndex(), *pCreateInfo, *pPool);
    }
#endif

    return res;
}

VMA_CALL_PRE void VMA_CALL_POST vmaDestroyPool(
    VmaAllocator allocator,
    VmaPool pool)
{
    VMA_ASSERT(allocator);

    if(pool == VK_NULL_HANDLE)
    {
        return;
    }

    VMA_DEBUG_LOG("vmaDestroyPool");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

#if VMA_RECORDING_ENABLED
    if(allocator->GetRecorder() != VMA_NULL)
    {
        allocator->GetRecorder()->RecordDestroyPool(allocator->GetCurrentFrameIndex(), pool);
    }
#endif

    allocator->DestroyPool(pool);
}

VMA_CALL_PRE void VMA_CALL_POST vmaGetPoolStats(
    VmaAllocator allocator,
    VmaPool pool,
    VmaPoolStats* pPoolStats)
{
    VMA_ASSERT(allocator && pool && pPoolStats);

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    allocator->GetPoolStats(pool, pPoolStats);
}
VMA_CALL_PRE void VMA_CALL_POST vmaMakePoolAllocationsLost(
    VmaAllocator allocator,
    VmaPool pool,
    size_t* pLostAllocationCount)
{
    VMA_ASSERT(allocator && pool);

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

#if VMA_RECORDING_ENABLED
    if(allocator->GetRecorder() != VMA_NULL)
    {
        allocator->GetRecorder()->RecordMakePoolAllocationsLost(allocator->GetCurrentFrameIndex(), pool);
    }
#endif

    allocator->MakePoolAllocationsLost(pool, pLostAllocationCount);
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaCheckPoolCorruption(VmaAllocator allocator, VmaPool pool)
{
    VMA_ASSERT(allocator && pool);

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    VMA_DEBUG_LOG("vmaCheckPoolCorruption");

    return allocator->CheckPoolCorruption(pool);
}

VMA_CALL_PRE void VMA_CALL_POST vmaGetPoolName(
    VmaAllocator allocator,
    VmaPool pool,
    const char** ppName)
{
    VMA_ASSERT(allocator && pool && ppName);

    VMA_DEBUG_LOG("vmaGetPoolName");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    *ppName = pool->GetName();
}

VMA_CALL_PRE void VMA_CALL_POST vmaSetPoolName(
    VmaAllocator allocator,
    VmaPool pool,
    const char* pName)
{
    VMA_ASSERT(allocator && pool);

    VMA_DEBUG_LOG("vmaSetPoolName");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    pool->SetName(pName);

#if VMA_RECORDING_ENABLED
    if(allocator->GetRecorder() != VMA_NULL)
    {
        allocator->GetRecorder()->RecordSetPoolName(allocator->GetCurrentFrameIndex(), pool, pName);
    }
#endif
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemory(
    VmaAllocator allocator,
    const VkMemoryRequirements* pVkMemoryRequirements,
    const VmaAllocationCreateInfo* pCreateInfo,
    VmaAllocation* pAllocation,
    VmaAllocationInfo* pAllocationInfo)
{
    VMA_ASSERT(allocator && pVkMemoryRequirements && pCreateInfo && pAllocation);

    VMA_DEBUG_LOG("vmaAllocateMemory");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    VkResult result = allocator->AllocateMemory(
        *pVkMemoryRequirements,
        false, // requiresDedicatedAllocation
        false, // prefersDedicatedAllocation
        VK_NULL_HANDLE, // dedicatedBuffer
        UINT32_MAX, // dedicatedBufferUsage
        VK_NULL_HANDLE, // dedicatedImage
        *pCreateInfo,
        VMA_SUBALLOCATION_TYPE_UNKNOWN,
        1, // allocationCount
        pAllocation);

#if VMA_RECORDING_ENABLED
    if(allocator->GetRecorder() != VMA_NULL)
    {
        allocator->GetRecorder()->RecordAllocateMemory(
            allocator->GetCurrentFrameIndex(),
            *pVkMemoryRequirements,
            *pCreateInfo,
            *pAllocation);
    }
#endif

    if(pAllocationInfo != VMA_NULL && result == VK_SUCCESS)
    {
        allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
    }

    return result;
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryPages(
    VmaAllocator allocator,
    const VkMemoryRequirements* pVkMemoryRequirements,
    const VmaAllocationCreateInfo* pCreateInfo,
    size_t allocationCount,
    VmaAllocation* pAllocations,
    VmaAllocationInfo* pAllocationInfo)
{
    if(allocationCount == 0)
    {
        return VK_SUCCESS;
    }

    VMA_ASSERT(allocator && pVkMemoryRequirements && pCreateInfo && pAllocations);

    VMA_DEBUG_LOG("vmaAllocateMemoryPages");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    VkResult result = allocator->AllocateMemory(
        *pVkMemoryRequirements,
        false, // requiresDedicatedAllocation
        false, // prefersDedicatedAllocation
        VK_NULL_HANDLE, // dedicatedBuffer
        UINT32_MAX, // dedicatedBufferUsage
        VK_NULL_HANDLE, // dedicatedImage
        *pCreateInfo,
        VMA_SUBALLOCATION_TYPE_UNKNOWN,
        allocationCount,
        pAllocations);

#if VMA_RECORDING_ENABLED
    if(allocator->GetRecorder() != VMA_NULL)
    {
        allocator->GetRecorder()->RecordAllocateMemoryPages(
            allocator->GetCurrentFrameIndex(),
            *pVkMemoryRequirements,
            *pCreateInfo,
            (uint64_t)allocationCount,
            pAllocations);
    }
#endif

    if(pAllocationInfo != VMA_NULL && result == VK_SUCCESS)
    {
        for(size_t i = 0; i < allocationCount; ++i)
        {
            allocator->GetAllocationInfo(pAllocations[i], pAllocationInfo + i);
        }
    }

    return result;
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryForBuffer(
    VmaAllocator allocator,
    VkBuffer buffer,
    const VmaAllocationCreateInfo* pCreateInfo,
    VmaAllocation* pAllocation,
    VmaAllocationInfo* pAllocationInfo)
{
    VMA_ASSERT(allocator && buffer != VK_NULL_HANDLE && pCreateInfo && pAllocation);

    VMA_DEBUG_LOG("vmaAllocateMemoryForBuffer");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    VkMemoryRequirements vkMemReq = {};
    bool requiresDedicatedAllocation = false;
    bool prefersDedicatedAllocation = false;
    allocator->GetBufferMemoryRequirements(buffer, vkMemReq,
        requiresDedicatedAllocation,
        prefersDedicatedAllocation);

    VkResult result = allocator->AllocateMemory(
        vkMemReq,
        requiresDedicatedAllocation,
        prefersDedicatedAllocation,
        buffer, // dedicatedBuffer
        UINT32_MAX, // dedicatedBufferUsage
        VK_NULL_HANDLE, // dedicatedImage
        *pCreateInfo,
        VMA_SUBALLOCATION_TYPE_BUFFER,
        1, // allocationCount
        pAllocation);

#if VMA_RECORDING_ENABLED
    if(allocator->GetRecorder() != VMA_NULL)
    {
        allocator->GetRecorder()->RecordAllocateMemoryForBuffer(
            allocator->GetCurrentFrameIndex(),
            vkMemReq,
            requiresDedicatedAllocation,
            prefersDedicatedAllocation,
            *pCreateInfo,
            *pAllocation);
    }
#endif

    if(pAllocationInfo && result == VK_SUCCESS)
    {
        allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
    }

    return result;
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryForImage(
    VmaAllocator allocator,
    VkImage image,
    const VmaAllocationCreateInfo* pCreateInfo,
    VmaAllocation* pAllocation,
    VmaAllocationInfo* pAllocationInfo)
{
    VMA_ASSERT(allocator && image != VK_NULL_HANDLE && pCreateInfo && pAllocation);

    VMA_DEBUG_LOG("vmaAllocateMemoryForImage");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    VkMemoryRequirements vkMemReq = {};
    bool requiresDedicatedAllocation = false;
    bool prefersDedicatedAllocation = false;
    allocator->GetImageMemoryRequirements(image, vkMemReq,
        requiresDedicatedAllocation, prefersDedicatedAllocation);

    VkResult result = allocator->AllocateMemory(
        vkMemReq,
        requiresDedicatedAllocation,
        prefersDedicatedAllocation,
        VK_NULL_HANDLE, // dedicatedBuffer
        UINT32_MAX, // dedicatedBufferUsage
        image, // dedicatedImage
        *pCreateInfo,
        VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN,
        1, // allocationCount
        pAllocation);

#if VMA_RECORDING_ENABLED
    if(allocator->GetRecorder() != VMA_NULL)
    {
        allocator->GetRecorder()->RecordAllocateMemoryForImage(
            allocator->GetCurrentFrameIndex(),
            vkMemReq,
            requiresDedicatedAllocation,
            prefersDedicatedAllocation,
            *pCreateInfo,
            *pAllocation);
    }
#endif

    if(pAllocationInfo && result == VK_SUCCESS)
    {
        allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
    }

    return result;
}
VMA_CALL_PRE void VMA_CALL_POST vmaFreeMemory(
    VmaAllocator allocator,
    const VmaAllocation allocation)
{
    VMA_ASSERT(allocator);

    if(allocation == VK_NULL_HANDLE)
    {
        return;
    }

    VMA_DEBUG_LOG("vmaFreeMemory");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

#if VMA_RECORDING_ENABLED
    if(allocator->GetRecorder() != VMA_NULL)
    {
        allocator->GetRecorder()->RecordFreeMemory(
            allocator->GetCurrentFrameIndex(),
            allocation);
    }
#endif

    allocator->FreeMemory(
        1, // allocationCount
        &allocation);
}

VMA_CALL_PRE void VMA_CALL_POST vmaFreeMemoryPages(
    VmaAllocator allocator,
    size_t allocationCount,
    const VmaAllocation* pAllocations)
{
    if(allocationCount == 0)
    {
        return;
    }

    VMA_ASSERT(allocator);

    VMA_DEBUG_LOG("vmaFreeMemoryPages");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

#if VMA_RECORDING_ENABLED
    if(allocator->GetRecorder() != VMA_NULL)
    {
        allocator->GetRecorder()->RecordFreeMemoryPages(
            allocator->GetCurrentFrameIndex(),
            (uint64_t)allocationCount,
            pAllocations);
    }
#endif

    allocator->FreeMemory(allocationCount, pAllocations);
}
VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocationInfo(
    VmaAllocator allocator,
    VmaAllocation allocation,
    VmaAllocationInfo* pAllocationInfo)
{
    VMA_ASSERT(allocator && allocation && pAllocationInfo);

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

#if VMA_RECORDING_ENABLED
    if(allocator->GetRecorder() != VMA_NULL)
    {
        allocator->GetRecorder()->RecordGetAllocationInfo(
            allocator->GetCurrentFrameIndex(),
            allocation);
    }
#endif

    allocator->GetAllocationInfo(allocation, pAllocationInfo);
}

VMA_CALL_PRE VkBool32 VMA_CALL_POST vmaTouchAllocation(
    VmaAllocator allocator,
    VmaAllocation allocation)
{
    VMA_ASSERT(allocator && allocation);

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

#if VMA_RECORDING_ENABLED
    if(allocator->GetRecorder() != VMA_NULL)
    {
        allocator->GetRecorder()->RecordTouchAllocation(
            allocator->GetCurrentFrameIndex(),
            allocation);
    }
#endif

    return allocator->TouchAllocation(allocation);
}

VMA_CALL_PRE void VMA_CALL_POST vmaSetAllocationUserData(
    VmaAllocator allocator,
    VmaAllocation allocation,
    void* pUserData)
{
    VMA_ASSERT(allocator && allocation);

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    allocation->SetUserData(allocator, pUserData);

#if VMA_RECORDING_ENABLED
    if(allocator->GetRecorder() != VMA_NULL)
    {
        allocator->GetRecorder()->RecordSetAllocationUserData(
            allocator->GetCurrentFrameIndex(),
            allocation,
            pUserData);
    }
#endif
}
VMA_CALL_PRE void VMA_CALL_POST vmaCreateLostAllocation(
    VmaAllocator allocator,
    VmaAllocation* pAllocation)
{
    VMA_ASSERT(allocator && pAllocation);

    VMA_DEBUG_GLOBAL_MUTEX_LOCK;

    allocator->CreateLostAllocation(pAllocation);

#if VMA_RECORDING_ENABLED
    if(allocator->GetRecorder() != VMA_NULL)
    {
        allocator->GetRecorder()->RecordCreateLostAllocation(
            allocator->GetCurrentFrameIndex(),
            *pAllocation);
    }
#endif
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaMapMemory(
    VmaAllocator allocator,
    VmaAllocation allocation,
    void** ppData)
{
    VMA_ASSERT(allocator && allocation && ppData);

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    VkResult res = allocator->Map(allocation, ppData);

#if VMA_RECORDING_ENABLED
    if(allocator->GetRecorder() != VMA_NULL)
    {
        allocator->GetRecorder()->RecordMapMemory(
            allocator->GetCurrentFrameIndex(),
            allocation);
    }
#endif

    return res;
}

VMA_CALL_PRE void VMA_CALL_POST vmaUnmapMemory(
    VmaAllocator allocator,
    VmaAllocation allocation)
{
    VMA_ASSERT(allocator && allocation);

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

#if VMA_RECORDING_ENABLED
    if(allocator->GetRecorder() != VMA_NULL)
    {
        allocator->GetRecorder()->RecordUnmapMemory(
            allocator->GetCurrentFrameIndex(),
            allocation);
    }
#endif

    allocator->Unmap(allocation);
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaFlushAllocation(
    VmaAllocator allocator,
    VmaAllocation allocation,
    VkDeviceSize offset,
    VkDeviceSize size)
{
    VMA_ASSERT(allocator && allocation);

    VMA_DEBUG_LOG("vmaFlushAllocation");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    const VkResult res = allocator->FlushOrInvalidateAllocation(allocation, offset, size, VMA_CACHE_FLUSH);

#if VMA_RECORDING_ENABLED
    if(allocator->GetRecorder() != VMA_NULL)
    {
        allocator->GetRecorder()->RecordFlushAllocation(
            allocator->GetCurrentFrameIndex(),
            allocation, offset, size);
    }
#endif

    return res;
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaInvalidateAllocation(
    VmaAllocator allocator,
    VmaAllocation allocation,
    VkDeviceSize offset,
    VkDeviceSize size)
{
    VMA_ASSERT(allocator && allocation);

    VMA_DEBUG_LOG("vmaInvalidateAllocation");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    const VkResult res = allocator->FlushOrInvalidateAllocation(allocation, offset, size, VMA_CACHE_INVALIDATE);

#if VMA_RECORDING_ENABLED
    if(allocator->GetRecorder() != VMA_NULL)
    {
        allocator->GetRecorder()->RecordInvalidateAllocation(
            allocator->GetCurrentFrameIndex(),
            allocation, offset, size);
    }
#endif

    return res;
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaFlushAllocations(
    VmaAllocator allocator,
    uint32_t allocationCount,
    const VmaAllocation* allocations,
    const VkDeviceSize* offsets,
    const VkDeviceSize* sizes)
{
    VMA_ASSERT(allocator);

    if(allocationCount == 0)
    {
        return VK_SUCCESS;
    }

    VMA_ASSERT(allocations);

    VMA_DEBUG_LOG("vmaFlushAllocations");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    const VkResult res = allocator->FlushOrInvalidateAllocations(allocationCount, allocations, offsets, sizes, VMA_CACHE_FLUSH);

#if VMA_RECORDING_ENABLED
    if(allocator->GetRecorder() != VMA_NULL)
    {
        // Recording of this call is not implemented.
    }
#endif

    return res;
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaInvalidateAllocations(
    VmaAllocator allocator,
    uint32_t allocationCount,
    const VmaAllocation* allocations,
    const VkDeviceSize* offsets,
    const VkDeviceSize* sizes)
{
    VMA_ASSERT(allocator);

    if(allocationCount == 0)
    {
        return VK_SUCCESS;
    }

    VMA_ASSERT(allocations);

    VMA_DEBUG_LOG("vmaInvalidateAllocations");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    const VkResult res = allocator->FlushOrInvalidateAllocations(allocationCount, allocations, offsets, sizes, VMA_CACHE_INVALIDATE);

#if VMA_RECORDING_ENABLED
    if(allocator->GetRecorder() != VMA_NULL)
    {
        // Recording of this call is not implemented.
    }
#endif

    return res;
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCheckCorruption(VmaAllocator allocator, uint32_t memoryTypeBits)
{
    VMA_ASSERT(allocator);

    VMA_DEBUG_LOG("vmaCheckCorruption");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    return allocator->CheckCorruption(memoryTypeBits);
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaDefragment(
    VmaAllocator allocator,
    const VmaAllocation* pAllocations,
    size_t allocationCount,
    VkBool32* pAllocationsChanged,
    const VmaDefragmentationInfo *pDefragmentationInfo,
    VmaDefragmentationStats* pDefragmentationStats)
{
    // Deprecated interface, reimplemented using new one.

    VmaDefragmentationInfo2 info2 = {};
    info2.allocationCount = (uint32_t)allocationCount;
    info2.pAllocations = pAllocations;
    info2.pAllocationsChanged = pAllocationsChanged;
    if(pDefragmentationInfo != VMA_NULL)
    {
        info2.maxCpuAllocationsToMove = pDefragmentationInfo->maxAllocationsToMove;
        info2.maxCpuBytesToMove = pDefragmentationInfo->maxBytesToMove;
    }
    else
    {
        info2.maxCpuAllocationsToMove = UINT32_MAX;
        info2.maxCpuBytesToMove = VK_WHOLE_SIZE;
    }
    // Remaining members of info2 (GPU limits, command buffer) are left zero-initialized.

    VmaDefragmentationContext ctx;
    VkResult res = vmaDefragmentationBegin(allocator, &info2, pDefragmentationStats, &ctx);
    if(res == VK_NOT_READY)
    {
        res = vmaDefragmentationEnd(allocator, ctx);
    }
    return res;
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaDefragmentationBegin(
    VmaAllocator allocator,
    const VmaDefragmentationInfo2* pInfo,
    VmaDefragmentationStats* pStats,
    VmaDefragmentationContext *pContext)
{
    VMA_ASSERT(allocator && pInfo && pContext);

    VMA_HEAVY_ASSERT(VmaValidatePointerArray(pInfo->allocationCount, pInfo->pAllocations));
    VMA_HEAVY_ASSERT(VmaValidatePointerArray(pInfo->poolCount, pInfo->pPools));

    VMA_DEBUG_LOG("vmaDefragmentationBegin");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    VkResult res = allocator->DefragmentationBegin(*pInfo, pStats, pContext);

#if VMA_RECORDING_ENABLED
    if(allocator->GetRecorder() != VMA_NULL)
    {
        allocator->GetRecorder()->RecordDefragmentationBegin(
            allocator->GetCurrentFrameIndex(), *pInfo, *pContext);
    }
#endif

    return res;
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaDefragmentationEnd(
    VmaAllocator allocator,
    VmaDefragmentationContext context)
{
    VMA_ASSERT(allocator);

    VMA_DEBUG_LOG("vmaDefragmentationEnd");

    if(context != VK_NULL_HANDLE)
    {
        VMA_DEBUG_GLOBAL_MUTEX_LOCK

#if VMA_RECORDING_ENABLED
        if(allocator->GetRecorder() != VMA_NULL)
        {
            allocator->GetRecorder()->RecordDefragmentationEnd(
                allocator->GetCurrentFrameIndex(), context);
        }
#endif

        return allocator->DefragmentationEnd(context);
    }
    else
    {
        return VK_SUCCESS;
    }
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaBeginDefragmentationPass(
    VmaAllocator allocator,
    VmaDefragmentationContext context,
    VmaDefragmentationPassInfo* pInfo)
{
    VMA_ASSERT(allocator);
    VMA_ASSERT(pInfo);

    VMA_DEBUG_LOG("vmaBeginDefragmentationPass");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    if(context == VK_NULL_HANDLE)
    {
        pInfo->moveCount = 0;
        return VK_SUCCESS;
    }

    return allocator->DefragmentationPassBegin(pInfo, context);
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaEndDefragmentationPass(
    VmaAllocator allocator,
    VmaDefragmentationContext context)
{
    VMA_ASSERT(allocator);

    VMA_DEBUG_LOG("vmaEndDefragmentationPass");
    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    if(context == VK_NULL_HANDLE)
        return VK_SUCCESS;

    return allocator->DefragmentationPassEnd(context);
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindBufferMemory(
    VmaAllocator allocator,
    VmaAllocation allocation,
    VkBuffer buffer)
{
    VMA_ASSERT(allocator && allocation && buffer);

    VMA_DEBUG_LOG("vmaBindBufferMemory");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    return allocator->BindBufferMemory(allocation, 0, buffer, VMA_NULL);
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindBufferMemory2(
    VmaAllocator allocator,
    VmaAllocation allocation,
    VkDeviceSize allocationLocalOffset,
    VkBuffer buffer,
    const void* pNext)
{
    VMA_ASSERT(allocator && allocation && buffer);

    VMA_DEBUG_LOG("vmaBindBufferMemory2");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    return allocator->BindBufferMemory(allocation, allocationLocalOffset, buffer, pNext);
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindImageMemory(
    VmaAllocator allocator,
    VmaAllocation allocation,
    VkImage image)
{
    VMA_ASSERT(allocator && allocation && image);

    VMA_DEBUG_LOG("vmaBindImageMemory");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    return allocator->BindImageMemory(allocation, 0, image, VMA_NULL);
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindImageMemory2(
    VmaAllocator allocator,
    VmaAllocation allocation,
    VkDeviceSize allocationLocalOffset,
    VkImage image,
    const void* pNext)
{
    VMA_ASSERT(allocator && allocation && image);

    VMA_DEBUG_LOG("vmaBindImageMemory2");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    return allocator->BindImageMemory(allocation, allocationLocalOffset, image, pNext);
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateBuffer(
    VmaAllocator allocator,
    const VkBufferCreateInfo* pBufferCreateInfo,
    const VmaAllocationCreateInfo* pAllocationCreateInfo,
    VkBuffer* pBuffer,
    VmaAllocation* pAllocation,
    VmaAllocationInfo* pAllocationInfo)
{
    VMA_ASSERT(allocator && pBufferCreateInfo && pAllocationCreateInfo && pBuffer && pAllocation);

    if(pBufferCreateInfo->size == 0)
    {
        return VK_ERROR_VALIDATION_FAILED_EXT;
    }
    if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT_COPY) != 0 &&
        !allocator->m_UseKhrBufferDeviceAddress)
    {
        VMA_ASSERT(0 && "Creating a buffer with VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT is not valid if VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT was not used.");
        return VK_ERROR_VALIDATION_FAILED_EXT;
    }

    VMA_DEBUG_LOG("vmaCreateBuffer");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    *pBuffer = VK_NULL_HANDLE;
    *pAllocation = VK_NULL_HANDLE;

    // 1. Create VkBuffer.
    VkResult res = (*allocator->GetVulkanFunctions().vkCreateBuffer)(
        allocator->m_hDevice,
        pBufferCreateInfo,
        allocator->GetAllocationCallbacks(),
        pBuffer);
    if(res >= 0)
    {
        // 2. vkGetBufferMemoryRequirements.
        VkMemoryRequirements vkMemReq = {};
        bool requiresDedicatedAllocation = false;
        bool prefersDedicatedAllocation = false;
        allocator->GetBufferMemoryRequirements(*pBuffer, vkMemReq,
            requiresDedicatedAllocation, prefersDedicatedAllocation);

        // 3. Allocate memory using allocator.
        res = allocator->AllocateMemory(
            vkMemReq,
            requiresDedicatedAllocation,
            prefersDedicatedAllocation,
            *pBuffer, // dedicatedBuffer
            pBufferCreateInfo->usage, // dedicatedBufferUsage
            VK_NULL_HANDLE, // dedicatedImage
            *pAllocationCreateInfo,
            VMA_SUBALLOCATION_TYPE_BUFFER,
            1, // allocationCount
            pAllocation);

#if VMA_RECORDING_ENABLED
        if(allocator->GetRecorder() != VMA_NULL)
        {
            allocator->GetRecorder()->RecordCreateBuffer(
                allocator->GetCurrentFrameIndex(),
                *pBufferCreateInfo,
                *pAllocationCreateInfo,
                *pAllocation);
        }
#endif

        if(res >= 0)
        {
            // 4. Bind buffer with memory.
            if((pAllocationCreateInfo->flags & VMA_ALLOCATION_CREATE_DONT_BIND_BIT) == 0)
            {
                res = allocator->BindBufferMemory(*pAllocation, 0, *pBuffer, VMA_NULL);
            }
            if(res >= 0)
            {
                // All steps succeeded.
#if VMA_STATS_STRING_ENABLED
                (*pAllocation)->InitBufferImageUsage(pBufferCreateInfo->usage);
#endif
                if(pAllocationInfo != VMA_NULL)
                {
                    allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
                }

                return VK_SUCCESS;
            }
            allocator->FreeMemory(
                1, // allocationCount
                pAllocation);
            *pAllocation = VK_NULL_HANDLE;
            (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
            *pBuffer = VK_NULL_HANDLE;
            return res;
        }
        (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
        *pBuffer = VK_NULL_HANDLE;
        return res;
    }
    return res;
}
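The create→allocate→bind sequence above unwinds in reverse order on failure: a failed bind frees the memory and destroys the buffer, a failed allocation destroys the buffer. A sketch of the same rollback discipline with a hypothetical `FakeDevice` whose steps can be forced to fail:

```cpp
#include <cassert>

struct FakeDevice
{
    bool failAllocate = false;
    bool failBind = false;
    int buffersAlive = 0;
    int allocationsAlive = 0;

    bool CreateBuffer()  { ++buffersAlive; return true; }
    bool Allocate()      { if(failAllocate) return false; ++allocationsAlive; return true; }
    bool Bind()          { return !failBind; }
    void DestroyBuffer() { --buffersAlive; }
    void Free()          { --allocationsAlive; }
};

// Returns true on success; on any failure, everything created so far
// is destroyed in reverse order, leaving no leaked objects.
bool CreateBufferWithMemory(FakeDevice& dev)
{
    if(!dev.CreateBuffer())
        return false;
    if(!dev.Allocate())
    {
        dev.DestroyBuffer();
        return false;
    }
    if(!dev.Bind())
    {
        dev.Free();
        dev.DestroyBuffer();
        return false;
    }
    return true;
}
```

The counters make the invariant checkable: after a failed call, both object counts are back to zero.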
-
-19514 const VkBufferCreateInfo* pBufferCreateInfo,
-
-19516 VkDeviceSize minAlignment,
-
-
-
-
-19521 VMA_ASSERT(allocator && pBufferCreateInfo && pAllocationCreateInfo && VmaIsPow2(minAlignment) && pBuffer && pAllocation);
-
-19523 if(pBufferCreateInfo->size == 0)
-
-19525 return VK_ERROR_VALIDATION_FAILED_EXT;
-
-19527 if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT_COPY) != 0 &&
-19528 !allocator->m_UseKhrBufferDeviceAddress)
-
-19530 VMA_ASSERT(0 &&
"Creating a buffer with VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT is not valid if VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT was not used.");
-19531 return VK_ERROR_VALIDATION_FAILED_EXT;
-
-
-19534 VMA_DEBUG_LOG(
"vmaCreateBufferWithAlignment");
-
-19536 VMA_DEBUG_GLOBAL_MUTEX_LOCK
-
-19538 *pBuffer = VK_NULL_HANDLE;
-19539 *pAllocation = VK_NULL_HANDLE;
-
-
-19542 VkResult res = (*allocator->GetVulkanFunctions().vkCreateBuffer)(
-19543 allocator->m_hDevice,
-
-19545 allocator->GetAllocationCallbacks(),
-
-
-
-
-19550 VkMemoryRequirements vkMemReq = {};
-19551 bool requiresDedicatedAllocation =
false;
-19552 bool prefersDedicatedAllocation =
false;
-19553 allocator->GetBufferMemoryRequirements(*pBuffer, vkMemReq,
-19554 requiresDedicatedAllocation, prefersDedicatedAllocation);
-
-
-19557 vkMemReq.alignment = VMA_MAX(vkMemReq.alignment, minAlignment);
-
-
-19560 res = allocator->AllocateMemory(
-
-19562 requiresDedicatedAllocation,
-19563 prefersDedicatedAllocation,
-
-19565 pBufferCreateInfo->usage,
-
-19567 *pAllocationCreateInfo,
-19568 VMA_SUBALLOCATION_TYPE_BUFFER,
-
-
-
-19572 #if VMA_RECORDING_ENABLED
-19573 if(allocator->GetRecorder() != VMA_NULL)
-
-19575 VMA_ASSERT(0 &&
"Not implemented.");
-
-
-
-
-
-
-
-
-19584 res = allocator->BindBufferMemory(*pAllocation, 0, *pBuffer, VMA_NULL);
-
-
-
-
-19589 #if VMA_STATS_STRING_ENABLED
-19590 (*pAllocation)->InitBufferImageUsage(pBufferCreateInfo->usage);
-
-19592 if(pAllocationInfo != VMA_NULL)
-
-19594 allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
-
-
-
-
-19599 allocator->FreeMemory(
-
-
-19602 *pAllocation = VK_NULL_HANDLE;
-19603 (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
-19604 *pBuffer = VK_NULL_HANDLE;
-
-
-19607 (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
-19608 *pBuffer = VK_NULL_HANDLE;
-
-
-
-
-
VMA_CALL_PRE void VMA_CALL_POST vmaDestroyBuffer(
    VmaAllocator allocator,
    VkBuffer buffer,
    VmaAllocation allocation)
{
    VMA_ASSERT(allocator);

    if(buffer == VK_NULL_HANDLE && allocation == VK_NULL_HANDLE)
    {
        return;
    }

    VMA_DEBUG_LOG("vmaDestroyBuffer");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

#if VMA_RECORDING_ENABLED
    if(allocator->GetRecorder() != VMA_NULL)
    {
        allocator->GetRecorder()->RecordDestroyBuffer(
            allocator->GetCurrentFrameIndex(),
            allocation);
    }
#endif

    if(buffer != VK_NULL_HANDLE)
    {
        (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, buffer, allocator->GetAllocationCallbacks());
    }

    if(allocation != VK_NULL_HANDLE)
    {
        allocator->FreeMemory(
            1, // allocationCount
            &allocation);
    }
}
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateImage(
    VmaAllocator allocator,
    const VkImageCreateInfo* pImageCreateInfo,
    const VmaAllocationCreateInfo* pAllocationCreateInfo,
    VkImage* pImage,
    VmaAllocation* pAllocation,
    VmaAllocationInfo* pAllocationInfo)
{
    VMA_ASSERT(allocator && pImageCreateInfo && pAllocationCreateInfo && pImage && pAllocation);

    if(pImageCreateInfo->extent.width == 0 ||
        pImageCreateInfo->extent.height == 0 ||
        pImageCreateInfo->extent.depth == 0 ||
        pImageCreateInfo->mipLevels == 0 ||
        pImageCreateInfo->arrayLayers == 0)
    {
        return VK_ERROR_VALIDATION_FAILED_EXT;
    }

    VMA_DEBUG_LOG("vmaCreateImage");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    *pImage = VK_NULL_HANDLE;
    *pAllocation = VK_NULL_HANDLE;

    // 1. Create VkImage.
    VkResult res = (*allocator->GetVulkanFunctions().vkCreateImage)(
        allocator->m_hDevice,
        pImageCreateInfo,
        allocator->GetAllocationCallbacks(),
        pImage);
    if(res >= 0)
    {
        VmaSuballocationType suballocType = pImageCreateInfo->tiling == VK_IMAGE_TILING_OPTIMAL ?
            VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL :
            VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR;

        // 2. Allocate memory using allocator.
        VkMemoryRequirements vkMemReq = {};
        bool requiresDedicatedAllocation = false;
        bool prefersDedicatedAllocation = false;
        allocator->GetImageMemoryRequirements(*pImage, vkMemReq,
            requiresDedicatedAllocation, prefersDedicatedAllocation);

        res = allocator->AllocateMemory(
            vkMemReq,
            requiresDedicatedAllocation,
            prefersDedicatedAllocation,
            VK_NULL_HANDLE, // dedicatedBuffer
            UINT32_MAX, // dedicatedBufferUsage
            *pImage, // dedicatedImage
            *pAllocationCreateInfo,
            suballocType,
            1, // allocationCount
            pAllocation);

#if VMA_RECORDING_ENABLED
        if(allocator->GetRecorder() != VMA_NULL)
        {
            allocator->GetRecorder()->RecordCreateImage(
                allocator->GetCurrentFrameIndex(),
                *pImageCreateInfo,
                *pAllocationCreateInfo,
                *pAllocation);
        }
#endif

        if(res >= 0)
        {
            // 3. Bind image with memory.
            if((pAllocationCreateInfo->flags & VMA_ALLOCATION_CREATE_DONT_BIND_BIT) == 0)
            {
                res = allocator->BindImageMemory(*pAllocation, 0, *pImage, VMA_NULL);
            }
            if(res >= 0)
            {
                // All steps succeeded.
                #if VMA_STATS_STRING_ENABLED
                    (*pAllocation)->InitBufferImageUsage(pImageCreateInfo->usage);
                #endif
                if(pAllocationInfo != VMA_NULL)
                {
                    allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
                }

                return VK_SUCCESS;
            }
            allocator->FreeMemory(
                1, // allocationCount
                pAllocation);
            *pAllocation = VK_NULL_HANDLE;
            (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, *pImage, allocator->GetAllocationCallbacks());
            *pImage = VK_NULL_HANDLE;
            return res;
        }
        (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, *pImage, allocator->GetAllocationCallbacks());
        *pImage = VK_NULL_HANDLE;
        return res;
    }
    return res;
}
VMA_CALL_PRE void VMA_CALL_POST vmaDestroyImage(
    VmaAllocator allocator,
    VkImage image,
    VmaAllocation allocation)
{
    VMA_ASSERT(allocator);

    if(image == VK_NULL_HANDLE && allocation == VK_NULL_HANDLE)
    {
        return;
    }

    VMA_DEBUG_LOG("vmaDestroyImage");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

#if VMA_RECORDING_ENABLED
    if(allocator->GetRecorder() != VMA_NULL)
    {
        allocator->GetRecorder()->RecordDestroyImage(
            allocator->GetCurrentFrameIndex(),
            allocation);
    }
#endif

    if(image != VK_NULL_HANDLE)
    {
        (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, image, allocator->GetAllocationCallbacks());
    }
    if(allocation != VK_NULL_HANDLE)
    {
        allocator->FreeMemory(
            1, // allocationCount
            &allocation);
    }
}
VmaAllocationCreateInfo
Definition: vk_mem_alloc.h:2900
uint32_t memoryTypeBits
Bitmask containing one bit set for every memory type acceptable for this allocation.
Definition: vk_mem_alloc.h:2926
VmaPool pool
Pool that this allocation should be created in.
Definition: vk_mem_alloc.h:2932
VkMemoryPropertyFlags preferredFlags
Flags that preferably should be set in a memory type chosen for an allocation.
Definition: vk_mem_alloc.h:2918
void * pUserData
Custom general-purpose pointer that will be stored in VmaAllocation, can be read as VmaAllocationInfo...
Definition: vk_mem_alloc.h:2939
VkMemoryPropertyFlags requiredFlags
Flags that must be set in a memory type chosen for an allocation.
Definition: vk_mem_alloc.h:2913
float priority
A floating-point value between 0 and 1, indicating the priority of the allocation relative to other m...
Definition: vk_mem_alloc.h:2946
VmaMemoryUsage usage
Intended usage of memory.
Definition: vk_mem_alloc.h:2908
VmaAllocationCreateFlags flags
Use VmaAllocationCreateFlagBits enum.
Definition: vk_mem_alloc.h:2902
VmaDeviceMemoryBlock::VmaDeviceMemoryBlock(VmaAllocator hAllocator) :
    m_pMetadata(VMA_NULL),
    m_MemoryTypeIndex(UINT32_MAX),
    m_Id(0),
    m_hMemory(VK_NULL_HANDLE),
    m_MapCount(0),
    m_pMappedData(VMA_NULL)
{
}

void VmaDeviceMemoryBlock::Init(
    VmaAllocator hAllocator,
    VmaPool hParentPool,
    uint32_t newMemoryTypeIndex,
    VkDeviceMemory newMemory,
    VkDeviceSize newSize,
    uint32_t id,
    uint32_t algorithm)
{
    VMA_ASSERT(m_hMemory == VK_NULL_HANDLE);

    m_hParentPool = hParentPool;
    m_MemoryTypeIndex = newMemoryTypeIndex;
    m_Id = id;
    m_hMemory = newMemory;

    switch(algorithm)
    {
    case VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT:
        m_pMetadata = vma_new(hAllocator, VmaBlockMetadata_Linear)(hAllocator);
        break;
    case VMA_POOL_CREATE_BUDDY_ALGORITHM_BIT:
        m_pMetadata = vma_new(hAllocator, VmaBlockMetadata_Buddy)(hAllocator);
        break;
    default:
        VMA_ASSERT(0);
        // Fall-through.
    case 0:
        m_pMetadata = vma_new(hAllocator, VmaBlockMetadata_Generic)(hAllocator);
    }
    m_pMetadata->Init(newSize);
}

void VmaDeviceMemoryBlock::Destroy(VmaAllocator allocator)
{
    // Hitting this assert means some allocations were not freed before
    // destruction of this memory block.
    VMA_ASSERT(m_pMetadata->IsEmpty() && "Some allocations were not freed before destruction of this memory block!");

    VMA_ASSERT(m_hMemory != VK_NULL_HANDLE);
    allocator->FreeVulkanMemory(m_MemoryTypeIndex, m_pMetadata->GetSize(), m_hMemory);
    m_hMemory = VK_NULL_HANDLE;

    vma_delete(allocator, m_pMetadata);
    m_pMetadata = VMA_NULL;
}

bool VmaDeviceMemoryBlock::Validate() const
{
    VMA_VALIDATE((m_hMemory != VK_NULL_HANDLE) &&
        (m_pMetadata->GetSize() != 0));

    return m_pMetadata->Validate();
}
VkResult VmaDeviceMemoryBlock::CheckCorruption(VmaAllocator hAllocator)
{
    void* pData = nullptr;
    VkResult res = Map(hAllocator, 1, &pData);
    if(res != VK_SUCCESS)
    {
        return res;
    }

    res = m_pMetadata->CheckCorruption(pData);

    Unmap(hAllocator, 1);

    return res;
}
VkResult VmaDeviceMemoryBlock::Map(VmaAllocator hAllocator, uint32_t count, void** ppData)
{
    if(count == 0)
    {
        return VK_SUCCESS;
    }

    VmaMutexLock lock(m_Mutex, hAllocator->m_UseMutex);
    if(m_MapCount != 0)
    {
        // Already mapped: just increase the reference count.
        m_MapCount += count;
        VMA_ASSERT(m_pMappedData != VMA_NULL);
        if(ppData != VMA_NULL)
        {
            *ppData = m_pMappedData;
        }
        return VK_SUCCESS;
    }
    else
    {
        VkResult result = (*hAllocator->GetVulkanFunctions().vkMapMemory)(
            hAllocator->m_hDevice,
            m_hMemory,
            0, // offset
            VK_WHOLE_SIZE,
            0, // flags
            &m_pMappedData);
        if(result == VK_SUCCESS)
        {
            if(ppData != VMA_NULL)
            {
                *ppData = m_pMappedData;
            }
            m_MapCount = count;
        }
        return result;
    }
}
void VmaDeviceMemoryBlock::Unmap(VmaAllocator hAllocator, uint32_t count)
{
    if(count == 0)
    {
        return;
    }

    VmaMutexLock lock(m_Mutex, hAllocator->m_UseMutex);
    if(m_MapCount >= count)
    {
        m_MapCount -= count;
        if(m_MapCount == 0)
        {
            m_pMappedData = VMA_NULL;
            (*hAllocator->GetVulkanFunctions().vkUnmapMemory)(hAllocator->m_hDevice, m_hMemory);
        }
    }
    else
    {
        VMA_ASSERT(0 && "VkDeviceMemory block is being unmapped while it was not previously mapped.");
    }
}
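The Map/Unmap pair above implements reference-counted mapping: the whole VkDeviceMemory block is mapped once on the first reference and unmapped only when the count drops back to zero, so several allocations within the same block can be "mapped" concurrently. The following is a minimal self-contained sketch of that pattern; `MiniBlock`, `MapRefCounted`, and `UnmapRefCounted` are illustrative names, not VMA API, and the real code calls `vkMapMemory`/`vkUnmapMemory` where the comments indicate.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical stand-in for a mappable memory block with a map reference count,
// like VmaDeviceMemoryBlock::m_MapCount in the listing above.
struct MiniBlock
{
    uint32_t mapCount = 0;
    void*    mappedPtr = nullptr;
    int      vkMapCalls = 0; // counts simulated vkMapMemory calls
    char     storage[64];    // pretend device memory

    // Returns the mapped pointer; only the first reference actually "maps".
    void* MapRefCounted(uint32_t count)
    {
        if(count == 0) return mappedPtr;
        if(mapCount != 0)
        {
            mapCount += count;  // already mapped: just bump the counter
            return mappedPtr;
        }
        ++vkMapCalls;           // vkMapMemory() would be called here
        mappedPtr = storage;
        mapCount = count;
        return mappedPtr;
    }

    // Only the last reference actually "unmaps".
    void UnmapRefCounted(uint32_t count)
    {
        assert(mapCount >= count && "unbalanced unmap");
        mapCount -= count;
        if(mapCount == 0)
        {
            mappedPtr = nullptr; // vkUnmapMemory() would be called here
        }
    }
};
```

With this shape, two nested map calls on the same block cost only one driver call, which is why persistently mapped allocations and temporary maps can coexist in one block.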
VkResult VmaDeviceMemoryBlock::WriteMagicValueAroundAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize)
{
    VMA_ASSERT(VMA_DEBUG_MARGIN > 0 && VMA_DEBUG_MARGIN % 4 == 0 && VMA_DEBUG_DETECT_CORRUPTION);
    VMA_ASSERT(allocOffset >= VMA_DEBUG_MARGIN);

    void* pData;
    VkResult res = Map(hAllocator, 1, &pData);
    if(res != VK_SUCCESS)
    {
        return res;
    }

    VmaWriteMagicValue(pData, allocOffset - VMA_DEBUG_MARGIN);
    VmaWriteMagicValue(pData, allocOffset + allocSize);

    Unmap(hAllocator, 1);

    return VK_SUCCESS;
}

VkResult VmaDeviceMemoryBlock::ValidateMagicValueAroundAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize)
{
    VMA_ASSERT(VMA_DEBUG_MARGIN > 0 && VMA_DEBUG_MARGIN % 4 == 0 && VMA_DEBUG_DETECT_CORRUPTION);
    VMA_ASSERT(allocOffset >= VMA_DEBUG_MARGIN);

    void* pData;
    VkResult res = Map(hAllocator, 1, &pData);
    if(res != VK_SUCCESS)
    {
        return res;
    }

    if(!VmaValidateMagicValue(pData, allocOffset - VMA_DEBUG_MARGIN))
    {
        VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED BEFORE FREED ALLOCATION!");
    }
    else if(!VmaValidateMagicValue(pData, allocOffset + allocSize))
    {
        VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER FREED ALLOCATION!");
    }

    Unmap(hAllocator, 1);

    return VK_SUCCESS;
}
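The corruption-detection scheme above surrounds every allocation with a margin filled with a known 32-bit magic pattern; if an out-of-bounds write touches the margin, validation fails at free time. A minimal sketch of the write/validate pair, with assumed constants (`kMagic`, `kMargin`) standing in for `VMA_CORRUPTION_DETECTION_MAGIC_VALUE` and `VMA_DEBUG_MARGIN`:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Assumed illustrative values; the real library configures these via
// VMA_DEBUG_MARGIN and VMA_CORRUPTION_DETECTION_MAGIC_VALUE.
static const uint32_t kMagic  = 0x7F84E666u;
static const size_t   kMargin = 16; // multiple of sizeof(kMagic)

// Fill the margin starting at byte offset with the magic pattern.
static void WriteMagic(void* pData, size_t offset)
{
    uint32_t* p = reinterpret_cast<uint32_t*>(static_cast<char*>(pData) + offset);
    for(size_t i = 0; i < kMargin / sizeof(uint32_t); ++i)
        p[i] = kMagic;
}

// Returns true if the margin is intact (no stray write touched it).
static bool ValidateMagic(const void* pData, size_t offset)
{
    const uint32_t* p = reinterpret_cast<const uint32_t*>(
        static_cast<const char*>(pData) + offset);
    for(size_t i = 0; i < kMargin / sizeof(uint32_t); ++i)
        if(p[i] != kMagic) return false;
    return true;
}
```

Writing the margins after allocation and validating them before free (as `WriteMagicValueAroundAllocation`/`ValidateMagicValueAroundAllocation` do) catches both underruns and overruns of the allocation.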
VkResult VmaDeviceMemoryBlock::BindBufferMemory(
    const VmaAllocator hAllocator,
    const VmaAllocation hAllocation,
    VkDeviceSize allocationLocalOffset,
    VkBuffer hBuffer,
    const void* pNext)
{
    VMA_ASSERT(hAllocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_BLOCK &&
        hAllocation->GetBlock() == this);
    VMA_ASSERT(allocationLocalOffset < hAllocation->GetSize() &&
        "Invalid allocationLocalOffset. Did you forget that this offset is relative to the beginning of the allocation, not the whole memory block?");
    const VkDeviceSize memoryOffset = hAllocation->GetOffset() + allocationLocalOffset;
    // This lock is important so that we don't call vkBind... and/or vkMap... simultaneously on the same VkDeviceMemory from multiple threads.
    VmaMutexLock lock(m_Mutex, hAllocator->m_UseMutex);
    return hAllocator->BindVulkanBuffer(m_hMemory, memoryOffset, hBuffer, pNext);
}

VkResult VmaDeviceMemoryBlock::BindImageMemory(
    const VmaAllocator hAllocator,
    const VmaAllocation hAllocation,
    VkDeviceSize allocationLocalOffset,
    VkImage hImage,
    const void* pNext)
{
    VMA_ASSERT(hAllocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_BLOCK &&
        hAllocation->GetBlock() == this);
    VMA_ASSERT(allocationLocalOffset < hAllocation->GetSize() &&
        "Invalid allocationLocalOffset. Did you forget that this offset is relative to the beginning of the allocation, not the whole memory block?");
    const VkDeviceSize memoryOffset = hAllocation->GetOffset() + allocationLocalOffset;
    // This lock is important so that we don't call vkBind... and/or vkMap... simultaneously on the same VkDeviceMemory from multiple threads.
    VmaMutexLock lock(m_Mutex, hAllocator->m_UseMutex);
    return hAllocator->BindVulkanImage(m_hMemory, memoryOffset, hImage, pNext);
}
static void VmaInitStatInfo(VmaStatInfo& outInfo)
{
    memset(&outInfo, 0, sizeof(outInfo));
    outInfo.allocationSizeMin = UINT64_MAX;
    outInfo.unusedRangeSizeMin = UINT64_MAX;
}

// Adds statistics srcInfo into inoutInfo, like: inoutInfo += srcInfo.
static void VmaAddStatInfo(VmaStatInfo& inoutInfo, const VmaStatInfo& srcInfo)
{
    inoutInfo.blockCount += srcInfo.blockCount;
    inoutInfo.allocationCount += srcInfo.allocationCount;
    inoutInfo.unusedRangeCount += srcInfo.unusedRangeCount;
    inoutInfo.usedBytes += srcInfo.usedBytes;
    inoutInfo.unusedBytes += srcInfo.unusedBytes;
    inoutInfo.allocationSizeMin = VMA_MIN(inoutInfo.allocationSizeMin, srcInfo.allocationSizeMin);
    inoutInfo.allocationSizeMax = VMA_MAX(inoutInfo.allocationSizeMax, srcInfo.allocationSizeMax);
    inoutInfo.unusedRangeSizeMin = VMA_MIN(inoutInfo.unusedRangeSizeMin, srcInfo.unusedRangeSizeMin);
    inoutInfo.unusedRangeSizeMax = VMA_MAX(inoutInfo.unusedRangeSizeMax, srcInfo.unusedRangeSizeMax);
}

static void VmaPostprocessCalcStatInfo(VmaStatInfo& inoutInfo)
{
    inoutInfo.allocationSizeAvg = (inoutInfo.allocationCount > 0) ?
        VmaRoundDiv<VkDeviceSize>(inoutInfo.usedBytes, inoutInfo.allocationCount) : 0;
    inoutInfo.unusedRangeSizeAvg = (inoutInfo.unusedRangeCount > 0) ?
        VmaRoundDiv<VkDeviceSize>(inoutInfo.unusedBytes, inoutInfo.unusedRangeCount) : 0;
}
VmaPool_T::VmaPool_T(
    VmaAllocator hAllocator,
    const VmaPoolCreateInfo& createInfo,
    VkDeviceSize preferredBlockSize) :
    m_BlockVector(
        hAllocator,
        this, // hParentPool
        createInfo.memoryTypeIndex,
        createInfo.blockSize != 0 ? createInfo.blockSize : preferredBlockSize,
        createInfo.minBlockCount,
        createInfo.maxBlockCount,
        (createInfo.flags & VMA_POOL_CREATE_IGNORE_BUFFER_IMAGE_GRANULARITY_BIT) != 0 ? 1 : hAllocator->GetBufferImageGranularity(),
        createInfo.frameInUseCount,
        createInfo.blockSize != 0, // explicitBlockSize
        createInfo.flags & VMA_POOL_CREATE_ALGORITHM_MASK, // algorithm
        createInfo.priority,
        VMA_MAX(hAllocator->GetMemoryTypeMinAlignment(createInfo.memoryTypeIndex), createInfo.minAllocationAlignment),
        createInfo.pMemoryAllocateNext),
    m_Id(0),
    m_Name(VMA_NULL)
{
}

VmaPool_T::~VmaPool_T()
{
    VMA_ASSERT(m_PrevPool == VMA_NULL && m_NextPool == VMA_NULL);
}

void VmaPool_T::SetName(const char* pName)
{
    const VkAllocationCallbacks* allocs = m_BlockVector.GetAllocator()->GetAllocationCallbacks();
    VmaFreeString(allocs, m_Name);

    if(pName != VMA_NULL)
    {
        m_Name = VmaCreateStringCopy(allocs, pName);
    }
    else
    {
        m_Name = VMA_NULL;
    }
}
#if VMA_STATS_STRING_ENABLED

#endif // #if VMA_STATS_STRING_ENABLED

VmaBlockVector::VmaBlockVector(
    VmaAllocator hAllocator,
    VmaPool hParentPool,
    uint32_t memoryTypeIndex,
    VkDeviceSize preferredBlockSize,
    size_t minBlockCount,
    size_t maxBlockCount,
    VkDeviceSize bufferImageGranularity,
    uint32_t frameInUseCount,
    bool explicitBlockSize,
    uint32_t algorithm,
    float priority,
    VkDeviceSize minAllocationAlignment,
    void* pMemoryAllocateNext) :
    m_hAllocator(hAllocator),
    m_hParentPool(hParentPool),
    m_MemoryTypeIndex(memoryTypeIndex),
    m_PreferredBlockSize(preferredBlockSize),
    m_MinBlockCount(minBlockCount),
    m_MaxBlockCount(maxBlockCount),
    m_BufferImageGranularity(bufferImageGranularity),
    m_FrameInUseCount(frameInUseCount),
    m_ExplicitBlockSize(explicitBlockSize),
    m_Algorithm(algorithm),
    m_Priority(priority),
    m_MinAllocationAlignment(minAllocationAlignment),
    m_pMemoryAllocateNext(pMemoryAllocateNext),
    m_HasEmptyBlock(false),
    m_Blocks(VmaStlAllocator<VmaDeviceMemoryBlock*>(hAllocator->GetAllocationCallbacks())),
    m_NextBlockId(0)
{
}

VmaBlockVector::~VmaBlockVector()
{
    for(size_t i = m_Blocks.size(); i--; )
    {
        m_Blocks[i]->Destroy(m_hAllocator);
        vma_delete(m_hAllocator, m_Blocks[i]);
    }
}

VkResult VmaBlockVector::CreateMinBlocks()
{
    for(size_t i = 0; i < m_MinBlockCount; ++i)
    {
        VkResult res = CreateBlock(m_PreferredBlockSize, VMA_NULL);
        if(res != VK_SUCCESS)
        {
            return res;
        }
    }
    return VK_SUCCESS;
}
void VmaBlockVector::GetPoolStats(VmaPoolStats* pStats)
{
    VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);

    const size_t blockCount = m_Blocks.size();

    pStats->size = 0;
    pStats->unusedSize = 0;
    pStats->allocationCount = 0;
    pStats->unusedRangeCount = 0;
    pStats->unusedRangeSizeMax = 0;
    pStats->blockCount = blockCount;

    for(uint32_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
    {
        const VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
        VMA_ASSERT(pBlock);
        VMA_HEAVY_ASSERT(pBlock->Validate());
        pBlock->m_pMetadata->AddPoolStats(*pStats);
    }
}

bool VmaBlockVector::IsEmpty()
{
    VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);
    return m_Blocks.empty();
}

bool VmaBlockVector::IsCorruptionDetectionEnabled() const
{
    const uint32_t requiredMemFlags = VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
    return (VMA_DEBUG_DETECT_CORRUPTION != 0) &&
        (VMA_DEBUG_MARGIN > 0) &&
        (m_Algorithm == 0 || m_Algorithm == VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT) &&
        (m_hAllocator->m_MemProps.memoryTypes[m_MemoryTypeIndex].propertyFlags & requiredMemFlags) == requiredMemFlags;
}

static const uint32_t VMA_ALLOCATION_TRY_COUNT = 32;
VkResult VmaBlockVector::Allocate(
    uint32_t currentFrameIndex,
    VkDeviceSize size,
    VkDeviceSize alignment,
    const VmaAllocationCreateInfo& createInfo,
    VmaSuballocationType suballocType,
    size_t allocationCount,
    VmaAllocation* pAllocations)
{
    size_t allocIndex;
    VkResult res = VK_SUCCESS;

    alignment = VMA_MAX(alignment, m_MinAllocationAlignment);

    if(IsCorruptionDetectionEnabled())
    {
        size = VmaAlignUp<VkDeviceSize>(size, sizeof(VMA_CORRUPTION_DETECTION_MAGIC_VALUE));
        alignment = VmaAlignUp<VkDeviceSize>(alignment, sizeof(VMA_CORRUPTION_DETECTION_MAGIC_VALUE));
    }

    {
        VmaMutexLockWrite lock(m_Mutex, m_hAllocator->m_UseMutex);
        for(allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
        {
            res = AllocatePage(
                currentFrameIndex,
                size,
                alignment,
                createInfo,
                suballocType,
                pAllocations + allocIndex);
            if(res != VK_SUCCESS)
            {
                break;
            }
        }
    }

    if(res != VK_SUCCESS)
    {
        // Free all already created allocations.
        const uint32_t heapIndex = m_hAllocator->MemoryTypeIndexToHeapIndex(m_MemoryTypeIndex);
        while(allocIndex--)
        {
            VmaAllocation_T* const alloc = pAllocations[allocIndex];
            const VkDeviceSize allocSize = alloc->GetSize();
            Free(alloc);
            m_hAllocator->m_Budget.RemoveAllocation(heapIndex, allocSize);
        }
        memset(pAllocations, 0, sizeof(VmaAllocation) * allocationCount);
    }

    return res;
}
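When corruption detection is enabled, `Allocate` rounds both the size and the alignment up to a multiple of the magic value's size so the margins land on 32-bit boundaries. The round-up has the same shape as VMA's `VmaAlignUp` template, which requires a power-of-two alignment; a minimal sketch:

```cpp
#include <cassert>
#include <cstdint>

// Generic power-of-two round-up, the same shape as VMA's VmaAlignUp template.
// alignment must be a power of two, as it always is for memory alignments.
template<typename T>
static T AlignUpPow2(T value, T alignment)
{
    return (value + alignment - 1) & ~(alignment - 1);
}
```

For example, a 13-byte request with 4-byte magic granularity is padded to 16 bytes, so the trailing margin starts on a word boundary.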
VkResult VmaBlockVector::AllocatePage(
    uint32_t currentFrameIndex,
    VkDeviceSize size,
    VkDeviceSize alignment,
    const VmaAllocationCreateInfo& createInfo,
    VmaSuballocationType suballocType,
    VmaAllocation* pAllocation)
{
    const bool isUpperAddress = (createInfo.flags & VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT) != 0;
    bool canMakeOtherLost = (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT) != 0;
    const bool mapped = (createInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0;
    const bool isUserDataString = (createInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0;

    VkDeviceSize freeMemory;
    {
        const uint32_t heapIndex = m_hAllocator->MemoryTypeIndexToHeapIndex(m_MemoryTypeIndex);
        VmaBudget heapBudget = {};
        m_hAllocator->GetBudget(&heapBudget, heapIndex, 1);
        freeMemory = (heapBudget.usage < heapBudget.budget) ? (heapBudget.budget - heapBudget.usage) : 0;
    }

    const bool canFallbackToDedicated = !IsCustomPool();
    const bool canCreateNewBlock =
        ((createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) == 0) &&
        (m_Blocks.size() < m_MaxBlockCount) &&
        (freeMemory >= size || !canFallbackToDedicated);
    uint32_t strategy = createInfo.flags & VMA_ALLOCATION_CREATE_STRATEGY_MASK;

    // If linearAlgorithm is used, canMakeOtherLost is available only when used as ring buffer.
    // Which in turn is available only when maxBlockCount = 1.
    if(m_Algorithm == VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT && m_MaxBlockCount > 1)
    {
        canMakeOtherLost = false;
    }

    // Upper address can only be used with linear allocator and within single memory block.
    if(isUpperAddress &&
        (m_Algorithm != VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT || m_MaxBlockCount > 1))
    {
        return VK_ERROR_FEATURE_NOT_PRESENT;
    }

    // Validate strategy.
    switch(strategy)
    {
    case 0:
        strategy = VMA_ALLOCATION_CREATE_STRATEGY_BEST_FIT_BIT;
        break;
    case VMA_ALLOCATION_CREATE_STRATEGY_BEST_FIT_BIT:
    case VMA_ALLOCATION_CREATE_STRATEGY_WORST_FIT_BIT:
    case VMA_ALLOCATION_CREATE_STRATEGY_FIRST_FIT_BIT:
        break;
    default:
        return VK_ERROR_FEATURE_NOT_PRESENT;
    }

    // Early reject: requested size is larger than maximum block size for this block vector.
    if(size + 2 * VMA_DEBUG_MARGIN > m_PreferredBlockSize)
    {
        return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    }

    /*
    Under certain condition, this whole section can be skipped for optimization, so
    we move on directly to trying to allocate with canMakeOtherLost. That is the case
    e.g. for custom pools with linear algorithm.
    */
    if(!canMakeOtherLost || canCreateNewBlock)
    {
        // 1. Search existing allocations. Try to allocate without making other allocations lost.
        VmaAllocationCreateFlags allocFlagsCopy = createInfo.flags;
        allocFlagsCopy &= ~VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT;

        if(m_Algorithm == VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT)
        {
            // Use only last block.
            if(!m_Blocks.empty())
            {
                VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks.back();
                VMA_ASSERT(pCurrBlock);
                VkResult res = AllocateFromBlock(
                    pCurrBlock,
                    currentFrameIndex,
                    size,
                    alignment,
                    allocFlagsCopy,
                    createInfo.pUserData,
                    suballocType,
                    strategy,
                    pAllocation);
                if(res == VK_SUCCESS)
                {
                    VMA_DEBUG_LOG("    Returned from last block #%u", pCurrBlock->GetId());
                    return VK_SUCCESS;
                }
            }
        }
        else
        {
            if(strategy == VMA_ALLOCATION_CREATE_STRATEGY_BEST_FIT_BIT)
            {
                // Forward order in m_Blocks - prefer blocks with smallest amount of free space.
                for(size_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex )
                {
                    VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
                    VMA_ASSERT(pCurrBlock);
                    VkResult res = AllocateFromBlock(
                        pCurrBlock,
                        currentFrameIndex,
                        size,
                        alignment,
                        allocFlagsCopy,
                        createInfo.pUserData,
                        suballocType,
                        strategy,
                        pAllocation);
                    if(res == VK_SUCCESS)
                    {
                        VMA_DEBUG_LOG("    Returned from existing block #%u", pCurrBlock->GetId());
                        return VK_SUCCESS;
                    }
                }
            }
            else // WORST_FIT, FIRST_FIT
            {
                // Backward order in m_Blocks - prefer blocks with largest amount of free space.
                for(size_t blockIndex = m_Blocks.size(); blockIndex--; )
                {
                    VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
                    VMA_ASSERT(pCurrBlock);
                    VkResult res = AllocateFromBlock(
                        pCurrBlock,
                        currentFrameIndex,
                        size,
                        alignment,
                        allocFlagsCopy,
                        createInfo.pUserData,
                        suballocType,
                        strategy,
                        pAllocation);
                    if(res == VK_SUCCESS)
                    {
                        VMA_DEBUG_LOG("    Returned from existing block #%u", pCurrBlock->GetId());
                        return VK_SUCCESS;
                    }
                }
            }
        }

        // 2. Try to create new block.
        if(canCreateNewBlock)
        {
            // Calculate optimal size for new block.
            VkDeviceSize newBlockSize = m_PreferredBlockSize;
            uint32_t newBlockSizeShift = 0;
            const uint32_t NEW_BLOCK_SIZE_SHIFT_MAX = 3;

            if(!m_ExplicitBlockSize)
            {
                // Allocate 1/8, 1/4, 1/2 as first blocks.
                const VkDeviceSize maxExistingBlockSize = CalcMaxBlockSize();
                for(uint32_t i = 0; i < NEW_BLOCK_SIZE_SHIFT_MAX; ++i)
                {
                    const VkDeviceSize smallerNewBlockSize = newBlockSize / 2;
                    if(smallerNewBlockSize > maxExistingBlockSize && smallerNewBlockSize >= size * 2)
                    {
                        newBlockSize = smallerNewBlockSize;
                        ++newBlockSizeShift;
                    }
                    else
                    {
                        break;
                    }
                }
            }

            size_t newBlockIndex = 0;
            VkResult res = (newBlockSize <= freeMemory || !canFallbackToDedicated) ?
                CreateBlock(newBlockSize, &newBlockIndex) : VK_ERROR_OUT_OF_DEVICE_MEMORY;
            // Allocation of this size failed? Try 1/2, 1/4, 1/8 of m_PreferredBlockSize.
            if(!m_ExplicitBlockSize)
            {
                while(res < 0 && newBlockSizeShift < NEW_BLOCK_SIZE_SHIFT_MAX)
                {
                    const VkDeviceSize smallerNewBlockSize = newBlockSize / 2;
                    if(smallerNewBlockSize >= size)
                    {
                        newBlockSize = smallerNewBlockSize;
                        ++newBlockSizeShift;
                        res = (newBlockSize <= freeMemory || !canFallbackToDedicated) ?
                            CreateBlock(newBlockSize, &newBlockIndex) : VK_ERROR_OUT_OF_DEVICE_MEMORY;
                    }
                    else
                    {
                        break;
                    }
                }
            }

            if(res == VK_SUCCESS)
            {
                VmaDeviceMemoryBlock* const pBlock = m_Blocks[newBlockIndex];
                VMA_ASSERT(pBlock->m_pMetadata->GetSize() >= size);

                res = AllocateFromBlock(
                    pBlock,
                    currentFrameIndex,
                    size,
                    alignment,
                    allocFlagsCopy,
                    createInfo.pUserData,
                    suballocType,
                    strategy,
                    pAllocation);
                if(res == VK_SUCCESS)
                {
                    VMA_DEBUG_LOG("    Created new block #%u Size=%llu", pBlock->GetId(), newBlockSize);
                    return VK_SUCCESS;
                }
                else
                {
                    // Allocation from new block failed, possibly due to VMA_DEBUG_MARGIN or alignment.
                    return VK_ERROR_OUT_OF_DEVICE_MEMORY;
                }
            }
        }
    }

    // 3. Try to allocate from existing blocks with making other allocations lost.
    if(canMakeOtherLost)
    {
        uint32_t tryIndex = 0;
        for(; tryIndex < VMA_ALLOCATION_TRY_COUNT; ++tryIndex)
        {
            VmaDeviceMemoryBlock* pBestRequestBlock = VMA_NULL;
            VmaAllocationRequest bestRequest = {};
            VkDeviceSize bestRequestCost = VK_WHOLE_SIZE;

            // 1. Search existing allocations.
            if(strategy == VMA_ALLOCATION_CREATE_STRATEGY_BEST_FIT_BIT)
            {
                // Forward order in m_Blocks - prefer blocks with smallest amount of free space.
                for(size_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex )
                {
                    VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
                    VMA_ASSERT(pCurrBlock);
                    VmaAllocationRequest currRequest = {};
                    if(pCurrBlock->m_pMetadata->CreateAllocationRequest(
                        currentFrameIndex,
                        m_FrameInUseCount,
                        m_BufferImageGranularity,
                        size,
                        alignment,
                        isUpperAddress,
                        suballocType,
                        canMakeOtherLost,
                        strategy,
                        &currRequest))
                    {
                        const VkDeviceSize currRequestCost = currRequest.CalcCost();
                        if(pBestRequestBlock == VMA_NULL ||
                            currRequestCost < bestRequestCost)
                        {
                            pBestRequestBlock = pCurrBlock;
                            bestRequest = currRequest;
                            bestRequestCost = currRequestCost;

                            if(bestRequestCost == 0)
                            {
                                break;
                            }
                        }
                    }
                }
            }
            else // WORST_FIT, FIRST_FIT
            {
                // Backward order in m_Blocks - prefer blocks with largest amount of free space.
                for(size_t blockIndex = m_Blocks.size(); blockIndex--; )
                {
                    VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
                    VMA_ASSERT(pCurrBlock);
                    VmaAllocationRequest currRequest = {};
                    if(pCurrBlock->m_pMetadata->CreateAllocationRequest(
                        currentFrameIndex,
                        m_FrameInUseCount,
                        m_BufferImageGranularity,
                        size,
                        alignment,
                        isUpperAddress,
                        suballocType,
                        canMakeOtherLost,
                        strategy,
                        &currRequest))
                    {
                        const VkDeviceSize currRequestCost = currRequest.CalcCost();
                        if(pBestRequestBlock == VMA_NULL ||
                            currRequestCost < bestRequestCost ||
                            strategy == VMA_ALLOCATION_CREATE_STRATEGY_FIRST_FIT_BIT)
                        {
                            pBestRequestBlock = pCurrBlock;
                            bestRequest = currRequest;
                            bestRequestCost = currRequestCost;

                            if(bestRequestCost == 0 ||
                                strategy == VMA_ALLOCATION_CREATE_STRATEGY_FIRST_FIT_BIT)
                            {
                                break;
                            }
                        }
                    }
                }
            }

            if(pBestRequestBlock != VMA_NULL)
            {
                if(mapped)
                {
                    VkResult res = pBestRequestBlock->Map(m_hAllocator, 1, VMA_NULL);
                    if(res != VK_SUCCESS)
                    {
                        return res;
                    }
                }

                if(pBestRequestBlock->m_pMetadata->MakeRequestedAllocationsLost(
                    currentFrameIndex,
                    m_FrameInUseCount,
                    &bestRequest))
                {
                    // Allocate from this pBlock.
                    *pAllocation = m_hAllocator->m_AllocationObjectAllocator.Allocate(currentFrameIndex, isUserDataString);
                    pBestRequestBlock->m_pMetadata->Alloc(bestRequest, suballocType, size, *pAllocation);
                    UpdateHasEmptyBlock();
                    (*pAllocation)->InitBlockAllocation(
                        pBestRequestBlock,
                        bestRequest.offset,
                        alignment,
                        size,
                        m_MemoryTypeIndex,
                        suballocType,
                        mapped,
                        (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0);
                    VMA_HEAVY_ASSERT(pBestRequestBlock->Validate());
                    VMA_DEBUG_LOG("    Returned from existing block");
                    (*pAllocation)->SetUserData(m_hAllocator, createInfo.pUserData);
                    m_hAllocator->m_Budget.AddAllocation(m_hAllocator->MemoryTypeIndexToHeapIndex(m_MemoryTypeIndex), size);
                    if(VMA_DEBUG_INITIALIZE_ALLOCATIONS)
                    {
                        m_hAllocator->FillAllocation(*pAllocation, VMA_ALLOCATION_FILL_PATTERN_CREATED);
                    }
                    if(IsCorruptionDetectionEnabled())
                    {
                        VkResult res = pBestRequestBlock->WriteMagicValueAroundAllocation(m_hAllocator, bestRequest.offset, size);
                        VMA_ASSERT(res == VK_SUCCESS && "Couldn't map block memory to write magic value.");
                    }
                    return VK_SUCCESS;
                }
                // else: Some allocations must have been touched while we are here. Next try.
            }
            else
            {
                // Could not find place in any of the blocks - break outer loop.
                break;
            }
        } /* for(uint32_t tryIndex = 0; tryIndex < VMA_ALLOCATION_TRY_COUNT; ++tryIndex) */
        /* Maximum number of tries exceeded - a very unlikely situation. */
        if(tryIndex == VMA_ALLOCATION_TRY_COUNT)
        {
            return VK_ERROR_TOO_MANY_OBJECTS;
        }
    }

    return VK_ERROR_OUT_OF_DEVICE_MEMORY;
}
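When AllocatePage has to create a new block, it does not jump straight to the preferred block size: starting from `m_PreferredBlockSize` it halves the size up to `NEW_BLOCK_SIZE_SHIFT_MAX` (3) times, as long as the smaller size still exceeds every existing block and leaves room for at least two allocations of the requested size. That yields the 1/8, 1/4, 1/2 progression for the first blocks of a heap. A self-contained sketch of just that heuristic (the function name `PickNewBlockSize` is illustrative):

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the "1/8, 1/4, 1/2" block-size heuristic from AllocatePage above.
static uint64_t PickNewBlockSize(uint64_t preferredBlockSize,
                                 uint64_t maxExistingBlockSize,
                                 uint64_t allocSize)
{
    uint64_t newBlockSize = preferredBlockSize;
    const uint32_t NEW_BLOCK_SIZE_SHIFT_MAX = 3;
    for(uint32_t i = 0; i < NEW_BLOCK_SIZE_SHIFT_MAX; ++i)
    {
        const uint64_t smaller = newBlockSize / 2;
        // Halve only while the smaller block would still be bigger than every
        // existing block AND could hold at least two allocations of this size.
        if(smaller > maxExistingBlockSize && smaller >= allocSize * 2)
            newBlockSize = smaller;
        else
            break;
    }
    return newBlockSize;
}
```

So the first small allocation in an empty block vector with a 256 MiB preferred size gets a 32 MiB block; as blocks grow, later blocks are created at 64, 128, and finally 256 MiB.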
void VmaBlockVector::Free(
    const VmaAllocation hAllocation)
{
    VmaDeviceMemoryBlock* pBlockToDelete = VMA_NULL;

    bool budgetExceeded = false;
    {
        const uint32_t heapIndex = m_hAllocator->MemoryTypeIndexToHeapIndex(m_MemoryTypeIndex);
        VmaBudget heapBudget = {};
        m_hAllocator->GetBudget(&heapBudget, heapIndex, 1);
        budgetExceeded = heapBudget.usage >= heapBudget.budget;
    }

    // Scope for lock.
    {
        VmaMutexLockWrite lock(m_Mutex, m_hAllocator->m_UseMutex);

        VmaDeviceMemoryBlock* pBlock = hAllocation->GetBlock();

        if(IsCorruptionDetectionEnabled())
        {
            VkResult res = pBlock->ValidateMagicValueAroundAllocation(m_hAllocator, hAllocation->GetOffset(), hAllocation->GetSize());
            VMA_ASSERT(res == VK_SUCCESS && "Couldn't map block memory to validate magic value.");
        }

        if(hAllocation->IsPersistentMap())
        {
            pBlock->Unmap(m_hAllocator, 1);
        }

        pBlock->m_pMetadata->Free(hAllocation);
        VMA_HEAVY_ASSERT(pBlock->Validate());

        VMA_DEBUG_LOG("  Freed from MemoryTypeIndex=%u", m_MemoryTypeIndex);

        const bool canDeleteBlock = m_Blocks.size() > m_MinBlockCount;
        // pBlock became empty after this deallocation.
        if(pBlock->m_pMetadata->IsEmpty())
        {
            // Already had an empty block - we don't want two, so delete this one.
            if((m_HasEmptyBlock || budgetExceeded) && canDeleteBlock)
            {
                pBlockToDelete = pBlock;
                Remove(pBlock);
            }
            // else: We now have one empty block - leave it around.
        }
        // pBlock didn't become empty, but we have another empty block - find and free that one.
        // (This is optional, heuristics.)
        else if(m_HasEmptyBlock && canDeleteBlock)
        {
            VmaDeviceMemoryBlock* pLastBlock = m_Blocks.back();
            if(pLastBlock->m_pMetadata->IsEmpty())
            {
                pBlockToDelete = pLastBlock;
                m_Blocks.pop_back();
            }
        }

        UpdateHasEmptyBlock();
        IncrementallySortBlocks();
    }

    // Destruction of a free block. Deferred until this point, outside of the
    // mutex lock, for performance reasons.
    if(pBlockToDelete != VMA_NULL)
    {
        VMA_DEBUG_LOG("    Deleted empty block");
        pBlockToDelete->Destroy(m_hAllocator);
        vma_delete(m_hAllocator, pBlockToDelete);
    }
}
VkDeviceSize VmaBlockVector::CalcMaxBlockSize() const
{
    VkDeviceSize result = 0;
    for(size_t i = m_Blocks.size(); i--; )
    {
        result = VMA_MAX(result, m_Blocks[i]->m_pMetadata->GetSize());
        if(result >= m_PreferredBlockSize)
        {
            break;
        }
    }
    return result;
}

void VmaBlockVector::Remove(VmaDeviceMemoryBlock* pBlock)
{
    for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    {
        if(m_Blocks[blockIndex] == pBlock)
        {
            VmaVectorRemove(m_Blocks, blockIndex);
            return;
        }
    }
    VMA_ASSERT(0);
}

void VmaBlockVector::IncrementallySortBlocks()
{
    if(m_Algorithm != VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT)
    {
        // Bubble sort only until first swap.
        for(size_t i = 1; i < m_Blocks.size(); ++i)
        {
            if(m_Blocks[i - 1]->m_pMetadata->GetSumFreeSize() > m_Blocks[i]->m_pMetadata->GetSumFreeSize())
            {
                VMA_SWAP(m_Blocks[i - 1], m_Blocks[i]);
                return;
            }
        }
    }
}
VkResult VmaBlockVector::AllocateFromBlock(
    VmaDeviceMemoryBlock* pBlock,
    uint32_t currentFrameIndex,
    VkDeviceSize size,
    VkDeviceSize alignment,
    VmaAllocationCreateFlags allocFlags,
    void* pUserData,
    VmaSuballocationType suballocType,
    uint32_t strategy,
    VmaAllocation* pAllocation)
{
    VMA_ASSERT((allocFlags & VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT) == 0);
    const bool isUpperAddress = (allocFlags & VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT) != 0;
    const bool mapped = (allocFlags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0;
    const bool isUserDataString = (allocFlags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0;

    VmaAllocationRequest currRequest = {};
    if(pBlock->m_pMetadata->CreateAllocationRequest(
        currentFrameIndex,
        m_FrameInUseCount,
        m_BufferImageGranularity,
        size,
        alignment,
        isUpperAddress,
        suballocType,
        false, // canMakeOtherLost
        strategy,
        &currRequest))
    {
        // Allocate from pBlock.
        VMA_ASSERT(currRequest.itemsToMakeLostCount == 0);

        if(mapped)
        {
            VkResult res = pBlock->Map(m_hAllocator, 1, VMA_NULL);
            if(res != VK_SUCCESS)
            {
                return res;
            }
        }

        *pAllocation = m_hAllocator->m_AllocationObjectAllocator.Allocate(currentFrameIndex, isUserDataString);
        pBlock->m_pMetadata->Alloc(currRequest, suballocType, size, *pAllocation);
        UpdateHasEmptyBlock();
        (*pAllocation)->InitBlockAllocation(
            pBlock,
            currRequest.offset,
            alignment,
            size,
            m_MemoryTypeIndex,
            suballocType,
            mapped,
            (allocFlags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0);
        VMA_HEAVY_ASSERT(pBlock->Validate());
        (*pAllocation)->SetUserData(m_hAllocator, pUserData);
        m_hAllocator->m_Budget.AddAllocation(m_hAllocator->MemoryTypeIndexToHeapIndex(m_MemoryTypeIndex), size);
        if(VMA_DEBUG_INITIALIZE_ALLOCATIONS)
        {
            m_hAllocator->FillAllocation(*pAllocation, VMA_ALLOCATION_FILL_PATTERN_CREATED);
        }
        if(IsCorruptionDetectionEnabled())
        {
            VkResult res = pBlock->WriteMagicValueAroundAllocation(m_hAllocator, currRequest.offset, size);
            VMA_ASSERT(res == VK_SUCCESS && "Couldn't map block memory to write magic value.");
        }
        return VK_SUCCESS;
    }
    return VK_ERROR_OUT_OF_DEVICE_MEMORY;
}
VkResult VmaBlockVector::CreateBlock(VkDeviceSize blockSize, size_t* pNewBlockIndex)
{
    VkMemoryAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
    allocInfo.pNext = m_pMemoryAllocateNext;
    allocInfo.memoryTypeIndex = m_MemoryTypeIndex;
    allocInfo.allocationSize = blockSize;

#if VMA_BUFFER_DEVICE_ADDRESS
    // Every standalone block can potentially contain a buffer with VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT - always enable the feature.
    VkMemoryAllocateFlagsInfoKHR allocFlagsInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_FLAGS_INFO_KHR };
    if(m_hAllocator->m_UseKhrBufferDeviceAddress)
    {
        allocFlagsInfo.flags = VK_MEMORY_ALLOCATE_DEVICE_ADDRESS_BIT_KHR;
        VmaPnextChainPushFront(&allocInfo, &allocFlagsInfo);
    }
#endif // #if VMA_BUFFER_DEVICE_ADDRESS

#if VMA_MEMORY_PRIORITY
    VkMemoryPriorityAllocateInfoEXT priorityInfo = { VK_STRUCTURE_TYPE_MEMORY_PRIORITY_ALLOCATE_INFO_EXT };
    if(m_hAllocator->m_UseExtMemoryPriority)
    {
        priorityInfo.priority = m_Priority;
        VmaPnextChainPushFront(&allocInfo, &priorityInfo);
    }
#endif // #if VMA_MEMORY_PRIORITY

#if VMA_EXTERNAL_MEMORY
    // Attach VkExportMemoryAllocateInfoKHR if necessary.
    VkExportMemoryAllocateInfoKHR exportMemoryAllocInfo = { VK_STRUCTURE_TYPE_EXPORT_MEMORY_ALLOCATE_INFO_KHR };
    exportMemoryAllocInfo.handleTypes = m_hAllocator->GetExternalMemoryHandleTypeFlags(m_MemoryTypeIndex);
    if(exportMemoryAllocInfo.handleTypes != 0)
    {
        VmaPnextChainPushFront(&allocInfo, &exportMemoryAllocInfo);
    }
#endif // #if VMA_EXTERNAL_MEMORY

    VkDeviceMemory mem = VK_NULL_HANDLE;
    VkResult res = m_hAllocator->AllocateVulkanMemory(&allocInfo, &mem);
    if(res < 0)
    {
        return res;
    }

    // New VkDeviceMemory successfully created.

    // Create new block object for it.
    VmaDeviceMemoryBlock* const pBlock = vma_new(m_hAllocator, VmaDeviceMemoryBlock)(m_hAllocator);
    pBlock->Init(
        m_hAllocator,
        m_hParentPool,
        m_MemoryTypeIndex,
        mem,
        allocInfo.allocationSize,
        m_NextBlockId++,
        m_Algorithm);

    m_Blocks.push_back(pBlock);
    if(pNewBlockIndex != VMA_NULL)
    {
        *pNewBlockIndex = m_Blocks.size() - 1;
    }

    return VK_SUCCESS;
}
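CreateBlock attaches each optional extension struct (allocate flags, memory priority, export info) to the `VkMemoryAllocateInfo` by pushing it onto the front of the pNext chain: the new struct inherits the head's current `pNext`, so any number of extensions can be chained in any order. A minimal model of that push-front operation, with `ChainNode`/`PushFront` as illustrative names for the Vulkan-style `VkBaseOutStructure` pattern that `VmaPnextChainPushFront` follows:

```cpp
#include <cassert>

// Minimal model of a Vulkan-style extension struct chain.
struct ChainNode
{
    int type;                  // stands in for VkStructureType sType
    ChainNode* pNext = nullptr;
};

// Insert newStruct directly after mainStruct, preserving anything that
// was already chained - the same shape as VmaPnextChainPushFront.
static void PushFront(ChainNode* mainStruct, ChainNode* newStruct)
{
    newStruct->pNext = mainStruct->pNext;
    mainStruct->pNext = newStruct;
}
```

Pushing at the front is O(1) and never needs to walk the chain, which is why each `#if`-guarded extension above can attach itself independently.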
void VmaBlockVector::ApplyDefragmentationMovesCpu(
    class VmaBlockVectorDefragmentationContext* pDefragCtx,
    const VmaVector< VmaDefragmentationMove, VmaStlAllocator<VmaDefragmentationMove> >& moves)
{
    const size_t blockCount = m_Blocks.size();
    const bool isNonCoherent = m_hAllocator->IsMemoryTypeNonCoherent(m_MemoryTypeIndex);

    enum BLOCK_FLAG
    {
        BLOCK_FLAG_USED = 0x00000001,
        BLOCK_FLAG_MAPPED_FOR_DEFRAGMENTATION = 0x00000002,
    };

    struct BlockInfo
    {
        uint32_t flags;
        void* pMappedData;
    };
    VmaVector< BlockInfo, VmaStlAllocator<BlockInfo> >
        blockInfo(blockCount, BlockInfo(), VmaStlAllocator<BlockInfo>(m_hAllocator->GetAllocationCallbacks()));
    memset(blockInfo.data(), 0, blockCount * sizeof(BlockInfo));

    // Go over all moves. Mark blocks that are used with BLOCK_FLAG_USED.
    const size_t moveCount = moves.size();
    for(size_t moveIndex = 0; moveIndex < moveCount; ++moveIndex)
    {
        const VmaDefragmentationMove& move = moves[moveIndex];
        blockInfo[move.srcBlockIndex].flags |= BLOCK_FLAG_USED;
        blockInfo[move.dstBlockIndex].flags |= BLOCK_FLAG_USED;
    }

    VMA_ASSERT(pDefragCtx->res == VK_SUCCESS);

    // Go over all blocks. Get mapped pointer or map if necessary.
    for(size_t blockIndex = 0; pDefragCtx->res == VK_SUCCESS && blockIndex < blockCount; ++blockIndex)
    {
        BlockInfo& currBlockInfo = blockInfo[blockIndex];
        VmaDeviceMemoryBlock* pBlock = m_Blocks[blockIndex];
        if((currBlockInfo.flags & BLOCK_FLAG_USED) != 0)
        {
            currBlockInfo.pMappedData = pBlock->GetMappedData();
            // It is not originally mapped - map it.
            if(currBlockInfo.pMappedData == VMA_NULL)
            {
                pDefragCtx->res = pBlock->Map(m_hAllocator, 1, &currBlockInfo.pMappedData);
                if(pDefragCtx->res == VK_SUCCESS)
                {
                    currBlockInfo.flags |= BLOCK_FLAG_MAPPED_FOR_DEFRAGMENTATION;
                }
            }
        }
    }

    // Go over all moves. Do actual data transfer.
    if(pDefragCtx->res == VK_SUCCESS)
    {
        const VkDeviceSize nonCoherentAtomSize = m_hAllocator->m_PhysicalDeviceProperties.limits.nonCoherentAtomSize;
        VkMappedMemoryRange memRange = { VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE };

        for(size_t moveIndex = 0; moveIndex < moveCount; ++moveIndex)
        {
            const VmaDefragmentationMove& move = moves[moveIndex];

            const BlockInfo& srcBlockInfo = blockInfo[move.srcBlockIndex];
            const BlockInfo& dstBlockInfo = blockInfo[move.dstBlockIndex];

            VMA_ASSERT(srcBlockInfo.pMappedData && dstBlockInfo.pMappedData);

            // Invalidate source.
            if(isNonCoherent)
            {
                VmaDeviceMemoryBlock* const pSrcBlock = m_Blocks[move.srcBlockIndex];
                memRange.memory = pSrcBlock->GetDeviceMemory();
                memRange.offset = VmaAlignDown(move.srcOffset, nonCoherentAtomSize);
                memRange.size = VMA_MIN(
                    VmaAlignUp(move.size + (move.srcOffset - memRange.offset), nonCoherentAtomSize),
                    pSrcBlock->m_pMetadata->GetSize() - memRange.offset);
                (*m_hAllocator->GetVulkanFunctions().vkInvalidateMappedMemoryRanges)(m_hAllocator->m_hDevice, 1, &memRange);
            }

            // THE PLACE WHERE ACTUAL DATA COPY HAPPENS.
            memmove(
                reinterpret_cast<char*>(dstBlockInfo.pMappedData) + move.dstOffset,
                reinterpret_cast<char*>(srcBlockInfo.pMappedData) + move.srcOffset,
                static_cast<size_t>(move.size));

            if(IsCorruptionDetectionEnabled())
            {
                VmaWriteMagicValue(dstBlockInfo.pMappedData, move.dstOffset - VMA_DEBUG_MARGIN);
                VmaWriteMagicValue(dstBlockInfo.pMappedData, move.dstOffset + move.size);
            }

            // Flush destination.
            if(isNonCoherent)
            {
                VmaDeviceMemoryBlock* const pDstBlock = m_Blocks[move.dstBlockIndex];
                memRange.memory = pDstBlock->GetDeviceMemory();
                memRange.offset = VmaAlignDown(move.dstOffset, nonCoherentAtomSize);
                memRange.size = VMA_MIN(
                    VmaAlignUp(move.size + (move.dstOffset - memRange.offset), nonCoherentAtomSize),
                    pDstBlock->m_pMetadata->GetSize() - memRange.offset);
                (*m_hAllocator->GetVulkanFunctions().vkFlushMappedMemoryRanges)(m_hAllocator->m_hDevice, 1, &memRange);
            }
        }
    }

    // Go over all blocks in reverse order. Unmap those that were mapped just for defragmentation.
    // Regardless of pDefragCtx->res.
    for(size_t blockIndex = blockCount; blockIndex--; )
    {
        const BlockInfo& currBlockInfo = blockInfo[blockIndex];
        if((currBlockInfo.flags & BLOCK_FLAG_MAPPED_FOR_DEFRAGMENTATION) != 0)
        {
            VmaDeviceMemoryBlock* pBlock = m_Blocks[blockIndex];
            pBlock->Unmap(m_hAllocator, 1);
        }
    }
}
void VmaBlockVector::ApplyDefragmentationMovesGpu(
    class VmaBlockVectorDefragmentationContext* pDefragCtx,
    VmaVector< VmaDefragmentationMove, VmaStlAllocator<VmaDefragmentationMove> >& moves,
    VkCommandBuffer commandBuffer)
{
    const size_t blockCount = m_Blocks.size();

    pDefragCtx->blockContexts.resize(blockCount);
    memset(pDefragCtx->blockContexts.data(), 0, blockCount * sizeof(VmaBlockDefragmentationContext));

    // Go over all moves. Mark blocks that are used with BLOCK_FLAG_USED.
    const size_t moveCount = moves.size();
    for(size_t moveIndex = 0; moveIndex < moveCount; ++moveIndex)
    {
        const VmaDefragmentationMove& move = moves[moveIndex];
        pDefragCtx->blockContexts[move.srcBlockIndex].flags |= VmaBlockDefragmentationContext::BLOCK_FLAG_USED;
        pDefragCtx->blockContexts[move.dstBlockIndex].flags |= VmaBlockDefragmentationContext::BLOCK_FLAG_USED;
    }

    VMA_ASSERT(pDefragCtx->res == VK_SUCCESS);

    // Go over all blocks. Create and bind buffer for whole block if necessary.
    {
        VkBufferCreateInfo bufCreateInfo;
        VmaFillGpuDefragmentationBufferCreateInfo(bufCreateInfo);

        for(size_t blockIndex = 0; pDefragCtx->res == VK_SUCCESS && blockIndex < blockCount; ++blockIndex)
        {
            VmaBlockDefragmentationContext& currBlockCtx = pDefragCtx->blockContexts[blockIndex];
            VmaDeviceMemoryBlock* pBlock = m_Blocks[blockIndex];
            if((currBlockCtx.flags & VmaBlockDefragmentationContext::BLOCK_FLAG_USED) != 0)
            {
                bufCreateInfo.size = pBlock->m_pMetadata->GetSize();
                pDefragCtx->res = (*m_hAllocator->GetVulkanFunctions().vkCreateBuffer)(
                    m_hAllocator->m_hDevice, &bufCreateInfo, m_hAllocator->GetAllocationCallbacks(), &currBlockCtx.hBuffer);
                if(pDefragCtx->res == VK_SUCCESS)
                {
                    pDefragCtx->res = (*m_hAllocator->GetVulkanFunctions().vkBindBufferMemory)(
                        m_hAllocator->m_hDevice, currBlockCtx.hBuffer, pBlock->GetDeviceMemory(), 0);
                }
            }
        }
    }

    // Go over all moves. Post data transfer commands to command buffer.
    if(pDefragCtx->res == VK_SUCCESS)
    {
        for(size_t moveIndex = 0; moveIndex < moveCount; ++moveIndex)
        {
            const VmaDefragmentationMove& move = moves[moveIndex];

            const VmaBlockDefragmentationContext& srcBlockCtx = pDefragCtx->blockContexts[move.srcBlockIndex];
            const VmaBlockDefragmentationContext& dstBlockCtx = pDefragCtx->blockContexts[move.dstBlockIndex];

            VMA_ASSERT(srcBlockCtx.hBuffer && dstBlockCtx.hBuffer);

            VkBufferCopy region = {
                move.srcOffset,
                move.dstOffset,
                move.size };
            (*m_hAllocator->GetVulkanFunctions().vkCmdCopyBuffer)(
                commandBuffer, srcBlockCtx.hBuffer, dstBlockCtx.hBuffer, 1, &region);
        }
    }

    // Save buffers to defrag context for later destruction.
    if(pDefragCtx->res == VK_SUCCESS && moveCount > 0)
    {
        pDefragCtx->res = VK_NOT_READY;
    }
}

void VmaBlockVector::FreeEmptyBlocks(VmaDefragmentationStats* pDefragmentationStats)
{
    for(size_t blockIndex = m_Blocks.size(); blockIndex--; )
    {
        VmaDeviceMemoryBlock* pBlock = m_Blocks[blockIndex];
        if(pBlock->m_pMetadata->IsEmpty())
        {
            if(m_Blocks.size() > m_MinBlockCount)
            {
                if(pDefragmentationStats != VMA_NULL)
                {
                    ++pDefragmentationStats->deviceMemoryBlocksFreed;
                    pDefragmentationStats->bytesFreed += pBlock->m_pMetadata->GetSize();
                }

                VmaVectorRemove(m_Blocks, blockIndex);
                pBlock->Destroy(m_hAllocator);
                vma_delete(m_hAllocator, pBlock);
            }
            else
            {
                break;
            }
        }
    }
    UpdateHasEmptyBlock();
}

void VmaBlockVector::UpdateHasEmptyBlock()
{
    m_HasEmptyBlock = false;
    for(size_t index = 0, count = m_Blocks.size(); index < count; ++index)
    {
        VmaDeviceMemoryBlock* const pBlock = m_Blocks[index];
        if(pBlock->m_pMetadata->IsEmpty())
        {
            m_HasEmptyBlock = true;
            break;
        }
    }
}

#if VMA_STATS_STRING_ENABLED

void VmaBlockVector::PrintDetailedMap(class VmaJsonWriter& json)
{
    VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);

    json.BeginObject();

    if(IsCustomPool())
    {
        const char* poolName = m_hParentPool->GetName();
        if(poolName != VMA_NULL && poolName[0] != '\0')
        {
            json.WriteString("Name");
            json.WriteString(poolName);
        }

        json.WriteString("MemoryTypeIndex");
        json.WriteNumber(m_MemoryTypeIndex);

        json.WriteString("BlockSize");
        json.WriteNumber(m_PreferredBlockSize);

        json.WriteString("BlockCount");
        json.BeginObject(true);
        if(m_MinBlockCount > 0)
        {
            json.WriteString("Min");
            json.WriteNumber((uint64_t)m_MinBlockCount);
        }
        if(m_MaxBlockCount < SIZE_MAX)
        {
            json.WriteString("Max");
            json.WriteNumber((uint64_t)m_MaxBlockCount);
        }
        json.WriteString("Cur");
        json.WriteNumber((uint64_t)m_Blocks.size());
        json.EndObject();

        if(m_FrameInUseCount > 0)
        {
            json.WriteString("FrameInUseCount");
            json.WriteNumber(m_FrameInUseCount);
        }

        if(m_Algorithm != 0)
        {
            json.WriteString("Algorithm");
            json.WriteString(VmaAlgorithmToStr(m_Algorithm));
        }
    }
    else
    {
        json.WriteString("PreferredBlockSize");
        json.WriteNumber(m_PreferredBlockSize);
    }

    json.WriteString("Blocks");
    json.BeginObject();
    for(size_t i = 0; i < m_Blocks.size(); ++i)
    {
        json.BeginString();
        json.ContinueString(m_Blocks[i]->GetId());
        json.EndString();

        m_Blocks[i]->m_pMetadata->PrintDetailedMap(json);
    }
    json.EndObject();

    json.EndObject();
}

#endif // #if VMA_STATS_STRING_ENABLED

void VmaBlockVector::Defragment(
    class VmaBlockVectorDefragmentationContext* pCtx,
    VmaDefragmentationStats* pStats, VmaDefragmentationFlags flags,
    VkDeviceSize& maxCpuBytesToMove, uint32_t& maxCpuAllocationsToMove,
    VkDeviceSize& maxGpuBytesToMove, uint32_t& maxGpuAllocationsToMove,
    VkCommandBuffer commandBuffer)
{
    pCtx->res = VK_SUCCESS;

    const VkMemoryPropertyFlags memPropFlags =
        m_hAllocator->m_MemProps.memoryTypes[m_MemoryTypeIndex].propertyFlags;
    const bool isHostVisible = (memPropFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0;

    const bool canDefragmentOnCpu = maxCpuBytesToMove > 0 && maxCpuAllocationsToMove > 0 &&
        isHostVisible;
    const bool canDefragmentOnGpu = maxGpuBytesToMove > 0 && maxGpuAllocationsToMove > 0 &&
        !IsCorruptionDetectionEnabled() &&
        ((1u << m_MemoryTypeIndex) & m_hAllocator->GetGpuDefragmentationMemoryTypeBits()) != 0;

    // There are options to defragment this memory type.
    if(canDefragmentOnCpu || canDefragmentOnGpu)
    {
        bool defragmentOnGpu;
        // There is only one option to defragment this memory type.
        if(canDefragmentOnGpu != canDefragmentOnCpu)
        {
            defragmentOnGpu = canDefragmentOnGpu;
        }
        // Both options are available: Heuristics to choose the best one.
        else
        {
            defragmentOnGpu = (memPropFlags & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT) != 0 ||
                m_hAllocator->IsIntegratedGpu();
        }

        bool overlappingMoveSupported = !defragmentOnGpu;

        if(m_hAllocator->m_UseMutex)
        {
            if(flags & VMA_DEFRAGMENTATION_FLAG_INCREMENTAL)
            {
                if(!m_Mutex.TryLockWrite())
                {
                    pCtx->res = VK_ERROR_INITIALIZATION_FAILED;
                    return;
                }
            }
            else
            {
                m_Mutex.LockWrite();
                pCtx->mutexLocked = true;
            }
        }

        pCtx->Begin(overlappingMoveSupported, flags);

        // Defragment.

        const VkDeviceSize maxBytesToMove = defragmentOnGpu ? maxGpuBytesToMove : maxCpuBytesToMove;
        const uint32_t maxAllocationsToMove = defragmentOnGpu ? maxGpuAllocationsToMove : maxCpuAllocationsToMove;
        pCtx->res = pCtx->GetAlgorithm()->Defragment(pCtx->defragmentationMoves, maxBytesToMove, maxAllocationsToMove, flags);

        // Accumulate statistics.
        if(pStats != VMA_NULL)
        {
            const VkDeviceSize bytesMoved = pCtx->GetAlgorithm()->GetBytesMoved();
            const uint32_t allocationsMoved = pCtx->GetAlgorithm()->GetAllocationsMoved();
            pStats->bytesMoved += bytesMoved;
            pStats->allocationsMoved += allocationsMoved;
            VMA_ASSERT(bytesMoved <= maxBytesToMove);
            VMA_ASSERT(allocationsMoved <= maxAllocationsToMove);
            if(defragmentOnGpu)
            {
                maxGpuBytesToMove -= bytesMoved;
                maxGpuAllocationsToMove -= allocationsMoved;
            }
            else
            {
                maxCpuBytesToMove -= bytesMoved;
                maxCpuAllocationsToMove -= allocationsMoved;
            }
        }

        if(flags & VMA_DEFRAGMENTATION_FLAG_INCREMENTAL)
        {
            if(m_hAllocator->m_UseMutex)
                m_Mutex.UnlockWrite();

            if(pCtx->res >= VK_SUCCESS && !pCtx->defragmentationMoves.empty())
                pCtx->res = VK_NOT_READY;

            return;
        }

        if(pCtx->res >= VK_SUCCESS)
        {
            if(defragmentOnGpu)
            {
                ApplyDefragmentationMovesGpu(pCtx, pCtx->defragmentationMoves, commandBuffer);
            }
            else
            {
                ApplyDefragmentationMovesCpu(pCtx, pCtx->defragmentationMoves);
            }
        }
    }
}

void VmaBlockVector::DefragmentationEnd(
    class VmaBlockVectorDefragmentationContext* pCtx,
    uint32_t flags,
    VmaDefragmentationStats* pStats)
{
    if(flags & VMA_DEFRAGMENTATION_FLAG_INCREMENTAL && m_hAllocator->m_UseMutex)
    {
        VMA_ASSERT(pCtx->mutexLocked == false);

        // Incremental defragmentation doesn't hold the lock for the whole duration,
        // so we don't actually have it here. Since we mutate state below, take it now.
        m_Mutex.LockWrite();
        pCtx->mutexLocked = true;
    }

    // If the mutex isn't locked we didn't do any work and there is nothing to delete.
    if(pCtx->mutexLocked || !m_hAllocator->m_UseMutex)
    {
        // Destroy buffers.
        for(size_t blockIndex = pCtx->blockContexts.size(); blockIndex--;)
        {
            VmaBlockDefragmentationContext &blockCtx = pCtx->blockContexts[blockIndex];
            if(blockCtx.hBuffer)
            {
                (*m_hAllocator->GetVulkanFunctions().vkDestroyBuffer)(m_hAllocator->m_hDevice, blockCtx.hBuffer, m_hAllocator->GetAllocationCallbacks());
            }
        }

        if(pCtx->res >= VK_SUCCESS)
        {
            FreeEmptyBlocks(pStats);
        }
    }

    if(pCtx->mutexLocked)
    {
        VMA_ASSERT(m_hAllocator->m_UseMutex);
        m_Mutex.UnlockWrite();
    }
}

uint32_t VmaBlockVector::ProcessDefragmentations(
    class VmaBlockVectorDefragmentationContext *pCtx,
    VmaDefragmentationPassMoveInfo* pMove, uint32_t maxMoves)
{
    VmaMutexLockWrite lock(m_Mutex, m_hAllocator->m_UseMutex);

    const uint32_t moveCount = VMA_MIN(uint32_t(pCtx->defragmentationMoves.size()) - pCtx->defragmentationMovesProcessed, maxMoves);

    for(uint32_t i = 0; i < moveCount; ++ i)
    {
        VmaDefragmentationMove& move = pCtx->defragmentationMoves[pCtx->defragmentationMovesProcessed + i];

        pMove->allocation = move.hAllocation;
        pMove->memory = move.pDstBlock->GetDeviceMemory();
        pMove->offset = move.dstOffset;

        ++ pMove;
    }

    pCtx->defragmentationMovesProcessed += moveCount;

    return moveCount;
}

void VmaBlockVector::CommitDefragmentations(
    class VmaBlockVectorDefragmentationContext *pCtx,
    VmaDefragmentationStats* pStats)
{
    VmaMutexLockWrite lock(m_Mutex, m_hAllocator->m_UseMutex);

    for(uint32_t i = pCtx->defragmentationMovesCommitted; i < pCtx->defragmentationMovesProcessed; ++ i)
    {
        const VmaDefragmentationMove &move = pCtx->defragmentationMoves[i];

        move.pSrcBlock->m_pMetadata->FreeAtOffset(move.srcOffset);
        move.hAllocation->ChangeBlockAllocation(m_hAllocator, move.pDstBlock, move.dstOffset);
    }

    pCtx->defragmentationMovesCommitted = pCtx->defragmentationMovesProcessed;
    FreeEmptyBlocks(pStats);
}

size_t VmaBlockVector::CalcAllocationCount() const
{
    size_t result = 0;
    for(size_t i = 0; i < m_Blocks.size(); ++i)
    {
        result += m_Blocks[i]->m_pMetadata->GetAllocationCount();
    }
    return result;
}

bool VmaBlockVector::IsBufferImageGranularityConflictPossible() const
{
    if(m_BufferImageGranularity == 1)
    {
        return false;
    }
    VmaSuballocationType lastSuballocType = VMA_SUBALLOCATION_TYPE_FREE;
    for(size_t i = 0, count = m_Blocks.size(); i < count; ++i)
    {
        VmaDeviceMemoryBlock* const pBlock = m_Blocks[i];
        VMA_ASSERT(m_Algorithm == 0);
        VmaBlockMetadata_Generic* const pMetadata = (VmaBlockMetadata_Generic*)pBlock->m_pMetadata;
        if(pMetadata->IsBufferImageGranularityConflictPossible(m_BufferImageGranularity, lastSuballocType))
        {
            return true;
        }
    }
    return false;
}

void VmaBlockVector::MakePoolAllocationsLost(
    uint32_t currentFrameIndex,
    size_t* pLostAllocationCount)
{
    VmaMutexLockWrite lock(m_Mutex, m_hAllocator->m_UseMutex);
    size_t lostAllocationCount = 0;
    for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    {
        VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
        VMA_ASSERT(pBlock);
        lostAllocationCount += pBlock->m_pMetadata->MakeAllocationsLost(currentFrameIndex, m_FrameInUseCount);
    }
    if(pLostAllocationCount != VMA_NULL)
    {
        *pLostAllocationCount = lostAllocationCount;
    }
}

VkResult VmaBlockVector::CheckCorruption()
{
    if(!IsCorruptionDetectionEnabled())
    {
        return VK_ERROR_FEATURE_NOT_PRESENT;
    }

    VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);
    for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    {
        VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
        VMA_ASSERT(pBlock);
        VkResult res = pBlock->CheckCorruption(m_hAllocator);
        if(res != VK_SUCCESS)
        {
            return res;
        }
    }
    return VK_SUCCESS;
}

void VmaBlockVector::AddStats(VmaStats* pStats)
{
    const uint32_t memTypeIndex = m_MemoryTypeIndex;
    const uint32_t memHeapIndex = m_hAllocator->MemoryTypeIndexToHeapIndex(memTypeIndex);

    VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);

    for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    {
        const VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
        VMA_ASSERT(pBlock);
        VMA_HEAVY_ASSERT(pBlock->Validate());
        VmaStatInfo allocationStatInfo;
        pBlock->m_pMetadata->CalcAllocationStatInfo(allocationStatInfo);
        VmaAddStatInfo(pStats->total, allocationStatInfo);
        VmaAddStatInfo(pStats->memoryType[memTypeIndex], allocationStatInfo);
        VmaAddStatInfo(pStats->memoryHeap[memHeapIndex], allocationStatInfo);
    }
}

VmaDefragmentationAlgorithm_Generic::VmaDefragmentationAlgorithm_Generic(
    VmaAllocator hAllocator,
    VmaBlockVector* pBlockVector,
    uint32_t currentFrameIndex,
    bool overlappingMoveSupported) :
    VmaDefragmentationAlgorithm(hAllocator, pBlockVector, currentFrameIndex),
    m_AllocationCount(0),
    m_AllAllocations(false),
    m_BytesMoved(0),
    m_AllocationsMoved(0),
    m_Blocks(VmaStlAllocator<BlockInfo*>(hAllocator->GetAllocationCallbacks()))
{
    // Create block info for each block.
    const size_t blockCount = m_pBlockVector->m_Blocks.size();
    for(size_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
    {
        BlockInfo* pBlockInfo = vma_new(m_hAllocator, BlockInfo)(m_hAllocator->GetAllocationCallbacks());
        pBlockInfo->m_OriginalBlockIndex = blockIndex;
        pBlockInfo->m_pBlock = m_pBlockVector->m_Blocks[blockIndex];
        m_Blocks.push_back(pBlockInfo);
    }

    // Sort them by m_pBlock pointer value.
    VMA_SORT(m_Blocks.begin(), m_Blocks.end(), BlockPointerLess());
}

VmaDefragmentationAlgorithm_Generic::~VmaDefragmentationAlgorithm_Generic()
{
    for(size_t i = m_Blocks.size(); i--; )
    {
        vma_delete(m_hAllocator, m_Blocks[i]);
    }
}

void VmaDefragmentationAlgorithm_Generic::AddAllocation(VmaAllocation hAlloc, VkBool32* pChanged)
{
    // Now as we are inside VmaBlockVector::m_Mutex, we can make final check if this allocation was not lost.
    if(hAlloc->GetLastUseFrameIndex() != VMA_FRAME_INDEX_LOST)
    {
        VmaDeviceMemoryBlock* pBlock = hAlloc->GetBlock();
        BlockInfoVector::iterator it = VmaBinaryFindFirstNotLess(m_Blocks.begin(), m_Blocks.end(), pBlock, BlockPointerLess());
        if(it != m_Blocks.end() && (*it)->m_pBlock == pBlock)
        {
            AllocationInfo allocInfo = AllocationInfo(hAlloc, pChanged);
            (*it)->m_Allocations.push_back(allocInfo);
        }
        else
        {
            VMA_ASSERT(0);
        }

        ++m_AllocationCount;
    }
}

VkResult VmaDefragmentationAlgorithm_Generic::DefragmentRound(
    VmaVector< VmaDefragmentationMove, VmaStlAllocator<VmaDefragmentationMove> >& moves,
    VkDeviceSize maxBytesToMove,
    uint32_t maxAllocationsToMove,
    bool freeOldAllocations)
{
    if(m_Blocks.empty())
    {
        return VK_SUCCESS;
    }

    // This is a choice based on research.
    // Option 1:
    size_t srcBlockMinIndex = 0;
    // Option 2:
    //size_t srcBlockMinIndex = CalcBlocksWithNonMovableCount();

    size_t srcBlockIndex = m_Blocks.size() - 1;
    size_t srcAllocIndex = SIZE_MAX;
    for(;;)
    {
        // 1. Find next allocation to move.
        // 1.1. Start from last to first m_Blocks - they are sorted from most "destination" to most "source".
        // 1.2. Then start from last to first m_Allocations.
        while(srcAllocIndex >= m_Blocks[srcBlockIndex]->m_Allocations.size())
        {
            if(m_Blocks[srcBlockIndex]->m_Allocations.empty())
            {
                // Finished: no more allocations to process.
                if(srcBlockIndex == srcBlockMinIndex)
                {
                    return VK_SUCCESS;
                }
                else
                {
                    --srcBlockIndex;
                    srcAllocIndex = SIZE_MAX;
                }
            }
            else
            {
                srcAllocIndex = m_Blocks[srcBlockIndex]->m_Allocations.size() - 1;
            }
        }

        BlockInfo* pSrcBlockInfo = m_Blocks[srcBlockIndex];
        AllocationInfo& allocInfo = pSrcBlockInfo->m_Allocations[srcAllocIndex];

        const VkDeviceSize size = allocInfo.m_hAllocation->GetSize();
        const VkDeviceSize srcOffset = allocInfo.m_hAllocation->GetOffset();
        const VkDeviceSize alignment = allocInfo.m_hAllocation->GetAlignment();
        const VmaSuballocationType suballocType = allocInfo.m_hAllocation->GetSuballocationType();

        // 2. Try to find new place for this allocation in preceding or current block.
        for(size_t dstBlockIndex = 0; dstBlockIndex <= srcBlockIndex; ++dstBlockIndex)
        {
            BlockInfo* pDstBlockInfo = m_Blocks[dstBlockIndex];
            VmaAllocationRequest dstAllocRequest;
            if(pDstBlockInfo->m_pBlock->m_pMetadata->CreateAllocationRequest(
                m_CurrentFrameIndex,
                m_pBlockVector->GetFrameInUseCount(),
                m_pBlockVector->GetBufferImageGranularity(),
                size,
                alignment,
                false, // upperAddress
                suballocType,
                false, // canMakeOtherLost
                VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT, // strategy
                &dstAllocRequest) &&
            MoveMakesSense(
                dstBlockIndex, dstAllocRequest.offset, srcBlockIndex, srcOffset))
            {
                VMA_ASSERT(dstAllocRequest.itemsToMakeLostCount == 0);

                // Reached limit on number of allocations or bytes to move.
                if((m_AllocationsMoved + 1 > maxAllocationsToMove) ||
                    (m_BytesMoved + size > maxBytesToMove))
                {
                    return VK_SUCCESS;
                }

                VmaDefragmentationMove move = {};
                move.srcBlockIndex = pSrcBlockInfo->m_OriginalBlockIndex;
                move.dstBlockIndex = pDstBlockInfo->m_OriginalBlockIndex;
                move.srcOffset = srcOffset;
                move.dstOffset = dstAllocRequest.offset;
                move.size = size;
                move.hAllocation = allocInfo.m_hAllocation;
                move.pSrcBlock = pSrcBlockInfo->m_pBlock;
                move.pDstBlock = pDstBlockInfo->m_pBlock;

                moves.push_back(move);

                pDstBlockInfo->m_pBlock->m_pMetadata->Alloc(
                    dstAllocRequest,
                    suballocType,
                    size,
                    allocInfo.m_hAllocation);

                if(freeOldAllocations)
                {
                    pSrcBlockInfo->m_pBlock->m_pMetadata->FreeAtOffset(srcOffset);
                    allocInfo.m_hAllocation->ChangeBlockAllocation(m_hAllocator, pDstBlockInfo->m_pBlock, dstAllocRequest.offset);
                }

                if(allocInfo.m_pChanged != VMA_NULL)
                {
                    *allocInfo.m_pChanged = VK_TRUE;
                }

                ++m_AllocationsMoved;
                m_BytesMoved += size;

                VmaVectorRemove(pSrcBlockInfo->m_Allocations, srcAllocIndex);

                break;
            }
        }

        // If not processed, this allocInfo remains in pBlockInfo->m_Allocations for next round.

        if(srcAllocIndex > 0)
        {
            --srcAllocIndex;
        }
        else
        {
            if(srcBlockIndex > 0)
            {
                --srcBlockIndex;
                srcAllocIndex = SIZE_MAX;
            }
            else
            {
                return VK_SUCCESS;
            }
        }
    }
}

size_t VmaDefragmentationAlgorithm_Generic::CalcBlocksWithNonMovableCount() const
{
    size_t result = 0;
    for(size_t i = 0; i < m_Blocks.size(); ++i)
    {
        if(m_Blocks[i]->m_HasNonMovableAllocations)
        {
            ++result;
        }
    }
    return result;
}

VkResult VmaDefragmentationAlgorithm_Generic::Defragment(
    VmaVector< VmaDefragmentationMove, VmaStlAllocator<VmaDefragmentationMove> >& moves,
    VkDeviceSize maxBytesToMove,
    uint32_t maxAllocationsToMove,
    VmaDefragmentationFlags flags)
{
    if(!m_AllAllocations && m_AllocationCount == 0)
    {
        return VK_SUCCESS;
    }

    const size_t blockCount = m_Blocks.size();
    for(size_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
    {
        BlockInfo* pBlockInfo = m_Blocks[blockIndex];

        if(m_AllAllocations)
        {
            VmaBlockMetadata_Generic* pMetadata = (VmaBlockMetadata_Generic*)pBlockInfo->m_pBlock->m_pMetadata;
            for(VmaSuballocationList::const_iterator it = pMetadata->m_Suballocations.begin();
                it != pMetadata->m_Suballocations.end();
                ++it)
            {
                if(it->type != VMA_SUBALLOCATION_TYPE_FREE)
                {
                    AllocationInfo allocInfo = AllocationInfo(it->hAllocation, VMA_NULL);
                    pBlockInfo->m_Allocations.push_back(allocInfo);
                }
            }
        }

        pBlockInfo->CalcHasNonMovableAllocations();

        // This is a choice based on research.
        pBlockInfo->SortAllocationsByOffsetDescending();
    }

    // Sort m_Blocks this time by the main criterium, from most "destination" to most "source" blocks.
    VMA_SORT(m_Blocks.begin(), m_Blocks.end(), BlockInfoCompareMoveDestination());

    // This is a choice based on research.
    const uint32_t roundCount = 2;

    // Execute defragmentation rounds (the main part).
    VkResult result = VK_SUCCESS;
    for(uint32_t round = 0; (round < roundCount) && (result == VK_SUCCESS); ++round)
    {
        result = DefragmentRound(moves, maxBytesToMove, maxAllocationsToMove, !(flags & VMA_DEFRAGMENTATION_FLAG_INCREMENTAL));
    }

    return result;
}

bool VmaDefragmentationAlgorithm_Generic::MoveMakesSense(
    size_t dstBlockIndex, VkDeviceSize dstOffset,
    size_t srcBlockIndex, VkDeviceSize srcOffset)
{
    if(dstBlockIndex < srcBlockIndex)
    {
        return true;
    }
    if(dstBlockIndex > srcBlockIndex)
    {
        return false;
    }
    if(dstOffset < srcOffset)
    {
        return true;
    }
    return false;
}

VmaDefragmentationAlgorithm_Fast::VmaDefragmentationAlgorithm_Fast(
    VmaAllocator hAllocator,
    VmaBlockVector* pBlockVector,
    uint32_t currentFrameIndex,
    bool overlappingMoveSupported) :
    VmaDefragmentationAlgorithm(hAllocator, pBlockVector, currentFrameIndex),
    m_OverlappingMoveSupported(overlappingMoveSupported),
    m_AllocationCount(0),
    m_AllAllocations(false),
    m_BytesMoved(0),
    m_AllocationsMoved(0),
    m_BlockInfos(VmaStlAllocator<BlockInfo>(hAllocator->GetAllocationCallbacks()))
{
    VMA_ASSERT(VMA_DEBUG_MARGIN == 0);
}

VmaDefragmentationAlgorithm_Fast::~VmaDefragmentationAlgorithm_Fast()
{
}

VkResult VmaDefragmentationAlgorithm_Fast::Defragment(
    VmaVector< VmaDefragmentationMove, VmaStlAllocator<VmaDefragmentationMove> >& moves,
    VkDeviceSize maxBytesToMove,
    uint32_t maxAllocationsToMove,
    VmaDefragmentationFlags flags)
{
    VMA_ASSERT(m_AllAllocations || m_pBlockVector->CalcAllocationCount() == m_AllocationCount);

    const size_t blockCount = m_pBlockVector->GetBlockCount();
    if(blockCount == 0 || maxBytesToMove == 0 || maxAllocationsToMove == 0)
    {
        return VK_SUCCESS;
    }

    PreprocessMetadata();

    // Sort blocks in order from most destination.

    m_BlockInfos.resize(blockCount);
    for(size_t i = 0; i < blockCount; ++i)
    {
        m_BlockInfos[i].origBlockIndex = i;
    }

    VMA_SORT(m_BlockInfos.begin(), m_BlockInfos.end(), [this](const BlockInfo& lhs, const BlockInfo& rhs) -> bool {
        return m_pBlockVector->GetBlock(lhs.origBlockIndex)->m_pMetadata->GetSumFreeSize() <
            m_pBlockVector->GetBlock(rhs.origBlockIndex)->m_pMetadata->GetSumFreeSize();
    });

    // THE MAIN ALGORITHM

    FreeSpaceDatabase freeSpaceDb;

    size_t dstBlockInfoIndex = 0;
    size_t dstOrigBlockIndex = m_BlockInfos[dstBlockInfoIndex].origBlockIndex;
    VmaDeviceMemoryBlock* pDstBlock = m_pBlockVector->GetBlock(dstOrigBlockIndex);
    VmaBlockMetadata_Generic* pDstMetadata = (VmaBlockMetadata_Generic*)pDstBlock->m_pMetadata;
    VkDeviceSize dstBlockSize = pDstMetadata->GetSize();
    VkDeviceSize dstOffset = 0;

    bool end = false;
    for(size_t srcBlockInfoIndex = 0; !end && srcBlockInfoIndex < blockCount; ++srcBlockInfoIndex)
    {
        const size_t srcOrigBlockIndex = m_BlockInfos[srcBlockInfoIndex].origBlockIndex;
        VmaDeviceMemoryBlock* const pSrcBlock = m_pBlockVector->GetBlock(srcOrigBlockIndex);
        VmaBlockMetadata_Generic* const pSrcMetadata = (VmaBlockMetadata_Generic*)pSrcBlock->m_pMetadata;
        for(VmaSuballocationList::iterator srcSuballocIt = pSrcMetadata->m_Suballocations.begin();
            !end && srcSuballocIt != pSrcMetadata->m_Suballocations.end(); )
        {
            VmaAllocation_T* const pAlloc = srcSuballocIt->hAllocation;
            const VkDeviceSize srcAllocAlignment = pAlloc->GetAlignment();
            const VkDeviceSize srcAllocSize = srcSuballocIt->size;
            if(m_AllocationsMoved == maxAllocationsToMove ||
                m_BytesMoved + srcAllocSize > maxBytesToMove)
            {
                end = true;
                break;
            }
            const VkDeviceSize srcAllocOffset = srcSuballocIt->offset;

            VmaDefragmentationMove move = {};
            // Try to place it in one of free spaces from the database.
            size_t freeSpaceInfoIndex;
            VkDeviceSize dstAllocOffset;
            if(freeSpaceDb.Fetch(srcAllocAlignment, srcAllocSize,
                freeSpaceInfoIndex, dstAllocOffset))
            {
                size_t freeSpaceOrigBlockIndex = m_BlockInfos[freeSpaceInfoIndex].origBlockIndex;
                VmaDeviceMemoryBlock* pFreeSpaceBlock = m_pBlockVector->GetBlock(freeSpaceOrigBlockIndex);
                VmaBlockMetadata_Generic* pFreeSpaceMetadata = (VmaBlockMetadata_Generic*)pFreeSpaceBlock->m_pMetadata;

                // Same block
                if(freeSpaceInfoIndex == srcBlockInfoIndex)
                {
                    VMA_ASSERT(dstAllocOffset <= srcAllocOffset);

                    // MOVE OPTION 1: Move the allocation inside the same block by decreasing offset.

                    VmaSuballocation suballoc = *srcSuballocIt;
                    suballoc.offset = dstAllocOffset;
                    suballoc.hAllocation->ChangeOffset(dstAllocOffset);
                    m_BytesMoved += srcAllocSize;
                    ++m_AllocationsMoved;

                    VmaSuballocationList::iterator nextSuballocIt = srcSuballocIt;
                    ++nextSuballocIt;
                    pSrcMetadata->m_Suballocations.erase(srcSuballocIt);
                    srcSuballocIt = nextSuballocIt;

                    InsertSuballoc(pFreeSpaceMetadata, suballoc);

                    move.srcBlockIndex = srcOrigBlockIndex;
                    move.dstBlockIndex = freeSpaceOrigBlockIndex;
                    move.srcOffset = srcAllocOffset;
                    move.dstOffset = dstAllocOffset;
                    move.size = srcAllocSize;

                    moves.push_back(move);
                }
                // Different block
                else
                {
                    // MOVE OPTION 2: Move the allocation to a different block.

                    VMA_ASSERT(freeSpaceInfoIndex < srcBlockInfoIndex);

                    VmaSuballocation suballoc = *srcSuballocIt;
                    suballoc.offset = dstAllocOffset;
                    suballoc.hAllocation->ChangeBlockAllocation(m_hAllocator, pFreeSpaceBlock, dstAllocOffset);
                    m_BytesMoved += srcAllocSize;
                    ++m_AllocationsMoved;

                    VmaSuballocationList::iterator nextSuballocIt = srcSuballocIt;
                    ++nextSuballocIt;
                    pSrcMetadata->m_Suballocations.erase(srcSuballocIt);
                    srcSuballocIt = nextSuballocIt;

                    InsertSuballoc(pFreeSpaceMetadata, suballoc);

                    move.srcBlockIndex = srcOrigBlockIndex;
                    move.dstBlockIndex = freeSpaceOrigBlockIndex;
                    move.srcOffset = srcAllocOffset;
                    move.dstOffset = dstAllocOffset;
                    move.size = srcAllocSize;

                    moves.push_back(move);
                }
            }
            else
            {
                dstAllocOffset = VmaAlignUp(dstOffset, srcAllocAlignment);

                // If the allocation doesn't fit before the end of dstBlock, forward to next block.
                while(dstBlockInfoIndex < srcBlockInfoIndex &&
                    dstAllocOffset + srcAllocSize > dstBlockSize)
                {
                    // But before that, register remaining free space at the end of dst block.
                    freeSpaceDb.Register(dstBlockInfoIndex, dstOffset, dstBlockSize - dstOffset);

                    ++dstBlockInfoIndex;
                    dstOrigBlockIndex = m_BlockInfos[dstBlockInfoIndex].origBlockIndex;
                    pDstBlock = m_pBlockVector->GetBlock(dstOrigBlockIndex);
                    pDstMetadata = (VmaBlockMetadata_Generic*)pDstBlock->m_pMetadata;
                    dstBlockSize = pDstMetadata->GetSize();
                    dstOffset = 0;
                    dstAllocOffset = 0;
                }

                // Same block
                if(dstBlockInfoIndex == srcBlockInfoIndex)
                {
                    VMA_ASSERT(dstAllocOffset <= srcAllocOffset);

                    const bool overlap = dstAllocOffset + srcAllocSize > srcAllocOffset;

                    bool skipOver = overlap;
                    if(overlap && m_OverlappingMoveSupported && dstAllocOffset < srcAllocOffset)
                    {
                        // If destination and source place overlap, skip if it would move it
                        // by only < 1/64 of its size.
                        skipOver = (srcAllocOffset - dstAllocOffset) * 64 < srcAllocSize;
                    }

                    if(skipOver)
                    {
                        freeSpaceDb.Register(dstBlockInfoIndex, dstOffset, srcAllocOffset - dstOffset);

                        dstOffset = srcAllocOffset + srcAllocSize;
                        ++srcSuballocIt;
                    }
                    // MOVE OPTION 1: Move the allocation inside the same block by decreasing offset.
                    else
                    {
                        srcSuballocIt->offset = dstAllocOffset;
                        srcSuballocIt->hAllocation->ChangeOffset(dstAllocOffset);
                        dstOffset = dstAllocOffset + srcAllocSize;
                        m_BytesMoved += srcAllocSize;
                        ++m_AllocationsMoved;
                        ++srcSuballocIt;

                        move.srcBlockIndex = srcOrigBlockIndex;
                        move.dstBlockIndex = dstOrigBlockIndex;
                        move.srcOffset = srcAllocOffset;
                        move.dstOffset = dstAllocOffset;
                        move.size = srcAllocSize;

                        moves.push_back(move);
                    }
                }
                // Different block
                else
                {
                    // MOVE OPTION 2: Move the allocation to a different block.

                    VMA_ASSERT(dstBlockInfoIndex < srcBlockInfoIndex);
                    VMA_ASSERT(dstAllocOffset + srcAllocSize <= dstBlockSize);

                    VmaSuballocation suballoc = *srcSuballocIt;
                    suballoc.offset = dstAllocOffset;
                    suballoc.hAllocation->ChangeBlockAllocation(m_hAllocator, pDstBlock, dstAllocOffset);
                    dstOffset = dstAllocOffset + srcAllocSize;
                    m_BytesMoved += srcAllocSize;
                    ++m_AllocationsMoved;

                    VmaSuballocationList::iterator nextSuballocIt = srcSuballocIt;
                    ++nextSuballocIt;
                    pSrcMetadata->m_Suballocations.erase(srcSuballocIt);
                    srcSuballocIt = nextSuballocIt;

                    pDstMetadata->m_Suballocations.push_back(suballoc);

                    move.srcBlockIndex = srcOrigBlockIndex;
                    move.dstBlockIndex = dstOrigBlockIndex;
                    move.srcOffset = srcAllocOffset;
                    move.dstOffset = dstAllocOffset;
                    move.size = srcAllocSize;

                    moves.push_back(move);
                }
            }
        }
    }

    m_BlockInfos.clear();

    PostprocessMetadata();

    return VK_SUCCESS;
}

void VmaDefragmentationAlgorithm_Fast::PreprocessMetadata()
{
    const size_t blockCount = m_pBlockVector->GetBlockCount();
    for(size_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
    {
        VmaBlockMetadata_Generic* const pMetadata =
            (VmaBlockMetadata_Generic*)m_pBlockVector->GetBlock(blockIndex)->m_pMetadata;
        pMetadata->m_FreeCount = 0;
        pMetadata->m_SumFreeSize = pMetadata->GetSize();
        pMetadata->m_FreeSuballocationsBySize.clear();
        for(VmaSuballocationList::iterator it = pMetadata->m_Suballocations.begin();
            it != pMetadata->m_Suballocations.end(); )
        {
            if(it->type == VMA_SUBALLOCATION_TYPE_FREE)
            {
                VmaSuballocationList::iterator nextIt = it;
                ++nextIt;
                pMetadata->m_Suballocations.erase(it);
                it = nextIt;
            }
            else
            {
                ++it;
            }
        }
    }
}

void VmaDefragmentationAlgorithm_Fast::PostprocessMetadata()
{
    const size_t blockCount = m_pBlockVector->GetBlockCount();
    for(size_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
    {
        VmaBlockMetadata_Generic* const pMetadata =
            (VmaBlockMetadata_Generic*)m_pBlockVector->GetBlock(blockIndex)->m_pMetadata;
        const VkDeviceSize blockSize = pMetadata->GetSize();

        // No allocations in this block - entire area is free.
        if(pMetadata->m_Suballocations.empty())
        {
            pMetadata->m_FreeCount = 1;
            VmaSuballocation suballoc = {
                0, // offset
                blockSize, // size
                VMA_NULL, // hAllocation
                VMA_SUBALLOCATION_TYPE_FREE };
            pMetadata->m_Suballocations.push_back(suballoc);
            pMetadata->RegisterFreeSuballocation(pMetadata->m_Suballocations.begin());
        }
        // There are some allocations in this block.
        else
        {
            VkDeviceSize offset = 0;
            VmaSuballocationList::iterator it;
            for(it = pMetadata->m_Suballocations.begin();
                it != pMetadata->m_Suballocations.end();
                ++it)
            {
                VMA_ASSERT(it->type != VMA_SUBALLOCATION_TYPE_FREE);
                VMA_ASSERT(it->offset >= offset);

                // Need to insert preceding free space.
                if(it->offset > offset)
                {
                    ++pMetadata->m_FreeCount;
                    const VkDeviceSize freeSize = it->offset - offset;
                    VmaSuballocation suballoc = {
                        offset, // offset
                        freeSize, // size
                        VMA_NULL, // hAllocation
                        VMA_SUBALLOCATION_TYPE_FREE };
                    VmaSuballocationList::iterator precedingFreeIt = pMetadata->m_Suballocations.insert(it, suballoc);
                    if(freeSize >= VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
                    {
                        pMetadata->m_FreeSuballocationsBySize.push_back(precedingFreeIt);
                    }
                }

                pMetadata->m_SumFreeSize -= it->size;
                offset = it->offset + it->size;
            }

            // Need to insert trailing free space.
            if(offset < blockSize)
            {
                ++pMetadata->m_FreeCount;
                const VkDeviceSize freeSize = blockSize - offset;
                VmaSuballocation suballoc = {
                    offset, // offset
                    freeSize, // size
                    VMA_NULL, // hAllocation
                    VMA_SUBALLOCATION_TYPE_FREE };
                VMA_ASSERT(it == pMetadata->m_Suballocations.end());
                VmaSuballocationList::iterator trailingFreeIt = pMetadata->m_Suballocations.insert(it, suballoc);
                if(freeSize > VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
                {
                    pMetadata->m_FreeSuballocationsBySize.push_back(trailingFreeIt);
                }
            }

            VMA_SORT(
                pMetadata->m_FreeSuballocationsBySize.begin(),
                pMetadata->m_FreeSuballocationsBySize.end(),
                VmaSuballocationItemSizeLess());
        }

        VMA_HEAVY_ASSERT(pMetadata->Validate());
    }
}

void VmaDefragmentationAlgorithm_Fast::InsertSuballoc(VmaBlockMetadata_Generic* pMetadata, const VmaSuballocation& suballoc)
{
    // Find the first suballocation with offset not less than suballoc.offset and insert before it.
    VmaSuballocationList::iterator it = pMetadata->m_Suballocations.begin();
    while(it != pMetadata->m_Suballocations.end())
    {
        if(it->offset < suballoc.offset)
        {
            ++it;
        }
        else
        {
            break;
        }
    }
    pMetadata->m_Suballocations.insert(it, suballoc);
}

VmaBlockVectorDefragmentationContext::VmaBlockVectorDefragmentationContext(
    VmaAllocator hAllocator,
    VmaPool hCustomPool,
    VmaBlockVector* pBlockVector,
    uint32_t currFrameIndex) :
    res(VK_SUCCESS),
    mutexLocked(false),
    blockContexts(VmaStlAllocator<VmaBlockDefragmentationContext>(hAllocator->GetAllocationCallbacks())),
    defragmentationMoves(VmaStlAllocator<VmaDefragmentationMove>(hAllocator->GetAllocationCallbacks())),
    defragmentationMovesProcessed(0),
    defragmentationMovesCommitted(0),
    hasDefragmentationPlan(0),
    m_hAllocator(hAllocator),
    m_hCustomPool(hCustomPool),
    m_pBlockVector(pBlockVector),
    m_CurrFrameIndex(currFrameIndex),
    m_pAlgorithm(VMA_NULL),
    m_Allocations(VmaStlAllocator<AllocInfo>(hAllocator->GetAllocationCallbacks())),
    m_AllAllocations(false)
{
}

VmaBlockVectorDefragmentationContext::~VmaBlockVectorDefragmentationContext()
{
    vma_delete(m_hAllocator, m_pAlgorithm);
}

void VmaBlockVectorDefragmentationContext::AddAllocation(
    VmaAllocation hAlloc, VkBool32* pChanged)
{
    AllocInfo info = { hAlloc, pChanged };
    m_Allocations.push_back(info);
}

void VmaBlockVectorDefragmentationContext::Begin(
    bool overlappingMoveSupported,
    VmaDefragmentationFlags flags)
{
    const bool allAllocations = m_AllAllocations ||
        m_Allocations.size() == m_pBlockVector->CalcAllocationCount();

    /********************************
    HERE IS THE CHOICE OF DEFRAGMENTATION ALGORITHM.
    ********************************/

    // The fast algorithm is supported only when certain criteria are met:
    // - VMA_DEBUG_MARGIN is 0.
    // - All allocations in this block vector are moveable.
    // - There is no possibility of an image/buffer granularity conflict.
    // - The defragmentation is not incremental.
    if(VMA_DEBUG_MARGIN == 0 &&
        allAllocations &&
        !m_pBlockVector->IsBufferImageGranularityConflictPossible() &&
        !(flags & VMA_DEFRAGMENTATION_FLAG_INCREMENTAL))
    {
        m_pAlgorithm = vma_new(m_hAllocator, VmaDefragmentationAlgorithm_Fast)(
            m_hAllocator, m_pBlockVector, m_CurrFrameIndex, overlappingMoveSupported);
    }
    else
    {
        m_pAlgorithm = vma_new(m_hAllocator, VmaDefragmentationAlgorithm_Generic)(
            m_hAllocator, m_pBlockVector, m_CurrFrameIndex, overlappingMoveSupported);
    }

    if(allAllocations)
    {
        m_pAlgorithm->AddAll();
    }
    else
    {
        for(size_t i = 0, count = m_Allocations.size(); i < count; ++i)
        {
            m_pAlgorithm->AddAllocation(m_Allocations[i].hAlloc, m_Allocations[i].pChanged);
        }
    }
}

VmaDefragmentationContext_T::VmaDefragmentationContext_T(
    VmaAllocator hAllocator,
    uint32_t currFrameIndex,
    uint32_t flags,
    VmaDefragmentationStats* pStats) :
    m_hAllocator(hAllocator),
    m_CurrFrameIndex(currFrameIndex),
    m_Flags(flags),
    m_pStats(pStats),
    m_CustomPoolContexts(VmaStlAllocator<VmaBlockVectorDefragmentationContext*>(hAllocator->GetAllocationCallbacks()))
{
    memset(m_DefaultPoolContexts, 0, sizeof(m_DefaultPoolContexts));
}

VmaDefragmentationContext_T::~VmaDefragmentationContext_T()
{
    for(size_t i = m_CustomPoolContexts.size(); i--; )
    {
        VmaBlockVectorDefragmentationContext* pBlockVectorCtx = m_CustomPoolContexts[i];
        pBlockVectorCtx->GetBlockVector()->DefragmentationEnd(pBlockVectorCtx, m_Flags, m_pStats);
        vma_delete(m_hAllocator, pBlockVectorCtx);
    }
    for(size_t i = m_hAllocator->m_MemProps.memoryTypeCount; i--; )
    {
        VmaBlockVectorDefragmentationContext* pBlockVectorCtx = m_DefaultPoolContexts[i];
        if(pBlockVectorCtx)
        {
            pBlockVectorCtx->GetBlockVector()->DefragmentationEnd(pBlockVectorCtx, m_Flags, m_pStats);
            vma_delete(m_hAllocator, pBlockVectorCtx);
        }
    }
}

void VmaDefragmentationContext_T::AddPools(uint32_t poolCount,
    const VmaPool* pPools)
{
    for(uint32_t poolIndex = 0; poolIndex < poolCount; ++poolIndex)
    {
        VmaPool pool = pPools[poolIndex];
        VMA_ASSERT(pool);
        // Pools with an algorithm other than the default are not defragmented.
        if(pool->m_BlockVector.GetAlgorithm() == 0)
        {
            VmaBlockVectorDefragmentationContext* pBlockVectorDefragCtx = VMA_NULL;

            for(size_t i = m_CustomPoolContexts.size(); i--; )
            {
                if(m_CustomPoolContexts[i]->GetCustomPool() == pool)
                {
                    pBlockVectorDefragCtx = m_CustomPoolContexts[i];
                    break;
                }
            }

            if(!pBlockVectorDefragCtx)
            {
                pBlockVectorDefragCtx = vma_new(m_hAllocator, VmaBlockVectorDefragmentationContext)(
                    m_hAllocator,
                    pool,
                    &pool->m_BlockVector,
                    m_CurrFrameIndex);
                m_CustomPoolContexts.push_back(pBlockVectorDefragCtx);
            }

            pBlockVectorDefragCtx->AddAll();
        }
    }
}

void VmaDefragmentationContext_T::AddAllocations(
    uint32_t allocationCount,
    const VmaAllocation* pAllocations,
    VkBool32* pAllocationsChanged)
{
    // Dispatch pAllocations among defragmentation contexts. Create them when necessary.
    for(uint32_t allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
    {
        const VmaAllocation hAlloc = pAllocations[allocIndex];
        VMA_ASSERT(hAlloc);
        // A dedicated allocation cannot be defragmented.
        if((hAlloc->GetType() == VmaAllocation_T::ALLOCATION_TYPE_BLOCK) &&
            // A lost allocation cannot be defragmented.
            (hAlloc->GetLastUseFrameIndex() != VMA_FRAME_INDEX_LOST))
        {
            VmaBlockVectorDefragmentationContext* pBlockVectorDefragCtx = VMA_NULL;

            const VmaPool hAllocPool = hAlloc->GetBlock()->GetParentPool();
            // This allocation belongs to a custom pool.
            if(hAllocPool != VK_NULL_HANDLE)
            {
                // Pools with an algorithm other than the default are not defragmented.
                if(hAllocPool->m_BlockVector.GetAlgorithm() == 0)
                {
                    for(size_t i = m_CustomPoolContexts.size(); i--; )
                    {
                        if(m_CustomPoolContexts[i]->GetCustomPool() == hAllocPool)
                        {
                            pBlockVectorDefragCtx = m_CustomPoolContexts[i];
                            break;
                        }
                    }
                    if(!pBlockVectorDefragCtx)
                    {
                        pBlockVectorDefragCtx = vma_new(m_hAllocator, VmaBlockVectorDefragmentationContext)(
                            m_hAllocator,
                            hAllocPool,
                            &hAllocPool->m_BlockVector,
                            m_CurrFrameIndex);
                        m_CustomPoolContexts.push_back(pBlockVectorDefragCtx);
                    }
                }
            }
            // This allocation belongs to a default pool.
            else
            {
                const uint32_t memTypeIndex = hAlloc->GetMemoryTypeIndex();
                pBlockVectorDefragCtx = m_DefaultPoolContexts[memTypeIndex];
                if(!pBlockVectorDefragCtx)
                {
                    pBlockVectorDefragCtx = vma_new(m_hAllocator, VmaBlockVectorDefragmentationContext)(
                        m_hAllocator,
                        VK_NULL_HANDLE, // hCustomPool
                        m_hAllocator->m_pBlockVectors[memTypeIndex],
                        m_CurrFrameIndex);
                    m_DefaultPoolContexts[memTypeIndex] = pBlockVectorDefragCtx;
                }
            }

            if(pBlockVectorDefragCtx)
            {
                VkBool32* const pChanged = (pAllocationsChanged != VMA_NULL) ?
                    &pAllocationsChanged[allocIndex] : VMA_NULL;
                pBlockVectorDefragCtx->AddAllocation(hAlloc, pChanged);
            }
        }
    }
}

VkResult VmaDefragmentationContext_T::Defragment(
    VkDeviceSize maxCpuBytesToMove, uint32_t maxCpuAllocationsToMove,
    VkDeviceSize maxGpuBytesToMove, uint32_t maxGpuAllocationsToMove,
    VkCommandBuffer commandBuffer, VmaDefragmentationStats* pStats, VmaDefragmentationFlags flags)
{
    if(pStats)
    {
        memset(pStats, 0, sizeof(VmaDefragmentationStats));
    }

    if(flags & VMA_DEFRAGMENTATION_FLAG_INCREMENTAL)
    {
        // For incremental defragmentation, just earmark how much can be moved.
        // The real work happens in the defragmentation passes.
        m_MaxCpuBytesToMove = maxCpuBytesToMove;
        m_MaxCpuAllocationsToMove = maxCpuAllocationsToMove;

        m_MaxGpuBytesToMove = maxGpuBytesToMove;
        m_MaxGpuAllocationsToMove = maxGpuAllocationsToMove;

        if(m_MaxCpuBytesToMove == 0 && m_MaxCpuAllocationsToMove == 0 &&
            m_MaxGpuBytesToMove == 0 && m_MaxGpuAllocationsToMove == 0)
        {
            return VK_SUCCESS;
        }

        return VK_NOT_READY;
    }

    if(commandBuffer == VK_NULL_HANDLE)
    {
        maxGpuBytesToMove = 0;
        maxGpuAllocationsToMove = 0;
    }

    VkResult res = VK_SUCCESS;

    // Process default pools.
    for(uint32_t memTypeIndex = 0;
        memTypeIndex < m_hAllocator->GetMemoryTypeCount() && res >= VK_SUCCESS;
        ++memTypeIndex)
    {
        VmaBlockVectorDefragmentationContext* pBlockVectorCtx = m_DefaultPoolContexts[memTypeIndex];
        if(pBlockVectorCtx)
        {
            VMA_ASSERT(pBlockVectorCtx->GetBlockVector());
            pBlockVectorCtx->GetBlockVector()->Defragment(
                pBlockVectorCtx,
                pStats, flags,
                maxCpuBytesToMove, maxCpuAllocationsToMove,
                maxGpuBytesToMove, maxGpuAllocationsToMove,
                commandBuffer);
            if(pBlockVectorCtx->res != VK_SUCCESS)
            {
                res = pBlockVectorCtx->res;
            }
        }
    }

    // Process custom pools.
    for(size_t customCtxIndex = 0, customCtxCount = m_CustomPoolContexts.size();
        customCtxIndex < customCtxCount && res >= VK_SUCCESS;
        ++customCtxIndex)
    {
        VmaBlockVectorDefragmentationContext* pBlockVectorCtx = m_CustomPoolContexts[customCtxIndex];
        VMA_ASSERT(pBlockVectorCtx && pBlockVectorCtx->GetBlockVector());
        pBlockVectorCtx->GetBlockVector()->Defragment(
            pBlockVectorCtx,
            pStats, flags,
            maxCpuBytesToMove, maxCpuAllocationsToMove,
            maxGpuBytesToMove, maxGpuAllocationsToMove,
            commandBuffer);
        if(pBlockVectorCtx->res != VK_SUCCESS)
        {
            res = pBlockVectorCtx->res;
        }
    }

    return res;
}

VkResult VmaDefragmentationContext_T::DefragmentPassBegin(VmaDefragmentationPassInfo* pInfo)
{
    VmaDefragmentationPassMoveInfo* pCurrentMove = pInfo->pMoves;
    uint32_t movesLeft = pInfo->moveCount;

    // Process default pools.
    for(uint32_t memTypeIndex = 0;
        memTypeIndex < m_hAllocator->GetMemoryTypeCount();
        ++memTypeIndex)
    {
        VmaBlockVectorDefragmentationContext* pBlockVectorCtx = m_DefaultPoolContexts[memTypeIndex];
        if(pBlockVectorCtx)
        {
            VMA_ASSERT(pBlockVectorCtx->GetBlockVector());

            if(!pBlockVectorCtx->hasDefragmentationPlan)
            {
                pBlockVectorCtx->GetBlockVector()->Defragment(
                    pBlockVectorCtx,
                    m_pStats, m_Flags,
                    m_MaxCpuBytesToMove, m_MaxCpuAllocationsToMove,
                    m_MaxGpuBytesToMove, m_MaxGpuAllocationsToMove,
                    VK_NULL_HANDLE);

                if(pBlockVectorCtx->res < VK_SUCCESS)
                {
                    continue;
                }

                pBlockVectorCtx->hasDefragmentationPlan = true;
            }

            const uint32_t processed = pBlockVectorCtx->GetBlockVector()->ProcessDefragmentations(
                pBlockVectorCtx,
                pCurrentMove, movesLeft);

            movesLeft -= processed;
            pCurrentMove += processed;
        }
    }

    // Process custom pools.
    for(size_t customCtxIndex = 0, customCtxCount = m_CustomPoolContexts.size();
        customCtxIndex < customCtxCount;
        ++customCtxIndex)
    {
        VmaBlockVectorDefragmentationContext* pBlockVectorCtx = m_CustomPoolContexts[customCtxIndex];
        VMA_ASSERT(pBlockVectorCtx && pBlockVectorCtx->GetBlockVector());

        if(!pBlockVectorCtx->hasDefragmentationPlan)
        {
            pBlockVectorCtx->GetBlockVector()->Defragment(
                pBlockVectorCtx,
                m_pStats, m_Flags,
                m_MaxCpuBytesToMove, m_MaxCpuAllocationsToMove,
                m_MaxGpuBytesToMove, m_MaxGpuAllocationsToMove,
                VK_NULL_HANDLE);

            if(pBlockVectorCtx->res < VK_SUCCESS)
            {
                continue;
            }

            pBlockVectorCtx->hasDefragmentationPlan = true;
        }

        const uint32_t processed = pBlockVectorCtx->GetBlockVector()->ProcessDefragmentations(
            pBlockVectorCtx,
            pCurrentMove, movesLeft);

        movesLeft -= processed;
        pCurrentMove += processed;
    }

    pInfo->moveCount = pInfo->moveCount - movesLeft;

    return VK_SUCCESS;
}

VkResult VmaDefragmentationContext_T::DefragmentPassEnd()
{
    VkResult res = VK_SUCCESS;

    // Process default pools.
    for(uint32_t memTypeIndex = 0;
        memTypeIndex < m_hAllocator->GetMemoryTypeCount();
        ++memTypeIndex)
    {
        VmaBlockVectorDefragmentationContext* pBlockVectorCtx = m_DefaultPoolContexts[memTypeIndex];
        if(pBlockVectorCtx)
        {
            VMA_ASSERT(pBlockVectorCtx->GetBlockVector());

            if(!pBlockVectorCtx->hasDefragmentationPlan)
            {
                res = VK_NOT_READY;
                continue;
            }

            pBlockVectorCtx->GetBlockVector()->CommitDefragmentations(
                pBlockVectorCtx, m_pStats);

            if(pBlockVectorCtx->defragmentationMoves.size() != pBlockVectorCtx->defragmentationMovesCommitted)
                res = VK_NOT_READY;
        }
    }

    // Process custom pools.
    for(size_t customCtxIndex = 0, customCtxCount = m_CustomPoolContexts.size();
        customCtxIndex < customCtxCount;
        ++customCtxIndex)
    {
        VmaBlockVectorDefragmentationContext* pBlockVectorCtx = m_CustomPoolContexts[customCtxIndex];
        VMA_ASSERT(pBlockVectorCtx && pBlockVectorCtx->GetBlockVector());

        if(!pBlockVectorCtx->hasDefragmentationPlan)
        {
            res = VK_NOT_READY;
            continue;
        }

        pBlockVectorCtx->GetBlockVector()->CommitDefragmentations(
            pBlockVectorCtx, m_pStats);

        if(pBlockVectorCtx->defragmentationMoves.size() != pBlockVectorCtx->defragmentationMovesCommitted)
            res = VK_NOT_READY;
    }

    return res;
}
#if VMA_RECORDING_ENABLED

VmaRecorder::VmaRecorder() :
    m_UseMutex(true),
    m_Flags(0),
    m_File(VMA_NULL),
    m_RecordingStartTime(std::chrono::high_resolution_clock::now())
{
}

VkResult VmaRecorder::Init(const VmaRecordSettings& settings, bool useMutex)
{
    m_UseMutex = useMutex;
    m_Flags = settings.flags;

#if defined(_WIN32)
    // Open file for writing.
    errno_t err = fopen_s(&m_File, settings.pFilePath, "wb");

    if(err != 0)
    {
        return VK_ERROR_INITIALIZATION_FAILED;
    }
#else
    // Open file for writing.
    m_File = fopen(settings.pFilePath, "wb");

    if(m_File == 0)
    {
        return VK_ERROR_INITIALIZATION_FAILED;
    }
#endif

    // Write header.
    fprintf(m_File, "%s\n", "Vulkan Memory Allocator,Calls recording");
    fprintf(m_File, "%s\n", "1,8");

    return VK_SUCCESS;
}

VmaRecorder::~VmaRecorder()
{
    if(m_File != VMA_NULL)
    {
        fclose(m_File);
    }
}

void VmaRecorder::RecordCreateAllocator(uint32_t frameIndex)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaCreateAllocator\n", callParams.threadId, callParams.time, frameIndex);
    Flush();
}

void VmaRecorder::RecordDestroyAllocator(uint32_t frameIndex)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaDestroyAllocator\n", callParams.threadId, callParams.time, frameIndex);
    Flush();
}

void VmaRecorder::RecordCreatePool(uint32_t frameIndex,
    const VmaPoolCreateInfo& createInfo,
    VmaPool pool)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaCreatePool,%u,%u,%llu,%llu,%llu,%u,%p\n", callParams.threadId, callParams.time, frameIndex,
        createInfo.memoryTypeIndex,
        createInfo.flags,
        createInfo.blockSize,
        (uint64_t)createInfo.minBlockCount,
        (uint64_t)createInfo.maxBlockCount,
        createInfo.frameInUseCount,
        pool);
    Flush();
}

void VmaRecorder::RecordDestroyPool(uint32_t frameIndex, VmaPool pool)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaDestroyPool,%p\n", callParams.threadId, callParams.time, frameIndex,
        pool);
    Flush();
}

void VmaRecorder::RecordAllocateMemory(uint32_t frameIndex,
    const VkMemoryRequirements& vkMemReq,
    const VmaAllocationCreateInfo& createInfo,
    VmaAllocation allocation)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    UserDataString userDataStr(createInfo.flags, createInfo.pUserData);
    fprintf(m_File, "%u,%.3f,%u,vmaAllocateMemory,%llu,%llu,%u,%u,%u,%u,%u,%u,%p,%p,%s\n", callParams.threadId, callParams.time, frameIndex,
        vkMemReq.size,
        vkMemReq.alignment,
        vkMemReq.memoryTypeBits,
        createInfo.flags,
        createInfo.usage,
        createInfo.requiredFlags,
        createInfo.preferredFlags,
        createInfo.memoryTypeBits,
        createInfo.pool,
        allocation,
        userDataStr.GetString());
    Flush();
}

void VmaRecorder::RecordAllocateMemoryPages(uint32_t frameIndex,
    const VkMemoryRequirements& vkMemReq,
    const VmaAllocationCreateInfo& createInfo,
    uint64_t allocationCount,
    const VmaAllocation* pAllocations)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    UserDataString userDataStr(createInfo.flags, createInfo.pUserData);
    fprintf(m_File, "%u,%.3f,%u,vmaAllocateMemoryPages,%llu,%llu,%u,%u,%u,%u,%u,%u,%p,", callParams.threadId, callParams.time, frameIndex,
        vkMemReq.size,
        vkMemReq.alignment,
        vkMemReq.memoryTypeBits,
        createInfo.flags,
        createInfo.usage,
        createInfo.requiredFlags,
        createInfo.preferredFlags,
        createInfo.memoryTypeBits,
        createInfo.pool);
    PrintPointerList(allocationCount, pAllocations);
    fprintf(m_File, ",%s\n", userDataStr.GetString());
    Flush();
}

void VmaRecorder::RecordAllocateMemoryForBuffer(uint32_t frameIndex,
    const VkMemoryRequirements& vkMemReq,
    bool requiresDedicatedAllocation,
    bool prefersDedicatedAllocation,
    const VmaAllocationCreateInfo& createInfo,
    VmaAllocation allocation)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    UserDataString userDataStr(createInfo.flags, createInfo.pUserData);
    fprintf(m_File, "%u,%.3f,%u,vmaAllocateMemoryForBuffer,%llu,%llu,%u,%u,%u,%u,%u,%u,%u,%u,%p,%p,%s\n", callParams.threadId, callParams.time, frameIndex,
        vkMemReq.size,
        vkMemReq.alignment,
        vkMemReq.memoryTypeBits,
        requiresDedicatedAllocation ? 1 : 0,
        prefersDedicatedAllocation ? 1 : 0,
        createInfo.flags,
        createInfo.usage,
        createInfo.requiredFlags,
        createInfo.preferredFlags,
        createInfo.memoryTypeBits,
        createInfo.pool,
        allocation,
        userDataStr.GetString());
    Flush();
}

void VmaRecorder::RecordAllocateMemoryForImage(uint32_t frameIndex,
    const VkMemoryRequirements& vkMemReq,
    bool requiresDedicatedAllocation,
    bool prefersDedicatedAllocation,
    const VmaAllocationCreateInfo& createInfo,
    VmaAllocation allocation)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    UserDataString userDataStr(createInfo.flags, createInfo.pUserData);
    fprintf(m_File, "%u,%.3f,%u,vmaAllocateMemoryForImage,%llu,%llu,%u,%u,%u,%u,%u,%u,%u,%u,%p,%p,%s\n", callParams.threadId, callParams.time, frameIndex,
        vkMemReq.size,
        vkMemReq.alignment,
        vkMemReq.memoryTypeBits,
        requiresDedicatedAllocation ? 1 : 0,
        prefersDedicatedAllocation ? 1 : 0,
        createInfo.flags,
        createInfo.usage,
        createInfo.requiredFlags,
        createInfo.preferredFlags,
        createInfo.memoryTypeBits,
        createInfo.pool,
        allocation,
        userDataStr.GetString());
    Flush();
}

void VmaRecorder::RecordFreeMemory(uint32_t frameIndex,
    VmaAllocation allocation)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaFreeMemory,%p\n", callParams.threadId, callParams.time, frameIndex,
        allocation);
    Flush();
}

void VmaRecorder::RecordFreeMemoryPages(uint32_t frameIndex,
    uint64_t allocationCount,
    const VmaAllocation* pAllocations)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaFreeMemoryPages,", callParams.threadId, callParams.time, frameIndex);
    PrintPointerList(allocationCount, pAllocations);
    fprintf(m_File, "\n");
    Flush();
}

void VmaRecorder::RecordSetAllocationUserData(uint32_t frameIndex,
    VmaAllocation allocation,
    const void* pUserData)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    UserDataString userDataStr(
        allocation->IsUserDataString() ? VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT : 0,
        pUserData);
    fprintf(m_File, "%u,%.3f,%u,vmaSetAllocationUserData,%p,%s\n", callParams.threadId, callParams.time, frameIndex,
        allocation,
        userDataStr.GetString());
    Flush();
}

void VmaRecorder::RecordCreateLostAllocation(uint32_t frameIndex,
    VmaAllocation allocation)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaCreateLostAllocation,%p\n", callParams.threadId, callParams.time, frameIndex,
        allocation);
    Flush();
}

void VmaRecorder::RecordMapMemory(uint32_t frameIndex,
    VmaAllocation allocation)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaMapMemory,%p\n", callParams.threadId, callParams.time, frameIndex,
        allocation);
    Flush();
}

void VmaRecorder::RecordUnmapMemory(uint32_t frameIndex,
    VmaAllocation allocation)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaUnmapMemory,%p\n", callParams.threadId, callParams.time, frameIndex,
        allocation);
    Flush();
}

void VmaRecorder::RecordFlushAllocation(uint32_t frameIndex,
    VmaAllocation allocation, VkDeviceSize offset, VkDeviceSize size)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaFlushAllocation,%p,%llu,%llu\n", callParams.threadId, callParams.time, frameIndex,
        allocation,
        offset,
        size);
    Flush();
}

void VmaRecorder::RecordInvalidateAllocation(uint32_t frameIndex,
    VmaAllocation allocation, VkDeviceSize offset, VkDeviceSize size)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaInvalidateAllocation,%p,%llu,%llu\n", callParams.threadId, callParams.time, frameIndex,
        allocation,
        offset,
        size);
    Flush();
}

void VmaRecorder::RecordCreateBuffer(uint32_t frameIndex,
    const VkBufferCreateInfo& bufCreateInfo,
    const VmaAllocationCreateInfo& allocCreateInfo,
    VmaAllocation allocation)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    UserDataString userDataStr(allocCreateInfo.flags, allocCreateInfo.pUserData);
    fprintf(m_File, "%u,%.3f,%u,vmaCreateBuffer,%u,%llu,%u,%u,%u,%u,%u,%u,%u,%p,%p,%s\n", callParams.threadId, callParams.time, frameIndex,
        bufCreateInfo.flags,
        bufCreateInfo.size,
        bufCreateInfo.usage,
        bufCreateInfo.sharingMode,
        allocCreateInfo.flags,
        allocCreateInfo.usage,
        allocCreateInfo.requiredFlags,
        allocCreateInfo.preferredFlags,
        allocCreateInfo.memoryTypeBits,
        allocCreateInfo.pool,
        allocation,
        userDataStr.GetString());
    Flush();
}

void VmaRecorder::RecordCreateImage(uint32_t frameIndex,
    const VkImageCreateInfo& imageCreateInfo,
    const VmaAllocationCreateInfo& allocCreateInfo,
    VmaAllocation allocation)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    UserDataString userDataStr(allocCreateInfo.flags, allocCreateInfo.pUserData);
    fprintf(m_File, "%u,%.3f,%u,vmaCreateImage,%u,%u,%u,%u,%u,%u,%u,%u,%u,%u,%u,%u,%u,%u,%u,%u,%u,%u,%p,%p,%s\n", callParams.threadId, callParams.time, frameIndex,
        imageCreateInfo.flags,
        imageCreateInfo.imageType,
        imageCreateInfo.format,
        imageCreateInfo.extent.width,
        imageCreateInfo.extent.height,
        imageCreateInfo.extent.depth,
        imageCreateInfo.mipLevels,
        imageCreateInfo.arrayLayers,
        imageCreateInfo.samples,
        imageCreateInfo.tiling,
        imageCreateInfo.usage,
        imageCreateInfo.sharingMode,
        imageCreateInfo.initialLayout,
        allocCreateInfo.flags,
        allocCreateInfo.usage,
        allocCreateInfo.requiredFlags,
        allocCreateInfo.preferredFlags,
        allocCreateInfo.memoryTypeBits,
        allocCreateInfo.pool,
        allocation,
        userDataStr.GetString());
    Flush();
}

void VmaRecorder::RecordDestroyBuffer(uint32_t frameIndex,
    VmaAllocation allocation)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaDestroyBuffer,%p\n", callParams.threadId, callParams.time, frameIndex,
        allocation);
    Flush();
}

void VmaRecorder::RecordDestroyImage(uint32_t frameIndex,
    VmaAllocation allocation)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaDestroyImage,%p\n", callParams.threadId, callParams.time, frameIndex,
        allocation);
    Flush();
}

void VmaRecorder::RecordTouchAllocation(uint32_t frameIndex,
    VmaAllocation allocation)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaTouchAllocation,%p\n", callParams.threadId, callParams.time, frameIndex,
        allocation);
    Flush();
}

void VmaRecorder::RecordGetAllocationInfo(uint32_t frameIndex,
    VmaAllocation allocation)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaGetAllocationInfo,%p\n", callParams.threadId, callParams.time, frameIndex,
        allocation);
    Flush();
}

void VmaRecorder::RecordMakePoolAllocationsLost(uint32_t frameIndex,
    VmaPool pool)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaMakePoolAllocationsLost,%p\n", callParams.threadId, callParams.time, frameIndex,
        pool);
    Flush();
}

void VmaRecorder::RecordDefragmentationBegin(uint32_t frameIndex,
    const VmaDefragmentationInfo2& info,
    VmaDefragmentationContext ctx)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaDefragmentationBegin,%u,", callParams.threadId, callParams.time, frameIndex,
        info.flags);
    PrintPointerList(info.allocationCount, info.pAllocations);
    fprintf(m_File, ",");
    PrintPointerList(info.poolCount, info.pPools);
    fprintf(m_File, ",%llu,%u,%llu,%u,%p,%p\n",
        info.maxCpuBytesToMove,
        info.maxCpuAllocationsToMove,
        info.maxGpuBytesToMove,
        info.maxGpuAllocationsToMove,
        info.commandBuffer,
        ctx);
    Flush();
}

void VmaRecorder::RecordDefragmentationEnd(uint32_t frameIndex,
    VmaDefragmentationContext ctx)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaDefragmentationEnd,%p\n", callParams.threadId, callParams.time, frameIndex,
        ctx);
    Flush();
}

void VmaRecorder::RecordSetPoolName(uint32_t frameIndex,
    VmaPool pool,
    const char* name)
{
    CallParams callParams;
    GetBasicParams(callParams);

    VmaMutexLock lock(m_FileMutex, m_UseMutex);
    fprintf(m_File, "%u,%.3f,%u,vmaSetPoolName,%p,%s\n", callParams.threadId, callParams.time, frameIndex,
        pool, name != VMA_NULL ? name : "");
    Flush();
}

VmaRecorder::UserDataString::UserDataString(VmaAllocationCreateFlags allocFlags, const void* pUserData)
{
    if(pUserData != VMA_NULL)
    {
        if((allocFlags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0)
        {
            m_Str = (const char*)pUserData;
        }
        else
        {
            // If VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT is not set,
            // convert the pointer value itself to a string and store that.
            snprintf(m_PtrStr, 17, "%p", pUserData);
            m_Str = m_PtrStr;
        }
    }
    else
    {
        m_Str = "";
    }
}

void VmaRecorder::WriteConfiguration(
    const VkPhysicalDeviceProperties& devProps,
    const VkPhysicalDeviceMemoryProperties& memProps,
    uint32_t vulkanApiVersion,
    bool dedicatedAllocationExtensionEnabled,
    bool bindMemory2ExtensionEnabled,
    bool memoryBudgetExtensionEnabled,
    bool deviceCoherentMemoryExtensionEnabled)
{
    fprintf(m_File, "Config,Begin\n");

    fprintf(m_File, "VulkanApiVersion,%u,%u\n", VK_VERSION_MAJOR(vulkanApiVersion), VK_VERSION_MINOR(vulkanApiVersion));

    fprintf(m_File, "PhysicalDevice,apiVersion,%u\n", devProps.apiVersion);
    fprintf(m_File, "PhysicalDevice,driverVersion,%u\n", devProps.driverVersion);
    fprintf(m_File, "PhysicalDevice,vendorID,%u\n", devProps.vendorID);
    fprintf(m_File, "PhysicalDevice,deviceID,%u\n", devProps.deviceID);
    fprintf(m_File, "PhysicalDevice,deviceType,%u\n", devProps.deviceType);
    fprintf(m_File, "PhysicalDevice,deviceName,%s\n", devProps.deviceName);

    fprintf(m_File, "PhysicalDeviceLimits,maxMemoryAllocationCount,%u\n", devProps.limits.maxMemoryAllocationCount);
    fprintf(m_File, "PhysicalDeviceLimits,bufferImageGranularity,%llu\n", devProps.limits.bufferImageGranularity);
    fprintf(m_File, "PhysicalDeviceLimits,nonCoherentAtomSize,%llu\n", devProps.limits.nonCoherentAtomSize);

    fprintf(m_File, "PhysicalDeviceMemory,HeapCount,%u\n", memProps.memoryHeapCount);
    for(uint32_t i = 0; i < memProps.memoryHeapCount; ++i)
    {
        fprintf(m_File, "PhysicalDeviceMemory,Heap,%u,size,%llu\n", i, memProps.memoryHeaps[i].size);
        fprintf(m_File, "PhysicalDeviceMemory,Heap,%u,flags,%u\n", i, memProps.memoryHeaps[i].flags);
    }
    fprintf(m_File, "PhysicalDeviceMemory,TypeCount,%u\n", memProps.memoryTypeCount);
    for(uint32_t i = 0; i < memProps.memoryTypeCount; ++i)
    {
        fprintf(m_File, "PhysicalDeviceMemory,Type,%u,heapIndex,%u\n", i, memProps.memoryTypes[i].heapIndex);
        fprintf(m_File, "PhysicalDeviceMemory,Type,%u,propertyFlags,%u\n", i, memProps.memoryTypes[i].propertyFlags);
    }

    fprintf(m_File, "Extension,VK_KHR_dedicated_allocation,%u\n", dedicatedAllocationExtensionEnabled ? 1 : 0);
    fprintf(m_File, "Extension,VK_KHR_bind_memory2,%u\n", bindMemory2ExtensionEnabled ? 1 : 0);
    fprintf(m_File, "Extension,VK_EXT_memory_budget,%u\n", memoryBudgetExtensionEnabled ? 1 : 0);
    fprintf(m_File, "Extension,VK_AMD_device_coherent_memory,%u\n", deviceCoherentMemoryExtensionEnabled ? 1 : 0);

    fprintf(m_File, "Macro,VMA_DEBUG_ALWAYS_DEDICATED_MEMORY,%u\n", VMA_DEBUG_ALWAYS_DEDICATED_MEMORY ? 1 : 0);
    fprintf(m_File, "Macro,VMA_MIN_ALIGNMENT,%llu\n", (VkDeviceSize)VMA_MIN_ALIGNMENT);
    fprintf(m_File, "Macro,VMA_DEBUG_MARGIN,%llu\n", (VkDeviceSize)VMA_DEBUG_MARGIN);
    fprintf(m_File, "Macro,VMA_DEBUG_INITIALIZE_ALLOCATIONS,%u\n", VMA_DEBUG_INITIALIZE_ALLOCATIONS ? 1 : 0);
    fprintf(m_File, "Macro,VMA_DEBUG_DETECT_CORRUPTION,%u\n", VMA_DEBUG_DETECT_CORRUPTION ? 1 : 0);
    fprintf(m_File, "Macro,VMA_DEBUG_GLOBAL_MUTEX,%u\n", VMA_DEBUG_GLOBAL_MUTEX ? 1 : 0);
    fprintf(m_File, "Macro,VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY,%llu\n", (VkDeviceSize)VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY);
    fprintf(m_File, "Macro,VMA_SMALL_HEAP_MAX_SIZE,%llu\n", (VkDeviceSize)VMA_SMALL_HEAP_MAX_SIZE);
    fprintf(m_File, "Macro,VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE,%llu\n", (VkDeviceSize)VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE);

    fprintf(m_File, "Config,End\n");
}

void VmaRecorder::GetBasicParams(CallParams& outParams)
{
#if defined(_WIN32)
    outParams.threadId = GetCurrentThreadId();
#else
    // Use C++11 features to get the thread id and convert it to uint32_t.
    // There is room for optimization since stringstream is quite slow.
    // Is there a better way to convert std::this_thread::get_id() to uint32_t?
    std::thread::id thread_id = std::this_thread::get_id();
    std::stringstream thread_id_to_string_converter;
    thread_id_to_string_converter << thread_id;
    std::string thread_id_as_string = thread_id_to_string_converter.str();
    outParams.threadId = static_cast<uint32_t>(std::stoi(thread_id_as_string.c_str()));
#endif

    auto current_time = std::chrono::high_resolution_clock::now();

    outParams.time = std::chrono::duration<double, std::chrono::seconds::period>(current_time - m_RecordingStartTime).count();
}

void VmaRecorder::PrintPointerList(uint64_t count, const VmaAllocation* pItems)
{
    if(count)
    {
        fprintf(m_File, "%p", pItems[0]);
        for(uint64_t i = 1; i < count; ++i)
        {
            fprintf(m_File, " %p", pItems[i]);
        }
    }
}

void VmaRecorder::Flush()
{
    if((m_Flags & VMA_RECORD_FLUSH_AFTER_CALL_BIT) != 0)
    {
        fflush(m_File);
    }
}

#endif // #if VMA_RECORDING_ENABLED

////////////////////////////////////////////////////////////////////////////////
// VmaAllocationObjectAllocator

VmaAllocationObjectAllocator::VmaAllocationObjectAllocator(const VkAllocationCallbacks* pAllocationCallbacks) :
    m_Allocator(pAllocationCallbacks, 1024)
{
}

template<typename... Types> VmaAllocation VmaAllocationObjectAllocator::Allocate(Types... args)
{
    VmaMutexLock mutexLock(m_Mutex);
    return m_Allocator.Alloc<Types...>(std::forward<Types>(args)...);
}

void VmaAllocationObjectAllocator::Free(VmaAllocation hAlloc)
{
    VmaMutexLock mutexLock(m_Mutex);
    m_Allocator.Free(hAlloc);
}

////////////////////////////////////////////////////////////////////////////////
// VmaAllocator_T

VmaAllocator_T::VmaAllocator_T(const VmaAllocatorCreateInfo* pCreateInfo) :
    m_UseMutex((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT) == 0),
    m_VulkanApiVersion(pCreateInfo->vulkanApiVersion != 0 ? pCreateInfo->vulkanApiVersion : VK_API_VERSION_1_0),
    m_UseKhrDedicatedAllocation((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT) != 0),
    m_UseKhrBindMemory2((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT) != 0),
    m_UseExtMemoryBudget((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT) != 0),
    m_UseAmdDeviceCoherentMemory((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_AMD_DEVICE_COHERENT_MEMORY_BIT) != 0),
    m_UseKhrBufferDeviceAddress((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT) != 0),
    m_UseExtMemoryPriority((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT) != 0),
    m_hDevice(pCreateInfo->device),
    m_hInstance(pCreateInfo->instance),
    m_AllocationCallbacksSpecified(pCreateInfo->pAllocationCallbacks != VMA_NULL),
    m_AllocationCallbacks(pCreateInfo->pAllocationCallbacks ?
        *pCreateInfo->pAllocationCallbacks : VmaEmptyAllocationCallbacks),
    m_AllocationObjectAllocator(&m_AllocationCallbacks),
    m_HeapSizeLimitMask(0),
    m_DeviceMemoryCount(0),
    m_PreferredLargeHeapBlockSize(0),
    m_PhysicalDevice(pCreateInfo->physicalDevice),
    m_CurrentFrameIndex(0),
    m_GpuDefragmentationMemoryTypeBits(UINT32_MAX),
    m_NextPoolId(0),
    m_GlobalMemoryTypeBits(UINT32_MAX)
#if VMA_RECORDING_ENABLED
    ,m_pRecorder(VMA_NULL)
#endif
{
    if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
    {
        m_UseKhrDedicatedAllocation = false;
        m_UseKhrBindMemory2 = false;
    }

    if(VMA_DEBUG_DETECT_CORRUPTION)
    {
        // Needs to be a multiple of sizeof(uint32_t) because we are going to
        // write VMA_CORRUPTION_DETECTION_MAGIC_VALUE to it.
        VMA_ASSERT(VMA_DEBUG_MARGIN % sizeof(uint32_t) == 0);
    }

    VMA_ASSERT(pCreateInfo->physicalDevice && pCreateInfo->device && pCreateInfo->instance);

    if(m_VulkanApiVersion < VK_MAKE_VERSION(1, 1, 0))
    {
#if !(VMA_DEDICATED_ALLOCATION)
        if((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT) != 0)
        {
            VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT set but required extensions are disabled by preprocessor macros.");
        }
#endif
#if !(VMA_BIND_MEMORY2)
        if((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT) != 0)
        {
            VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT set but required extension is disabled by preprocessor macros.");
        }
#endif
    }
#if !(VMA_MEMORY_BUDGET)
    if(m_UseExtMemoryBudget)
    {
        VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT set but required extension is disabled by preprocessor macros.");
    }
#endif
#if !(VMA_BUFFER_DEVICE_ADDRESS)
    if(m_UseKhrBufferDeviceAddress)
    {
        VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT is set but required extension or Vulkan 1.2 is not available in your Vulkan header or its support in VMA has been disabled by a preprocessor macro.");
    }
#endif
#if VMA_VULKAN_VERSION < 1002000
    if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 2, 0))
    {
        VMA_ASSERT(0 && "vulkanApiVersion >= VK_API_VERSION_1_2 but required Vulkan version is disabled by preprocessor macros.");
    }
#endif
#if VMA_VULKAN_VERSION < 1001000
    if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
    {
        VMA_ASSERT(0 && "vulkanApiVersion >= VK_API_VERSION_1_1 but required Vulkan version is disabled by preprocessor macros.");
    }
#endif
#if !(VMA_MEMORY_PRIORITY)
    if(m_UseExtMemoryPriority)
    {
        VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT is set but required extension is not available in your Vulkan header or its support in VMA has been disabled by a preprocessor macro.");
    }
#endif

    memset(&m_DeviceMemoryCallbacks, 0 , sizeof(m_DeviceMemoryCallbacks));
    memset(&m_PhysicalDeviceProperties, 0, sizeof(m_PhysicalDeviceProperties));
    memset(&m_MemProps, 0, sizeof(m_MemProps));

    memset(&m_pBlockVectors, 0, sizeof(m_pBlockVectors));
    memset(&m_VulkanFunctions, 0, sizeof(m_VulkanFunctions));

#if VMA_EXTERNAL_MEMORY
    memset(&m_TypeExternalMemoryHandleTypes, 0, sizeof(m_TypeExternalMemoryHandleTypes));
#endif // #if VMA_EXTERNAL_MEMORY

    if(pCreateInfo->pDeviceMemoryCallbacks != VMA_NULL)
    {
        m_DeviceMemoryCallbacks.pUserData = pCreateInfo->pDeviceMemoryCallbacks->pUserData;
        m_DeviceMemoryCallbacks.pfnAllocate = pCreateInfo->pDeviceMemoryCallbacks->pfnAllocate;
        m_DeviceMemoryCallbacks.pfnFree = pCreateInfo->pDeviceMemoryCallbacks->pfnFree;
    }

    ImportVulkanFunctions(pCreateInfo->pVulkanFunctions);

    (*m_VulkanFunctions.vkGetPhysicalDeviceProperties)(m_PhysicalDevice, &m_PhysicalDeviceProperties);
    (*m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties)(m_PhysicalDevice, &m_MemProps);

    VMA_ASSERT(VmaIsPow2(VMA_MIN_ALIGNMENT));
    VMA_ASSERT(VmaIsPow2(VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY));
    VMA_ASSERT(VmaIsPow2(m_PhysicalDeviceProperties.limits.bufferImageGranularity));
    VMA_ASSERT(VmaIsPow2(m_PhysicalDeviceProperties.limits.nonCoherentAtomSize));

    m_PreferredLargeHeapBlockSize = (pCreateInfo->preferredLargeHeapBlockSize != 0) ?
        pCreateInfo->preferredLargeHeapBlockSize : static_cast<VkDeviceSize>(VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE);

    m_GlobalMemoryTypeBits = CalculateGlobalMemoryTypeBits();

#if VMA_EXTERNAL_MEMORY
    if(pCreateInfo->pTypeExternalMemoryHandleTypes != VMA_NULL)
    {
        memcpy(m_TypeExternalMemoryHandleTypes, pCreateInfo->pTypeExternalMemoryHandleTypes,
            sizeof(VkExternalMemoryHandleTypeFlagsKHR) * GetMemoryTypeCount());
    }
#endif // #if VMA_EXTERNAL_MEMORY

    if(pCreateInfo->pHeapSizeLimit != VMA_NULL)
    {
        for(uint32_t heapIndex = 0; heapIndex < GetMemoryHeapCount(); ++heapIndex)
        {
            const VkDeviceSize limit = pCreateInfo->pHeapSizeLimit[heapIndex];
            if(limit != VK_WHOLE_SIZE)
            {
                m_HeapSizeLimitMask |= 1u << heapIndex;
                if(limit < m_MemProps.memoryHeaps[heapIndex].size)
                {
                    m_MemProps.memoryHeaps[heapIndex].size = limit;
                }
            }
        }
    }

    for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    {
        const VkDeviceSize preferredBlockSize = CalcPreferredBlockSize(memTypeIndex);

        m_pBlockVectors[memTypeIndex] = vma_new(this, VmaBlockVector)(
            this,
            VK_NULL_HANDLE, // hParentPool
            memTypeIndex,
            preferredBlockSize,
            0, // minBlockCount
            SIZE_MAX, // maxBlockCount
            GetBufferImageGranularity(),
            pCreateInfo->frameInUseCount,
            false, // explicitBlockSize
            false, // linearAlgorithm
            0.5f, // priority (0.5 is the default per Vulkan spec)
            GetMemoryTypeMinAlignment(memTypeIndex), // minAllocationAlignment
            VMA_NULL); // pMemoryAllocateNext
        // No need to call m_pBlockVectors[memTypeIndex]->CreateMinBlocks here,
        // because minBlockCount is 0.
    }
}

VkResult VmaAllocator_T::Init(const VmaAllocatorCreateInfo* pCreateInfo)
{
+16240 VkResult res = VK_SUCCESS;
+
+
+
+
+16245 #if VMA_RECORDING_ENABLED
+16246 m_pRecorder = vma_new(
this, VmaRecorder)();
+
+16248 if(res != VK_SUCCESS)
+
+
+
+16252 m_pRecorder->WriteConfiguration(
+16253 m_PhysicalDeviceProperties,
+
+16255 m_VulkanApiVersion,
+16256 m_UseKhrDedicatedAllocation,
+16257 m_UseKhrBindMemory2,
+16258 m_UseExtMemoryBudget,
+16259 m_UseAmdDeviceCoherentMemory);
+16260 m_pRecorder->RecordCreateAllocator(GetCurrentFrameIndex());
+
+16262 VMA_ASSERT(0 &&
"VmaAllocatorCreateInfo::pRecordSettings used, but not supported due to VMA_RECORDING_ENABLED not defined to 1.");
+16263 return VK_ERROR_FEATURE_NOT_PRESENT;
+
+
+
+16267 #if VMA_MEMORY_BUDGET
+16268 if(m_UseExtMemoryBudget)
+
+16270 UpdateVulkanBudget();
+
+
+
+
+
+
VmaAllocator_T::~VmaAllocator_T()
{
#if VMA_RECORDING_ENABLED
    if(m_pRecorder != VMA_NULL)
    {
        m_pRecorder->RecordDestroyAllocator(GetCurrentFrameIndex());
        vma_delete(this, m_pRecorder);
    }
#endif

    VMA_ASSERT(m_Pools.IsEmpty());

    for(size_t memTypeIndex = GetMemoryTypeCount(); memTypeIndex--; )
    {
        if(!m_DedicatedAllocations[memTypeIndex].IsEmpty())
        {
            VMA_ASSERT(0 && "Unfreed dedicated allocations found.");
        }

        vma_delete(this, m_pBlockVectors[memTypeIndex]);
    }
}
void VmaAllocator_T::ImportVulkanFunctions(const VmaVulkanFunctions* pVulkanFunctions)
{
#if VMA_STATIC_VULKAN_FUNCTIONS == 1
    ImportVulkanFunctions_Static();
#endif

    if(pVulkanFunctions != VMA_NULL)
    {
        ImportVulkanFunctions_Custom(pVulkanFunctions);
    }

#if VMA_DYNAMIC_VULKAN_FUNCTIONS == 1
    ImportVulkanFunctions_Dynamic();
#endif

    ValidateVulkanFunctions();
}
#if VMA_STATIC_VULKAN_FUNCTIONS == 1

void VmaAllocator_T::ImportVulkanFunctions_Static()
{
    // Vulkan 1.0
    m_VulkanFunctions.vkGetPhysicalDeviceProperties = (PFN_vkGetPhysicalDeviceProperties)vkGetPhysicalDeviceProperties;
    m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties = (PFN_vkGetPhysicalDeviceMemoryProperties)vkGetPhysicalDeviceMemoryProperties;
    m_VulkanFunctions.vkAllocateMemory = (PFN_vkAllocateMemory)vkAllocateMemory;
    m_VulkanFunctions.vkFreeMemory = (PFN_vkFreeMemory)vkFreeMemory;
    m_VulkanFunctions.vkMapMemory = (PFN_vkMapMemory)vkMapMemory;
    m_VulkanFunctions.vkUnmapMemory = (PFN_vkUnmapMemory)vkUnmapMemory;
    m_VulkanFunctions.vkFlushMappedMemoryRanges = (PFN_vkFlushMappedMemoryRanges)vkFlushMappedMemoryRanges;
    m_VulkanFunctions.vkInvalidateMappedMemoryRanges = (PFN_vkInvalidateMappedMemoryRanges)vkInvalidateMappedMemoryRanges;
    m_VulkanFunctions.vkBindBufferMemory = (PFN_vkBindBufferMemory)vkBindBufferMemory;
    m_VulkanFunctions.vkBindImageMemory = (PFN_vkBindImageMemory)vkBindImageMemory;
    m_VulkanFunctions.vkGetBufferMemoryRequirements = (PFN_vkGetBufferMemoryRequirements)vkGetBufferMemoryRequirements;
    m_VulkanFunctions.vkGetImageMemoryRequirements = (PFN_vkGetImageMemoryRequirements)vkGetImageMemoryRequirements;
    m_VulkanFunctions.vkCreateBuffer = (PFN_vkCreateBuffer)vkCreateBuffer;
    m_VulkanFunctions.vkDestroyBuffer = (PFN_vkDestroyBuffer)vkDestroyBuffer;
    m_VulkanFunctions.vkCreateImage = (PFN_vkCreateImage)vkCreateImage;
    m_VulkanFunctions.vkDestroyImage = (PFN_vkDestroyImage)vkDestroyImage;
    m_VulkanFunctions.vkCmdCopyBuffer = (PFN_vkCmdCopyBuffer)vkCmdCopyBuffer;

    // Vulkan 1.1
#if VMA_VULKAN_VERSION >= 1001000
    if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
    {
        m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR = (PFN_vkGetBufferMemoryRequirements2)vkGetBufferMemoryRequirements2;
        m_VulkanFunctions.vkGetImageMemoryRequirements2KHR = (PFN_vkGetImageMemoryRequirements2)vkGetImageMemoryRequirements2;
        m_VulkanFunctions.vkBindBufferMemory2KHR = (PFN_vkBindBufferMemory2)vkBindBufferMemory2;
        m_VulkanFunctions.vkBindImageMemory2KHR = (PFN_vkBindImageMemory2)vkBindImageMemory2;
        m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties2KHR = (PFN_vkGetPhysicalDeviceMemoryProperties2)vkGetPhysicalDeviceMemoryProperties2;
    }
#endif
}

#endif // #if VMA_STATIC_VULKAN_FUNCTIONS == 1
void VmaAllocator_T::ImportVulkanFunctions_Custom(const VmaVulkanFunctions* pVulkanFunctions)
{
    VMA_ASSERT(pVulkanFunctions != VMA_NULL);

#define VMA_COPY_IF_NOT_NULL(funcName) \
    if(pVulkanFunctions->funcName != VMA_NULL) m_VulkanFunctions.funcName = pVulkanFunctions->funcName;

    VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceProperties);
    VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceMemoryProperties);
    VMA_COPY_IF_NOT_NULL(vkAllocateMemory);
    VMA_COPY_IF_NOT_NULL(vkFreeMemory);
    VMA_COPY_IF_NOT_NULL(vkMapMemory);
    VMA_COPY_IF_NOT_NULL(vkUnmapMemory);
    VMA_COPY_IF_NOT_NULL(vkFlushMappedMemoryRanges);
    VMA_COPY_IF_NOT_NULL(vkInvalidateMappedMemoryRanges);
    VMA_COPY_IF_NOT_NULL(vkBindBufferMemory);
    VMA_COPY_IF_NOT_NULL(vkBindImageMemory);
    VMA_COPY_IF_NOT_NULL(vkGetBufferMemoryRequirements);
    VMA_COPY_IF_NOT_NULL(vkGetImageMemoryRequirements);
    VMA_COPY_IF_NOT_NULL(vkCreateBuffer);
    VMA_COPY_IF_NOT_NULL(vkDestroyBuffer);
    VMA_COPY_IF_NOT_NULL(vkCreateImage);
    VMA_COPY_IF_NOT_NULL(vkDestroyImage);
    VMA_COPY_IF_NOT_NULL(vkCmdCopyBuffer);

#if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
    VMA_COPY_IF_NOT_NULL(vkGetBufferMemoryRequirements2KHR);
    VMA_COPY_IF_NOT_NULL(vkGetImageMemoryRequirements2KHR);
#endif

#if VMA_BIND_MEMORY2 || VMA_VULKAN_VERSION >= 1001000
    VMA_COPY_IF_NOT_NULL(vkBindBufferMemory2KHR);
    VMA_COPY_IF_NOT_NULL(vkBindImageMemory2KHR);
#endif

#if VMA_MEMORY_BUDGET
    VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceMemoryProperties2KHR);
#endif

#undef VMA_COPY_IF_NOT_NULL
}
#if VMA_DYNAMIC_VULKAN_FUNCTIONS == 1

void VmaAllocator_T::ImportVulkanFunctions_Dynamic()
{
#define VMA_FETCH_INSTANCE_FUNC(memberName, functionPointerType, functionNameString) \
    if(m_VulkanFunctions.memberName == VMA_NULL) \
        m_VulkanFunctions.memberName = \
            (functionPointerType)vkGetInstanceProcAddr(m_hInstance, functionNameString);
#define VMA_FETCH_DEVICE_FUNC(memberName, functionPointerType, functionNameString) \
    if(m_VulkanFunctions.memberName == VMA_NULL) \
        m_VulkanFunctions.memberName = \
            (functionPointerType)vkGetDeviceProcAddr(m_hDevice, functionNameString);

    VMA_FETCH_INSTANCE_FUNC(vkGetPhysicalDeviceProperties, PFN_vkGetPhysicalDeviceProperties, "vkGetPhysicalDeviceProperties");
    VMA_FETCH_INSTANCE_FUNC(vkGetPhysicalDeviceMemoryProperties, PFN_vkGetPhysicalDeviceMemoryProperties, "vkGetPhysicalDeviceMemoryProperties");
    VMA_FETCH_DEVICE_FUNC(vkAllocateMemory, PFN_vkAllocateMemory, "vkAllocateMemory");
    VMA_FETCH_DEVICE_FUNC(vkFreeMemory, PFN_vkFreeMemory, "vkFreeMemory");
    VMA_FETCH_DEVICE_FUNC(vkMapMemory, PFN_vkMapMemory, "vkMapMemory");
    VMA_FETCH_DEVICE_FUNC(vkUnmapMemory, PFN_vkUnmapMemory, "vkUnmapMemory");
    VMA_FETCH_DEVICE_FUNC(vkFlushMappedMemoryRanges, PFN_vkFlushMappedMemoryRanges, "vkFlushMappedMemoryRanges");
    VMA_FETCH_DEVICE_FUNC(vkInvalidateMappedMemoryRanges, PFN_vkInvalidateMappedMemoryRanges, "vkInvalidateMappedMemoryRanges");
    VMA_FETCH_DEVICE_FUNC(vkBindBufferMemory, PFN_vkBindBufferMemory, "vkBindBufferMemory");
    VMA_FETCH_DEVICE_FUNC(vkBindImageMemory, PFN_vkBindImageMemory, "vkBindImageMemory");
    VMA_FETCH_DEVICE_FUNC(vkGetBufferMemoryRequirements, PFN_vkGetBufferMemoryRequirements, "vkGetBufferMemoryRequirements");
    VMA_FETCH_DEVICE_FUNC(vkGetImageMemoryRequirements, PFN_vkGetImageMemoryRequirements, "vkGetImageMemoryRequirements");
    VMA_FETCH_DEVICE_FUNC(vkCreateBuffer, PFN_vkCreateBuffer, "vkCreateBuffer");
    VMA_FETCH_DEVICE_FUNC(vkDestroyBuffer, PFN_vkDestroyBuffer, "vkDestroyBuffer");
    VMA_FETCH_DEVICE_FUNC(vkCreateImage, PFN_vkCreateImage, "vkCreateImage");
    VMA_FETCH_DEVICE_FUNC(vkDestroyImage, PFN_vkDestroyImage, "vkDestroyImage");
    VMA_FETCH_DEVICE_FUNC(vkCmdCopyBuffer, PFN_vkCmdCopyBuffer, "vkCmdCopyBuffer");

#if VMA_VULKAN_VERSION >= 1001000
    if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
    {
        VMA_FETCH_DEVICE_FUNC(vkGetBufferMemoryRequirements2KHR, PFN_vkGetBufferMemoryRequirements2, "vkGetBufferMemoryRequirements2");
        VMA_FETCH_DEVICE_FUNC(vkGetImageMemoryRequirements2KHR, PFN_vkGetImageMemoryRequirements2, "vkGetImageMemoryRequirements2");
        VMA_FETCH_DEVICE_FUNC(vkBindBufferMemory2KHR, PFN_vkBindBufferMemory2, "vkBindBufferMemory2");
        VMA_FETCH_DEVICE_FUNC(vkBindImageMemory2KHR, PFN_vkBindImageMemory2, "vkBindImageMemory2");
        VMA_FETCH_INSTANCE_FUNC(vkGetPhysicalDeviceMemoryProperties2KHR, PFN_vkGetPhysicalDeviceMemoryProperties2, "vkGetPhysicalDeviceMemoryProperties2");
    }
#endif

#if VMA_DEDICATED_ALLOCATION
    if(m_UseKhrDedicatedAllocation)
    {
        VMA_FETCH_DEVICE_FUNC(vkGetBufferMemoryRequirements2KHR, PFN_vkGetBufferMemoryRequirements2KHR, "vkGetBufferMemoryRequirements2KHR");
        VMA_FETCH_DEVICE_FUNC(vkGetImageMemoryRequirements2KHR, PFN_vkGetImageMemoryRequirements2KHR, "vkGetImageMemoryRequirements2KHR");
    }
#endif

#if VMA_BIND_MEMORY2
    if(m_UseKhrBindMemory2)
    {
        VMA_FETCH_DEVICE_FUNC(vkBindBufferMemory2KHR, PFN_vkBindBufferMemory2KHR, "vkBindBufferMemory2KHR");
        VMA_FETCH_DEVICE_FUNC(vkBindImageMemory2KHR, PFN_vkBindImageMemory2KHR, "vkBindImageMemory2KHR");
    }
#endif // #if VMA_BIND_MEMORY2

#if VMA_MEMORY_BUDGET
    if(m_UseExtMemoryBudget)
    {
        VMA_FETCH_INSTANCE_FUNC(vkGetPhysicalDeviceMemoryProperties2KHR, PFN_vkGetPhysicalDeviceMemoryProperties2KHR, "vkGetPhysicalDeviceMemoryProperties2KHR");
    }
#endif // #if VMA_MEMORY_BUDGET

#undef VMA_FETCH_DEVICE_FUNC
#undef VMA_FETCH_INSTANCE_FUNC
}

#endif // #if VMA_DYNAMIC_VULKAN_FUNCTIONS == 1
void VmaAllocator_T::ValidateVulkanFunctions()
{
    VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceProperties != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkAllocateMemory != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkFreeMemory != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkMapMemory != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkUnmapMemory != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkFlushMappedMemoryRanges != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkInvalidateMappedMemoryRanges != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkBindBufferMemory != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkBindImageMemory != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkGetBufferMemoryRequirements != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkGetImageMemoryRequirements != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkCreateBuffer != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkDestroyBuffer != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkCreateImage != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkDestroyImage != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkCmdCopyBuffer != VMA_NULL);

#if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
    if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0) || m_UseKhrDedicatedAllocation)
    {
        VMA_ASSERT(m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR != VMA_NULL);
        VMA_ASSERT(m_VulkanFunctions.vkGetImageMemoryRequirements2KHR != VMA_NULL);
    }
#endif

#if VMA_BIND_MEMORY2 || VMA_VULKAN_VERSION >= 1001000
    if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0) || m_UseKhrBindMemory2)
    {
        VMA_ASSERT(m_VulkanFunctions.vkBindBufferMemory2KHR != VMA_NULL);
        VMA_ASSERT(m_VulkanFunctions.vkBindImageMemory2KHR != VMA_NULL);
    }
#endif

#if VMA_MEMORY_BUDGET || VMA_VULKAN_VERSION >= 1001000
    if(m_UseExtMemoryBudget || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
    {
        VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties2KHR != VMA_NULL);
    }
#endif
}
VkDeviceSize VmaAllocator_T::CalcPreferredBlockSize(uint32_t memTypeIndex)
{
    const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(memTypeIndex);
    const VkDeviceSize heapSize = m_MemProps.memoryHeaps[heapIndex].size;
    const bool isSmallHeap = heapSize <= VMA_SMALL_HEAP_MAX_SIZE;
    return VmaAlignUp(isSmallHeap ? (heapSize / 8) : m_PreferredLargeHeapBlockSize, (VkDeviceSize)32);
}
VkResult VmaAllocator_T::AllocateMemoryOfType(
    VkDeviceSize size,
    VkDeviceSize alignment,
    bool dedicatedAllocation,
    VkBuffer dedicatedBuffer,
    VkBufferUsageFlags dedicatedBufferUsage,
    VkImage dedicatedImage,
    const VmaAllocationCreateInfo& createInfo,
    uint32_t memTypeIndex,
    VmaSuballocationType suballocType,
    size_t allocationCount,
    VmaAllocation* pAllocations)
{
    VMA_ASSERT(pAllocations != VMA_NULL);
    VMA_DEBUG_LOG("  AllocateMemory: MemoryTypeIndex=%u, AllocationCount=%zu, Size=%llu", memTypeIndex, allocationCount, size);

    VmaAllocationCreateInfo finalCreateInfo = createInfo;

    // If memory type is not HOST_VISIBLE, disable MAPPED.
    if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0 &&
        (m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
    {
        finalCreateInfo.flags &= ~VMA_ALLOCATION_CREATE_MAPPED_BIT;
    }
    // If memory is lazily allocated, it should be always dedicated.
    if(finalCreateInfo.usage == VMA_MEMORY_USAGE_GPU_LAZILY_ALLOCATED)
    {
        finalCreateInfo.flags |= VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
    }

    VmaBlockVector* const blockVector = m_pBlockVectors[memTypeIndex];
    VMA_ASSERT(blockVector);

    const VkDeviceSize preferredBlockSize = blockVector->GetPreferredBlockSize();
    bool preferDedicatedMemory =
        VMA_DEBUG_ALWAYS_DEDICATED_MEMORY ||
        dedicatedAllocation ||
        // Heuristics: Allocate dedicated memory if requested size is greater than half of preferred block size.
        size > preferredBlockSize / 2;

    if(preferDedicatedMemory &&
        (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) == 0 &&
        finalCreateInfo.pool == VK_NULL_HANDLE)
    {
        finalCreateInfo.flags |= VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
    }

    if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0)
    {
        if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
        {
            return VK_ERROR_OUT_OF_DEVICE_MEMORY;
        }
        else
        {
            return AllocateDedicatedMemory(
                size,
                suballocType,
                memTypeIndex,
                (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT) != 0,
                (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0,
                (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0,
                finalCreateInfo.pUserData,
                finalCreateInfo.priority,
                dedicatedBuffer,
                dedicatedBufferUsage,
                dedicatedImage,
                allocationCount,
                pAllocations);
        }
    }
    else
    {
        VkResult res = blockVector->Allocate(
            m_CurrentFrameIndex.load(),
            size,
            alignment,
            finalCreateInfo,
            suballocType,
            allocationCount,
            pAllocations);
        if(res == VK_SUCCESS)
        {
            return res;
        }

        // Block allocation failed. Try dedicated memory, unless forbidden.
        if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
        {
            return VK_ERROR_OUT_OF_DEVICE_MEMORY;
        }

        // Protection against creating each allocation as dedicated, which could
        // quickly deplete maxMemoryAllocationCount: don't try dedicated allocations
        // when above 3/4 of the maximum allocation count.
        if(m_DeviceMemoryCount.load() > m_PhysicalDeviceProperties.limits.maxMemoryAllocationCount * 3 / 4)
        {
            return VK_ERROR_OUT_OF_DEVICE_MEMORY;
        }

        res = AllocateDedicatedMemory(
            size,
            suballocType,
            memTypeIndex,
            (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT) != 0,
            (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0,
            (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0,
            finalCreateInfo.pUserData,
            finalCreateInfo.priority,
            dedicatedBuffer,
            dedicatedBufferUsage,
            dedicatedImage,
            allocationCount,
            pAllocations);
        if(res == VK_SUCCESS)
        {
            // Succeeded: AllocateDedicatedMemory already filled pAllocations, nothing more to do here.
            VMA_DEBUG_LOG("    Allocated as DedicatedMemory");
            return VK_SUCCESS;
        }
        else
        {
            // Everything failed: return error code.
            VMA_DEBUG_LOG("    vkAllocateMemory FAILED");
            return res;
        }
    }
}
VkResult VmaAllocator_T::AllocateDedicatedMemory(
    VkDeviceSize size,
    VmaSuballocationType suballocType,
    uint32_t memTypeIndex,
    bool withinBudget,
    bool map,
    bool isUserDataString,
    void* pUserData,
    float priority,
    VkBuffer dedicatedBuffer,
    VkBufferUsageFlags dedicatedBufferUsage,
    VkImage dedicatedImage,
    size_t allocationCount,
    VmaAllocation* pAllocations)
{
    VMA_ASSERT(allocationCount > 0 && pAllocations);

    if(withinBudget)
    {
        const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(memTypeIndex);
        VmaBudget heapBudget = {};
        GetBudget(&heapBudget, heapIndex, 1);
        if(heapBudget.usage + size * allocationCount > heapBudget.budget)
        {
            return VK_ERROR_OUT_OF_DEVICE_MEMORY;
        }
    }

    VkMemoryAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
    allocInfo.memoryTypeIndex = memTypeIndex;
    allocInfo.allocationSize = size;

#if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
    VkMemoryDedicatedAllocateInfoKHR dedicatedAllocInfo = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_ALLOCATE_INFO_KHR };
    if(m_UseKhrDedicatedAllocation || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
    {
        if(dedicatedBuffer != VK_NULL_HANDLE)
        {
            VMA_ASSERT(dedicatedImage == VK_NULL_HANDLE);
            dedicatedAllocInfo.buffer = dedicatedBuffer;
            VmaPnextChainPushFront(&allocInfo, &dedicatedAllocInfo);
        }
        else if(dedicatedImage != VK_NULL_HANDLE)
        {
            dedicatedAllocInfo.image = dedicatedImage;
            VmaPnextChainPushFront(&allocInfo, &dedicatedAllocInfo);
        }
    }
#endif // #if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000

#if VMA_BUFFER_DEVICE_ADDRESS
    VkMemoryAllocateFlagsInfoKHR allocFlagsInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_FLAGS_INFO_KHR };
    if(m_UseKhrBufferDeviceAddress)
    {
        bool canContainBufferWithDeviceAddress = true;
        if(dedicatedBuffer != VK_NULL_HANDLE)
        {
            canContainBufferWithDeviceAddress = dedicatedBufferUsage == UINT32_MAX || // Usage flags unknown
                (dedicatedBufferUsage & VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT_EXT) != 0;
        }
        else if(dedicatedImage != VK_NULL_HANDLE)
        {
            canContainBufferWithDeviceAddress = false;
        }
        if(canContainBufferWithDeviceAddress)
        {
            allocFlagsInfo.flags = VK_MEMORY_ALLOCATE_DEVICE_ADDRESS_BIT_KHR;
            VmaPnextChainPushFront(&allocInfo, &allocFlagsInfo);
        }
    }
#endif // #if VMA_BUFFER_DEVICE_ADDRESS

#if VMA_MEMORY_PRIORITY
    VkMemoryPriorityAllocateInfoEXT priorityInfo = { VK_STRUCTURE_TYPE_MEMORY_PRIORITY_ALLOCATE_INFO_EXT };
    if(m_UseExtMemoryPriority)
    {
        priorityInfo.priority = priority;
        VmaPnextChainPushFront(&allocInfo, &priorityInfo);
    }
#endif // #if VMA_MEMORY_PRIORITY

#if VMA_EXTERNAL_MEMORY
    // Attach VkExportMemoryAllocateInfoKHR if necessary.
    VkExportMemoryAllocateInfoKHR exportMemoryAllocInfo = { VK_STRUCTURE_TYPE_EXPORT_MEMORY_ALLOCATE_INFO_KHR };
    exportMemoryAllocInfo.handleTypes = GetExternalMemoryHandleTypeFlags(memTypeIndex);
    if(exportMemoryAllocInfo.handleTypes != 0)
    {
        VmaPnextChainPushFront(&allocInfo, &exportMemoryAllocInfo);
    }
#endif // #if VMA_EXTERNAL_MEMORY

    size_t allocIndex;
    VkResult res = VK_SUCCESS;
    for(allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
    {
        res = AllocateDedicatedMemoryPage(
            size,
            suballocType,
            memTypeIndex,
            allocInfo,
            map,
            isUserDataString,
            pUserData,
            pAllocations + allocIndex);
        if(res != VK_SUCCESS)
        {
            break;
        }
    }

    if(res == VK_SUCCESS)
    {
        // Register them in m_DedicatedAllocations.
        {
            VmaMutexLockWrite lock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
            DedicatedAllocationLinkedList& dedicatedAllocations = m_DedicatedAllocations[memTypeIndex];
            for(allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
            {
                dedicatedAllocations.PushBack(pAllocations[allocIndex]);
            }
        }

        VMA_DEBUG_LOG("    Allocated DedicatedMemory Count=%zu, MemoryTypeIndex=#%u", allocationCount, memTypeIndex);
    }
    else
    {
        // Free all already created allocations.
        while(allocIndex--)
        {
            VmaAllocation currAlloc = pAllocations[allocIndex];
            VkDeviceMemory hMemory = currAlloc->GetMemory();
            // There is no need to call vkUnmapMemory here, because the Vulkan spec
            // allows skipping vkUnmapMemory before vkFreeMemory.
            FreeVulkanMemory(memTypeIndex, currAlloc->GetSize(), hMemory);
            m_Budget.RemoveAllocation(MemoryTypeIndexToHeapIndex(memTypeIndex), currAlloc->GetSize());
            currAlloc->SetUserData(this, VMA_NULL);
            m_AllocationObjectAllocator.Free(currAlloc);
        }

        memset(pAllocations, 0, sizeof(VmaAllocation) * allocationCount);
    }

    return res;
}
VkResult VmaAllocator_T::AllocateDedicatedMemoryPage(
    VkDeviceSize size,
    VmaSuballocationType suballocType,
    uint32_t memTypeIndex,
    const VkMemoryAllocateInfo& allocInfo,
    bool map,
    bool isUserDataString,
    void* pUserData,
    VmaAllocation* pAllocation)
{
    VkDeviceMemory hMemory = VK_NULL_HANDLE;
    VkResult res = AllocateVulkanMemory(&allocInfo, &hMemory);
    if(res < 0)
    {
        VMA_DEBUG_LOG("    vkAllocateMemory FAILED");
        return res;
    }

    void* pMappedData = VMA_NULL;
    if(map)
    {
        res = (*m_VulkanFunctions.vkMapMemory)(
            m_hDevice,
            hMemory,
            0,
            VK_WHOLE_SIZE,
            0,
            &pMappedData);
        if(res < 0)
        {
            VMA_DEBUG_LOG("    vkMapMemory FAILED");
            FreeVulkanMemory(memTypeIndex, size, hMemory);
            return res;
        }
    }

    *pAllocation = m_AllocationObjectAllocator.Allocate(m_CurrentFrameIndex.load(), isUserDataString);
    (*pAllocation)->InitDedicatedAllocation(memTypeIndex, hMemory, suballocType, pMappedData, size);
    (*pAllocation)->SetUserData(this, pUserData);
    m_Budget.AddAllocation(MemoryTypeIndexToHeapIndex(memTypeIndex), size);
    if(VMA_DEBUG_INITIALIZE_ALLOCATIONS)
    {
        FillAllocation(*pAllocation, VMA_ALLOCATION_FILL_PATTERN_CREATED);
    }

    return VK_SUCCESS;
}
void VmaAllocator_T::GetBufferMemoryRequirements(
    VkBuffer hBuffer,
    VkMemoryRequirements& memReq,
    bool& requiresDedicatedAllocation,
    bool& prefersDedicatedAllocation) const
{
#if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
    if(m_UseKhrDedicatedAllocation || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
    {
        VkBufferMemoryRequirementsInfo2KHR memReqInfo = { VK_STRUCTURE_TYPE_BUFFER_MEMORY_REQUIREMENTS_INFO_2_KHR };
        memReqInfo.buffer = hBuffer;

        VkMemoryDedicatedRequirementsKHR memDedicatedReq = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_REQUIREMENTS_KHR };

        VkMemoryRequirements2KHR memReq2 = { VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2_KHR };
        VmaPnextChainPushFront(&memReq2, &memDedicatedReq);

        (*m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR)(m_hDevice, &memReqInfo, &memReq2);

        memReq = memReq2.memoryRequirements;
        requiresDedicatedAllocation = (memDedicatedReq.requiresDedicatedAllocation != VK_FALSE);
        prefersDedicatedAllocation = (memDedicatedReq.prefersDedicatedAllocation != VK_FALSE);
    }
    else
#endif // #if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
    {
        (*m_VulkanFunctions.vkGetBufferMemoryRequirements)(m_hDevice, hBuffer, &memReq);
        requiresDedicatedAllocation = false;
        prefersDedicatedAllocation = false;
    }
}
void VmaAllocator_T::GetImageMemoryRequirements(
    VkImage hImage,
    VkMemoryRequirements& memReq,
    bool& requiresDedicatedAllocation,
    bool& prefersDedicatedAllocation) const
{
#if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
    if(m_UseKhrDedicatedAllocation || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
    {
        VkImageMemoryRequirementsInfo2KHR memReqInfo = { VK_STRUCTURE_TYPE_IMAGE_MEMORY_REQUIREMENTS_INFO_2_KHR };
        memReqInfo.image = hImage;

        VkMemoryDedicatedRequirementsKHR memDedicatedReq = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_REQUIREMENTS_KHR };

        VkMemoryRequirements2KHR memReq2 = { VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2_KHR };
        VmaPnextChainPushFront(&memReq2, &memDedicatedReq);

        (*m_VulkanFunctions.vkGetImageMemoryRequirements2KHR)(m_hDevice, &memReqInfo, &memReq2);

        memReq = memReq2.memoryRequirements;
        requiresDedicatedAllocation = (memDedicatedReq.requiresDedicatedAllocation != VK_FALSE);
        prefersDedicatedAllocation = (memDedicatedReq.prefersDedicatedAllocation != VK_FALSE);
    }
    else
#endif // #if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
    {
        (*m_VulkanFunctions.vkGetImageMemoryRequirements)(m_hDevice, hImage, &memReq);
        requiresDedicatedAllocation = false;
        prefersDedicatedAllocation = false;
    }
}
VkResult VmaAllocator_T::AllocateMemory(
    const VkMemoryRequirements& vkMemReq,
    bool requiresDedicatedAllocation,
    bool prefersDedicatedAllocation,
    VkBuffer dedicatedBuffer,
    VkBufferUsageFlags dedicatedBufferUsage,
    VkImage dedicatedImage,
    const VmaAllocationCreateInfo& createInfo,
    VmaSuballocationType suballocType,
    size_t allocationCount,
    VmaAllocation* pAllocations)
{
    memset(pAllocations, 0, sizeof(VmaAllocation) * allocationCount);

    VMA_ASSERT(VmaIsPow2(vkMemReq.alignment));

    if(vkMemReq.size == 0)
    {
        return VK_ERROR_VALIDATION_FAILED_EXT;
    }
    if((createInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0 &&
        (createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
    {
        VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT together with VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT makes no sense.");
        return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    }
    if((createInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0 &&
        (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0)
    {
        VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_MAPPED_BIT together with VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT is invalid.");
        return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    }
    if(requiresDedicatedAllocation)
    {
        if((createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
        {
            VMA_ASSERT(0 && "VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT specified while dedicated allocation is required.");
            return VK_ERROR_OUT_OF_DEVICE_MEMORY;
        }
        if(createInfo.pool != VK_NULL_HANDLE)
        {
            VMA_ASSERT(0 && "Pool specified while dedicated allocation is required.");
            return VK_ERROR_OUT_OF_DEVICE_MEMORY;
        }
    }
    if((createInfo.pool != VK_NULL_HANDLE) &&
        ((createInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0))
    {
        VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT when pool != null is invalid.");
        return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    }

    if(createInfo.pool != VK_NULL_HANDLE)
    {
        VmaAllocationCreateInfo createInfoForPool = createInfo;
        // If memory type is not HOST_VISIBLE, disable MAPPED.
        if((createInfoForPool.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0 &&
            (m_MemProps.memoryTypes[createInfo.pool->m_BlockVector.GetMemoryTypeIndex()].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
        {
            createInfoForPool.flags &= ~VMA_ALLOCATION_CREATE_MAPPED_BIT;
        }

        return createInfo.pool->m_BlockVector.Allocate(
            m_CurrentFrameIndex.load(),
            vkMemReq.size,
            vkMemReq.alignment,
            createInfoForPool,
            suballocType,
            allocationCount,
            pAllocations);
    }
    else
    {
        // Bit mask of memory Vulkan types acceptable for this allocation.
        uint32_t memoryTypeBits = vkMemReq.memoryTypeBits;
        uint32_t memTypeIndex = UINT32_MAX;
        VkResult res = vmaFindMemoryTypeIndex(this, memoryTypeBits, &createInfo, &memTypeIndex);
        if(res == VK_SUCCESS)
        {
            res = AllocateMemoryOfType(
                vkMemReq.size,
                vkMemReq.alignment,
                requiresDedicatedAllocation || prefersDedicatedAllocation,
                dedicatedBuffer,
                dedicatedBufferUsage,
                dedicatedImage,
                createInfo,
                memTypeIndex,
                suballocType,
                allocationCount,
                pAllocations);
            // Succeeded on first try.
            if(res == VK_SUCCESS)
            {
                return res;
            }
            // Allocation from this memory type failed. Try other compatible memory types.
            else
            {
                for(;;)
                {
                    // Remove old memTypeIndex from list of possibilities.
                    memoryTypeBits &= ~(1u << memTypeIndex);
                    // Find alternative memTypeIndex.
                    res = vmaFindMemoryTypeIndex(this, memoryTypeBits, &createInfo, &memTypeIndex);
                    if(res == VK_SUCCESS)
                    {
                        res = AllocateMemoryOfType(
                            vkMemReq.size,
                            vkMemReq.alignment,
                            requiresDedicatedAllocation || prefersDedicatedAllocation,
                            dedicatedBuffer,
                            dedicatedBufferUsage,
                            dedicatedImage,
                            createInfo,
                            memTypeIndex,
                            suballocType,
                            allocationCount,
                            pAllocations);
                        // Allocation from this alternative memory type succeeded.
                        if(res == VK_SUCCESS)
                        {
                            return res;
                        }
                        // else: Allocation from this memory type failed. Try next one - next loop iteration.
                    }
                    // No other matching memory type index could be found.
                    else
                    {
                        // Not returning res, which is VK_ERROR_FEATURE_NOT_PRESENT, because we already failed to allocate once.
                        return VK_ERROR_OUT_OF_DEVICE_MEMORY;
                    }
                }
            }
        }
        // Can't find any single memory type matching requirements. res is VK_ERROR_FEATURE_NOT_PRESENT.
        else
        {
            return res;
        }
    }
}
void VmaAllocator_T::FreeMemory(
    size_t allocationCount,
    const VmaAllocation* pAllocations)
{
    VMA_ASSERT(pAllocations);

    for(size_t allocIndex = allocationCount; allocIndex--; )
    {
        VmaAllocation allocation = pAllocations[allocIndex];

        if(allocation != VK_NULL_HANDLE)
        {
            if(TouchAllocation(allocation))
            {
                if(VMA_DEBUG_INITIALIZE_ALLOCATIONS)
                {
                    FillAllocation(allocation, VMA_ALLOCATION_FILL_PATTERN_DESTROYED);
                }

                switch(allocation->GetType())
                {
                case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
                    {
                        VmaBlockVector* pBlockVector = VMA_NULL;
                        VmaPool hPool = allocation->GetBlock()->GetParentPool();
                        if(hPool != VK_NULL_HANDLE)
                        {
                            pBlockVector = &hPool->m_BlockVector;
                        }
                        else
                        {
                            const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
                            pBlockVector = m_pBlockVectors[memTypeIndex];
                        }
                        pBlockVector->Free(allocation);
                    }
                    break;
                case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
                    FreeDedicatedMemory(allocation);
                    break;
                default:
                    VMA_ASSERT(0);
                }
            }

            // Do this regardless of whether the allocation is lost. Lost allocations still account to Budget.AllocationBytes.
            m_Budget.RemoveAllocation(MemoryTypeIndexToHeapIndex(allocation->GetMemoryTypeIndex()), allocation->GetSize());
            allocation->SetUserData(this, VMA_NULL);
            m_AllocationObjectAllocator.Free(allocation);
        }
    }
}
void VmaAllocator_T::CalculateStats(VmaStats* pStats)
{
    // Initialize.
    InitStatInfo(pStats->total);
    for(size_t i = 0; i < VK_MAX_MEMORY_TYPES; ++i)
        InitStatInfo(pStats->memoryType[i]);
    for(size_t i = 0; i < VK_MAX_MEMORY_HEAPS; ++i)
        InitStatInfo(pStats->memoryHeap[i]);

    // Process default pools.
    for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    {
        VmaBlockVector* const pBlockVector = m_pBlockVectors[memTypeIndex];
        VMA_ASSERT(pBlockVector);
        pBlockVector->AddStats(pStats);
    }

    // Process custom pools.
    {
        VmaMutexLockRead lock(m_PoolsMutex, m_UseMutex);
        for(VmaPool pool = m_Pools.Front(); pool != VMA_NULL; pool = m_Pools.GetNext(pool))
        {
            pool->m_BlockVector.AddStats(pStats);
        }
    }

    // Process dedicated allocations.
    for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    {
        const uint32_t memHeapIndex = MemoryTypeIndexToHeapIndex(memTypeIndex);
        VmaMutexLockRead dedicatedAllocationsLock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
        DedicatedAllocationLinkedList& dedicatedAllocList = m_DedicatedAllocations[memTypeIndex];
        for(VmaAllocation alloc = dedicatedAllocList.Front();
            alloc != VMA_NULL; alloc = dedicatedAllocList.GetNext(alloc))
        {
            VmaStatInfo allocationStatInfo;
            alloc->DedicatedAllocCalcStatsInfo(allocationStatInfo);
            VmaAddStatInfo(pStats->total, allocationStatInfo);
            VmaAddStatInfo(pStats->memoryType[memTypeIndex], allocationStatInfo);
            VmaAddStatInfo(pStats->memoryHeap[memHeapIndex], allocationStatInfo);
        }
    }

    // Postprocess.
    VmaPostprocessCalcStatInfo(pStats->total);
    for(size_t i = 0; i < GetMemoryTypeCount(); ++i)
        VmaPostprocessCalcStatInfo(pStats->memoryType[i]);
    for(size_t i = 0; i < GetMemoryHeapCount(); ++i)
        VmaPostprocessCalcStatInfo(pStats->memoryHeap[i]);
}
void VmaAllocator_T::GetBudget(VmaBudget* outBudget, uint32_t firstHeap, uint32_t heapCount)
{
#if VMA_MEMORY_BUDGET
    if(m_UseExtMemoryBudget)
    {
        if(m_Budget.m_OperationsSinceBudgetFetch < 30)
        {
            VmaMutexLockRead lockRead(m_Budget.m_BudgetMutex, m_UseMutex);
            for(uint32_t i = 0; i < heapCount; ++i, ++outBudget)
            {
                const uint32_t heapIndex = firstHeap + i;

                outBudget->blockBytes = m_Budget.m_BlockBytes[heapIndex];
                outBudget->allocationBytes = m_Budget.m_AllocationBytes[heapIndex];

                if(m_Budget.m_VulkanUsage[heapIndex] + outBudget->blockBytes > m_Budget.m_BlockBytesAtBudgetFetch[heapIndex])
                {
                    outBudget->usage = m_Budget.m_VulkanUsage[heapIndex] +
                        outBudget->blockBytes - m_Budget.m_BlockBytesAtBudgetFetch[heapIndex];
                }
                else
                {
                    outBudget->usage = 0;
                }

                // Have to take MIN with heap size because explicit HeapSizeLimit is included in it.
                outBudget->budget = VMA_MIN(
                    m_Budget.m_VulkanBudget[heapIndex], m_MemProps.memoryHeaps[heapIndex].size);
            }
        }
        else
        {
            UpdateVulkanBudget(); // Outside of mutex lock
            GetBudget(outBudget, firstHeap, heapCount); // Recursion
        }
    }
    else
#endif
    {
        for(uint32_t i = 0; i < heapCount; ++i, ++outBudget)
        {
            const uint32_t heapIndex = firstHeap + i;

            outBudget->blockBytes = m_Budget.m_BlockBytes[heapIndex];
            outBudget->allocationBytes = m_Budget.m_AllocationBytes[heapIndex];

            outBudget->usage = outBudget->blockBytes;
            outBudget->budget = m_MemProps.memoryHeaps[heapIndex].size * 8 / 10; // 80% heuristics.
        }
    }
}
static const uint32_t VMA_VENDOR_ID_AMD = 4098;

VkResult VmaAllocator_T::DefragmentationBegin(
    const VmaDefragmentationInfo2& info,
    VmaDefragmentationStats* pStats,
    VmaDefragmentationContext* pContext)
{
    if(info.pAllocationsChanged != VMA_NULL)
    {
        memset(info.pAllocationsChanged, 0, info.allocationCount * sizeof(VkBool32));
    }

    *pContext = vma_new(this, VmaDefragmentationContext_T)(
        this, m_CurrentFrameIndex.load(), info.flags, pStats);

    (*pContext)->AddPools(info.poolCount, info.pPools);
    (*pContext)->AddAllocations(
        info.allocationCount, info.pAllocations, info.pAllocationsChanged);

    VkResult res = (*pContext)->Defragment(
        info.maxCpuBytesToMove, info.maxCpuAllocationsToMove,
        info.maxGpuBytesToMove, info.maxGpuAllocationsToMove,
        info.commandBuffer, pStats, info.flags);

    if(res != VK_NOT_READY)
    {
        vma_delete(this, *pContext);
        *pContext = VMA_NULL;
    }

    return res;
}

VkResult VmaAllocator_T::DefragmentationEnd(
    VmaDefragmentationContext context)
{
    vma_delete(this, context);
    return VK_SUCCESS;
}

VkResult VmaAllocator_T::DefragmentationPassBegin(
    VmaDefragmentationPassInfo* pInfo,
    VmaDefragmentationContext context)
{
    return context->DefragmentPassBegin(pInfo);
}

VkResult VmaAllocator_T::DefragmentationPassEnd(
    VmaDefragmentationContext context)
{
    return context->DefragmentPassEnd();
}
void VmaAllocator_T::GetAllocationInfo(VmaAllocation hAllocation, VmaAllocationInfo* pAllocationInfo)
{
    if(hAllocation->CanBecomeLost())
    {
        /*
        Warning: This is a carefully designed algorithm.
        Do not modify unless you really know what you are doing :)
        */
        const uint32_t localCurrFrameIndex = m_CurrentFrameIndex.load();
        uint32_t localLastUseFrameIndex = hAllocation->GetLastUseFrameIndex();
        for(;;)
        {
            if(localLastUseFrameIndex == VMA_FRAME_INDEX_LOST)
            {
                pAllocationInfo->memoryType = hAllocation->GetMemoryTypeIndex();
                pAllocationInfo->deviceMemory = VK_NULL_HANDLE;
                pAllocationInfo->offset = 0;
                pAllocationInfo->size = hAllocation->GetSize();
                pAllocationInfo->pMappedData = VMA_NULL;
                pAllocationInfo->pUserData = hAllocation->GetUserData();
                break;
            }
            else if(localLastUseFrameIndex == localCurrFrameIndex)
            {
                pAllocationInfo->memoryType = hAllocation->GetMemoryTypeIndex();
                pAllocationInfo->deviceMemory = hAllocation->GetMemory();
                pAllocationInfo->offset = hAllocation->GetOffset();
                pAllocationInfo->size = hAllocation->GetSize();
                pAllocationInfo->pMappedData = VMA_NULL;
                pAllocationInfo->pUserData = hAllocation->GetUserData();
                break;
            }
            else // Last use time earlier than current time.
            {
                if(hAllocation->CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, localCurrFrameIndex))
                {
                    localLastUseFrameIndex = localCurrFrameIndex;
                }
            }
        }
    }
    else
    {
#if VMA_STATS_STRING_ENABLED
        uint32_t localCurrFrameIndex = m_CurrentFrameIndex.load();
        uint32_t localLastUseFrameIndex = hAllocation->GetLastUseFrameIndex();
        for(;;)
        {
            VMA_ASSERT(localLastUseFrameIndex != VMA_FRAME_INDEX_LOST);
            if(localLastUseFrameIndex == localCurrFrameIndex)
            {
                break;
            }
            else // Last use time earlier than current time.
            {
                if(hAllocation->CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, localCurrFrameIndex))
                {
                    localLastUseFrameIndex = localCurrFrameIndex;
                }
            }
        }
#endif

        pAllocationInfo->memoryType = hAllocation->GetMemoryTypeIndex();
        pAllocationInfo->deviceMemory = hAllocation->GetMemory();
        pAllocationInfo->offset = hAllocation->GetOffset();
        pAllocationInfo->size = hAllocation->GetSize();
        pAllocationInfo->pMappedData = hAllocation->GetMappedData();
        pAllocationInfo->pUserData = hAllocation->GetUserData();
    }
}
bool VmaAllocator_T::TouchAllocation(VmaAllocation hAllocation)
{
    // This is a stripped-down version of VmaAllocator_T::GetAllocationInfo.
    if(hAllocation->CanBecomeLost())
    {
        uint32_t localCurrFrameIndex = m_CurrentFrameIndex.load();
        uint32_t localLastUseFrameIndex = hAllocation->GetLastUseFrameIndex();
        for(;;)
        {
            if(localLastUseFrameIndex == VMA_FRAME_INDEX_LOST)
            {
                return false;
            }
            else if(localLastUseFrameIndex == localCurrFrameIndex)
            {
                return true;
            }
            else // Last use time earlier than current time.
            {
                if(hAllocation->CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, localCurrFrameIndex))
                {
                    localLastUseFrameIndex = localCurrFrameIndex;
                }
            }
        }
    }
    else
    {
#if VMA_STATS_STRING_ENABLED
        uint32_t localCurrFrameIndex = m_CurrentFrameIndex.load();
        uint32_t localLastUseFrameIndex = hAllocation->GetLastUseFrameIndex();
        for(;;)
        {
            VMA_ASSERT(localLastUseFrameIndex != VMA_FRAME_INDEX_LOST);
            if(localLastUseFrameIndex == localCurrFrameIndex)
            {
                break;
            }
            else // Last use time earlier than current time.
            {
                if(hAllocation->CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, localCurrFrameIndex))
                {
                    localLastUseFrameIndex = localCurrFrameIndex;
                }
            }
        }
#endif

        return true;
    }
}
+
+
+
+
+
+17392 VMA_DEBUG_LOG("  CreatePool: MemoryTypeIndex=%u, flags=%u", pCreateInfo->memoryTypeIndex, pCreateInfo->flags);
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+17408 return VK_ERROR_INITIALIZATION_FAILED;
+
+
+
+17412 ((1u << pCreateInfo->memoryTypeIndex) & m_GlobalMemoryTypeBits) == 0)
+
+17414 return VK_ERROR_FEATURE_NOT_PRESENT;
+
+
+
+
+
+
+17421 const VkDeviceSize preferredBlockSize = CalcPreferredBlockSize(newCreateInfo.memoryTypeIndex);
+
+17423 *pPool = vma_new(this, VmaPool_T)(this, newCreateInfo, preferredBlockSize);
+
+17425 VkResult res = (*pPool)->m_BlockVector.CreateMinBlocks();
+17426 if(res != VK_SUCCESS)
+
+17428 vma_delete(this, *pPool);
+
+
+
+
+
+
+17435 VmaMutexLockWrite lock(m_PoolsMutex, m_UseMutex);
+17436 (*pPool)->SetId(m_NextPoolId++);
+17437 m_Pools.PushBack(*pPool);
+
+
+
+
+
+17443 void VmaAllocator_T::DestroyPool(VmaPool pool)
+
+
+
+17447 VmaMutexLockWrite lock(m_PoolsMutex, m_UseMutex);
+17448 m_Pools.Remove(pool);
+
+
+17451 vma_delete(this, pool);
+
+
+
+
+17456 pool->m_BlockVector.GetPoolStats(pPoolStats);
+
+
+17459 void VmaAllocator_T::SetCurrentFrameIndex(uint32_t frameIndex)
+
+17461 m_CurrentFrameIndex.store(frameIndex);
+
+17463 #if VMA_MEMORY_BUDGET
+17464 if(m_UseExtMemoryBudget)
+
+17466 UpdateVulkanBudget();
+
+
+
+
+17471 void VmaAllocator_T::MakePoolAllocationsLost(
+
+17473 size_t* pLostAllocationCount)
+
+17475 hPool->m_BlockVector.MakePoolAllocationsLost(
+17476 m_CurrentFrameIndex.load(),
+17477 pLostAllocationCount);
+
+
+17480 VkResult VmaAllocator_T::CheckPoolCorruption(VmaPool hPool)
+
+17482 return hPool->m_BlockVector.CheckCorruption();
+
+
+17485 VkResult VmaAllocator_T::CheckCorruption(uint32_t memoryTypeBits)
+
+17487 VkResult finalRes = VK_ERROR_FEATURE_NOT_PRESENT;
+
+
+17490 for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
+
+17492 if(((1u << memTypeIndex) & memoryTypeBits) != 0)
+
+17494 VmaBlockVector* const pBlockVector = m_pBlockVectors[memTypeIndex];
+17495 VMA_ASSERT(pBlockVector);
+17496 VkResult localRes = pBlockVector->CheckCorruption();
+
+
+17499 case VK_ERROR_FEATURE_NOT_PRESENT:
+
+
+17502 finalRes = VK_SUCCESS;
+
+
+
+
+
+
+
+
+
+17512 VmaMutexLockRead lock(m_PoolsMutex, m_UseMutex);
+17513 for(VmaPool pool = m_Pools.Front(); pool != VMA_NULL; pool = m_Pools.GetNext(pool))
+
+17515 if(((1u << pool->m_BlockVector.GetMemoryTypeIndex()) & memoryTypeBits) != 0)
+
+17517 VkResult localRes = pool->m_BlockVector.CheckCorruption();
+
+
+17520 case VK_ERROR_FEATURE_NOT_PRESENT:
+
+
+17523 finalRes = VK_SUCCESS;
+
+
+
+
+
+
+
+
+
+
+
+17535 void VmaAllocator_T::CreateLostAllocation(VmaAllocation* pAllocation)
+
+17537 *pAllocation = m_AllocationObjectAllocator.Allocate(VMA_FRAME_INDEX_LOST, false);
+17538 (*pAllocation)->InitLost();
+
+
+
+17542 template<typename T>
+17543 struct AtomicTransactionalIncrement
+
+
+17546 typedef std::atomic<T> AtomicT;
+17547 ~AtomicTransactionalIncrement()
+
+
+
+
+17552 T Increment(AtomicT* atomic)
+
+
+17555 return m_Atomic->fetch_add(1);
+
+
+
+17559 m_Atomic = nullptr;
+
+
+
+17563 AtomicT* m_Atomic = nullptr;
+
+
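The `AtomicTransactionalIncrement` helper above implements a commit-or-rollback pattern: the counter is bumped eagerly and rolled back in the destructor unless the caller commits. A minimal self-contained sketch of the same idea (the names `TransactionalIncrement` and `Demo` are mine, not part of the library):

```cpp
#include <atomic>
#include <cstdint>

// Commit-or-rollback increment: the counter is bumped eagerly by
// Increment() and decremented again in the destructor unless Commit()
// was called first.
template<typename T>
struct TransactionalIncrement
{
    ~TransactionalIncrement()
    {
        if(m_Atomic != nullptr)       // not committed: roll the increment back
            m_Atomic->fetch_sub(1);
    }
    T Increment(std::atomic<T>* atomic)
    {
        m_Atomic = atomic;
        return m_Atomic->fetch_add(1); // returns the previous value
    }
    void Commit() { m_Atomic = nullptr; } // keep the increment
private:
    std::atomic<T>* m_Atomic = nullptr;
};

// Demo: one rolled-back and one committed increment.
uint32_t Demo()
{
    std::atomic<uint32_t> count{0};
    {
        TransactionalIncrement<uint32_t> inc;
        inc.Increment(&count);
        // no Commit(), as after a failed vkAllocateMemory: rolled back
    }
    {
        TransactionalIncrement<uint32_t> inc;
        inc.Increment(&count);
        inc.Commit();                  // allocation succeeded: increment kept
    }
    return count.load();
}
```

This is how `AllocateVulkanMemory` below keeps `m_DeviceMemoryCount` accurate even on early-error return paths, without explicit decrement calls at every failure site.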
+17566 VkResult VmaAllocator_T::AllocateVulkanMemory(const VkMemoryAllocateInfo* pAllocateInfo, VkDeviceMemory* pMemory)
+
+17568 AtomicTransactionalIncrement<uint32_t> deviceMemoryCountIncrement;
+17569 const uint64_t prevDeviceMemoryCount = deviceMemoryCountIncrement.Increment(&m_DeviceMemoryCount);
+17570 #if VMA_DEBUG_DONT_EXCEED_MAX_MEMORY_ALLOCATION_COUNT
+17571 if(prevDeviceMemoryCount >= m_PhysicalDeviceProperties.limits.maxMemoryAllocationCount)
+
+17573 return VK_ERROR_TOO_MANY_OBJECTS;
+
+
+
+17577 const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(pAllocateInfo->memoryTypeIndex);
+
+
+17580 if((m_HeapSizeLimitMask & (1u << heapIndex)) != 0)
+
+17582 const VkDeviceSize heapSize = m_MemProps.memoryHeaps[heapIndex].size;
+17583 VkDeviceSize blockBytes = m_Budget.m_BlockBytes[heapIndex];
+
+
+17586 const VkDeviceSize blockBytesAfterAllocation = blockBytes + pAllocateInfo->allocationSize;
+17587 if(blockBytesAfterAllocation > heapSize)
+
+17589 return VK_ERROR_OUT_OF_DEVICE_MEMORY;
+
+17591 if(m_Budget.m_BlockBytes[heapIndex].compare_exchange_strong(blockBytes, blockBytesAfterAllocation))
+
+
+
+
+
+
+
+17599 m_Budget.m_BlockBytes[heapIndex] += pAllocateInfo->allocationSize;
+
+
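The heap-size-limit branch above reserves bytes with a `compare_exchange_strong` loop: re-read the counter on failure, and bail out if the reservation would exceed the cap. A self-contained sketch of that lock-free reservation (the function name `TryReserve` is mine):

```cpp
#include <atomic>
#include <cstdint>

// Try to reserve `size` bytes against `blockBytes` without exceeding
// `heapSize`. On CAS failure, `current` is reloaded with the latest
// value and the bound is re-checked before retrying.
bool TryReserve(std::atomic<uint64_t>& blockBytes, uint64_t heapSize, uint64_t size)
{
    uint64_t current = blockBytes.load();
    for(;;)
    {
        const uint64_t after = current + size;
        if(after > heapSize)
            return false; // would exceed the heap size limit
        if(blockBytes.compare_exchange_strong(current, after))
            return true;  // reservation published atomically
    }
}
```

This keeps the bound exact under concurrent allocations, which a plain `fetch_add` followed by a check could not guarantee.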
+
+17603 VkResult res = (*m_VulkanFunctions.vkAllocateMemory)(m_hDevice, pAllocateInfo, GetAllocationCallbacks(), pMemory);
+
+17605 if(res == VK_SUCCESS)
+
+17607 #if VMA_MEMORY_BUDGET
+17608 ++m_Budget.m_OperationsSinceBudgetFetch;
+
+
+
+17612 if(m_DeviceMemoryCallbacks.pfnAllocate != VMA_NULL)
+
+17614 (*m_DeviceMemoryCallbacks.pfnAllocate)(this, pAllocateInfo->memoryTypeIndex, *pMemory, pAllocateInfo->allocationSize, m_DeviceMemoryCallbacks.pUserData);
+
+
+17617 deviceMemoryCountIncrement.Commit();
+
+
+
+17621 m_Budget.m_BlockBytes[heapIndex] -= pAllocateInfo->allocationSize;
+
+
+
+
+
+17627 void VmaAllocator_T::FreeVulkanMemory(uint32_t memoryType, VkDeviceSize size, VkDeviceMemory hMemory)
+
+
+17630 if(m_DeviceMemoryCallbacks.pfnFree != VMA_NULL)
+
+17632 (*m_DeviceMemoryCallbacks.pfnFree)(this, memoryType, hMemory, size, m_DeviceMemoryCallbacks.pUserData);
+
+
+
+17636 (*m_VulkanFunctions.vkFreeMemory)(m_hDevice, hMemory, GetAllocationCallbacks());
+
+17638 m_Budget.m_BlockBytes[MemoryTypeIndexToHeapIndex(memoryType)] -= size;
+
+17640 --m_DeviceMemoryCount;
+
+
+17643 VkResult VmaAllocator_T::BindVulkanBuffer(
+17644 VkDeviceMemory memory,
+17645 VkDeviceSize memoryOffset,
+
+
+
+17649 if(pNext != VMA_NULL)
+
+17651 #if VMA_VULKAN_VERSION >= 1001000 || VMA_BIND_MEMORY2
+17652 if((m_UseKhrBindMemory2 || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0)) &&
+17653 m_VulkanFunctions.vkBindBufferMemory2KHR != VMA_NULL)
+
+17655 VkBindBufferMemoryInfoKHR bindBufferMemoryInfo = { VK_STRUCTURE_TYPE_BIND_BUFFER_MEMORY_INFO_KHR };
+17656 bindBufferMemoryInfo.pNext = pNext;
+17657 bindBufferMemoryInfo.buffer = buffer;
+17658 bindBufferMemoryInfo.memory = memory;
+17659 bindBufferMemoryInfo.memoryOffset = memoryOffset;
+17660 return (*m_VulkanFunctions.vkBindBufferMemory2KHR)(m_hDevice, 1, &bindBufferMemoryInfo);
+
+
+
+
+17665 return VK_ERROR_EXTENSION_NOT_PRESENT;
+
+
+
+
+17670 return (*m_VulkanFunctions.vkBindBufferMemory)(m_hDevice, buffer, memory, memoryOffset);
+
+
+
+17674 VkResult VmaAllocator_T::BindVulkanImage(
+17675 VkDeviceMemory memory,
+17676 VkDeviceSize memoryOffset,
+
+
+
+17680 if(pNext != VMA_NULL)
+
+17682 #if VMA_VULKAN_VERSION >= 1001000 || VMA_BIND_MEMORY2
+17683 if((m_UseKhrBindMemory2 || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0)) &&
+17684 m_VulkanFunctions.vkBindImageMemory2KHR != VMA_NULL)
+
+17686 VkBindImageMemoryInfoKHR bindBufferMemoryInfo = { VK_STRUCTURE_TYPE_BIND_IMAGE_MEMORY_INFO_KHR };
+17687 bindBufferMemoryInfo.pNext = pNext;
+17688 bindBufferMemoryInfo.image = image;
+17689 bindBufferMemoryInfo.memory = memory;
+17690 bindBufferMemoryInfo.memoryOffset = memoryOffset;
+17691 return (*m_VulkanFunctions.vkBindImageMemory2KHR)(m_hDevice, 1, &bindBufferMemoryInfo);
+
+
+
+
+17696 return VK_ERROR_EXTENSION_NOT_PRESENT;
+
+
+
+
+17701 return (*m_VulkanFunctions.vkBindImageMemory)(m_hDevice, image, memory, memoryOffset);
+
+
+
+17705 VkResult VmaAllocator_T::Map(VmaAllocation hAllocation, void** ppData)
+
+17707 if(hAllocation->CanBecomeLost())
+
+17709 return VK_ERROR_MEMORY_MAP_FAILED;
+
+
+17712 switch(hAllocation->GetType())
+
+17714 case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
+
+17716 VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
+17717 char *pBytes = VMA_NULL;
+17718 VkResult res = pBlock->Map(this, 1, (void**)&pBytes);
+17719 if(res == VK_SUCCESS)
+
+17721 *ppData = pBytes + (ptrdiff_t)hAllocation->GetOffset();
+17722 hAllocation->BlockAllocMap();
+
+
+
+17726 case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
+17727 return hAllocation->DedicatedAllocMap(this, ppData);
+
+
+17730 return VK_ERROR_MEMORY_MAP_FAILED;
+
+
+
+
+
+17736 switch(hAllocation->GetType())
+
+17738 case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
+
+17740 VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
+17741 hAllocation->BlockAllocUnmap();
+17742 pBlock->Unmap(this, 1);
+
+
+17745 case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
+17746 hAllocation->DedicatedAllocUnmap(this);
+
+
+
+
+
+
+17753 VkResult VmaAllocator_T::BindBufferMemory(
+
+17755 VkDeviceSize allocationLocalOffset,
+
+
+
+17759 VkResult res = VK_SUCCESS;
+17760 switch(hAllocation->GetType())
+
+17762 case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
+17763 res = BindVulkanBuffer(hAllocation->GetMemory(), allocationLocalOffset, hBuffer, pNext);
+
+17765 case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
+
+17767 VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
+17768 VMA_ASSERT(pBlock && "Binding buffer to allocation that doesn't belong to any block. Is the allocation lost?");
+17769 res = pBlock->BindBufferMemory(this, hAllocation, allocationLocalOffset, hBuffer, pNext);
+
+
+
+
+
+
+
+
+17778 VkResult VmaAllocator_T::BindImageMemory(
+
+17780 VkDeviceSize allocationLocalOffset,
+
+
+
+17784 VkResult res = VK_SUCCESS;
+17785 switch(hAllocation->GetType())
+
+17787 case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
+17788 res = BindVulkanImage(hAllocation->GetMemory(), allocationLocalOffset, hImage, pNext);
+
+17790 case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
+
+17792 VmaDeviceMemoryBlock* pBlock = hAllocation->GetBlock();
+17793 VMA_ASSERT(pBlock && "Binding image to allocation that doesn't belong to any block. Is the allocation lost?");
+17794 res = pBlock->BindImageMemory(this, hAllocation, allocationLocalOffset, hImage, pNext);
+
+
+
+
+
+
+
+
+17803 VkResult VmaAllocator_T::FlushOrInvalidateAllocation(
+
+17805 VkDeviceSize offset, VkDeviceSize size,
+17806 VMA_CACHE_OPERATION op)
+
+17808 VkResult res = VK_SUCCESS;
+
+17810 VkMappedMemoryRange memRange = {};
+17811 if(GetFlushOrInvalidateRange(hAllocation, offset, size, memRange))
+
+
+
+17815 case VMA_CACHE_FLUSH:
+17816 res = (*GetVulkanFunctions().vkFlushMappedMemoryRanges)(m_hDevice, 1, &memRange);
+
+17818 case VMA_CACHE_INVALIDATE:
+17819 res = (*GetVulkanFunctions().vkInvalidateMappedMemoryRanges)(m_hDevice, 1, &memRange);
+
+
+
+
+
+
+
+
+
+17829 VkResult VmaAllocator_T::FlushOrInvalidateAllocations(
+17830 uint32_t allocationCount,
+
+17832 const VkDeviceSize* offsets, const VkDeviceSize* sizes,
+17833 VMA_CACHE_OPERATION op)
+
+17835 typedef VmaStlAllocator<VkMappedMemoryRange> RangeAllocator;
+17836 typedef VmaSmallVector<VkMappedMemoryRange, RangeAllocator, 16> RangeVector;
+17837 RangeVector ranges = RangeVector(RangeAllocator(GetAllocationCallbacks()));
+
+17839 for(uint32_t allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
+
+
+17842 const VkDeviceSize offset = offsets != VMA_NULL ? offsets[allocIndex] : 0;
+17843 const VkDeviceSize size = sizes != VMA_NULL ? sizes[allocIndex] : VK_WHOLE_SIZE;
+17844 VkMappedMemoryRange newRange;
+17845 if(GetFlushOrInvalidateRange(alloc, offset, size, newRange))
+
+17847 ranges.push_back(newRange);
+
+
+
+17851 VkResult res = VK_SUCCESS;
+17852 if(!ranges.empty())
+
+
+
+17856 case VMA_CACHE_FLUSH:
+17857 res = (*GetVulkanFunctions().vkFlushMappedMemoryRanges)(m_hDevice, (uint32_t)ranges.size(), ranges.data());
+
+17859 case VMA_CACHE_INVALIDATE:
+17860 res = (*GetVulkanFunctions().vkInvalidateMappedMemoryRanges)(m_hDevice, (uint32_t)ranges.size(), ranges.data());
+
+
+
+
+
+
+
+
+
+17870 void VmaAllocator_T::FreeDedicatedMemory(const VmaAllocation allocation)
+
+17872 VMA_ASSERT(allocation && allocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_DEDICATED);
+
+17874 const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
+
+17876 VmaMutexLockWrite lock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
+17877 DedicatedAllocationLinkedList& dedicatedAllocations = m_DedicatedAllocations[memTypeIndex];
+17878 dedicatedAllocations.Remove(allocation);
+
+
+17881 VkDeviceMemory hMemory = allocation->GetMemory();
+
+
+
+
+
+
+
+
+
+
+
+17893 FreeVulkanMemory(memTypeIndex, allocation->GetSize(), hMemory);
+
+17895 VMA_DEBUG_LOG("    Freed DedicatedMemory MemoryTypeIndex=%u", memTypeIndex);
+
+
+17898 uint32_t VmaAllocator_T::CalculateGpuDefragmentationMemoryTypeBits() const
+
+17900 VkBufferCreateInfo dummyBufCreateInfo;
+17901 VmaFillGpuDefragmentationBufferCreateInfo(dummyBufCreateInfo);
+
+17903 uint32_t memoryTypeBits = 0;
+
+
+17906 VkBuffer buf = VK_NULL_HANDLE;
+17907 VkResult res = (*GetVulkanFunctions().vkCreateBuffer)(
+17908 m_hDevice, &dummyBufCreateInfo, GetAllocationCallbacks(), &buf);
+17909 if(res == VK_SUCCESS)
+
+
+17912 VkMemoryRequirements memReq;
+17913 (*GetVulkanFunctions().vkGetBufferMemoryRequirements)(m_hDevice, buf, &memReq);
+17914 memoryTypeBits = memReq.memoryTypeBits;
+
+
+17917 (*GetVulkanFunctions().vkDestroyBuffer)(m_hDevice, buf, GetAllocationCallbacks());
+
+
+17920 return memoryTypeBits;
+
+
+17923 uint32_t VmaAllocator_T::CalculateGlobalMemoryTypeBits() const
+
+
+17926 VMA_ASSERT(GetMemoryTypeCount() > 0);
+
+17928 uint32_t memoryTypeBits = UINT32_MAX;
+
+17930 if(!m_UseAmdDeviceCoherentMemory)
+
+
+17933 for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
+
+17935 if((m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD_COPY) != 0)
+
+17937 memoryTypeBits &= ~(1u << memTypeIndex);
+
+
+
+
+17942 return memoryTypeBits;
+
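`CalculateGlobalMemoryTypeBits` above starts from an all-ones mask and clears the bit of every memory type carrying `VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD` when that feature is not enabled. A generic sketch of the masking step (the function name `GlobalTypeBits` is mine):

```cpp
#include <cstdint>

// Build a mask of allowed memory types, clearing every type whose
// property flags intersect `excludedFlags` (a stand-in for
// VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD when the corresponding
// device feature is not enabled).
uint32_t GlobalTypeBits(const uint32_t* typeFlags, uint32_t typeCount, uint32_t excludedFlags)
{
    uint32_t bits = UINT32_MAX;
    for(uint32_t i = 0; i < typeCount; ++i)
    {
        if((typeFlags[i] & excludedFlags) != 0)
            bits &= ~(1u << i); // type i is off-limits
    }
    return bits;
}
```

The resulting mask is ANDed into every allocation's `memoryTypeBits`, so excluded types can never be selected.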
+
+17945 bool VmaAllocator_T::GetFlushOrInvalidateRange(
+
+17947 VkDeviceSize offset, VkDeviceSize size,
+17948 VkMappedMemoryRange& outRange) const
+
+17950 const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
+17951 if(size > 0 && IsMemoryTypeNonCoherent(memTypeIndex))
+
+17953 const VkDeviceSize nonCoherentAtomSize = m_PhysicalDeviceProperties.limits.nonCoherentAtomSize;
+17954 const VkDeviceSize allocationSize = allocation->GetSize();
+17955 VMA_ASSERT(offset <= allocationSize);
+
+17957 outRange.sType = VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE;
+17958 outRange.pNext = VMA_NULL;
+17959 outRange.memory = allocation->GetMemory();
+
+17961 switch(allocation->GetType())
+
+17963 case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
+17964 outRange.offset = VmaAlignDown(offset, nonCoherentAtomSize);
+17965 if(size == VK_WHOLE_SIZE)
+
+17967 outRange.size = allocationSize - outRange.offset;
+
+
+
+17971 VMA_ASSERT(offset + size <= allocationSize);
+17972 outRange.size = VMA_MIN(
+17973 VmaAlignUp(size + (offset - outRange.offset), nonCoherentAtomSize),
+17974 allocationSize - outRange.offset);
+
+
+17977 case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
+
+
+17980 outRange.offset = VmaAlignDown(offset, nonCoherentAtomSize);
+17981 if(size == VK_WHOLE_SIZE)
+
+17983 size = allocationSize - offset;
+
+
+
+17987 VMA_ASSERT(offset + size <= allocationSize);
+
+17989 outRange.size = VmaAlignUp(size + (offset - outRange.offset), nonCoherentAtomSize);
+
+
+17992 const VkDeviceSize allocationOffset = allocation->GetOffset();
+17993 VMA_ASSERT(allocationOffset % nonCoherentAtomSize == 0);
+17994 const VkDeviceSize blockSize = allocation->GetBlock()->m_pMetadata->GetSize();
+17995 outRange.offset += allocationOffset;
+17996 outRange.size = VMA_MIN(outRange.size, blockSize - outRange.offset);
+
+
+
+
+
+
+
+
+
+
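`GetFlushOrInvalidateRange` above expands the requested byte range to `nonCoherentAtomSize` boundaries, as the Vulkan spec requires for `vkFlushMappedMemoryRanges`, and clamps it to the allocation. A self-contained sketch of the alignment arithmetic (helper names `AlignDown`, `AlignUp`, `ComputeRange` are mine; the spec guarantees `nonCoherentAtomSize` is a power of two):

```cpp
#include <cstdint>

// Power-of-two alignment helpers, matching VmaAlignDown/VmaAlignUp.
uint64_t AlignDown(uint64_t val, uint64_t alignment) { return val & ~(alignment - 1); }
uint64_t AlignUp(uint64_t val, uint64_t alignment)   { return (val + alignment - 1) & ~(alignment - 1); }

// Expand [offset, offset+size) inside an allocation of `allocationSize`
// bytes to `atom` boundaries, clamping the end to the allocation size.
void ComputeRange(uint64_t offset, uint64_t size, uint64_t allocationSize,
                  uint64_t atom, uint64_t& outOffset, uint64_t& outSize)
{
    outOffset = AlignDown(offset, atom);       // round start down to an atom boundary
    uint64_t end = AlignUp(offset + size, atom); // round end up to an atom boundary
    if(end > allocationSize)
        end = allocationSize;                  // never flush past the allocation
    outSize = end - outOffset;
}
```

For example, flushing bytes [100, 150) with a 64-byte atom yields the range [64, 192), which covers the request while satisfying the alignment rule.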
+
+18008 #if VMA_MEMORY_BUDGET
+
+18010 void VmaAllocator_T::UpdateVulkanBudget()
+
+18012 VMA_ASSERT(m_UseExtMemoryBudget);
+
+18014 VkPhysicalDeviceMemoryProperties2KHR memProps = { VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MEMORY_PROPERTIES_2_KHR };
+
+18016 VkPhysicalDeviceMemoryBudgetPropertiesEXT budgetProps = { VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MEMORY_BUDGET_PROPERTIES_EXT };
+18017 VmaPnextChainPushFront(&memProps, &budgetProps);
+
+18019 GetVulkanFunctions().vkGetPhysicalDeviceMemoryProperties2KHR(m_PhysicalDevice, &memProps);
+
+
+18022 VmaMutexLockWrite lockWrite(m_Budget.m_BudgetMutex, m_UseMutex);
+
+18024 for(uint32_t heapIndex = 0; heapIndex < GetMemoryHeapCount(); ++heapIndex)
+
+18026 m_Budget.m_VulkanUsage[heapIndex] = budgetProps.heapUsage[heapIndex];
+18027 m_Budget.m_VulkanBudget[heapIndex] = budgetProps.heapBudget[heapIndex];
+18028 m_Budget.m_BlockBytesAtBudgetFetch[heapIndex] = m_Budget.m_BlockBytes[heapIndex].load();
+
+
+18031 if(m_Budget.m_VulkanBudget[heapIndex] == 0)
+
+18033 m_Budget.m_VulkanBudget[heapIndex] = m_MemProps.memoryHeaps[heapIndex].size * 8 / 10;
+
+18035 else if(m_Budget.m_VulkanBudget[heapIndex] > m_MemProps.memoryHeaps[heapIndex].size)
+
+18037 m_Budget.m_VulkanBudget[heapIndex] = m_MemProps.memoryHeaps[heapIndex].size;
+
+18039 if(m_Budget.m_VulkanUsage[heapIndex] == 0 && m_Budget.m_BlockBytesAtBudgetFetch[heapIndex] > 0)
+
+18041 m_Budget.m_VulkanUsage[heapIndex] = m_Budget.m_BlockBytesAtBudgetFetch[heapIndex];
+
+
+18044 m_Budget.m_OperationsSinceBudgetFetch = 0;
+
+
+
+
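`UpdateVulkanBudget` above sanitizes the values returned by `VK_EXT_memory_budget` before trusting them: a zero budget falls back to 80% of the heap size, a budget larger than the heap is clamped, and a zero usage with known locally-allocated bytes is replaced by the allocator's own accounting. A sketch of just that sanitization (the function name `SanitizeBudget` is mine):

```cpp
#include <cstdint>

// Sanitize one heap's usage/budget pair fetched from the driver.
void SanitizeBudget(uint64_t heapSize, uint64_t blockBytes,
                    uint64_t& usage, uint64_t& budget)
{
    if(budget == 0)
        budget = heapSize * 8 / 10;  // driver reported nothing useful: assume 80%
    else if(budget > heapSize)
        budget = heapSize;           // budget can never exceed the heap itself
    if(usage == 0 && blockBytes > 0)
        usage = blockBytes;          // fall back to our own byte accounting
}
```

This defensive step matters because some drivers report zeros or values larger than the physical heap, and downstream heuristics divide by or compare against these numbers.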
+
+18050 void VmaAllocator_T::FillAllocation(const VmaAllocation hAllocation, uint8_t pattern)
+
+18052 if(VMA_DEBUG_INITIALIZE_ALLOCATIONS &&
+18053 !hAllocation->CanBecomeLost() &&
+18054 (m_MemProps.memoryTypes[hAllocation->GetMemoryTypeIndex()].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
+
+18056 void* pData = VMA_NULL;
+18057 VkResult res = Map(hAllocation, &pData);
+18058 if(res == VK_SUCCESS)
+
+18060 memset(pData, (int)pattern, (size_t)hAllocation->GetSize());
+18061 FlushOrInvalidateAllocation(hAllocation, 0, VK_WHOLE_SIZE, VMA_CACHE_FLUSH);
+18062 Unmap(hAllocation);
+
+
+
+18066 VMA_ASSERT(0 && "VMA_DEBUG_INITIALIZE_ALLOCATIONS is enabled, but couldn't map memory to fill allocation.");
+
+
+
+
+18071 uint32_t VmaAllocator_T::GetGpuDefragmentationMemoryTypeBits()
+
+18073 uint32_t memoryTypeBits = m_GpuDefragmentationMemoryTypeBits.load();
+18074 if(memoryTypeBits == UINT32_MAX)
+
+18076 memoryTypeBits = CalculateGpuDefragmentationMemoryTypeBits();
+18077 m_GpuDefragmentationMemoryTypeBits.store(memoryTypeBits);
+
+18079 return memoryTypeBits;
+
+
+18082 #if VMA_STATS_STRING_ENABLED
+
+18084 void VmaAllocator_T::PrintDetailedMap(VmaJsonWriter& json)
+
+18086 bool dedicatedAllocationsStarted = false;
+18087 for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
+
+18089 VmaMutexLockRead dedicatedAllocationsLock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
+18090 DedicatedAllocationLinkedList& dedicatedAllocList = m_DedicatedAllocations[memTypeIndex];
+18091 if(!dedicatedAllocList.IsEmpty())
+
+18093 if(dedicatedAllocationsStarted == false)
+
+18095 dedicatedAllocationsStarted = true;
+18096 json.WriteString("DedicatedAllocations");
+18097 json.BeginObject();
+
+
+18100 json.BeginString("Type ");
+18101 json.ContinueString(memTypeIndex);
+
+
+
+
+
+18107 alloc != VMA_NULL; alloc = dedicatedAllocList.GetNext(alloc))
+
+18109 json.BeginObject(true);
+18110 alloc->PrintParameters(json);
+
+
+
+
+
+
+18117 if(dedicatedAllocationsStarted)
+
+
+
+
+
+18123 bool allocationsStarted = false;
+18124 for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
+
+18126 if(m_pBlockVectors[memTypeIndex]->IsEmpty() == false)
+
+18128 if(allocationsStarted == false)
+
+18130 allocationsStarted = true;
+18131 json.WriteString("DefaultPools");
+18132 json.BeginObject();
+
+
+18135 json.BeginString("Type ");
+18136 json.ContinueString(memTypeIndex);
+
+
+18139 m_pBlockVectors[memTypeIndex]->PrintDetailedMap(json);
+
+
+18142 if(allocationsStarted)
+
+
+
+
+
+
+
+18150 VmaMutexLockRead lock(m_PoolsMutex, m_UseMutex);
+18151 if(!m_Pools.IsEmpty())
+
+18153 json.WriteString("Pools");
+18154 json.BeginObject();
+18155 for(VmaPool pool = m_Pools.Front(); pool != VMA_NULL; pool = m_Pools.GetNext(pool))
+
+18157 json.BeginString();
+18158 json.ContinueString(pool->GetId());
+
+
+18161 pool->m_BlockVector.PrintDetailedMap(json);
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+18177 VMA_ASSERT(pCreateInfo && pAllocator);
+
+
+18180 VMA_DEBUG_LOG("vmaCreateAllocator");
+
+18182 return (*pAllocator)->Init(pCreateInfo);
+
+
+
+
+
+18188 if(allocator != VK_NULL_HANDLE)
+
+18190 VMA_DEBUG_LOG("vmaDestroyAllocator");
+18191 VkAllocationCallbacks allocationCallbacks = allocator->m_AllocationCallbacks;
+18192 vma_delete(&allocationCallbacks, allocator);
+
+
+
+
+
+18198 VMA_ASSERT(allocator && pAllocatorInfo);
+18199 pAllocatorInfo->instance = allocator->m_hInstance;
+18200 pAllocatorInfo->physicalDevice = allocator->GetPhysicalDevice();
+18201 pAllocatorInfo->device = allocator->m_hDevice;
+
+
+
+
+18206 const VkPhysicalDeviceProperties **ppPhysicalDeviceProperties)
+
+18208 VMA_ASSERT(allocator && ppPhysicalDeviceProperties);
+18209 *ppPhysicalDeviceProperties = &allocator->m_PhysicalDeviceProperties;
+
+
+
+
+18214 const VkPhysicalDeviceMemoryProperties** ppPhysicalDeviceMemoryProperties)
+
+18216 VMA_ASSERT(allocator && ppPhysicalDeviceMemoryProperties);
+18217 *ppPhysicalDeviceMemoryProperties = &allocator->m_MemProps;
+
+
+
+
+18222 uint32_t memoryTypeIndex,
+18223 VkMemoryPropertyFlags* pFlags)
+
+18225 VMA_ASSERT(allocator && pFlags);
+18226 VMA_ASSERT(memoryTypeIndex < allocator->GetMemoryTypeCount());
+18227 *pFlags = allocator->m_MemProps.memoryTypes[memoryTypeIndex].propertyFlags;
+
+
+
+
+18232 uint32_t frameIndex)
+
+18234 VMA_ASSERT(allocator);
+18235 VMA_ASSERT(frameIndex != VMA_FRAME_INDEX_LOST);
+
+18237 VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+18239 allocator->SetCurrentFrameIndex(frameIndex);
+
+
+
+
+
+
+18246 VMA_ASSERT(allocator && pStats);
+18247 VMA_DEBUG_GLOBAL_MUTEX_LOCK
+18248 allocator->CalculateStats(pStats);
+
+
+
+
+
+
+18255 VMA_ASSERT(allocator && pBudget);
+18256 VMA_DEBUG_GLOBAL_MUTEX_LOCK
+18257 allocator->GetBudget(pBudget, 0, allocator->GetMemoryHeapCount());
+
+
+18260 #if VMA_STATS_STRING_ENABLED
+
+
+
+18264 char** ppStatsString,
+18265 VkBool32 detailedMap)
+
+18267 VMA_ASSERT(allocator && ppStatsString);
+18268 VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+18270 VmaStringBuilder sb(allocator);
+
+18272 VmaJsonWriter json(allocator->GetAllocationCallbacks(), sb);
+18273 json.BeginObject();
+
+
+18276 allocator->GetBudget(budget, 0, allocator->GetMemoryHeapCount());
+
+
+18279 allocator->CalculateStats(&stats);
+
+18281 json.WriteString("Total");
+18282 VmaPrintStatInfo(json, stats.total);
+
+18284 for(uint32_t heapIndex = 0; heapIndex < allocator->GetMemoryHeapCount(); ++heapIndex)
+
+18286 json.BeginString("Heap ");
+18287 json.ContinueString(heapIndex);
+
+18289 json.BeginObject();
+
+18291 json.WriteString("Size");
+18292 json.WriteNumber(allocator->m_MemProps.memoryHeaps[heapIndex].size);
+
+18294 json.WriteString("Flags");
+18295 json.BeginArray(true);
+18296 if((allocator->m_MemProps.memoryHeaps[heapIndex].flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT) != 0)
+
+18298 json.WriteString("DEVICE_LOCAL");
+
+
+
+18302 json.WriteString("Budget");
+18303 json.BeginObject();
+
+18305 json.WriteString("BlockBytes");
+18306 json.WriteNumber(budget[heapIndex].blockBytes);
+18307 json.WriteString("AllocationBytes");
+18308 json.WriteNumber(budget[heapIndex].allocationBytes);
+18309 json.WriteString("Usage");
+18310 json.WriteNumber(budget[heapIndex].usage);
+18311 json.WriteString("Budget");
+18312 json.WriteNumber(budget[heapIndex].budget);
+
+
+
+
+
+18318 json.WriteString("Stats");
+18319 VmaPrintStatInfo(json, stats.memoryHeap[heapIndex]);
+
+
+18322 for(uint32_t typeIndex = 0; typeIndex < allocator->GetMemoryTypeCount(); ++typeIndex)
+
+18324 if(allocator->MemoryTypeIndexToHeapIndex(typeIndex) == heapIndex)
+
+18326 json.BeginString("Type ");
+18327 json.ContinueString(typeIndex);
+
+
+18330 json.BeginObject();
+
+18332 json.WriteString("Flags");
+18333 json.BeginArray(true);
+18334 VkMemoryPropertyFlags flags = allocator->m_MemProps.memoryTypes[typeIndex].propertyFlags;
+18335 if((flags & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT) != 0)
+
+18337 json.WriteString("DEVICE_LOCAL");
+
+18339 if((flags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
+
+18341 json.WriteString("HOST_VISIBLE");
+
+18343 if((flags & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT) != 0)
+
+18345 json.WriteString("HOST_COHERENT");
+
+18347 if((flags & VK_MEMORY_PROPERTY_HOST_CACHED_BIT) != 0)
+
+18349 json.WriteString("HOST_CACHED");
+
+18351 if((flags & VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT) != 0)
+
+18353 json.WriteString("LAZILY_ALLOCATED");
+
+18355 #if VMA_VULKAN_VERSION >= 1001000
+18356 if((flags & VK_MEMORY_PROPERTY_PROTECTED_BIT) != 0)
+
+18358 json.WriteString("PROTECTED");
+
+
+18361 #if VK_AMD_device_coherent_memory
+18362 if((flags & VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD_COPY) != 0)
+
+18364 json.WriteString("DEVICE_COHERENT");
+
+18366 if((flags & VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD_COPY) != 0)
+
+18368 json.WriteString("DEVICE_UNCACHED");
+
+
+
+
+
+
+18375 json.WriteString("Stats");
+18376 VmaPrintStatInfo(json, stats.memoryType[typeIndex]);
+
+
+
+
+
+
+
+
+18385 if(detailedMap == VK_TRUE)
+
+18387 allocator->PrintDetailedMap(json);
+
+
+
+
+
+18393 const size_t len = sb.GetLength();
+18394 char* const pChars = vma_new_array(allocator, char, len + 1);
+
+
+18397 memcpy(pChars, sb.GetData(), len);
+
+18399 pChars[len] = '\0';
+18400 *ppStatsString = pChars;
+
+
+
+
+18405 char* pStatsString)
+
+18407 if(pStatsString != VMA_NULL)
+
+18409 VMA_ASSERT(allocator);
+18410 size_t len = strlen(pStatsString);
+18411 vma_delete_array(allocator, pStatsString, len + 1);
+
+
+
+
+
+
+
+
+
+
+18422 uint32_t memoryTypeBits,
+
+18424 uint32_t* pMemoryTypeIndex)
+
+18426 VMA_ASSERT(allocator != VK_NULL_HANDLE);
+18427 VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
+18428 VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);
+
+18430 memoryTypeBits &= allocator->GetGlobalMemoryTypeBits();
+
+
+
+
+
+
+18437 uint32_t requiredFlags = pAllocationCreateInfo->requiredFlags;
+18438 uint32_t preferredFlags = pAllocationCreateInfo->preferredFlags;
+18439 uint32_t notPreferredFlags = 0;
+
+
+18442 switch(pAllocationCreateInfo->usage)
+
+
+
+
+18447 if(!allocator->IsIntegratedGpu() || (preferredFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
+
+18449 preferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
+
+
+
+18453 requiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
+
+
+18456 requiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
+18457 if(!allocator->IsIntegratedGpu() || (preferredFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
+
+18459 preferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
+
+
+
+18463 requiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
+18464 preferredFlags |= VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
+
+
+18467 notPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
+
+
+18470 requiredFlags |= VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT;
+
+
+
+
+
+
+
+
+18479 (VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD_COPY | VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD_COPY)) == 0)
+
+18481 notPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD_COPY;
+
+
+18484 *pMemoryTypeIndex = UINT32_MAX;
+18485 uint32_t minCost = UINT32_MAX;
+18486 for(uint32_t memTypeIndex = 0, memTypeBit = 1;
+18487 memTypeIndex < allocator->GetMemoryTypeCount();
+18488 ++memTypeIndex, memTypeBit <<= 1)
+
+
+18491 if((memTypeBit & memoryTypeBits) != 0)
+
+18493 const VkMemoryPropertyFlags currFlags =
+18494 allocator->m_MemProps.memoryTypes[memTypeIndex].propertyFlags;
+
+18496 if((requiredFlags & ~currFlags) == 0)
+
+
+18499 uint32_t currCost = VmaCountBitsSet(preferredFlags & ~currFlags) +
+18500 VmaCountBitsSet(currFlags & notPreferredFlags);
+
+18502 if(currCost < minCost)
+
+18504 *pMemoryTypeIndex = memTypeIndex;
+
+
+
+
+18509 minCost = currCost;
+
+
+
+
+18514 return (*pMemoryTypeIndex != UINT32_MAX) ? VK_SUCCESS : VK_ERROR_FEATURE_NOT_PRESENT;
+
+
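The selection loop above scores every candidate memory type: required flags must all be present, and the cost is the number of missing preferred flags plus the number of present not-preferred flags; the lowest-cost type wins. A self-contained sketch of that scoring (the function name `FindMemoryType` is mine; `std::bitset::count` stands in for `VmaCountBitsSet`):

```cpp
#include <cstdint>
#include <bitset>

// Pick the lowest-cost memory type. Returns -1 if no type satisfies
// both `typeBits` (from the resource's memory requirements) and the
// required flags.
int FindMemoryType(const uint32_t* typeFlags, uint32_t typeCount, uint32_t typeBits,
                   uint32_t required, uint32_t preferred, uint32_t notPreferred)
{
    int best = -1;
    uint32_t minCost = UINT32_MAX;
    for(uint32_t i = 0; i < typeCount; ++i)
    {
        if((typeBits & (1u << i)) == 0)
            continue; // type not allowed for this resource
        const uint32_t flags = typeFlags[i];
        if((required & ~flags) != 0)
            continue; // a required flag is missing
        const uint32_t cost =
            (uint32_t)std::bitset<32>(preferred & ~flags).count() +   // missing preferred
            (uint32_t)std::bitset<32>(flags & notPreferred).count();  // present not-preferred
        if(cost < minCost)
        {
            best = (int)i;
            minCost = cost;
            if(cost == 0)
                break; // cannot do better than a perfect match
        }
    }
    return best;
}
```

With bit 0 = DEVICE_LOCAL, bit 1 = HOST_VISIBLE, bit 2 = HOST_COHERENT, a request requiring HOST_VISIBLE and preferring HOST_COHERENT picks the cheapest type that at least maps to host memory.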
+
+
+18519 const VkBufferCreateInfo* pBufferCreateInfo,
+
+18521 uint32_t* pMemoryTypeIndex)
+
+18523 VMA_ASSERT(allocator != VK_NULL_HANDLE);
+18524 VMA_ASSERT(pBufferCreateInfo != VMA_NULL);
+18525 VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
+18526 VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);
+
+18528 const VkDevice hDev = allocator->m_hDevice;
+18529 VkBuffer hBuffer = VK_NULL_HANDLE;
+18530 VkResult res = allocator->GetVulkanFunctions().vkCreateBuffer(
+18531 hDev, pBufferCreateInfo, allocator->GetAllocationCallbacks(), &hBuffer);
+18532 if(res == VK_SUCCESS)
+
+18534 VkMemoryRequirements memReq = {};
+18535 allocator->GetVulkanFunctions().vkGetBufferMemoryRequirements(
+18536 hDev, hBuffer, &memReq);
+
+
+
+18540 memReq.memoryTypeBits,
+18541 pAllocationCreateInfo,
+
+
+18544 allocator->GetVulkanFunctions().vkDestroyBuffer(
+18545 hDev, hBuffer, allocator->GetAllocationCallbacks());
+
+
+
+
+
+
+18552 const VkImageCreateInfo* pImageCreateInfo,
+
+18554 uint32_t* pMemoryTypeIndex)
+
+18556 VMA_ASSERT(allocator != VK_NULL_HANDLE);
+18557 VMA_ASSERT(pImageCreateInfo != VMA_NULL);
+18558 VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
+18559 VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);
+
+18561 const VkDevice hDev = allocator->m_hDevice;
+18562 VkImage hImage = VK_NULL_HANDLE;
+18563 VkResult res = allocator->GetVulkanFunctions().vkCreateImage(
+18564 hDev, pImageCreateInfo, allocator->GetAllocationCallbacks(), &hImage);
+18565 if(res == VK_SUCCESS)
+
+18567 VkMemoryRequirements memReq = {};
+18568 allocator->GetVulkanFunctions().vkGetImageMemoryRequirements(
+18569 hDev, hImage, &memReq);
+
+
+
+18573 memReq.memoryTypeBits,
+18574 pAllocationCreateInfo,
+
+
+18577 allocator->GetVulkanFunctions().vkDestroyImage(
+18578 hDev, hImage, allocator->GetAllocationCallbacks());
+
+
+
+
+
+
+
+
+
+18588 VMA_ASSERT(allocator && pCreateInfo && pPool);
+
+18590 VMA_DEBUG_LOG("vmaCreatePool");
+
+18592 VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+18594 VkResult res = allocator->CreatePool(pCreateInfo, pPool);
+
+18596 #if VMA_RECORDING_ENABLED
+18597 if(allocator->GetRecorder() != VMA_NULL)
+
+18599 allocator->GetRecorder()->RecordCreatePool(allocator->GetCurrentFrameIndex(), *pCreateInfo, *pPool);
+
+
+
+
+
+
+
+
+
+
+18610 VMA_ASSERT(allocator);
+
+18612 if(pool == VK_NULL_HANDLE)
+
+
+
+
+18617 VMA_DEBUG_LOG("vmaDestroyPool");
+
+18619 VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+18621 #if VMA_RECORDING_ENABLED
+18622 if(allocator->GetRecorder() != VMA_NULL)
+
+18624 allocator->GetRecorder()->RecordDestroyPool(allocator->GetCurrentFrameIndex(), pool);
+
+
+
+18628 allocator->DestroyPool(pool);
+
+
+
+
+
+
+
+18636 VMA_ASSERT(allocator && pool && pPoolStats);
+
+18638 VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+18640 allocator->GetPoolStats(pool, pPoolStats);
+
+
+
+
+
+18646 size_t* pLostAllocationCount)
+
+18648 VMA_ASSERT(allocator && pool);
+
+18650 VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+18652 #if VMA_RECORDING_ENABLED
+18653 if(allocator->GetRecorder() != VMA_NULL)
+
+18655 allocator->GetRecorder()->RecordMakePoolAllocationsLost(allocator->GetCurrentFrameIndex(), pool);
+
+
+
+18659 allocator->MakePoolAllocationsLost(pool, pLostAllocationCount);
+
+
+
+
+18664 VMA_ASSERT(allocator && pool);
+
+18666 VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+18668 VMA_DEBUG_LOG("vmaCheckPoolCorruption");
+
+18670 return allocator->CheckPoolCorruption(pool);
+
+
+
+
+
+18676 const char** ppName)
+
+18678 VMA_ASSERT(allocator && pool && ppName);
+
+18680 VMA_DEBUG_LOG("vmaGetPoolName");
+
+18682 VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+18684 *ppName = pool->GetName();
+
+
+
+
+
+
+
+18692 VMA_ASSERT(allocator && pool);
+
+18694 VMA_DEBUG_LOG("vmaSetPoolName");
+
+18696 VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+18698 pool->SetName(pName);
+
+18700 #if VMA_RECORDING_ENABLED
+18701 if(allocator->GetRecorder() != VMA_NULL)
+
+18703 allocator->GetRecorder()->RecordSetPoolName(allocator->GetCurrentFrameIndex(), pool, pName);
+
+
+
+
+VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemory(
+    VmaAllocator allocator,
+    const VkMemoryRequirements* pVkMemoryRequirements,
+    const VmaAllocationCreateInfo* pCreateInfo,
+    VmaAllocation* pAllocation,
+    VmaAllocationInfo* pAllocationInfo)
+{
+    VMA_ASSERT(allocator && pVkMemoryRequirements && pCreateInfo && pAllocation);
+
+    VMA_DEBUG_LOG("vmaAllocateMemory");
+
+    VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+    VkResult result = allocator->AllocateMemory(
+        *pVkMemoryRequirements,
+        false, // requiresDedicatedAllocation
+        false, // prefersDedicatedAllocation
+        VK_NULL_HANDLE, // dedicatedBuffer
+        UINT32_MAX, // dedicatedBufferUsage
+        VK_NULL_HANDLE, // dedicatedImage
+        *pCreateInfo,
+        VMA_SUBALLOCATION_TYPE_UNKNOWN,
+        1, // allocationCount
+        pAllocation);
+
+#if VMA_RECORDING_ENABLED
+    if(allocator->GetRecorder() != VMA_NULL)
+    {
+        allocator->GetRecorder()->RecordAllocateMemory(
+            allocator->GetCurrentFrameIndex(),
+            *pVkMemoryRequirements,
+            *pCreateInfo,
+            *pAllocation);
+    }
+#endif
+
+    if(pAllocationInfo != VMA_NULL && result == VK_SUCCESS)
+    {
+        allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
+    }
+
+    return result;
+}
+
+VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryPages(
+    VmaAllocator allocator,
+    const VkMemoryRequirements* pVkMemoryRequirements,
+    const VmaAllocationCreateInfo* pCreateInfo,
+    size_t allocationCount,
+    VmaAllocation* pAllocations,
+    VmaAllocationInfo* pAllocationInfo)
+{
+    if(allocationCount == 0)
+    {
+        return VK_SUCCESS;
+    }
+
+    VMA_ASSERT(allocator && pVkMemoryRequirements && pCreateInfo && pAllocations);
+
+    VMA_DEBUG_LOG("vmaAllocateMemoryPages");
+
+    VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+    VkResult result = allocator->AllocateMemory(
+        *pVkMemoryRequirements,
+        false, // requiresDedicatedAllocation
+        false, // prefersDedicatedAllocation
+        VK_NULL_HANDLE, // dedicatedBuffer
+        UINT32_MAX, // dedicatedBufferUsage
+        VK_NULL_HANDLE, // dedicatedImage
+        *pCreateInfo,
+        VMA_SUBALLOCATION_TYPE_UNKNOWN,
+        allocationCount,
+        pAllocations);
+
+#if VMA_RECORDING_ENABLED
+    if(allocator->GetRecorder() != VMA_NULL)
+    {
+        allocator->GetRecorder()->RecordAllocateMemoryPages(
+            allocator->GetCurrentFrameIndex(),
+            *pVkMemoryRequirements,
+            *pCreateInfo,
+            (uint64_t)allocationCount,
+            pAllocations);
+    }
+#endif
+
+    if(pAllocationInfo != VMA_NULL && result == VK_SUCCESS)
+    {
+        for(size_t i = 0; i < allocationCount; ++i)
+        {
+            allocator->GetAllocationInfo(pAllocations[i], pAllocationInfo + i);
+        }
+    }
+
+    return result;
+}
+
+VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryForBuffer(
+    VmaAllocator allocator,
+    VkBuffer buffer,
+    const VmaAllocationCreateInfo* pCreateInfo,
+    VmaAllocation* pAllocation,
+    VmaAllocationInfo* pAllocationInfo)
+{
+    VMA_ASSERT(allocator && buffer != VK_NULL_HANDLE && pCreateInfo && pAllocation);
+
+    VMA_DEBUG_LOG("vmaAllocateMemoryForBuffer");
+
+    VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+    VkMemoryRequirements vkMemReq = {};
+    bool requiresDedicatedAllocation = false;
+    bool prefersDedicatedAllocation = false;
+    allocator->GetBufferMemoryRequirements(buffer, vkMemReq,
+        requiresDedicatedAllocation,
+        prefersDedicatedAllocation);
+
+    VkResult result = allocator->AllocateMemory(
+        vkMemReq,
+        requiresDedicatedAllocation,
+        prefersDedicatedAllocation,
+        buffer, // dedicatedBuffer
+        UINT32_MAX, // dedicatedBufferUsage
+        VK_NULL_HANDLE, // dedicatedImage
+        *pCreateInfo,
+        VMA_SUBALLOCATION_TYPE_BUFFER,
+        1, // allocationCount
+        pAllocation);
+
+#if VMA_RECORDING_ENABLED
+    if(allocator->GetRecorder() != VMA_NULL)
+    {
+        allocator->GetRecorder()->RecordAllocateMemoryForBuffer(
+            allocator->GetCurrentFrameIndex(),
+            vkMemReq,
+            requiresDedicatedAllocation,
+            prefersDedicatedAllocation,
+            *pCreateInfo,
+            *pAllocation);
+    }
+#endif
+
+    if(pAllocationInfo && result == VK_SUCCESS)
+    {
+        allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
+    }
+
+    return result;
+}
+
+VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryForImage(
+    VmaAllocator allocator,
+    VkImage image,
+    const VmaAllocationCreateInfo* pCreateInfo,
+    VmaAllocation* pAllocation,
+    VmaAllocationInfo* pAllocationInfo)
+{
+    VMA_ASSERT(allocator && image != VK_NULL_HANDLE && pCreateInfo && pAllocation);
+
+    VMA_DEBUG_LOG("vmaAllocateMemoryForImage");
+
+    VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+    VkMemoryRequirements vkMemReq = {};
+    bool requiresDedicatedAllocation = false;
+    bool prefersDedicatedAllocation = false;
+    allocator->GetImageMemoryRequirements(image, vkMemReq,
+        requiresDedicatedAllocation, prefersDedicatedAllocation);
+
+    VkResult result = allocator->AllocateMemory(
+        vkMemReq,
+        requiresDedicatedAllocation,
+        prefersDedicatedAllocation,
+        VK_NULL_HANDLE, // dedicatedBuffer
+        UINT32_MAX, // dedicatedBufferUsage
+        image, // dedicatedImage
+        *pCreateInfo,
+        VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN,
+        1, // allocationCount
+        pAllocation);
+
+#if VMA_RECORDING_ENABLED
+    if(allocator->GetRecorder() != VMA_NULL)
+    {
+        allocator->GetRecorder()->RecordAllocateMemoryForImage(
+            allocator->GetCurrentFrameIndex(),
+            vkMemReq,
+            requiresDedicatedAllocation,
+            prefersDedicatedAllocation,
+            *pCreateInfo,
+            *pAllocation);
+    }
+#endif
+
+    if(pAllocationInfo && result == VK_SUCCESS)
+    {
+        allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
+    }
+
+    return result;
+}
+
+VMA_CALL_PRE void VMA_CALL_POST vmaFreeMemory(
+    VmaAllocator allocator,
+    VmaAllocation allocation)
+{
+    VMA_ASSERT(allocator);
+
+    if(allocation == VK_NULL_HANDLE)
+    {
+        return;
+    }
+
+    VMA_DEBUG_LOG("vmaFreeMemory");
+
+    VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+#if VMA_RECORDING_ENABLED
+    if(allocator->GetRecorder() != VMA_NULL)
+    {
+        allocator->GetRecorder()->RecordFreeMemory(
+            allocator->GetCurrentFrameIndex(),
+            allocation);
+    }
+#endif
+
+    allocator->FreeMemory(
+        1, // allocationCount
+        &allocation);
+}
+
+VMA_CALL_PRE void VMA_CALL_POST vmaFreeMemoryPages(
+    VmaAllocator allocator,
+    size_t allocationCount,
+    const VmaAllocation* pAllocations)
+{
+    if(allocationCount == 0)
+    {
+        return;
+    }
+
+    VMA_ASSERT(allocator);
+
+    VMA_DEBUG_LOG("vmaFreeMemoryPages");
+
+    VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+#if VMA_RECORDING_ENABLED
+    if(allocator->GetRecorder() != VMA_NULL)
+    {
+        allocator->GetRecorder()->RecordFreeMemoryPages(
+            allocator->GetCurrentFrameIndex(),
+            (uint64_t)allocationCount,
+            pAllocations);
+    }
+#endif
+
+    allocator->FreeMemory(allocationCount, pAllocations);
+}
+
+VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocationInfo(
+    VmaAllocator allocator,
+    VmaAllocation allocation,
+    VmaAllocationInfo* pAllocationInfo)
+{
+    VMA_ASSERT(allocator && allocation && pAllocationInfo);
+
+    VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+#if VMA_RECORDING_ENABLED
+    if(allocator->GetRecorder() != VMA_NULL)
+    {
+        allocator->GetRecorder()->RecordGetAllocationInfo(
+            allocator->GetCurrentFrameIndex(),
+            allocation);
+    }
+#endif
+
+    allocator->GetAllocationInfo(allocation, pAllocationInfo);
+}
+
+VMA_CALL_PRE VkBool32 VMA_CALL_POST vmaTouchAllocation(
+    VmaAllocator allocator,
+    VmaAllocation allocation)
+{
+    VMA_ASSERT(allocator && allocation);
+
+    VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+#if VMA_RECORDING_ENABLED
+    if(allocator->GetRecorder() != VMA_NULL)
+    {
+        allocator->GetRecorder()->RecordTouchAllocation(
+            allocator->GetCurrentFrameIndex(),
+            allocation);
+    }
+#endif
+
+    return allocator->TouchAllocation(allocation);
+}
+
+VMA_CALL_PRE void VMA_CALL_POST vmaSetAllocationUserData(
+    VmaAllocator allocator,
+    VmaAllocation allocation,
+    void* pUserData)
+{
+    VMA_ASSERT(allocator && allocation);
+
+    VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+    allocation->SetUserData(allocator, pUserData);
+
+#if VMA_RECORDING_ENABLED
+    if(allocator->GetRecorder() != VMA_NULL)
+    {
+        allocator->GetRecorder()->RecordSetAllocationUserData(
+            allocator->GetCurrentFrameIndex(),
+            allocation,
+            pUserData);
+    }
+#endif
+}
+
+VMA_CALL_PRE void VMA_CALL_POST vmaCreateLostAllocation(
+    VmaAllocator allocator,
+    VmaAllocation* pAllocation)
+{
+    VMA_ASSERT(allocator && pAllocation);
+
+    VMA_DEBUG_GLOBAL_MUTEX_LOCK;
+
+    allocator->CreateLostAllocation(pAllocation);
+
+#if VMA_RECORDING_ENABLED
+    if(allocator->GetRecorder() != VMA_NULL)
+    {
+        allocator->GetRecorder()->RecordCreateLostAllocation(
+            allocator->GetCurrentFrameIndex(),
+            *pAllocation);
+    }
+#endif
+}
+
+VMA_CALL_PRE VkResult VMA_CALL_POST vmaMapMemory(
+    VmaAllocator allocator,
+    VmaAllocation allocation,
+    void** ppData)
+{
+    VMA_ASSERT(allocator && allocation && ppData);
+
+    VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+    VkResult res = allocator->Map(allocation, ppData);
+
+#if VMA_RECORDING_ENABLED
+    if(allocator->GetRecorder() != VMA_NULL)
+    {
+        allocator->GetRecorder()->RecordMapMemory(
+            allocator->GetCurrentFrameIndex(),
+            allocation);
+    }
+#endif
+
+    return res;
+}
+
+VMA_CALL_PRE void VMA_CALL_POST vmaUnmapMemory(
+    VmaAllocator allocator,
+    VmaAllocation allocation)
+{
+    VMA_ASSERT(allocator && allocation);
+
+    VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+#if VMA_RECORDING_ENABLED
+    if(allocator->GetRecorder() != VMA_NULL)
+    {
+        allocator->GetRecorder()->RecordUnmapMemory(
+            allocator->GetCurrentFrameIndex(),
+            allocation);
+    }
+#endif
+
+    allocator->Unmap(allocation);
+}
+
+VMA_CALL_PRE VkResult VMA_CALL_POST vmaFlushAllocation(VmaAllocator allocator, VmaAllocation allocation, VkDeviceSize offset, VkDeviceSize size)
+{
+    VMA_ASSERT(allocator && allocation);
+
+    VMA_DEBUG_LOG("vmaFlushAllocation");
+
+    VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+    const VkResult res = allocator->FlushOrInvalidateAllocation(allocation, offset, size, VMA_CACHE_FLUSH);
+
+#if VMA_RECORDING_ENABLED
+    if(allocator->GetRecorder() != VMA_NULL)
+    {
+        allocator->GetRecorder()->RecordFlushAllocation(
+            allocator->GetCurrentFrameIndex(),
+            allocation, offset, size);
+    }
+#endif
+
+    return res;
+}
+
+VMA_CALL_PRE VkResult VMA_CALL_POST vmaInvalidateAllocation(VmaAllocator allocator, VmaAllocation allocation, VkDeviceSize offset, VkDeviceSize size)
+{
+    VMA_ASSERT(allocator && allocation);
+
+    VMA_DEBUG_LOG("vmaInvalidateAllocation");
+
+    VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+    const VkResult res = allocator->FlushOrInvalidateAllocation(allocation, offset, size, VMA_CACHE_INVALIDATE);
+
+#if VMA_RECORDING_ENABLED
+    if(allocator->GetRecorder() != VMA_NULL)
+    {
+        allocator->GetRecorder()->RecordInvalidateAllocation(
+            allocator->GetCurrentFrameIndex(),
+            allocation, offset, size);
+    }
+#endif
+
+    return res;
+}
+
+VMA_CALL_PRE VkResult VMA_CALL_POST vmaFlushAllocations(
+    VmaAllocator allocator,
+    uint32_t allocationCount,
+    const VmaAllocation* allocations,
+    const VkDeviceSize* offsets,
+    const VkDeviceSize* sizes)
+{
+    VMA_ASSERT(allocator);
+
+    if(allocationCount == 0)
+    {
+        return VK_SUCCESS;
+    }
+
+    VMA_ASSERT(allocations);
+
+    VMA_DEBUG_LOG("vmaFlushAllocations");
+
+    VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+    const VkResult res = allocator->FlushOrInvalidateAllocations(allocationCount, allocations, offsets, sizes, VMA_CACHE_FLUSH);
+
+#if VMA_RECORDING_ENABLED
+    if(allocator->GetRecorder() != VMA_NULL)
+    {
+        // Not recorded.
+    }
+#endif
+
+    return res;
+}
+
+VMA_CALL_PRE VkResult VMA_CALL_POST vmaInvalidateAllocations(
+    VmaAllocator allocator,
+    uint32_t allocationCount,
+    const VmaAllocation* allocations,
+    const VkDeviceSize* offsets,
+    const VkDeviceSize* sizes)
+{
+    VMA_ASSERT(allocator);
+
+    if(allocationCount == 0)
+    {
+        return VK_SUCCESS;
+    }
+
+    VMA_ASSERT(allocations);
+
+    VMA_DEBUG_LOG("vmaInvalidateAllocations");
+
+    VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+    const VkResult res = allocator->FlushOrInvalidateAllocations(allocationCount, allocations, offsets, sizes, VMA_CACHE_INVALIDATE);
+
+#if VMA_RECORDING_ENABLED
+    if(allocator->GetRecorder() != VMA_NULL)
+    {
+        // Not recorded.
+    }
+#endif
+
+    return res;
+}
+
+VMA_CALL_PRE VkResult VMA_CALL_POST vmaCheckCorruption(VmaAllocator allocator, uint32_t memoryTypeBits)
+{
+    VMA_ASSERT(allocator);
+
+    VMA_DEBUG_LOG("vmaCheckCorruption");
+
+    VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+    return allocator->CheckCorruption(memoryTypeBits);
+}
+
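vmaCheckCorruption takes a `memoryTypeBits` bitmask with one bit per Vulkan memory type, the same convention as `VkMemoryRequirements::memoryTypeBits`. As a minimal standalone sketch of that convention (the helper names here are illustrative, not part of VMA):

```c
#include <stdint.h>
#include <stdbool.h>

/* Build a mask selecting the given memory type indices, using the same
   one-bit-per-type convention as VkMemoryRequirements::memoryTypeBits
   and the memoryTypeBits parameter of vmaCheckCorruption(). */
static uint32_t make_memory_type_mask(const uint32_t* indices, int count)
{
    uint32_t mask = 0;
    for(int i = 0; i < count; ++i)
    {
        mask |= 1u << indices[i];
    }
    return mask;
}

/* Test whether a particular memory type index is selected by the mask. */
static bool mask_contains_type(uint32_t mask, uint32_t memoryTypeIndex)
{
    return (mask & (1u << memoryTypeIndex)) != 0;
}
```

Passing `UINT32_MAX` as the mask therefore selects every memory type.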
+VMA_CALL_PRE VkResult VMA_CALL_POST vmaDefragment(
+    VmaAllocator allocator,
+    const VmaAllocation* pAllocations,
+    size_t allocationCount,
+    VkBool32* pAllocationsChanged,
+    const VmaDefragmentationInfo *pDefragmentationInfo,
+    VmaDefragmentationStats* pDefragmentationStats)
+{
+    // Deprecated interface, reimplemented using new one.
+
+    VmaDefragmentationInfo2 info2 = {};
+    info2.allocationCount = (uint32_t)allocationCount;
+    info2.pAllocations = pAllocations;
+    info2.pAllocationsChanged = pAllocationsChanged;
+    if(pDefragmentationInfo != VMA_NULL)
+    {
+        info2.maxCpuAllocationsToMove = pDefragmentationInfo->maxAllocationsToMove;
+        info2.maxCpuBytesToMove = pDefragmentationInfo->maxBytesToMove;
+    }
+    else
+    {
+        info2.maxCpuAllocationsToMove = UINT32_MAX;
+        info2.maxCpuBytesToMove = VK_WHOLE_SIZE;
+    }
+    // info2.flags, maxGpuAllocationsToMove, maxGpuBytesToMove, commandBuffer deliberately left zero.
+
+    VmaDefragmentationContext ctx;
+    VkResult res = vmaDefragmentationBegin(allocator, &info2, pDefragmentationStats, &ctx);
+    if(res == VK_NOT_READY)
+    {
+        res = vmaDefragmentationEnd(allocator, ctx);
+    }
+    return res;
+}
+
+VMA_CALL_PRE VkResult VMA_CALL_POST vmaDefragmentationBegin(
+    VmaAllocator allocator,
+    const VmaDefragmentationInfo2* pInfo,
+    VmaDefragmentationStats* pStats,
+    VmaDefragmentationContext *pContext)
+{
+    VMA_ASSERT(allocator && pInfo && pContext);
+
+    // Degenerate case: Nothing to defragment.
+    if(pInfo->allocationCount == 0 && pInfo->poolCount == 0)
+    {
+        return VK_SUCCESS;
+    }
+
+    VMA_ASSERT(pInfo->allocationCount == 0 || pInfo->pAllocations != VMA_NULL);
+    VMA_ASSERT(pInfo->poolCount == 0 || pInfo->pPools != VMA_NULL);
+    VMA_HEAVY_ASSERT(VmaValidatePointerArray(pInfo->allocationCount, pInfo->pAllocations));
+    VMA_HEAVY_ASSERT(VmaValidatePointerArray(pInfo->poolCount, pInfo->pPools));
+
+    VMA_DEBUG_LOG("vmaDefragmentationBegin");
+
+    VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+    VkResult res = allocator->DefragmentationBegin(*pInfo, pStats, pContext);
+
+#if VMA_RECORDING_ENABLED
+    if(allocator->GetRecorder() != VMA_NULL)
+    {
+        allocator->GetRecorder()->RecordDefragmentationBegin(
+            allocator->GetCurrentFrameIndex(), *pInfo, *pContext);
+    }
+#endif
+
+    return res;
+}
+
+VMA_CALL_PRE VkResult VMA_CALL_POST vmaDefragmentationEnd(
+    VmaAllocator allocator,
+    VmaDefragmentationContext context)
+{
+    VMA_ASSERT(allocator);
+
+    VMA_DEBUG_LOG("vmaDefragmentationEnd");
+
+    if(context != VK_NULL_HANDLE)
+    {
+        VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+#if VMA_RECORDING_ENABLED
+        if(allocator->GetRecorder() != VMA_NULL)
+        {
+            allocator->GetRecorder()->RecordDefragmentationEnd(
+                allocator->GetCurrentFrameIndex(), context);
+        }
+#endif
+
+        return allocator->DefragmentationEnd(context);
+    }
+    else
+    {
+        return VK_SUCCESS;
+    }
+}
+
+VMA_CALL_PRE VkResult VMA_CALL_POST vmaBeginDefragmentationPass(
+    VmaAllocator allocator,
+    VmaDefragmentationContext context,
+    VmaDefragmentationPassInfo* pInfo)
+{
+    VMA_ASSERT(allocator);
+    VMA_ASSERT(pInfo);
+
+    VMA_DEBUG_LOG("vmaBeginDefragmentationPass");
+
+    VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+    if(context == VK_NULL_HANDLE)
+    {
+        pInfo->moveCount = 0;
+        return VK_SUCCESS;
+    }
+
+    return allocator->DefragmentationPassBegin(pInfo, context);
+}
+
+VMA_CALL_PRE VkResult VMA_CALL_POST vmaEndDefragmentationPass(
+    VmaAllocator allocator,
+    VmaDefragmentationContext context)
+{
+    VMA_ASSERT(allocator);
+
+    VMA_DEBUG_LOG("vmaEndDefragmentationPass");
+    VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+    if(context == VK_NULL_HANDLE)
+        return VK_SUCCESS;
+
+    return allocator->DefragmentationPassEnd(context);
+}
+
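The deprecated vmaDefragment forwards to vmaDefragmentationBegin by translating the old VmaDefragmentationInfo into a VmaDefragmentationInfo2; a null old-style info means "no limits" on CPU-side moves (UINT32_MAX allocations, VK_WHOLE_SIZE bytes). A standalone mirror of just that defaulting rule, using local stand-in structs rather than VMA's real types:

```c
#include <stdint.h>

#define WHOLE_SIZE (~0ULL) /* stands in for VK_WHOLE_SIZE */

/* Local mirrors of the relevant fields, for illustration only. */
typedef struct { uint32_t maxAllocationsToMove; uint64_t maxBytesToMove; } OldDefragInfo;
typedef struct { uint32_t maxCpuAllocationsToMove; uint64_t maxCpuBytesToMove; } NewDefragInfo;

/* Same defaulting rule as the deprecated vmaDefragment(): copy the caller's
   CPU-side limits if given, otherwise fall back to "unlimited". */
static NewDefragInfo translate_defrag_info(const OldDefragInfo* pOld)
{
    NewDefragInfo n = {0};
    if(pOld != 0)
    {
        n.maxCpuAllocationsToMove = pOld->maxAllocationsToMove;
        n.maxCpuBytesToMove = pOld->maxBytesToMove;
    }
    else
    {
        n.maxCpuAllocationsToMove = UINT32_MAX;
        n.maxCpuBytesToMove = WHOLE_SIZE;
    }
    return n;
}
```

The GPU-side fields of the new struct are deliberately left zero by the legacy path, so no command buffer is required.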
+VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindBufferMemory(
+    VmaAllocator allocator,
+    VmaAllocation allocation,
+    VkBuffer buffer)
+{
+    VMA_ASSERT(allocator && allocation && buffer);
+
+    VMA_DEBUG_LOG("vmaBindBufferMemory");
+
+    VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+    return allocator->BindBufferMemory(allocation, 0, buffer, VMA_NULL);
+}
+
+VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindBufferMemory2(
+    VmaAllocator allocator,
+    VmaAllocation allocation,
+    VkDeviceSize allocationLocalOffset,
+    VkBuffer buffer,
+    const void* pNext)
+{
+    VMA_ASSERT(allocator && allocation && buffer);
+
+    VMA_DEBUG_LOG("vmaBindBufferMemory2");
+
+    VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+    return allocator->BindBufferMemory(allocation, allocationLocalOffset, buffer, pNext);
+}
+
+VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindImageMemory(
+    VmaAllocator allocator,
+    VmaAllocation allocation,
+    VkImage image)
+{
+    VMA_ASSERT(allocator && allocation && image);
+
+    VMA_DEBUG_LOG("vmaBindImageMemory");
+
+    VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+    return allocator->BindImageMemory(allocation, 0, image, VMA_NULL);
+}
+
+VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindImageMemory2(
+    VmaAllocator allocator,
+    VmaAllocation allocation,
+    VkDeviceSize allocationLocalOffset,
+    VkImage image,
+    const void* pNext)
+{
+    VMA_ASSERT(allocator && allocation && image);
+
+    VMA_DEBUG_LOG("vmaBindImageMemory2");
+
+    VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+    return allocator->BindImageMemory(allocation, allocationLocalOffset, image, pNext);
+}
+
+VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateBuffer(
+    VmaAllocator allocator,
+    const VkBufferCreateInfo* pBufferCreateInfo,
+    const VmaAllocationCreateInfo* pAllocationCreateInfo,
+    VkBuffer* pBuffer,
+    VmaAllocation* pAllocation,
+    VmaAllocationInfo* pAllocationInfo)
+{
+    VMA_ASSERT(allocator && pBufferCreateInfo && pAllocationCreateInfo && pBuffer && pAllocation);
+
+    if(pBufferCreateInfo->size == 0)
+    {
+        return VK_ERROR_VALIDATION_FAILED_EXT;
+    }
+    if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT_COPY) != 0 &&
+        !allocator->m_UseKhrBufferDeviceAddress)
+    {
+        VMA_ASSERT(0 && "Creating a buffer with VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT is not valid if VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT was not used.");
+        return VK_ERROR_VALIDATION_FAILED_EXT;
+    }
+
+    VMA_DEBUG_LOG("vmaCreateBuffer");
+
+    VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+    *pBuffer = VK_NULL_HANDLE;
+    *pAllocation = VK_NULL_HANDLE;
+
+    // 1. Create VkBuffer.
+    VkResult res = (*allocator->GetVulkanFunctions().vkCreateBuffer)(
+        allocator->m_hDevice,
+        pBufferCreateInfo,
+        allocator->GetAllocationCallbacks(),
+        pBuffer);
+    if(res >= 0)
+    {
+        // 2. vkGetBufferMemoryRequirements.
+        VkMemoryRequirements vkMemReq = {};
+        bool requiresDedicatedAllocation = false;
+        bool prefersDedicatedAllocation = false;
+        allocator->GetBufferMemoryRequirements(*pBuffer, vkMemReq,
+            requiresDedicatedAllocation, prefersDedicatedAllocation);
+
+        // 3. Allocate memory using allocator.
+        res = allocator->AllocateMemory(
+            vkMemReq,
+            requiresDedicatedAllocation,
+            prefersDedicatedAllocation,
+            *pBuffer, // dedicatedBuffer
+            pBufferCreateInfo->usage, // dedicatedBufferUsage
+            VK_NULL_HANDLE, // dedicatedImage
+            *pAllocationCreateInfo,
+            VMA_SUBALLOCATION_TYPE_BUFFER,
+            1, // allocationCount
+            pAllocation);
+
+#if VMA_RECORDING_ENABLED
+        if(allocator->GetRecorder() != VMA_NULL)
+        {
+            allocator->GetRecorder()->RecordCreateBuffer(
+                allocator->GetCurrentFrameIndex(),
+                *pBufferCreateInfo,
+                *pAllocationCreateInfo,
+                *pAllocation);
+        }
+#endif
+
+        if(res >= 0)
+        {
+            // 4. Bind buffer with memory.
+            if((pAllocationCreateInfo->flags & VMA_ALLOCATION_CREATE_DONT_BIND_BIT) == 0)
+            {
+                res = allocator->BindBufferMemory(*pAllocation, 0, *pBuffer, VMA_NULL);
+            }
+            if(res >= 0)
+            {
+                // All steps succeeded.
+                #if VMA_STATS_STRING_ENABLED
+                    (*pAllocation)->InitBufferImageUsage(pBufferCreateInfo->usage);
+                #endif
+                if(pAllocationInfo != VMA_NULL)
+                {
+                    allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
+                }
+
+                return VK_SUCCESS;
+            }
+            allocator->FreeMemory(
+                1, // allocationCount
+                pAllocation);
+            *pAllocation = VK_NULL_HANDLE;
+            (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
+            *pBuffer = VK_NULL_HANDLE;
+            return res;
+        }
+        (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
+        *pBuffer = VK_NULL_HANDLE;
+        return res;
+    }
+    return res;
+}
+
+VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateBufferWithAlignment(
+    VmaAllocator allocator,
+    const VkBufferCreateInfo* pBufferCreateInfo,
+    const VmaAllocationCreateInfo* pAllocationCreateInfo,
+    VkDeviceSize minAlignment,
+    VkBuffer* pBuffer,
+    VmaAllocation* pAllocation,
+    VmaAllocationInfo* pAllocationInfo)
+{
+    VMA_ASSERT(allocator && pBufferCreateInfo && pAllocationCreateInfo && VmaIsPow2(minAlignment) && pBuffer && pAllocation);
+
+    if(pBufferCreateInfo->size == 0)
+    {
+        return VK_ERROR_VALIDATION_FAILED_EXT;
+    }
+    if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT_COPY) != 0 &&
+        !allocator->m_UseKhrBufferDeviceAddress)
+    {
+        VMA_ASSERT(0 && "Creating a buffer with VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT is not valid if VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT was not used.");
+        return VK_ERROR_VALIDATION_FAILED_EXT;
+    }
+
+    VMA_DEBUG_LOG("vmaCreateBufferWithAlignment");
+
+    VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+    *pBuffer = VK_NULL_HANDLE;
+    *pAllocation = VK_NULL_HANDLE;
+
+    // 1. Create VkBuffer.
+    VkResult res = (*allocator->GetVulkanFunctions().vkCreateBuffer)(
+        allocator->m_hDevice,
+        pBufferCreateInfo,
+        allocator->GetAllocationCallbacks(),
+        pBuffer);
+    if(res >= 0)
+    {
+        // 2. vkGetBufferMemoryRequirements.
+        VkMemoryRequirements vkMemReq = {};
+        bool requiresDedicatedAllocation = false;
+        bool prefersDedicatedAllocation = false;
+        allocator->GetBufferMemoryRequirements(*pBuffer, vkMemReq,
+            requiresDedicatedAllocation, prefersDedicatedAllocation);
+
+        // 2a. Include minAlignment.
+        vkMemReq.alignment = VMA_MAX(vkMemReq.alignment, minAlignment);
+
+        // 3. Allocate memory using allocator.
+        res = allocator->AllocateMemory(
+            vkMemReq,
+            requiresDedicatedAllocation,
+            prefersDedicatedAllocation,
+            *pBuffer, // dedicatedBuffer
+            pBufferCreateInfo->usage, // dedicatedBufferUsage
+            VK_NULL_HANDLE, // dedicatedImage
+            *pAllocationCreateInfo,
+            VMA_SUBALLOCATION_TYPE_BUFFER,
+            1, // allocationCount
+            pAllocation);
+
+#if VMA_RECORDING_ENABLED
+        if(allocator->GetRecorder() != VMA_NULL)
+        {
+            VMA_ASSERT(0 && "Not implemented.");
+        }
+#endif
+
+        if(res >= 0)
+        {
+            // 4. Bind buffer with memory.
+            if((pAllocationCreateInfo->flags & VMA_ALLOCATION_CREATE_DONT_BIND_BIT) == 0)
+            {
+                res = allocator->BindBufferMemory(*pAllocation, 0, *pBuffer, VMA_NULL);
+            }
+            if(res >= 0)
+            {
+                // All steps succeeded.
+                #if VMA_STATS_STRING_ENABLED
+                    (*pAllocation)->InitBufferImageUsage(pBufferCreateInfo->usage);
+                #endif
+                if(pAllocationInfo != VMA_NULL)
+                {
+                    allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
+                }
+
+                return VK_SUCCESS;
+            }
+            allocator->FreeMemory(
+                1, // allocationCount
+                pAllocation);
+            *pAllocation = VK_NULL_HANDLE;
+            (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
+            *pBuffer = VK_NULL_HANDLE;
+            return res;
+        }
+        (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
+        *pBuffer = VK_NULL_HANDLE;
+        return res;
+    }
+    return res;
+}
+
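vmaCreateBufferWithAlignment only ever strengthens alignment: the effective value is the maximum of the driver-reported requirement and the caller's `minAlignment` (which must be a power of 2). A minimal sketch of those two rules, with illustrative helper names:

```c
#include <stdint.h>
#include <stdbool.h>

typedef uint64_t DeviceSize; /* stands in for VkDeviceSize */

/* Same precondition that VmaIsPow2 checks on minAlignment. */
static bool is_pow2(DeviceSize x)
{
    return x != 0 && (x & (x - 1)) == 0;
}

/* Same rule as VMA_MAX(vkMemReq.alignment, minAlignment): the caller can
   tighten alignment but never weaken the driver's requirement. */
static DeviceSize effective_alignment(DeviceSize reported, DeviceSize minAlignment)
{
    return reported > minAlignment ? reported : minAlignment;
}
```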
+VMA_CALL_PRE void VMA_CALL_POST vmaDestroyBuffer(
+    VmaAllocator allocator,
+    VkBuffer buffer,
+    VmaAllocation allocation)
+{
+    VMA_ASSERT(allocator);
+
+    if(buffer == VK_NULL_HANDLE && allocation == VK_NULL_HANDLE)
+    {
+        return;
+    }
+
+    VMA_DEBUG_LOG("vmaDestroyBuffer");
+
+    VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+#if VMA_RECORDING_ENABLED
+    if(allocator->GetRecorder() != VMA_NULL)
+    {
+        allocator->GetRecorder()->RecordDestroyBuffer(
+            allocator->GetCurrentFrameIndex(),
+            allocation);
+    }
+#endif
+
+    if(buffer != VK_NULL_HANDLE)
+    {
+        (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, buffer, allocator->GetAllocationCallbacks());
+    }
+
+    if(allocation != VK_NULL_HANDLE)
+    {
+        allocator->FreeMemory(
+            1, // allocationCount
+            &allocation);
+    }
+}
+
+VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateImage(
+    VmaAllocator allocator,
+    const VkImageCreateInfo* pImageCreateInfo,
+    const VmaAllocationCreateInfo* pAllocationCreateInfo,
+    VkImage* pImage,
+    VmaAllocation* pAllocation,
+    VmaAllocationInfo* pAllocationInfo)
+{
+    VMA_ASSERT(allocator && pImageCreateInfo && pAllocationCreateInfo && pImage && pAllocation);
+
+    if(pImageCreateInfo->extent.width == 0 ||
+        pImageCreateInfo->extent.height == 0 ||
+        pImageCreateInfo->extent.depth == 0 ||
+        pImageCreateInfo->mipLevels == 0 ||
+        pImageCreateInfo->arrayLayers == 0)
+    {
+        return VK_ERROR_VALIDATION_FAILED_EXT;
+    }
+
+    VMA_DEBUG_LOG("vmaCreateImage");
+
+    VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+    *pImage = VK_NULL_HANDLE;
+    *pAllocation = VK_NULL_HANDLE;
+
+    // 1. Create VkImage.
+    VkResult res = (*allocator->GetVulkanFunctions().vkCreateImage)(
+        allocator->m_hDevice,
+        pImageCreateInfo,
+        allocator->GetAllocationCallbacks(),
+        pImage);
+    if(res >= 0)
+    {
+        VmaSuballocationType suballocType = pImageCreateInfo->tiling == VK_IMAGE_TILING_OPTIMAL ?
+            VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL :
+            VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR;
+
+        // 2. Allocate memory using allocator.
+        VkMemoryRequirements vkMemReq = {};
+        bool requiresDedicatedAllocation = false;
+        bool prefersDedicatedAllocation = false;
+        allocator->GetImageMemoryRequirements(*pImage, vkMemReq,
+            requiresDedicatedAllocation, prefersDedicatedAllocation);
+
+        res = allocator->AllocateMemory(
+            vkMemReq,
+            requiresDedicatedAllocation,
+            prefersDedicatedAllocation,
+            VK_NULL_HANDLE, // dedicatedBuffer
+            UINT32_MAX, // dedicatedBufferUsage
+            *pImage, // dedicatedImage
+            *pAllocationCreateInfo,
+            suballocType,
+            1, // allocationCount
+            pAllocation);
+
+#if VMA_RECORDING_ENABLED
+        if(allocator->GetRecorder() != VMA_NULL)
+        {
+            allocator->GetRecorder()->RecordCreateImage(
+                allocator->GetCurrentFrameIndex(),
+                *pImageCreateInfo,
+                *pAllocationCreateInfo,
+                *pAllocation);
+        }
+#endif
+
+        if(res >= 0)
+        {
+            // 3. Bind image with memory.
+            if((pAllocationCreateInfo->flags & VMA_ALLOCATION_CREATE_DONT_BIND_BIT) == 0)
+            {
+                res = allocator->BindImageMemory(*pAllocation, 0, *pImage, VMA_NULL);
+            }
+            if(res >= 0)
+            {
+                // All steps succeeded.
+                #if VMA_STATS_STRING_ENABLED
+                    (*pAllocation)->InitBufferImageUsage(pImageCreateInfo->usage);
+                #endif
+                if(pAllocationInfo != VMA_NULL)
+                {
+                    allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
+                }
+
+                return VK_SUCCESS;
+            }
+            allocator->FreeMemory(
+                1, // allocationCount
+                pAllocation);
+            *pAllocation = VK_NULL_HANDLE;
+            (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, *pImage, allocator->GetAllocationCallbacks());
+            *pImage = VK_NULL_HANDLE;
+            return res;
+        }
+        (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, *pImage, allocator->GetAllocationCallbacks());
+        *pImage = VK_NULL_HANDLE;
+        return res;
+    }
+    return res;
+}
+
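vmaCreateImage rejects degenerate create infos up front: any zero extent dimension, mip level count, or array layer count yields VK_ERROR_VALIDATION_FAILED_EXT before anything is created. A standalone mirror of that early-out, using a local stand-in struct rather than Vulkan's VkImageCreateInfo:

```c
#include <stdint.h>
#include <stdbool.h>

/* Minimal mirror of only the fields vmaCreateImage() validates;
   illustrative, not Vulkan's VkImageCreateInfo. */
typedef struct {
    uint32_t width, height, depth;
    uint32_t mipLevels, arrayLayers;
} ImageDims;

/* Same early-out rule as vmaCreateImage(): every dimension and count
   must be nonzero for the create info to be usable. */
static bool image_dims_valid(const ImageDims* d)
{
    return d->width != 0 && d->height != 0 && d->depth != 0 &&
           d->mipLevels != 0 && d->arrayLayers != 0;
}
```

After this check, the function picks VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL or _IMAGE_LINEAR from the image's tiling, which the allocator uses to keep linear and optimal resources apart.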
+VMA_CALL_PRE void VMA_CALL_POST vmaDestroyImage(
+    VmaAllocator allocator,
+    VkImage image,
+    VmaAllocation allocation)
+{
+    VMA_ASSERT(allocator);
+
+    if(image == VK_NULL_HANDLE && allocation == VK_NULL_HANDLE)
+    {
+        return;
+    }
+
+    VMA_DEBUG_LOG("vmaDestroyImage");
+
+    VMA_DEBUG_GLOBAL_MUTEX_LOCK
+
+#if VMA_RECORDING_ENABLED
+    if(allocator->GetRecorder() != VMA_NULL)
+    {
+        allocator->GetRecorder()->RecordDestroyImage(
+            allocator->GetCurrentFrameIndex(),
+            allocation);
+    }
+#endif
+
+    if(image != VK_NULL_HANDLE)
+    {
+        (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, image, allocator->GetAllocationCallbacks());
+    }
+    if(allocation != VK_NULL_HANDLE)
+    {
+        allocator->FreeMemory(
+            1, // allocationCount
+            &allocation);
+    }
+}
+
+Definition: vk_mem_alloc.h:2897
+uint32_t memoryTypeBits
Bitmask containing one bit set for every memory type acceptable for this allocation.
Definition: vk_mem_alloc.h:2923
+VmaPool pool
Pool that this allocation should be created in.
Definition: vk_mem_alloc.h:2929
+VkMemoryPropertyFlags preferredFlags
Flags that preferably should be set in a memory type chosen for an allocation.
Definition: vk_mem_alloc.h:2915
+void * pUserData
Custom general-purpose pointer that will be stored in VmaAllocation, can be read as VmaAllocationInfo...
Definition: vk_mem_alloc.h:2936
+VkMemoryPropertyFlags requiredFlags
Flags that must be set in a Memory Type chosen for an allocation.
Definition: vk_mem_alloc.h:2910
+float priority
A floating-point value between 0 and 1, indicating the priority of the allocation relative to other m...
Definition: vk_mem_alloc.h:2943
+VmaMemoryUsage usage
Intended usage of memory.
Definition: vk_mem_alloc.h:2905
+VmaAllocationCreateFlags flags
Use VmaAllocationCreateFlagBits enum.
Definition: vk_mem_alloc.h:2899
Represents a single memory allocation.
-Parameters of VmaAllocation objects, that can be retrieved using function vmaGetAllocationInfo().
Definition: vk_mem_alloc.h:3267
-VkDeviceSize offset
Offset in VkDeviceMemory object to the beginning of this allocation, in bytes. (deviceMemory,...
Definition: vk_mem_alloc.h:3291
-void * pMappedData
Pointer to the beginning of this allocation as mapped data.
Definition: vk_mem_alloc.h:3311
-uint32_t memoryType
Memory type index that this allocation was allocated from.
Definition: vk_mem_alloc.h:3272
-VkDeviceSize size
Size of this allocation, in bytes.
Definition: vk_mem_alloc.h:3302
-void * pUserData
Custom general-purpose pointer that was passed as VmaAllocationCreateInfo::pUserData or set using vma...
Definition: vk_mem_alloc.h:3316
-VkDeviceMemory deviceMemory
Handle to Vulkan memory object.
Definition: vk_mem_alloc.h:3281
-Description of a Allocator to be created.
Definition: vk_mem_alloc.h:2422
-VkPhysicalDevice physicalDevice
Vulkan physical device.
Definition: vk_mem_alloc.h:2427
-uint32_t frameInUseCount
Maximum number of additional frames that are in use at the same time as current frame.
Definition: vk_mem_alloc.h:2453
-const VkDeviceSize * pHeapSizeLimit
Either null or a pointer to an array of limits on maximum number of bytes that can be allocated out o...
Definition: vk_mem_alloc.h:2478
-VmaAllocatorCreateFlags flags
Flags for created allocator. Use VmaAllocatorCreateFlagBits enum.
Definition: vk_mem_alloc.h:2424
-const VmaVulkanFunctions * pVulkanFunctions
Pointers to Vulkan functions. Can be null.
Definition: vk_mem_alloc.h:2484
-const VkAllocationCallbacks * pAllocationCallbacks
Custom CPU memory allocation callbacks. Optional.
Definition: vk_mem_alloc.h:2436
-VkInstance instance
Handle to Vulkan instance object.
Definition: vk_mem_alloc.h:2496
-VkDeviceSize preferredLargeHeapBlockSize
Preferred size of a single VkDeviceMemory block to be allocated from large heaps > 1 GiB....
Definition: vk_mem_alloc.h:2433
-const VmaRecordSettings * pRecordSettings
Parameters for recording of VMA calls. Can be null.
Definition: vk_mem_alloc.h:2491
-VkDevice device
Vulkan device.
Definition: vk_mem_alloc.h:2430
-uint32_t vulkanApiVersion
Optional. The highest version of Vulkan that the application is designed to use.
Definition: vk_mem_alloc.h:2505
-const VkExternalMemoryHandleTypeFlagsKHR * pTypeExternalMemoryHandleTypes
Either null or a pointer to an array of external memory handle types for each Vulkan memory type.
Definition: vk_mem_alloc.h:2516
-const VmaDeviceMemoryCallbacks * pDeviceMemoryCallbacks
Informative callbacks for vkAllocateMemory, vkFreeMemory. Optional.
Definition: vk_mem_alloc.h:2439
+Parameters of VmaAllocation objects, that can be retrieved using function vmaGetAllocationInfo().
Definition: vk_mem_alloc.h:3264
+VkDeviceSize offset
Offset in VkDeviceMemory object to the beginning of this allocation, in bytes. (deviceMemory,...
Definition: vk_mem_alloc.h:3288
+void * pMappedData
Pointer to the beginning of this allocation as mapped data.
Definition: vk_mem_alloc.h:3308
+uint32_t memoryType
Memory type index that this allocation was allocated from.
Definition: vk_mem_alloc.h:3269
+VkDeviceSize size
Size of this allocation, in bytes.
Definition: vk_mem_alloc.h:3299
+void * pUserData
Custom general-purpose pointer that was passed as VmaAllocationCreateInfo::pUserData or set using vma...
Definition: vk_mem_alloc.h:3313
+VkDeviceMemory deviceMemory
Handle to Vulkan memory object.
Definition: vk_mem_alloc.h:3278
+Description of an Allocator to be created.
Definition: vk_mem_alloc.h:2419
+VkPhysicalDevice physicalDevice
Vulkan physical device.
Definition: vk_mem_alloc.h:2424
+uint32_t frameInUseCount
Maximum number of additional frames that are in use at the same time as current frame.
Definition: vk_mem_alloc.h:2450
+const VkDeviceSize * pHeapSizeLimit
Either null or a pointer to an array of limits on maximum number of bytes that can be allocated out o...
Definition: vk_mem_alloc.h:2475
+VmaAllocatorCreateFlags flags
Flags for created allocator. Use VmaAllocatorCreateFlagBits enum.
Definition: vk_mem_alloc.h:2421
+const VmaVulkanFunctions * pVulkanFunctions
Pointers to Vulkan functions. Can be null.
Definition: vk_mem_alloc.h:2481
+const VkAllocationCallbacks * pAllocationCallbacks
Custom CPU memory allocation callbacks. Optional.
Definition: vk_mem_alloc.h:2433
+VkInstance instance
Handle to Vulkan instance object.
Definition: vk_mem_alloc.h:2493
+VkDeviceSize preferredLargeHeapBlockSize
Preferred size of a single VkDeviceMemory block to be allocated from large heaps > 1 GiB....
Definition: vk_mem_alloc.h:2430
+const VmaRecordSettings * pRecordSettings
Parameters for recording of VMA calls. Can be null.
Definition: vk_mem_alloc.h:2488
+VkDevice device
Vulkan device.
Definition: vk_mem_alloc.h:2427
+uint32_t vulkanApiVersion
Optional. The highest version of Vulkan that the application is designed to use.
Definition: vk_mem_alloc.h:2502
+const VkExternalMemoryHandleTypeFlagsKHR * pTypeExternalMemoryHandleTypes
Either null or a pointer to an array of external memory handle types for each Vulkan memory type.
Definition: vk_mem_alloc.h:2513
+const VmaDeviceMemoryCallbacks * pDeviceMemoryCallbacks
Informative callbacks for vkAllocateMemory, vkFreeMemory. Optional.
Definition: vk_mem_alloc.h:2436
Represents the main object of this library, initialized via vmaCreateAllocator().
-Information about existing VmaAllocator object.
Definition: vk_mem_alloc.h:2532
-VkDevice device
Handle to Vulkan device object.
Definition: vk_mem_alloc.h:2547
-VkInstance instance
Handle to Vulkan instance object.
Definition: vk_mem_alloc.h:2537
-VkPhysicalDevice physicalDevice
Handle to Vulkan physical device object.
Definition: vk_mem_alloc.h:2542
-Statistics of current memory usage and available budget, in bytes, for specific memory heap.
Definition: vk_mem_alloc.h:2638
-VkDeviceSize blockBytes
Sum size of all VkDeviceMemory blocks allocated from particular heap, in bytes.
Definition: vk_mem_alloc.h:2641
-VkDeviceSize allocationBytes
Sum size of all allocations created in particular heap, in bytes.
Definition: vk_mem_alloc.h:2652
-VkDeviceSize usage
Estimated current memory usage of the program, in bytes.
Definition: vk_mem_alloc.h:2662
-VkDeviceSize budget
Estimated amount of memory available to the program, in bytes.
Definition: vk_mem_alloc.h:2673
+Information about existing VmaAllocator object.
Definition: vk_mem_alloc.h:2529
+VkDevice device
Handle to Vulkan device object.
Definition: vk_mem_alloc.h:2544
+VkInstance instance
Handle to Vulkan instance object.
Definition: vk_mem_alloc.h:2534
+VkPhysicalDevice physicalDevice
Handle to Vulkan physical device object.
Definition: vk_mem_alloc.h:2539
+Statistics of current memory usage and available budget, in bytes, for specific memory heap.
Definition: vk_mem_alloc.h:2635
+VkDeviceSize blockBytes
Sum size of all VkDeviceMemory blocks allocated from particular heap, in bytes.
Definition: vk_mem_alloc.h:2638
+VkDeviceSize allocationBytes
Sum size of all allocations created in particular heap, in bytes.
Definition: vk_mem_alloc.h:2649
+VkDeviceSize usage
Estimated current memory usage of the program, in bytes.
Definition: vk_mem_alloc.h:2659
+VkDeviceSize budget
Estimated amount of memory available to the program, in bytes.
Definition: vk_mem_alloc.h:2670
Represents an opaque object for a started defragmentation process.
-Parameters for defragmentation.
Definition: vk_mem_alloc.h:3666
-const VmaPool * pPools
Either null or pointer to array of pools to be defragmented.
Definition: vk_mem_alloc.h:3706
-uint32_t allocationCount
Number of allocations in pAllocations array.
Definition: vk_mem_alloc.h:3672
-uint32_t maxGpuAllocationsToMove
Maximum number of allocations that can be moved to a different place using transfers on GPU side,...
Definition: vk_mem_alloc.h:3726
-VkDeviceSize maxGpuBytesToMove
Maximum total numbers of bytes that can be copied while moving allocations to different places using ...
Definition: vk_mem_alloc.h:3721
-VmaDefragmentationFlags flags
Reserved for future use. Should be 0.
Definition: vk_mem_alloc.h:3669
-VkBool32 * pAllocationsChanged
Optional, output. Pointer to array that will be filled with information whether the allocation at cer...
Definition: vk_mem_alloc.h:3687
-uint32_t poolCount
Numer of pools in pPools array.
Definition: vk_mem_alloc.h:3690
-VkCommandBuffer commandBuffer
Optional. Command buffer where GPU copy commands will be posted.
Definition: vk_mem_alloc.h:3735
-uint32_t maxCpuAllocationsToMove
Maximum number of allocations that can be moved to a different place using transfers on CPU side,...
Definition: vk_mem_alloc.h:3716
-const VmaAllocation * pAllocations
Pointer to array of allocations that can be defragmented.
Definition: vk_mem_alloc.h:3681
-VkDeviceSize maxCpuBytesToMove
Maximum total numbers of bytes that can be copied while moving allocations to different places using ...
Definition: vk_mem_alloc.h:3711
-Deprecated. Optional configuration parameters to be passed to function vmaDefragment().
Definition: vk_mem_alloc.h:3757
-uint32_t maxAllocationsToMove
Maximum number of allocations that can be moved to different place.
Definition: vk_mem_alloc.h:3767
-VkDeviceSize maxBytesToMove
Maximum total numbers of bytes that can be copied while moving allocations to different places.
Definition: vk_mem_alloc.h:3762
-Parameters for incremental defragmentation steps.
Definition: vk_mem_alloc.h:3748
-uint32_t moveCount
Definition: vk_mem_alloc.h:3749
-VmaDefragmentationPassMoveInfo * pMoves
Definition: vk_mem_alloc.h:3750
-Definition: vk_mem_alloc.h:3738
-VkDeviceMemory memory
Definition: vk_mem_alloc.h:3740
-VkDeviceSize offset
Definition: vk_mem_alloc.h:3741
-VmaAllocation allocation
Definition: vk_mem_alloc.h:3739
-Statistics returned by function vmaDefragment().
Definition: vk_mem_alloc.h:3771
-uint32_t deviceMemoryBlocksFreed
Number of empty VkDeviceMemory objects that have been released to the system.
Definition: vk_mem_alloc.h:3779
-VkDeviceSize bytesMoved
Total number of bytes that have been copied while moving allocations to different places.
Definition: vk_mem_alloc.h:3773
-VkDeviceSize bytesFreed
Total number of bytes that have been released to the system by freeing empty VkDeviceMemory objects.
Definition: vk_mem_alloc.h:3775
-uint32_t allocationsMoved
Number of allocations that have been moved to different places.
Definition: vk_mem_alloc.h:3777
-Set of callbacks that the library will call for vkAllocateMemory and vkFreeMemory.
Definition: vk_mem_alloc.h:2231
-void * pUserData
Optional, can be null.
Definition: vk_mem_alloc.h:2237
-PFN_vmaAllocateDeviceMemoryFunction pfnAllocate
Optional, can be null.
Definition: vk_mem_alloc.h:2233
-PFN_vmaFreeDeviceMemoryFunction pfnFree
Optional, can be null.
Definition: vk_mem_alloc.h:2235
-Describes parameter of created VmaPool.
Definition: vk_mem_alloc.h:3068
-float priority
A floating-point value between 0 and 1, indicating the priority of the allocations in this pool relat...
Definition: vk_mem_alloc.h:3116
-uint32_t memoryTypeIndex
Vulkan memory type index to allocate this pool from.
Definition: vk_mem_alloc.h:3071
-VmaPoolCreateFlags flags
Use combination of VmaPoolCreateFlagBits.
Definition: vk_mem_alloc.h:3074
-uint32_t frameInUseCount
Maximum number of additional frames that are in use at the same time as current frame.
Definition: vk_mem_alloc.h:3110
-VkDeviceSize blockSize
Size of a single VkDeviceMemory block to be allocated as part of this pool, in bytes....
Definition: vk_mem_alloc.h:3083
-size_t minBlockCount
Minimum number of blocks to be always allocated in this pool, even if they stay empty.
Definition: vk_mem_alloc.h:3088
-VkDeviceSize minAllocationAlignment
Additional minimum alignment to be used for all allocations created from this pool....
Definition: vk_mem_alloc.h:3123
-size_t maxBlockCount
Maximum number of blocks that can be allocated in this pool. Optional.
Definition: vk_mem_alloc.h:3096
-void * pMemoryAllocateNext
Additional pNext chain to be attached to VkMemoryAllocateInfo used for every allocation made by this ...
Definition: vk_mem_alloc.h:3133
+Parameters for defragmentation.
Definition: vk_mem_alloc.h:3663
+const VmaPool * pPools
Either null or pointer to array of pools to be defragmented.
Definition: vk_mem_alloc.h:3703
+uint32_t allocationCount
Number of allocations in pAllocations array.
Definition: vk_mem_alloc.h:3669
+uint32_t maxGpuAllocationsToMove
Maximum number of allocations that can be moved to a different place using transfers on GPU side,...
Definition: vk_mem_alloc.h:3723
+VkDeviceSize maxGpuBytesToMove
Maximum total number of bytes that can be copied while moving allocations to different places using ...
Definition: vk_mem_alloc.h:3718
+VmaDefragmentationFlags flags
Reserved for future use. Should be 0.
Definition: vk_mem_alloc.h:3666
+VkBool32 * pAllocationsChanged
Optional, output. Pointer to array that will be filled with information whether the allocation at cer...
Definition: vk_mem_alloc.h:3684
+uint32_t poolCount
Number of pools in pPools array.
Definition: vk_mem_alloc.h:3687
+VkCommandBuffer commandBuffer
Optional. Command buffer where GPU copy commands will be posted.
Definition: vk_mem_alloc.h:3732
+uint32_t maxCpuAllocationsToMove
Maximum number of allocations that can be moved to a different place using transfers on CPU side,...
Definition: vk_mem_alloc.h:3713
+const VmaAllocation * pAllocations
Pointer to array of allocations that can be defragmented.
Definition: vk_mem_alloc.h:3678
+VkDeviceSize maxCpuBytesToMove
Maximum total number of bytes that can be copied while moving allocations to different places using ...
Definition: vk_mem_alloc.h:3708
+Deprecated. Optional configuration parameters to be passed to function vmaDefragment().
Definition: vk_mem_alloc.h:3754
+uint32_t maxAllocationsToMove
Maximum number of allocations that can be moved to different place.
Definition: vk_mem_alloc.h:3764
+VkDeviceSize maxBytesToMove
Maximum total number of bytes that can be copied while moving allocations to different places.
Definition: vk_mem_alloc.h:3759
+Parameters for incremental defragmentation steps.
Definition: vk_mem_alloc.h:3745
+uint32_t moveCount
Definition: vk_mem_alloc.h:3746
+VmaDefragmentationPassMoveInfo * pMoves
Definition: vk_mem_alloc.h:3747
+Definition: vk_mem_alloc.h:3735
+VkDeviceMemory memory
Definition: vk_mem_alloc.h:3737
+VkDeviceSize offset
Definition: vk_mem_alloc.h:3738
+VmaAllocation allocation
Definition: vk_mem_alloc.h:3736
+Statistics returned by function vmaDefragment().
Definition: vk_mem_alloc.h:3768
+uint32_t deviceMemoryBlocksFreed
Number of empty VkDeviceMemory objects that have been released to the system.
Definition: vk_mem_alloc.h:3776
+VkDeviceSize bytesMoved
Total number of bytes that have been copied while moving allocations to different places.
Definition: vk_mem_alloc.h:3770
+VkDeviceSize bytesFreed
Total number of bytes that have been released to the system by freeing empty VkDeviceMemory objects.
Definition: vk_mem_alloc.h:3772
+uint32_t allocationsMoved
Number of allocations that have been moved to different places.
Definition: vk_mem_alloc.h:3774
+Set of callbacks that the library will call for vkAllocateMemory and vkFreeMemory.
Definition: vk_mem_alloc.h:2228
+void * pUserData
Optional, can be null.
Definition: vk_mem_alloc.h:2234
+PFN_vmaAllocateDeviceMemoryFunction pfnAllocate
Optional, can be null.
Definition: vk_mem_alloc.h:2230
+PFN_vmaFreeDeviceMemoryFunction pfnFree
Optional, can be null.
Definition: vk_mem_alloc.h:2232
+Describes parameter of created VmaPool.
Definition: vk_mem_alloc.h:3065
+float priority
A floating-point value between 0 and 1, indicating the priority of the allocations in this pool relat...
Definition: vk_mem_alloc.h:3113
+uint32_t memoryTypeIndex
Vulkan memory type index to allocate this pool from.
Definition: vk_mem_alloc.h:3068
+VmaPoolCreateFlags flags
Use combination of VmaPoolCreateFlagBits.
Definition: vk_mem_alloc.h:3071
+uint32_t frameInUseCount
Maximum number of additional frames that are in use at the same time as current frame.
Definition: vk_mem_alloc.h:3107
+VkDeviceSize blockSize
Size of a single VkDeviceMemory block to be allocated as part of this pool, in bytes....
Definition: vk_mem_alloc.h:3080
+size_t minBlockCount
Minimum number of blocks to be always allocated in this pool, even if they stay empty.
Definition: vk_mem_alloc.h:3085
+VkDeviceSize minAllocationAlignment
Additional minimum alignment to be used for all allocations created from this pool....
Definition: vk_mem_alloc.h:3120
+size_t maxBlockCount
Maximum number of blocks that can be allocated in this pool. Optional.
Definition: vk_mem_alloc.h:3093
+void * pMemoryAllocateNext
Additional pNext chain to be attached to VkMemoryAllocateInfo used for every allocation made by this ...
Definition: vk_mem_alloc.h:3130
Represents custom memory pool.
-Describes parameter of existing VmaPool.
Definition: vk_mem_alloc.h:3138
-VkDeviceSize size
Total amount of VkDeviceMemory allocated from Vulkan for this pool, in bytes.
Definition: vk_mem_alloc.h:3141
-size_t blockCount
Number of VkDeviceMemory blocks allocated for this pool.
Definition: vk_mem_alloc.h:3160
-VkDeviceSize unusedRangeSizeMax
Size of the largest contiguous free memory region available for new allocation.
Definition: vk_mem_alloc.h:3157
-size_t allocationCount
Number of VmaAllocation objects created from this pool that were not destroyed or lost.
Definition: vk_mem_alloc.h:3147
-VkDeviceSize unusedSize
Total number of bytes in the pool not used by any VmaAllocation.
Definition: vk_mem_alloc.h:3144
-size_t unusedRangeCount
Number of contiguous memory ranges in the pool not used by any VmaAllocation.
Definition: vk_mem_alloc.h:3150
-Parameters for recording calls to VMA functions. To be used in VmaAllocatorCreateInfo::pRecordSetting...
Definition: vk_mem_alloc.h:2407
-const char * pFilePath
Path to the file that should be written by the recording.
Definition: vk_mem_alloc.h:2417
-VmaRecordFlags flags
Flags for recording. Use VmaRecordFlagBits enum.
Definition: vk_mem_alloc.h:2409
-Calculated statistics of memory usage in entire allocator.
Definition: vk_mem_alloc.h:2599
-VkDeviceSize allocationSizeAvg
Definition: vk_mem_alloc.h:2610
-VkDeviceSize allocationSizeMax
Definition: vk_mem_alloc.h:2610
-VkDeviceSize unusedBytes
Total number of bytes occupied by unused ranges.
Definition: vk_mem_alloc.h:2609
-VkDeviceSize unusedRangeSizeAvg
Definition: vk_mem_alloc.h:2611
-uint32_t allocationCount
Number of VmaAllocation allocation objects allocated.
Definition: vk_mem_alloc.h:2603
-VkDeviceSize unusedRangeSizeMax
Definition: vk_mem_alloc.h:2611
-VkDeviceSize usedBytes
Total number of bytes occupied by all allocations.
Definition: vk_mem_alloc.h:2607
-uint32_t blockCount
Number of VkDeviceMemory Vulkan memory blocks allocated.
Definition: vk_mem_alloc.h:2601
-VkDeviceSize allocationSizeMin
Definition: vk_mem_alloc.h:2610
-uint32_t unusedRangeCount
Number of free ranges of memory between allocations.
Definition: vk_mem_alloc.h:2605
-VkDeviceSize unusedRangeSizeMin
Definition: vk_mem_alloc.h:2611
-General statistics from current state of Allocator.
Definition: vk_mem_alloc.h:2616
-VmaStatInfo memoryHeap[VK_MAX_MEMORY_HEAPS]
Definition: vk_mem_alloc.h:2618
-VmaStatInfo memoryType[VK_MAX_MEMORY_TYPES]
Definition: vk_mem_alloc.h:2617
-VmaStatInfo total
Definition: vk_mem_alloc.h:2619
-Pointers to some Vulkan functions - a subset used by the library.
Definition: vk_mem_alloc.h:2361
-PFN_vkBindImageMemory vkBindImageMemory
Definition: vk_mem_alloc.h:2371
-PFN_vkCreateImage vkCreateImage
Definition: vk_mem_alloc.h:2376
-PFN_vkAllocateMemory vkAllocateMemory
Definition: vk_mem_alloc.h:2364
-PFN_vkFlushMappedMemoryRanges vkFlushMappedMemoryRanges
Definition: vk_mem_alloc.h:2368
-PFN_vkGetImageMemoryRequirements vkGetImageMemoryRequirements
Definition: vk_mem_alloc.h:2373
-PFN_vkFreeMemory vkFreeMemory
Definition: vk_mem_alloc.h:2365
-PFN_vkGetBufferMemoryRequirements vkGetBufferMemoryRequirements
Definition: vk_mem_alloc.h:2372
-PFN_vkInvalidateMappedMemoryRanges vkInvalidateMappedMemoryRanges
Definition: vk_mem_alloc.h:2369
-PFN_vkGetPhysicalDeviceMemoryProperties vkGetPhysicalDeviceMemoryProperties
Definition: vk_mem_alloc.h:2363
-PFN_vkGetPhysicalDeviceProperties vkGetPhysicalDeviceProperties
Definition: vk_mem_alloc.h:2362
-PFN_vkDestroyBuffer vkDestroyBuffer
Definition: vk_mem_alloc.h:2375
-PFN_vkDestroyImage vkDestroyImage
Definition: vk_mem_alloc.h:2377
-PFN_vkBindBufferMemory vkBindBufferMemory
Definition: vk_mem_alloc.h:2370
-PFN_vkMapMemory vkMapMemory
Definition: vk_mem_alloc.h:2366
-PFN_vkUnmapMemory vkUnmapMemory
Definition: vk_mem_alloc.h:2367
-PFN_vkCmdCopyBuffer vkCmdCopyBuffer
Definition: vk_mem_alloc.h:2378
-PFN_vkCreateBuffer vkCreateBuffer
Definition: vk_mem_alloc.h:2374
+Describes parameter of existing VmaPool.
Definition: vk_mem_alloc.h:3135
+VkDeviceSize size
Total amount of VkDeviceMemory allocated from Vulkan for this pool, in bytes.
Definition: vk_mem_alloc.h:3138
+size_t blockCount
Number of VkDeviceMemory blocks allocated for this pool.
Definition: vk_mem_alloc.h:3157
+VkDeviceSize unusedRangeSizeMax
Size of the largest contiguous free memory region available for new allocation.
Definition: vk_mem_alloc.h:3154
+size_t allocationCount
Number of VmaAllocation objects created from this pool that were not destroyed or lost.
Definition: vk_mem_alloc.h:3144
+VkDeviceSize unusedSize
Total number of bytes in the pool not used by any VmaAllocation.
Definition: vk_mem_alloc.h:3141
+size_t unusedRangeCount
Number of contiguous memory ranges in the pool not used by any VmaAllocation.
Definition: vk_mem_alloc.h:3147
+Parameters for recording calls to VMA functions. To be used in VmaAllocatorCreateInfo::pRecordSetting...
Definition: vk_mem_alloc.h:2404
+const char * pFilePath
Path to the file that should be written by the recording.
Definition: vk_mem_alloc.h:2414
+VmaRecordFlags flags
Flags for recording. Use VmaRecordFlagBits enum.
Definition: vk_mem_alloc.h:2406
+Calculated statistics of memory usage in entire allocator.
Definition: vk_mem_alloc.h:2596
+VkDeviceSize allocationSizeAvg
Definition: vk_mem_alloc.h:2607
+VkDeviceSize allocationSizeMax
Definition: vk_mem_alloc.h:2607
+VkDeviceSize unusedBytes
Total number of bytes occupied by unused ranges.
Definition: vk_mem_alloc.h:2606
+VkDeviceSize unusedRangeSizeAvg
Definition: vk_mem_alloc.h:2608
+uint32_t allocationCount
Number of VmaAllocation allocation objects allocated.
Definition: vk_mem_alloc.h:2600
+VkDeviceSize unusedRangeSizeMax
Definition: vk_mem_alloc.h:2608
+VkDeviceSize usedBytes
Total number of bytes occupied by all allocations.
Definition: vk_mem_alloc.h:2604
+uint32_t blockCount
Number of VkDeviceMemory Vulkan memory blocks allocated.
Definition: vk_mem_alloc.h:2598
+VkDeviceSize allocationSizeMin
Definition: vk_mem_alloc.h:2607
+uint32_t unusedRangeCount
Number of free ranges of memory between allocations.
Definition: vk_mem_alloc.h:2602
+VkDeviceSize unusedRangeSizeMin
Definition: vk_mem_alloc.h:2608
+General statistics from current state of Allocator.
Definition: vk_mem_alloc.h:2613
+VmaStatInfo memoryHeap[VK_MAX_MEMORY_HEAPS]
Definition: vk_mem_alloc.h:2615
+VmaStatInfo memoryType[VK_MAX_MEMORY_TYPES]
Definition: vk_mem_alloc.h:2614
+VmaStatInfo total
Definition: vk_mem_alloc.h:2616
+Pointers to some Vulkan functions - a subset used by the library.
Definition: vk_mem_alloc.h:2358
+PFN_vkBindImageMemory vkBindImageMemory
Definition: vk_mem_alloc.h:2368
+PFN_vkCreateImage vkCreateImage
Definition: vk_mem_alloc.h:2373
+PFN_vkAllocateMemory vkAllocateMemory
Definition: vk_mem_alloc.h:2361
+PFN_vkFlushMappedMemoryRanges vkFlushMappedMemoryRanges
Definition: vk_mem_alloc.h:2365
+PFN_vkGetImageMemoryRequirements vkGetImageMemoryRequirements
Definition: vk_mem_alloc.h:2370
+PFN_vkFreeMemory vkFreeMemory
Definition: vk_mem_alloc.h:2362
+PFN_vkGetBufferMemoryRequirements vkGetBufferMemoryRequirements
Definition: vk_mem_alloc.h:2369
+PFN_vkInvalidateMappedMemoryRanges vkInvalidateMappedMemoryRanges
Definition: vk_mem_alloc.h:2366
+PFN_vkGetPhysicalDeviceMemoryProperties vkGetPhysicalDeviceMemoryProperties
Definition: vk_mem_alloc.h:2360
+PFN_vkGetPhysicalDeviceProperties vkGetPhysicalDeviceProperties
Definition: vk_mem_alloc.h:2359
+PFN_vkDestroyBuffer vkDestroyBuffer
Definition: vk_mem_alloc.h:2372
+PFN_vkDestroyImage vkDestroyImage
Definition: vk_mem_alloc.h:2374
+PFN_vkBindBufferMemory vkBindBufferMemory
Definition: vk_mem_alloc.h:2367
+PFN_vkMapMemory vkMapMemory
Definition: vk_mem_alloc.h:2363
+PFN_vkUnmapMemory vkUnmapMemory
Definition: vk_mem_alloc.h:2364
+PFN_vkCmdCopyBuffer vkCmdCopyBuffer
Definition: vk_mem_alloc.h:2375
+PFN_vkCreateBuffer vkCreateBuffer
Definition: vk_mem_alloc.h:2371
VkResult vmaCreateImage(VmaAllocator allocator, const VkImageCreateInfo *pImageCreateInfo, const VmaAllocationCreateInfo *pAllocationCreateInfo, VkImage *pImage, VmaAllocation *pAllocation, VmaAllocationInfo *pAllocationInfo)
Function similar to vmaCreateBuffer().
VkResult vmaFindMemoryTypeIndexForImageInfo(VmaAllocator allocator, const VkImageCreateInfo *pImageCreateInfo, const VmaAllocationCreateInfo *pAllocationCreateInfo, uint32_t *pMemoryTypeIndex)
Helps to find memoryTypeIndex, given VkImageCreateInfo and VmaAllocationCreateInfo.
void vmaDestroyBuffer(VmaAllocator allocator, VkBuffer buffer, VmaAllocation allocation)
Destroys Vulkan buffer and frees allocated memory.
VkResult vmaAllocateMemoryForImage(VmaAllocator allocator, VkImage image, const VmaAllocationCreateInfo *pCreateInfo, VmaAllocation *pAllocation, VmaAllocationInfo *pAllocationInfo)
Function similar to vmaAllocateMemoryForBuffer().
struct VmaPoolCreateInfo VmaPoolCreateInfo
Describes parameter of created VmaPool.
-void(VKAPI_PTR * PFN_vmaFreeDeviceMemoryFunction)(VmaAllocator allocator, uint32_t memoryType, VkDeviceMemory memory, VkDeviceSize size, void *pUserData)
Callback function called before vkFreeMemory.
Definition: vk_mem_alloc.h:2217
+void(VKAPI_PTR * PFN_vmaFreeDeviceMemoryFunction)(VmaAllocator allocator, uint32_t memoryType, VkDeviceMemory memory, VkDeviceSize size, void *pUserData)
Callback function called before vkFreeMemory.
Definition: vk_mem_alloc.h:2214
struct VmaRecordSettings VmaRecordSettings
Parameters for recording calls to VMA functions. To be used in VmaAllocatorCreateInfo::pRecordSetting...
struct VmaAllocatorInfo VmaAllocatorInfo
Information about existing VmaAllocator object.
VkResult vmaEndDefragmentationPass(VmaAllocator allocator, VmaDefragmentationContext context)
struct VmaAllocationInfo VmaAllocationInfo
Parameters of VmaAllocation objects, that can be retrieved using function vmaGetAllocationInfo().
-#define VMA_RECORDING_ENABLED
Definition: vk_mem_alloc.h:2029
+#define VMA_RECORDING_ENABLED
Definition: vk_mem_alloc.h:2026
VkResult vmaCreateAllocator(const VmaAllocatorCreateInfo *pCreateInfo, VmaAllocator *pAllocator)
Creates Allocator object.
struct VmaStats VmaStats
General statistics from current state of Allocator.
-VkFlags VmaPoolCreateFlags
Definition: vk_mem_alloc.h:3064
+VkFlags VmaPoolCreateFlags
Definition: vk_mem_alloc.h:3061
struct VmaDefragmentationInfo VmaDefragmentationInfo
Deprecated. Optional configuration parameters to be passed to function vmaDefragment().
VkResult vmaFlushAllocation(VmaAllocator allocator, VmaAllocation allocation, VkDeviceSize offset, VkDeviceSize size)
Flushes memory of given allocation.
void vmaFreeStatsString(VmaAllocator allocator, char *pStatsString)
@@ -16683,57 +16685,57 @@ $(function() {
VkBool32 vmaTouchAllocation(VmaAllocator allocator, VmaAllocation allocation)
Returns VK_TRUE if allocation is not lost and atomically marks it as used in current frame.
struct VmaPoolStats VmaPoolStats
Describes parameter of existing VmaPool.
VkResult vmaCheckCorruption(VmaAllocator allocator, uint32_t memoryTypeBits)
Checks magic number in margins around all allocations in given memory types (in both default and cust...
-VmaRecordFlagBits
Flags to be used in VmaRecordSettings::flags.
Definition: vk_mem_alloc.h:2393
-@ VMA_RECORD_FLAG_BITS_MAX_ENUM
Definition: vk_mem_alloc.h:2401
-@ VMA_RECORD_FLUSH_AFTER_CALL_BIT
Enables flush after recording every function call.
Definition: vk_mem_alloc.h:2399
-VmaAllocatorCreateFlagBits
Flags for created VmaAllocator.
Definition: vk_mem_alloc.h:2241
-@ VMA_ALLOCATOR_CREATE_AMD_DEVICE_COHERENT_MEMORY_BIT
Definition: vk_mem_alloc.h:2316
-@ VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT
Allocator and all objects created from it will not be synchronized internally, so you must guarantee ...
Definition: vk_mem_alloc.h:2246
-@ VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT
Definition: vk_mem_alloc.h:2298
-@ VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT
Definition: vk_mem_alloc.h:2334
-@ VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT
Definition: vk_mem_alloc.h:2286
-@ VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT
Enables usage of VK_KHR_dedicated_allocation extension.
Definition: vk_mem_alloc.h:2271
-@ VMA_ALLOCATOR_CREATE_FLAG_BITS_MAX_ENUM
Definition: vk_mem_alloc.h:2353
-@ VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT
Definition: vk_mem_alloc.h:2351
-VkFlags VmaAllocationCreateFlags
Definition: vk_mem_alloc.h:2897
+VmaRecordFlagBits
Flags to be used in VmaRecordSettings::flags.
Definition: vk_mem_alloc.h:2390
+@ VMA_RECORD_FLAG_BITS_MAX_ENUM
Definition: vk_mem_alloc.h:2398
+@ VMA_RECORD_FLUSH_AFTER_CALL_BIT
Enables flush after recording every function call.
Definition: vk_mem_alloc.h:2396
+VmaAllocatorCreateFlagBits
Flags for created VmaAllocator.
Definition: vk_mem_alloc.h:2238
+@ VMA_ALLOCATOR_CREATE_AMD_DEVICE_COHERENT_MEMORY_BIT
Definition: vk_mem_alloc.h:2313
+@ VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT
Allocator and all objects created from it will not be synchronized internally, so you must guarantee ...
Definition: vk_mem_alloc.h:2243
+@ VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT
Definition: vk_mem_alloc.h:2295
+@ VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT
Definition: vk_mem_alloc.h:2331
+@ VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT
Definition: vk_mem_alloc.h:2283
+@ VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT
Enables usage of VK_KHR_dedicated_allocation extension.
Definition: vk_mem_alloc.h:2268
+@ VMA_ALLOCATOR_CREATE_FLAG_BITS_MAX_ENUM
Definition: vk_mem_alloc.h:2350
+@ VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT
Definition: vk_mem_alloc.h:2348
+VkFlags VmaAllocationCreateFlags
Definition: vk_mem_alloc.h:2894
void vmaDestroyPool(VmaAllocator allocator, VmaPool pool)
Destroys VmaPool object and frees Vulkan device memory.
VkResult vmaCreatePool(VmaAllocator allocator, const VmaPoolCreateInfo *pCreateInfo, VmaPool *pPool)
Allocates Vulkan device memory and creates VmaPool object.
void vmaFreeMemory(VmaAllocator allocator, const VmaAllocation allocation)
Frees memory previously allocated using vmaAllocateMemory(), vmaAllocateMemoryForBuffer(),...
-VmaDefragmentationFlagBits
Flags to be used in vmaDefragmentationBegin(). None at the moment. Reserved for future use.
Definition: vk_mem_alloc.h:3656
-@ VMA_DEFRAGMENTATION_FLAG_INCREMENTAL
Definition: vk_mem_alloc.h:3657
-@ VMA_DEFRAGMENTATION_FLAG_BITS_MAX_ENUM
Definition: vk_mem_alloc.h:3658
+VmaDefragmentationFlagBits
Flags to be used in vmaDefragmentationBegin(). None at the moment. Reserved for future use.
Definition: vk_mem_alloc.h:3653
+@ VMA_DEFRAGMENTATION_FLAG_INCREMENTAL
Definition: vk_mem_alloc.h:3654
+@ VMA_DEFRAGMENTATION_FLAG_BITS_MAX_ENUM
Definition: vk_mem_alloc.h:3655
VkResult vmaBindBufferMemory(VmaAllocator allocator, VmaAllocation allocation, VkBuffer buffer)
Binds buffer to allocation.
struct VmaDefragmentationPassInfo VmaDefragmentationPassInfo
Parameters for incremental defragmentation steps.
void vmaMakePoolAllocationsLost(VmaAllocator allocator, VmaPool pool, size_t *pLostAllocationCount)
Marks all allocations in given pool as lost if they are not used in current frame or VmaPoolCreateInf...
struct VmaDeviceMemoryCallbacks VmaDeviceMemoryCallbacks
Set of callbacks that the library will call for vkAllocateMemory and vkFreeMemory.
-void(VKAPI_PTR * PFN_vmaAllocateDeviceMemoryFunction)(VmaAllocator allocator, uint32_t memoryType, VkDeviceMemory memory, VkDeviceSize size, void *pUserData)
Callback function called after successful vkAllocateMemory.
Definition: vk_mem_alloc.h:2210
+void(VKAPI_PTR * PFN_vmaAllocateDeviceMemoryFunction)(VmaAllocator allocator, uint32_t memoryType, VkDeviceMemory memory, VkDeviceSize size, void *pUserData)
Callback function called after successful vkAllocateMemory.
Definition: vk_mem_alloc.h:2207
VkResult vmaAllocateMemoryForBuffer(VmaAllocator allocator, VkBuffer buffer, const VmaAllocationCreateInfo *pCreateInfo, VmaAllocation *pAllocation, VmaAllocationInfo *pAllocationInfo)
void vmaFreeMemoryPages(VmaAllocator allocator, size_t allocationCount, const VmaAllocation *pAllocations)
Frees memory and destroys multiple allocations.
void vmaGetAllocationInfo(VmaAllocator allocator, VmaAllocation allocation, VmaAllocationInfo *pAllocationInfo)
Returns current information about specified allocation and atomically marks it as used in current fra...
void vmaGetMemoryTypeProperties(VmaAllocator allocator, uint32_t memoryTypeIndex, VkMemoryPropertyFlags *pFlags)
Given Memory Type Index, returns Property Flags of this memory type.
VkResult vmaDefragmentationEnd(VmaAllocator allocator, VmaDefragmentationContext context)
Ends defragmentation process.
-VkFlags VmaDefragmentationFlags
Definition: vk_mem_alloc.h:3660
+VkFlags VmaDefragmentationFlags
Definition: vk_mem_alloc.h:3657
VkResult vmaBindBufferMemory2(VmaAllocator allocator, VmaAllocation allocation, VkDeviceSize allocationLocalOffset, VkBuffer buffer, const void *pNext)
Binds buffer to allocation with additional parameters.
-VmaPoolCreateFlagBits
Flags to be passed as VmaPoolCreateInfo::flags.
Definition: vk_mem_alloc.h:3008
-@ VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT
Enables alternative, linear allocation algorithm in this pool.
Definition: vk_mem_alloc.h:3043
-@ VMA_POOL_CREATE_FLAG_BITS_MAX_ENUM
Definition: vk_mem_alloc.h:3062
-@ VMA_POOL_CREATE_BUDDY_ALGORITHM_BIT
Enables alternative, buddy allocation algorithm in this pool.
Definition: vk_mem_alloc.h:3054
-@ VMA_POOL_CREATE_IGNORE_BUFFER_IMAGE_GRANULARITY_BIT
Use this flag if you always allocate only buffers and linear images or only optimal images out of thi...
Definition: vk_mem_alloc.h:3026
-@ VMA_POOL_CREATE_ALGORITHM_MASK
Definition: vk_mem_alloc.h:3058
+VmaPoolCreateFlagBits
Flags to be passed as VmaPoolCreateInfo::flags.
Definition: vk_mem_alloc.h:3005
+@ VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT
Enables alternative, linear allocation algorithm in this pool.
Definition: vk_mem_alloc.h:3040
+@ VMA_POOL_CREATE_FLAG_BITS_MAX_ENUM
Definition: vk_mem_alloc.h:3059
+@ VMA_POOL_CREATE_BUDDY_ALGORITHM_BIT
Enables alternative, buddy allocation algorithm in this pool.
Definition: vk_mem_alloc.h:3051
+@ VMA_POOL_CREATE_IGNORE_BUFFER_IMAGE_GRANULARITY_BIT
Use this flag if you always allocate only buffers and linear images or only optimal images out of thi...
Definition: vk_mem_alloc.h:3023
+@ VMA_POOL_CREATE_ALGORITHM_MASK
Definition: vk_mem_alloc.h:3055
void vmaUnmapMemory(VmaAllocator allocator, VmaAllocation allocation)
Unmaps memory represented by given allocation, mapped previously using vmaMapMemory().
VkResult vmaDefragment(VmaAllocator allocator, const VmaAllocation *pAllocations, size_t allocationCount, VkBool32 *pAllocationsChanged, const VmaDefragmentationInfo *pDefragmentationInfo, VmaDefragmentationStats *pDefragmentationStats)
Deprecated. Compacts memory by moving allocations.
VkResult vmaCreateBufferWithAlignment(VmaAllocator allocator, const VkBufferCreateInfo *pBufferCreateInfo, const VmaAllocationCreateInfo *pAllocationCreateInfo, VkDeviceSize minAlignment, VkBuffer *pBuffer, VmaAllocation *pAllocation, VmaAllocationInfo *pAllocationInfo)
Creates a buffer with additional minimum alignment.
struct VmaBudget VmaBudget
Statistics of current memory usage and available budget, in bytes, for specific memory heap.
void vmaBuildStatsString(VmaAllocator allocator, char **ppStatsString, VkBool32 detailedMap)
Builds and returns statistics as string in JSON format.
-VmaMemoryUsage
Definition: vk_mem_alloc.h:2721
-@ VMA_MEMORY_USAGE_MAX_ENUM
Definition: vk_mem_alloc.h:2784
-@ VMA_MEMORY_USAGE_CPU_ONLY
Definition: vk_mem_alloc.h:2752
-@ VMA_MEMORY_USAGE_CPU_COPY
Definition: vk_mem_alloc.h:2774
-@ VMA_MEMORY_USAGE_GPU_TO_CPU
Definition: vk_mem_alloc.h:2768
-@ VMA_MEMORY_USAGE_GPU_LAZILY_ALLOCATED
Definition: vk_mem_alloc.h:2782
-@ VMA_MEMORY_USAGE_CPU_TO_GPU
Definition: vk_mem_alloc.h:2759
-@ VMA_MEMORY_USAGE_GPU_ONLY
Definition: vk_mem_alloc.h:2742
-@ VMA_MEMORY_USAGE_UNKNOWN
Definition: vk_mem_alloc.h:2725
+VmaMemoryUsage
Definition: vk_mem_alloc.h:2718
+@ VMA_MEMORY_USAGE_MAX_ENUM
Definition: vk_mem_alloc.h:2781
+@ VMA_MEMORY_USAGE_CPU_ONLY
Definition: vk_mem_alloc.h:2749
+@ VMA_MEMORY_USAGE_CPU_COPY
Definition: vk_mem_alloc.h:2771
+@ VMA_MEMORY_USAGE_GPU_TO_CPU
Definition: vk_mem_alloc.h:2765
+@ VMA_MEMORY_USAGE_GPU_LAZILY_ALLOCATED
Definition: vk_mem_alloc.h:2779
+@ VMA_MEMORY_USAGE_CPU_TO_GPU
Definition: vk_mem_alloc.h:2756
+@ VMA_MEMORY_USAGE_GPU_ONLY
Definition: vk_mem_alloc.h:2739
+@ VMA_MEMORY_USAGE_UNKNOWN
Definition: vk_mem_alloc.h:2722
VkResult vmaBindImageMemory2(VmaAllocator allocator, VmaAllocation allocation, VkDeviceSize allocationLocalOffset, VkImage image, const void *pNext)
Binds image to allocation with additional parameters.
void vmaDestroyAllocator(VmaAllocator allocator)
Destroys allocator object.
VkResult vmaInvalidateAllocation(VmaAllocator allocator, VmaAllocation allocation, VkDeviceSize offset, VkDeviceSize size)
Invalidates memory of given allocation.
@@ -16745,31 +16747,31 @@ $(function() {
VkResult vmaBeginDefragmentationPass(VmaAllocator allocator, VmaDefragmentationContext context, VmaDefragmentationPassInfo *pInfo)
VkResult vmaFlushAllocations(VmaAllocator allocator, uint32_t allocationCount, const VmaAllocation *allocations, const VkDeviceSize *offsets, const VkDeviceSize *sizes)
Flushes memory of given set of allocations.
VkResult vmaCreateBuffer(VmaAllocator allocator, const VkBufferCreateInfo *pBufferCreateInfo, const VmaAllocationCreateInfo *pAllocationCreateInfo, VkBuffer *pBuffer, VmaAllocation *pAllocation, VmaAllocationInfo *pAllocationInfo)
-VkFlags VmaAllocatorCreateFlags
Definition: vk_mem_alloc.h:2355
+VkFlags VmaAllocatorCreateFlags
Definition: vk_mem_alloc.h:2352
VkResult vmaAllocateMemoryPages(VmaAllocator allocator, const VkMemoryRequirements *pVkMemoryRequirements, const VmaAllocationCreateInfo *pCreateInfo, size_t allocationCount, VmaAllocation *pAllocations, VmaAllocationInfo *pAllocationInfo)
General purpose memory allocation for multiple allocation objects at once.
VkResult vmaCheckPoolCorruption(VmaAllocator allocator, VmaPool pool)
Checks magic number in margins around all allocations in given memory pool in search for corruptions.
VkResult vmaMapMemory(VmaAllocator allocator, VmaAllocation allocation, void **ppData)
Maps memory represented by given allocation and returns pointer to it.
struct VmaDefragmentationPassMoveInfo VmaDefragmentationPassMoveInfo
struct VmaDefragmentationInfo2 VmaDefragmentationInfo2
Parameters for defragmentation.
struct VmaDefragmentationStats VmaDefragmentationStats
Statistics returned by function vmaDefragment().
-VmaAllocationCreateFlagBits
Flags to be passed as VmaAllocationCreateInfo::flags.
Definition: vk_mem_alloc.h:2788
-@ VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT
Definition: vk_mem_alloc.h:2883
-@ VMA_ALLOCATION_CREATE_MAPPED_BIT
Set this flag to use a memory that will be persistently mapped and retrieve pointer to it.
Definition: vk_mem_alloc.h:2819
-@ VMA_ALLOCATION_CREATE_DONT_BIND_BIT
Definition: vk_mem_alloc.h:2856
-@ VMA_ALLOCATION_CREATE_STRATEGY_FIRST_FIT_BIT
Definition: vk_mem_alloc.h:2876
-@ VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT
Set this flag if the allocation should have its own memory block.
Definition: vk_mem_alloc.h:2795
-@ VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT
Definition: vk_mem_alloc.h:2850
-@ VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT
Definition: vk_mem_alloc.h:2832
-@ VMA_ALLOCATION_CREATE_STRATEGY_MIN_FRAGMENTATION_BIT
Definition: vk_mem_alloc.h:2886
-@ VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT
Definition: vk_mem_alloc.h:2839
-@ VMA_ALLOCATION_CREATE_STRATEGY_BEST_FIT_BIT
Definition: vk_mem_alloc.h:2865
-@ VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT
Set this flag to only try to allocate from existing VkDeviceMemory blocks and never create new such b...
Definition: vk_mem_alloc.h:2806
-@ VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT
Definition: vk_mem_alloc.h:2880
-@ VMA_ALLOCATION_CREATE_STRATEGY_MASK
Definition: vk_mem_alloc.h:2890
-@ VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT
Definition: vk_mem_alloc.h:2845
-@ VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT
Definition: vk_mem_alloc.h:2860
-@ VMA_ALLOCATION_CREATE_STRATEGY_WORST_FIT_BIT
Definition: vk_mem_alloc.h:2869
-@ VMA_ALLOCATION_CREATE_FLAG_BITS_MAX_ENUM
Definition: vk_mem_alloc.h:2895
+VmaAllocationCreateFlagBits
Flags to be passed as VmaAllocationCreateInfo::flags.
Definition: vk_mem_alloc.h:2785
+@ VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT
Definition: vk_mem_alloc.h:2880
+@ VMA_ALLOCATION_CREATE_MAPPED_BIT
Set this flag to use a memory that will be persistently mapped and retrieve pointer to it.
Definition: vk_mem_alloc.h:2816
+@ VMA_ALLOCATION_CREATE_DONT_BIND_BIT
Definition: vk_mem_alloc.h:2853
+@ VMA_ALLOCATION_CREATE_STRATEGY_FIRST_FIT_BIT
Definition: vk_mem_alloc.h:2873
+@ VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT
Set this flag if the allocation should have its own memory block.
Definition: vk_mem_alloc.h:2792
+@ VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT
Definition: vk_mem_alloc.h:2847
+@ VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT
Definition: vk_mem_alloc.h:2829
+@ VMA_ALLOCATION_CREATE_STRATEGY_MIN_FRAGMENTATION_BIT
Definition: vk_mem_alloc.h:2883
+@ VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT
Definition: vk_mem_alloc.h:2836
+@ VMA_ALLOCATION_CREATE_STRATEGY_BEST_FIT_BIT
Definition: vk_mem_alloc.h:2862
+@ VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT
Set this flag to only try to allocate from existing VkDeviceMemory blocks and never create new such b...
Definition: vk_mem_alloc.h:2803
+@ VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT
Definition: vk_mem_alloc.h:2877
+@ VMA_ALLOCATION_CREATE_STRATEGY_MASK
Definition: vk_mem_alloc.h:2887
+@ VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT
Definition: vk_mem_alloc.h:2842
+@ VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT
Definition: vk_mem_alloc.h:2857
+@ VMA_ALLOCATION_CREATE_STRATEGY_WORST_FIT_BIT
Definition: vk_mem_alloc.h:2866
+@ VMA_ALLOCATION_CREATE_FLAG_BITS_MAX_ENUM
Definition: vk_mem_alloc.h:2892
void vmaSetPoolName(VmaAllocator allocator, VmaPool pool, const char *pName)
Sets name of a custom pool.
void vmaSetCurrentFrameIndex(VmaAllocator allocator, uint32_t frameIndex)
Sets index of the current frame.
void vmaDestroyImage(VmaAllocator allocator, VkImage image, VmaAllocation allocation)
Destroys Vulkan image and frees allocated memory.
@@ -16781,7 +16783,7 @@ $(function() {
void vmaGetPhysicalDeviceProperties(VmaAllocator allocator, const VkPhysicalDeviceProperties **ppPhysicalDeviceProperties)
VkResult vmaFindMemoryTypeIndex(VmaAllocator allocator, uint32_t memoryTypeBits, const VmaAllocationCreateInfo *pAllocationCreateInfo, uint32_t *pMemoryTypeIndex)
Helps to find memoryTypeIndex, given memoryTypeBits and VmaAllocationCreateInfo.
void vmaGetPoolName(VmaAllocator allocator, VmaPool pool, const char **ppName)
Retrieves name of a custom pool.
-VkFlags VmaRecordFlags
Definition: vk_mem_alloc.h:2403
+VkFlags VmaRecordFlags
Definition: vk_mem_alloc.h:2400
void vmaSetAllocationUserData(VmaAllocator allocator, VmaAllocation allocation, void *pUserData)
Sets pUserData in given allocation to new value.
void vmaGetAllocatorInfo(VmaAllocator allocator, VmaAllocatorInfo *pAllocatorInfo)
Returns information about existing VmaAllocator object - handle to Vulkan device etc.
diff --git a/docs/html/vk_khr_dedicated_allocation.html b/docs/html/vk_khr_dedicated_allocation.html
index a5e9410..309f0c5 100644
--- a/docs/html/vk_khr_dedicated_allocation.html
+++ b/docs/html/vk_khr_dedicated_allocation.html
@@ -82,7 +82,7 @@ $(function() {
VkResult vmaCreateAllocator(const VmaAllocatorCreateInfo *pCreateInfo, VmaAllocator *pAllocator)
Creates Allocator object.
-@ VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT
Enables usage of VK_KHR_dedicated_allocation extension.
Definition: vk_mem_alloc.h:2271
+@ VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT
Enables usage of VK_KHR_dedicated_allocation extension.
Definition: vk_mem_alloc.h:2268
That's all. The extension will be automatically used whenever you create a buffer using vmaCreateBuffer() or image using vmaCreateImage().
When using the extension together with the Vulkan Validation Layer, you will receive warnings like this: