wgpu_core/hub.rs
/*! Allocating resource ids, and tracking the resources they refer to.

The `wgpu_core` API uses identifiers of type [`Id<R>`] to refer to
resources of type `R`. For example, [`id::DeviceId`] is an alias for
`Id<Device<Empty>>`, and [`id::BufferId`] is an alias for
`Id<Buffer<Empty>>`. `Id` implements `Copy`, `Hash`, `Eq`, `Ord`, and
of course `Debug`.

Each `Id` contains not only an index for the resource it denotes but
also a [`Backend`] indicating which `wgpu` backend it belongs to. You
can use the [`gfx_select`] macro to dynamically dispatch on an id's
backend to a function specialized at compile time for a specific
backend. See that macro's documentation for details.
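For example, a dynamically dispatched call might look like this (a
sketch; `global`, `device_id`, and `desc` are assumed to be in scope):

```ignore
gfx_select!(device_id => global.device_create_buffer(device_id, &desc, ()))
```
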
`Id`s also incorporate a generation number, for additional validation.

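An id's index, generation, and backend can be recovered individually
(a sketch, assuming the `unzip` method of the `id::TypedId` trait; the
exact bit layout of an id is an implementation detail):

```ignore
let (index, epoch, backend) = buffer_id.unzip();
```
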
The resources to which identifiers refer are freed explicitly.
Attempting to use an identifier for a resource that has been freed
elicits an error result.

## Assigning ids to resources

The users of `wgpu_core` generally want resource ids to be assigned
in one of two ways:

- Users like `wgpu` want `wgpu_core` to assign ids to resources itself.
  For example, `wgpu` expects to call `Global::device_create_buffer`
  and have the return value indicate the newly created buffer's id.
- Users like `player` and Firefox want to allocate ids themselves, and
  pass `Global::device_create_buffer` and friends the id to assign to
  the new resource.

To accommodate either pattern, `wgpu_core` methods that create
resources all expect an `id_in` argument that the caller can use to
specify the id, and they all return the id used. For example, the
declaration of `Global::device_create_buffer` looks like this:

```ignore
impl<G: GlobalIdentityHandlerFactory> Global<G> {
    /* ... */
    pub fn device_create_buffer<A: HalApi>(
        &self,
        device_id: id::DeviceId,
        desc: &resource::BufferDescriptor,
        id_in: Input<G, id::BufferId>,
    ) -> (id::BufferId, Option<resource::CreateBufferError>) {
        /* ... */
    }
    /* ... */
}
```

Users that want to assign resource ids themselves pass in the id they
want as the `id_in` argument, whereas users that want `wgpu_core`
itself to choose ids always pass `()`. In either case, the id
ultimately assigned is returned as the first element of the tuple.

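Concretely, the two calling patterns look something like this (a
sketch; which form type-checks depends on the `Global`'s factory type
`G`, since `G` determines the `Input` type):

```ignore
// `wgpu`-style: let `wgpu_core` choose the id; `Input<G, id::BufferId>` is `()`.
let (id, error) = global.device_create_buffer::<A>(device_id, &desc, ());

// Firefox/`player`-style: the caller supplies the id to assign.
let (id, error) = global.device_create_buffer::<A>(device_id, &desc, my_buffer_id);
```
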
Producing true identifiers from `id_in` values is the job of an
[`IdentityHandler`] implementation, which has an associated type
[`Input`] saying what type of `id_in` values it accepts, and a
[`process`] method that turns such values into true identifiers of
type `I`. There are two kinds of `IdentityHandler`s:

- Users that want `wgpu_core` to assign ids generally use
  [`IdentityManager`] ([wrapped in a mutex]). Its `Input` type is
  `()`, and it tracks assigned ids and generation numbers as
  necessary. (This is what `wgpu` does.)

- Users that want to assign ids themselves use an `IdentityHandler`
  whose `Input` type is `I` itself, and whose `process` method simply
  passes the `id_in` argument through unchanged (see the sketch after
  this list). For example, the `player` crate uses an
  `IdentityPassThrough` type whose `process` method simply adjusts the
  id's backend (since recordings can be replayed on a different
  backend than the one they were created on) but passes the rest of
  the id's content through unchanged.

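A minimal pass-through handler might look like the following sketch.
It assumes the trait shapes described above: an `IdentityHandler<I>`
whose `process` takes the input and a backend, plus `id::TypedId`'s
`zip`/`unzip` for repacking an id's parts; treat the exact signatures
as illustrative rather than definitive.

```ignore
#[derive(Debug)]
struct IdentityPassThroughSketch<I>(PhantomData<I>);

impl<I: id::TypedId + Clone + Debug> IdentityHandler<I> for IdentityPassThroughSketch<I> {
    /// The caller passes in the very id to be assigned.
    type Input = I;

    fn process(&self, id: I, backend: wgt::Backend) -> I {
        // Re-brand the id with the backend it will actually run on,
        // but keep the index and generation unchanged.
        let (index, epoch, _) = id.unzip();
        I::zip(index, epoch, backend)
    }

    fn free(&self, _id: I) {}
}
```
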
Because an `IdentityHandler<I>` can only create ids for a single
resource type `I`, constructing a [`Global`] entails constructing a
separate `IdentityHandler<I>` for each resource type `I` that the
`Global` will manage: an `IdentityHandler<DeviceId>`, an
`IdentityHandler<TextureId>`, and so on.

The [`Global::new`] function could simply take a large collection of
`IdentityHandler<I>` implementations as arguments, but that would be
ungainly. Instead, `Global::new` expects a `factory` argument that
implements the [`GlobalIdentityHandlerFactory`] trait, which extends
[`IdentityHandlerFactory<I>`] for each resource id type `I`. This
trait, in turn, has a `spawn` method that constructs an
`IdentityHandler<I>` for the `Global` to use.

What this means is that the types of resource creation functions'
`id_in` arguments depend on the `Global`'s `G` type parameter. A
`Global<G>`'s `IdentityHandler<I>` implementation is:

```ignore
<G as IdentityHandlerFactory<I>>::Filter
```

where `Filter` is an associated type of the `IdentityHandlerFactory` trait.
Thus, its `id_in` type is:

```ignore
<<G as IdentityHandlerFactory<I>>::Filter as IdentityHandler<I>>::Input
```

The [`Input<G, I>`] type is an alias for this construction.
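
Spelled out, the alias is presumably just:

```ignore
pub type Input<G, I> =
    <<G as IdentityHandlerFactory<I>>::Filter as IdentityHandler<I>>::Input;
```

So for a factory whose `Filter` is `Mutex<IdentityManager>`, this is
`()`; for a pass-through factory, it is `I` itself.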

## Id allocation and streaming

Perhaps surprisingly, allowing users to assign resource ids themselves
enables major performance improvements in some applications.

The `wgpu_core` API is designed for use by Firefox's [WebGPU]
implementation. For security, web content and GPU use must be kept
segregated in separate processes, with all interaction between them
mediated by an inter-process communication protocol. As web content uses
the WebGPU API, the content process sends messages to the GPU process,
which interacts with the platform's GPU APIs on content's behalf,
occasionally sending results back.

In a classic Rust API, a resource allocation function takes parameters
describing the resource to create, and if creation succeeds, it returns
the resource id in a `Result::Ok` value. However, this design is a poor
fit for the split-process design described above: content must wait for
the reply to its buffer-creation message (say) before it can know which
id it can use in the next message that uses that buffer. In a common
usage pattern, the classic Rust design imposes the latency of a full
cross-process round trip.

We can avoid incurring these round-trip latencies simply by letting the
content process assign resource ids itself. With this approach, content
can choose an id for the new buffer, send a message to create the
buffer, and then immediately send the next message operating on that
buffer, since it already knows its id. Allowing content and GPU process
activity to be pipelined greatly improves throughput.

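In pseudo-message form, the pipelined exchange looks something like
this (entirely hypothetical message and allocator names):

```ignore
// Content process: choose the id locally, then stream messages
// without waiting for any reply from the GPU process.
let buffer_id = local_identity_manager.alloc(wgt::Backend::Vulkan);
send_to_gpu_process(Message::CreateBuffer { id: buffer_id, desc });
send_to_gpu_process(Message::WriteBuffer { id: buffer_id, offset: 0, data });
```
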
To help propagate errors correctly in this style of usage, when resource
creation fails, the id supplied for that resource is marked to indicate
as much, allowing subsequent operations using that id to be properly
flagged as errors as well.

[`Backend`]: wgt::Backend
[`Global`]: crate::global::Global
[`Global::new`]: crate::global::Global::new
[`gfx_select`]: crate::gfx_select
[`IdentityHandler`]: crate::identity::IdentityHandler
[`Input`]: crate::identity::IdentityHandler::Input
[`process`]: crate::identity::IdentityHandler::process
[`Id<R>`]: crate::id::Id
[wrapped in a mutex]: ../identity/trait.IdentityHandler.html#impl-IdentityHandler%3CI%3E-for-Mutex%3CIdentityManager%3E
[WebGPU]: https://www.w3.org/TR/webgpu/
[`IdentityManager`]: crate::identity::IdentityManager
[`Input<G, I>`]: crate::identity::Input
[`IdentityHandlerFactory<I>`]: crate::identity::IdentityHandlerFactory
*/

use crate::{
    binding_model::{BindGroup, BindGroupLayout, PipelineLayout},
    command::{CommandBuffer, RenderBundle},
    device::Device,
    hal_api::HalApi,
    id,
    identity::GlobalIdentityHandlerFactory,
    instance::{Adapter, HalSurface, Instance, Surface},
    pipeline::{ComputePipeline, RenderPipeline, ShaderModule},
    registry::Registry,
    resource::{Buffer, QuerySet, Sampler, StagingBuffer, Texture, TextureClearMode, TextureView},
    storage::{Element, Storage, StorageReport},
};

#[cfg(debug_assertions)]
use std::cell::Cell;
use std::{fmt::Debug, marker::PhantomData};

/// Type system for enforcing the lock order on [`Hub`] fields.
///
/// If type `A` implements `Access<B>`, that means we are allowed to
/// proceed with locking resource `B` after we lock `A`.
///
/// The implementations of `Access` basically describe the edges in an
/// acyclic directed graph of lock transitions. As long as it doesn't have
/// cycles, any number of threads can acquire locks along paths through
/// the graph without deadlock. That is, if you look at each thread's
/// lock acquisitions as steps along a path in the graph, then because
/// there are no cycles in the graph, there must always be some thread
/// that is able to acquire its next lock, or that is about to release
/// a lock. (Assume that no thread just sits on its locks forever.)
///
/// Locks must be acquired in the following order:
///
/// - [`Adapter`]
/// - [`Device`]
/// - [`CommandBuffer`]
/// - [`RenderBundle`]
/// - [`PipelineLayout`]
/// - [`BindGroupLayout`]
/// - [`BindGroup`]
/// - [`ComputePipeline`]
/// - [`RenderPipeline`]
/// - [`ShaderModule`]
/// - [`Buffer`]
/// - [`StagingBuffer`]
/// - [`Texture`]
/// - [`TextureView`]
/// - [`Sampler`]
/// - [`QuerySet`]
///
/// That is, you may only acquire a new lock on a `Hub` field if it
/// appears in the list after all the other fields you're already
/// holding locks for. When you are holding no locks, you can start
/// anywhere.
///
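/// For example, a thread holding the buffers lock may lock textures
/// next, but not devices (a sketch; the `read` shape is described in
/// the [`Hub`] docs):
///
/// ```ignore
/// let (buffer_guard, mut token) = hub.buffers.read(&mut token);
/// // Ok: `Buffer` implements `Access<Texture>`.
/// let (texture_guard, _) = hub.textures.read(&mut token);
/// // Would not compile: `Buffer` does not implement `Access<Device>`.
/// // let (device_guard, _) = hub.devices.read(&mut token);
/// ```
///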
/// It's fine to add more `Access` implementations as needed, as long
/// as you do not introduce a cycle. In other words, as long as there
/// is some ordering you can put the resource types in that respects
/// the extant `Access` implementations, that's fine.
///
/// See the documentation for [`Hub`] for more details.
pub trait Access<A> {}

pub enum Root {}

// These impls are arranged so that the target types (that is, the `T`
// in `Access<T>`) appear in locking order.
//
// TODO: establish an order instead of declaring all the pairs.
impl Access<Instance> for Root {}
impl Access<Surface> for Root {}
impl Access<Surface> for Instance {}
impl<A: HalApi> Access<Adapter<A>> for Root {}
impl<A: HalApi> Access<Adapter<A>> for Surface {}
impl<A: HalApi> Access<Device<A>> for Root {}
impl<A: HalApi> Access<Device<A>> for Surface {}
impl<A: HalApi> Access<Device<A>> for Adapter<A> {}
impl<A: HalApi> Access<CommandBuffer<A>> for Root {}
impl<A: HalApi> Access<CommandBuffer<A>> for Device<A> {}
impl<A: HalApi> Access<RenderBundle<A>> for Device<A> {}
impl<A: HalApi> Access<RenderBundle<A>> for CommandBuffer<A> {}
impl<A: HalApi> Access<PipelineLayout<A>> for Root {}
impl<A: HalApi> Access<PipelineLayout<A>> for Device<A> {}
impl<A: HalApi> Access<PipelineLayout<A>> for RenderBundle<A> {}
impl<A: HalApi> Access<BindGroupLayout<A>> for Root {}
impl<A: HalApi> Access<BindGroupLayout<A>> for Device<A> {}
impl<A: HalApi> Access<BindGroupLayout<A>> for PipelineLayout<A> {}
impl<A: HalApi> Access<BindGroup<A>> for Root {}
impl<A: HalApi> Access<BindGroup<A>> for Device<A> {}
impl<A: HalApi> Access<BindGroup<A>> for BindGroupLayout<A> {}
impl<A: HalApi> Access<BindGroup<A>> for PipelineLayout<A> {}
impl<A: HalApi> Access<BindGroup<A>> for CommandBuffer<A> {}
impl<A: HalApi> Access<ComputePipeline<A>> for Device<A> {}
impl<A: HalApi> Access<ComputePipeline<A>> for BindGroup<A> {}
impl<A: HalApi> Access<RenderPipeline<A>> for Device<A> {}
impl<A: HalApi> Access<RenderPipeline<A>> for BindGroup<A> {}
impl<A: HalApi> Access<RenderPipeline<A>> for ComputePipeline<A> {}
impl<A: HalApi> Access<ShaderModule<A>> for Device<A> {}
impl<A: HalApi> Access<ShaderModule<A>> for BindGroupLayout<A> {}
impl<A: HalApi> Access<Buffer<A>> for Root {}
impl<A: HalApi> Access<Buffer<A>> for Device<A> {}
impl<A: HalApi> Access<Buffer<A>> for BindGroupLayout<A> {}
impl<A: HalApi> Access<Buffer<A>> for BindGroup<A> {}
impl<A: HalApi> Access<Buffer<A>> for CommandBuffer<A> {}
impl<A: HalApi> Access<Buffer<A>> for ComputePipeline<A> {}
impl<A: HalApi> Access<Buffer<A>> for RenderPipeline<A> {}
impl<A: HalApi> Access<Buffer<A>> for QuerySet<A> {}
impl<A: HalApi> Access<StagingBuffer<A>> for Device<A> {}
impl<A: HalApi> Access<Texture<A>> for Root {}
impl<A: HalApi> Access<Texture<A>> for Device<A> {}
impl<A: HalApi> Access<Texture<A>> for Buffer<A> {}
impl<A: HalApi> Access<TextureView<A>> for Root {}
impl<A: HalApi> Access<TextureView<A>> for Device<A> {}
impl<A: HalApi> Access<TextureView<A>> for Texture<A> {}
impl<A: HalApi> Access<Sampler<A>> for Root {}
impl<A: HalApi> Access<Sampler<A>> for Device<A> {}
impl<A: HalApi> Access<Sampler<A>> for TextureView<A> {}
impl<A: HalApi> Access<QuerySet<A>> for Root {}
impl<A: HalApi> Access<QuerySet<A>> for Device<A> {}
impl<A: HalApi> Access<QuerySet<A>> for CommandBuffer<A> {}
impl<A: HalApi> Access<QuerySet<A>> for RenderPipeline<A> {}
impl<A: HalApi> Access<QuerySet<A>> for ComputePipeline<A> {}
impl<A: HalApi> Access<QuerySet<A>> for Sampler<A> {}

#[cfg(debug_assertions)]
thread_local! {
    /// Per-thread state checking `Token<Root>` creation in debug builds.
    ///
    /// This is the number of `Token` values alive on the current
    /// thread. Since `Token` creation respects the [`Access`] graph,
    /// there can never be more tokens alive than there are fields of
    /// [`Hub`], so a `u8` is plenty.
    static ACTIVE_TOKEN: Cell<u8> = Cell::new(0);
}

/// A zero-size permission token to lock some fields of [`Hub`].
///
/// Access to a `Token<T>` grants permission to lock any field of
/// [`Hub`] following the one of type [`Registry<T, ...>`], where
/// "following" is as defined by the [`Access`] implementations.
///
/// Calling [`Token::root()`] returns a `Token<Root>`, which grants
/// permission to lock any field. Dynamic checks ensure that each
/// thread has at most one `Token<Root>` live at a time, in debug
/// builds.
///
/// The locking methods on `Registry<T, ...>` take a `&'t mut
/// Token<A>`, and return a fresh `Token<'t, T>` and a lock guard with
/// lifetime `'t`, so the caller cannot access their `Token<A>` again
/// until they have dropped both the `Token<T>` and the lock guard.
///
/// Tokens are `!Send`, so one thread can't send its permissions to
/// another.
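///
/// A sketch of the resulting usage pattern (names illustrative):
///
/// ```ignore
/// let mut token = Token::root();
/// let (device_guard, mut device_token) = hub.devices.read(&mut token);
/// // `token` is mutably borrowed here; it becomes usable again only
/// // after `device_guard` and `device_token` are dropped.
/// let (buffer_guard, _) = hub.buffers.read(&mut device_token);
/// ```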
pub(crate) struct Token<'a, T: 'a> {
    // The `*const` makes us `!Send` and `!Sync`.
    level: PhantomData<&'a *const T>,
}

impl<'a, T> Token<'a, T> {
    /// Return a new token for a locked field.
    ///
    /// This should only be used by `Registry` locking methods.
    pub(crate) fn new() -> Self {
        #[cfg(debug_assertions)]
        ACTIVE_TOKEN.with(|active| {
            let old = active.get();
            assert_ne!(old, 0, "Root token was dropped");
            active.set(old + 1);
        });
        Self { level: PhantomData }
    }
}

impl Token<'static, Root> {
    /// Return a `Token<Root>`, granting permission to lock any [`Hub`] field.
    ///
    /// Debug builds check dynamically that each thread has at most
    /// one root token at a time.
    pub fn root() -> Self {
        #[cfg(debug_assertions)]
        ACTIVE_TOKEN.with(|active| {
            assert_eq!(0, active.replace(1), "Root token is already active");
        });

        Self { level: PhantomData }
    }
}

impl<'a, T> Drop for Token<'a, T> {
    fn drop(&mut self) {
        #[cfg(debug_assertions)]
        ACTIVE_TOKEN.with(|active| {
            let old = active.get();
            active.set(old - 1);
        });
    }
}

#[derive(Debug)]
pub struct HubReport {
    pub adapters: StorageReport,
    pub devices: StorageReport,
    pub pipeline_layouts: StorageReport,
    pub shader_modules: StorageReport,
    pub bind_group_layouts: StorageReport,
    pub bind_groups: StorageReport,
    pub command_buffers: StorageReport,
    pub render_bundles: StorageReport,
    pub render_pipelines: StorageReport,
    pub compute_pipelines: StorageReport,
    pub query_sets: StorageReport,
    pub buffers: StorageReport,
    pub textures: StorageReport,
    pub texture_views: StorageReport,
    pub samplers: StorageReport,
}

impl HubReport {
    pub fn is_empty(&self) -> bool {
        self.adapters.is_empty()
    }
}

#[allow(rustdoc::private_intra_doc_links)]
/// All the resources for a particular backend in a [`Global`].
///
/// To obtain `global`'s `Hub` for some [`HalApi`] backend type `A`,
/// call [`A::hub(global)`].
///
/// ## Locking
///
/// Each field in `Hub` is a [`Registry`] holding all the values of a
/// particular type of resource, all protected by a single [`RwLock`].
/// So for example, to access any [`Buffer`], you must acquire a read
/// lock on the `Hub`'s entire [`buffers`] registry. The lock guard
/// gives you access to the `Registry`'s [`Storage`], which you can
/// then index with the buffer's id. (Yes, this design causes
/// contention; see [#2272].)
///
/// But most `wgpu` operations require access to several different
/// kinds of resource, so you often need to hold locks on several
/// different fields of your [`Hub`] simultaneously. To avoid
/// deadlock, there is an ordering imposed on the fields, and you may
/// only acquire new locks on fields that come *after* all those you
/// are already holding locks on, in this ordering. (The ordering is
/// described in the documentation for the [`Access`] trait.)
///
/// We use Rust's type system to statically check that `wgpu_core` can
/// only ever acquire locks in the correct order:
///
/// - A value of type [`Token<T>`] represents proof that the owner
///   only holds locks on the `Hub` fields holding resources of type
///   `T` or earlier in the lock ordering. A special value of type
///   `Token<Root>`, obtained by calling [`Token::root`], represents
///   proof that no `Hub` field locks are held.
///
/// - To lock the `Hub` field holding resources of type `T`, you must
///   call its [`read`] or [`write`] methods. These require you to
///   pass in a `&mut Token<A>`, for some `A` that implements
///   [`Access<T>`]. This implementation exists only if `T` follows `A`
///   in the field ordering, which statically ensures that you are
///   indeed allowed to lock this new `Hub` field.
///
/// - The locking methods return both an [`RwLock`] guard that you can
///   use to access the field's resources, and a new `Token<T>` value.
///   These both borrow from the lifetime of your `Token<A>`, so since
///   you passed that by mutable reference, you cannot access it again
///   until you drop the new token and lock guard.
///
/// Because a thread only ever has access to the `Token<T>` for the
/// last resource type `T` it holds a lock for, and the `Access` trait
/// implementations only permit acquiring locks for types `U` that
/// follow `T` in the lock ordering, it is statically impossible for a
/// program to violate the locking order.
///
/// This does assume that threads cannot call [`Token::root`] when they
/// already hold locks (dynamically enforced in debug builds) and that
/// threads cannot send their `Token`s to other threads (enforced by
/// making `Token` neither `Send` nor `Sync`).
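///
/// Putting it together, a typical read path looks something like this
/// sketch (assuming a valid `buffer_id`; `Storage::get` is fallible):
///
/// ```ignore
/// let hub = A::hub(global);
/// let mut token = Token::root();
/// let (device_guard, mut token) = hub.devices.read(&mut token);
/// let (buffer_guard, _) = hub.buffers.read(&mut token);
/// let buffer = buffer_guard.get(buffer_id)?;
/// ```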
///
/// [`Global`]: crate::global::Global
/// [`A::hub(global)`]: HalApi::hub
/// [`RwLock`]: parking_lot::RwLock
/// [`buffers`]: Hub::buffers
/// [`read`]: Registry::read
/// [`write`]: Registry::write
/// [`Token<T>`]: Token
/// [`Access<T>`]: Access
/// [#2272]: https://github.com/gfx-rs/wgpu/pull/2272
pub struct Hub<A: HalApi, F: GlobalIdentityHandlerFactory> {
    pub adapters: Registry<Adapter<A>, id::AdapterId, F>,
    pub devices: Registry<Device<A>, id::DeviceId, F>,
    pub pipeline_layouts: Registry<PipelineLayout<A>, id::PipelineLayoutId, F>,
    pub shader_modules: Registry<ShaderModule<A>, id::ShaderModuleId, F>,
    pub bind_group_layouts: Registry<BindGroupLayout<A>, id::BindGroupLayoutId, F>,
    pub bind_groups: Registry<BindGroup<A>, id::BindGroupId, F>,
    pub command_buffers: Registry<CommandBuffer<A>, id::CommandBufferId, F>,
    pub render_bundles: Registry<RenderBundle<A>, id::RenderBundleId, F>,
    pub render_pipelines: Registry<RenderPipeline<A>, id::RenderPipelineId, F>,
    pub compute_pipelines: Registry<ComputePipeline<A>, id::ComputePipelineId, F>,
    pub query_sets: Registry<QuerySet<A>, id::QuerySetId, F>,
    pub buffers: Registry<Buffer<A>, id::BufferId, F>,
    pub staging_buffers: Registry<StagingBuffer<A>, id::StagingBufferId, F>,
    pub textures: Registry<Texture<A>, id::TextureId, F>,
    pub texture_views: Registry<TextureView<A>, id::TextureViewId, F>,
    pub samplers: Registry<Sampler<A>, id::SamplerId, F>,
}

impl<A: HalApi, F: GlobalIdentityHandlerFactory> Hub<A, F> {
    fn new(factory: &F) -> Self {
        Self {
            adapters: Registry::new(A::VARIANT, factory),
            devices: Registry::new(A::VARIANT, factory),
            pipeline_layouts: Registry::new(A::VARIANT, factory),
            shader_modules: Registry::new(A::VARIANT, factory),
            bind_group_layouts: Registry::new(A::VARIANT, factory),
            bind_groups: Registry::new(A::VARIANT, factory),
            command_buffers: Registry::new(A::VARIANT, factory),
            render_bundles: Registry::new(A::VARIANT, factory),
            render_pipelines: Registry::new(A::VARIANT, factory),
            compute_pipelines: Registry::new(A::VARIANT, factory),
            query_sets: Registry::new(A::VARIANT, factory),
            buffers: Registry::new(A::VARIANT, factory),
            staging_buffers: Registry::new(A::VARIANT, factory),
            textures: Registry::new(A::VARIANT, factory),
            texture_views: Registry::new(A::VARIANT, factory),
            samplers: Registry::new(A::VARIANT, factory),
        }
    }

    //TODO: instead of having a hacky `with_adapters` parameter,
    // we should have `clear_device(device_id)` that specifically destroys
    // everything related to a logical device.
    pub(crate) fn clear(
        &self,
        surface_guard: &mut Storage<Surface, id::SurfaceId>,
        with_adapters: bool,
    ) {
        use crate::resource::TextureInner;
        use hal::{Device as _, Surface as _};

        let mut devices = self.devices.data.write();
        for element in devices.map.iter_mut() {
            if let Element::Occupied(ref mut device, _) = *element {
                device.prepare_to_die();
            }
        }

        // destroy command buffers first, since otherwise DX12 isn't happy
        for element in self.command_buffers.data.write().map.drain(..) {
            if let Element::Occupied(command_buffer, _) = element {
                let device = &devices[command_buffer.device_id.value];
                device.destroy_command_buffer(command_buffer);
            }
        }

        for element in self.samplers.data.write().map.drain(..) {
            if let Element::Occupied(sampler, _) = element {
                unsafe {
                    devices[sampler.device_id.value]
                        .raw
                        .destroy_sampler(sampler.raw);
                }
            }
        }

        for element in self.texture_views.data.write().map.drain(..) {
            if let Element::Occupied(texture_view, _) = element {
                let device = &devices[texture_view.device_id.value];
                unsafe {
                    device.raw.destroy_texture_view(texture_view.raw);
                }
            }
        }

        for element in self.textures.data.write().map.drain(..) {
            if let Element::Occupied(texture, _) = element {
                let device = &devices[texture.device_id.value];
                if let TextureInner::Native { raw: Some(raw) } = texture.inner {
                    unsafe {
                        device.raw.destroy_texture(raw);
                    }
                }
                if let TextureClearMode::RenderPass { clear_views, .. } = texture.clear_mode {
                    for view in clear_views {
                        unsafe {
                            device.raw.destroy_texture_view(view);
                        }
                    }
                }
            }
        }
        for element in self.buffers.data.write().map.drain(..) {
            if let Element::Occupied(buffer, _) = element {
                //TODO: unmap if needed
                devices[buffer.device_id.value].destroy_buffer(buffer);
            }
        }
        for element in self.bind_groups.data.write().map.drain(..) {
            if let Element::Occupied(bind_group, _) = element {
                let device = &devices[bind_group.device_id.value];
                unsafe {
                    device.raw.destroy_bind_group(bind_group.raw);
                }
            }
        }

        for element in self.shader_modules.data.write().map.drain(..) {
            if let Element::Occupied(module, _) = element {
                let device = &devices[module.device_id.value];
                unsafe {
                    device.raw.destroy_shader_module(module.raw);
                }
            }
        }
        for element in self.bind_group_layouts.data.write().map.drain(..) {
            if let Element::Occupied(bgl, _) = element {
                let device = &devices[bgl.device_id.value];
                unsafe {
                    device.raw.destroy_bind_group_layout(bgl.raw);
                }
            }
        }
        for element in self.pipeline_layouts.data.write().map.drain(..) {
            if let Element::Occupied(pipeline_layout, _) = element {
                let device = &devices[pipeline_layout.device_id.value];
                unsafe {
                    device.raw.destroy_pipeline_layout(pipeline_layout.raw);
                }
            }
        }
        for element in self.compute_pipelines.data.write().map.drain(..) {
            if let Element::Occupied(pipeline, _) = element {
                let device = &devices[pipeline.device_id.value];
                unsafe {
                    device.raw.destroy_compute_pipeline(pipeline.raw);
                }
            }
        }
        for element in self.render_pipelines.data.write().map.drain(..) {
            if let Element::Occupied(pipeline, _) = element {
                let device = &devices[pipeline.device_id.value];
                unsafe {
                    device.raw.destroy_render_pipeline(pipeline.raw);
                }
            }
        }

        for element in surface_guard.map.iter_mut() {
            if let Element::Occupied(ref mut surface, _epoch) = *element {
                if surface
                    .presentation
                    .as_ref()
                    .map_or(wgt::Backend::Empty, |p| p.backend())
                    != A::VARIANT
                {
                    continue;
                }
                if let Some(present) = surface.presentation.take() {
                    let device = &devices[present.device_id.value];
                    let suf = A::get_surface_mut(surface);
                    unsafe {
                        suf.unwrap().raw.unconfigure(&device.raw);
                        //TODO: we could destroy the surface here
                    }
                }
            }
        }

        for element in self.query_sets.data.write().map.drain(..) {
            if let Element::Occupied(query_set, _) = element {
                let device = &devices[query_set.device_id.value];
                unsafe {
                    device.raw.destroy_query_set(query_set.raw);
                }
            }
        }

        for element in devices.map.drain(..) {
            if let Element::Occupied(device, _) = element {
                device.dispose();
            }
        }

        if with_adapters {
            drop(devices);
            self.adapters.data.write().map.clear();
        }
    }

    pub(crate) fn surface_unconfigure(
        &self,
        device_id: id::Valid<id::DeviceId>,
        surface: &mut HalSurface<A>,
    ) {
        use hal::Surface as _;

        let devices = self.devices.data.read();
        let device = &devices[device_id];
        unsafe {
            surface.raw.unconfigure(&device.raw);
        }
    }

    pub fn generate_report(&self) -> HubReport {
        HubReport {
            adapters: self.adapters.data.read().generate_report(),
            devices: self.devices.data.read().generate_report(),
            pipeline_layouts: self.pipeline_layouts.data.read().generate_report(),
            shader_modules: self.shader_modules.data.read().generate_report(),
            bind_group_layouts: self.bind_group_layouts.data.read().generate_report(),
            bind_groups: self.bind_groups.data.read().generate_report(),
            command_buffers: self.command_buffers.data.read().generate_report(),
            render_bundles: self.render_bundles.data.read().generate_report(),
            render_pipelines: self.render_pipelines.data.read().generate_report(),
            compute_pipelines: self.compute_pipelines.data.read().generate_report(),
            query_sets: self.query_sets.data.read().generate_report(),
            buffers: self.buffers.data.read().generate_report(),
            textures: self.textures.data.read().generate_report(),
            texture_views: self.texture_views.data.read().generate_report(),
            samplers: self.samplers.data.read().generate_report(),
        }
    }
}

pub struct Hubs<F: GlobalIdentityHandlerFactory> {
    #[cfg(all(feature = "vulkan", not(target_arch = "wasm32")))]
    pub(crate) vulkan: Hub<hal::api::Vulkan, F>,
    #[cfg(all(feature = "metal", any(target_os = "macos", target_os = "ios")))]
    pub(crate) metal: Hub<hal::api::Metal, F>,
    #[cfg(all(feature = "dx12", windows))]
    pub(crate) dx12: Hub<hal::api::Dx12, F>,
    #[cfg(all(feature = "dx11", windows))]
    pub(crate) dx11: Hub<hal::api::Dx11, F>,
    #[cfg(feature = "gles")]
    pub(crate) gl: Hub<hal::api::Gles, F>,
    #[cfg(all(
        not(all(feature = "vulkan", not(target_arch = "wasm32"))),
        not(all(feature = "metal", any(target_os = "macos", target_os = "ios"))),
        not(all(feature = "dx12", windows)),
        not(all(feature = "dx11", windows)),
        not(feature = "gles"),
    ))]
    pub(crate) empty: Hub<hal::api::Empty, F>,
}

impl<F: GlobalIdentityHandlerFactory> Hubs<F> {
    pub(crate) fn new(factory: &F) -> Self {
        Self {
            #[cfg(all(feature = "vulkan", not(target_arch = "wasm32")))]
            vulkan: Hub::new(factory),
            #[cfg(all(feature = "metal", any(target_os = "macos", target_os = "ios")))]
            metal: Hub::new(factory),
            #[cfg(all(feature = "dx12", windows))]
            dx12: Hub::new(factory),
            #[cfg(all(feature = "dx11", windows))]
            dx11: Hub::new(factory),
            #[cfg(feature = "gles")]
            gl: Hub::new(factory),
            #[cfg(all(
                not(all(feature = "vulkan", not(target_arch = "wasm32"))),
                not(all(feature = "metal", any(target_os = "macos", target_os = "ios"))),
                not(all(feature = "dx12", windows)),
                not(all(feature = "dx11", windows)),
                not(feature = "gles"),
            ))]
            empty: Hub::new(factory),
        }
    }
}