In WebGPU, can you reuse the same render pass in multiple frames?

In WebGPU you can create a render pass by defining its descriptor:
const renderPassDesc: GPURenderPassDescriptor = {
  colorAttachments: [
    {
      view: context.getCurrentTexture().createView(),
      loadOp: "clear",
      clearValue: [0.2, 0.3, 0.5, 1],
      storeOp: "store"
    }
  ]
};
Then you create a command encoder and begin recording the pass:
const commandEncoder = device.createCommandEncoder();
const renderPass = commandEncoder.beginRenderPass(renderPassDesc);
So, essentially, it appears that you need the current texture before you can start recording: without calling context.getCurrentTexture().createView() you can't create the descriptor, and without the descriptor you can't begin the pass. But the API suggests that the texture can change every frame (this was the case even months ago, when the API was different and you retrieved the texture from a swap chain). So it appears that you can't reuse render passes across different frames, unless of course you don't render to the swap chain and target an offscreen texture instead.
So, the question is: in WebGPU, can you reuse the same render pass in multiple frames?
Comparison with Vulkan
My question stems from the (little) exposure I had to Vulkan. In Vulkan, you can reuse recorded resources because there is a way to know upfront how many VkImage objects are in the swap chain; they have 0-based indices such as 0, 1, and 2. I can't remember the exact syntax, but I remember that you can record three separate command buffers, one per VkImage, and reuse them across frames. All you have to do in the render loop is query the index of the current VkImage and retrieve the corresponding recorded command buffer.

Looking at the specification for getCurrentTexture, it seems that there is currently no control over the number of "swap" textures.
The texture is created (if it is null or has been destroyed) in the "allocate a new context texture" step; the note there states:
If a previously presented texture from context matches the required criteria, its GPU memory may be re-used.
Each time the "update the rendering [of the] Document" step runs, if the current texture is not null and not destroyed, it will be presented, destroyed, and set to null.
Another note from the specs:
Developers can expect that the same GPUTexture object will be returned by every call to getCurrentTexture() made within the same frame (i.e. between invocations of Update the rendering) unless configure() is called.
All of this points to the conclusion that you have to get the current texture for each frame and recreate all the related objects as well.
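In practice, the per-frame cost is just re-querying the texture and re-recording the pass; the expensive objects (pipelines, buffers, bind groups) are created once and reused across frames. A minimal sketch of such a render loop, assuming the device and configured canvas context from the snippets above:
function frame() {
  // getCurrentTexture() must be called anew each frame: the previous
  // frame's texture has already been presented and destroyed.
  const renderPassDesc: GPURenderPassDescriptor = {
    colorAttachments: [
      {
        view: context.getCurrentTexture().createView(),
        loadOp: "clear",
        clearValue: [0.2, 0.3, 0.5, 1],
        storeOp: "store"
      }
    ]
  };
  const commandEncoder = device.createCommandEncoder();
  const renderPass = commandEncoder.beginRenderPass(renderPassDesc);
  // ...set the (reused) pipeline, bind groups, and vertex buffers, then draw...
  renderPass.end();
  device.queue.submit([commandEncoder.finish()]);
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
Note that the descriptor object itself can be kept and mutated, replacing only view each frame; what cannot be reused is the recorded pass.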

Related

Efficient way of editing vertex positions during runtime in Unreal Engine

I’m looking for a way to update the vertex positions of every vertex of a mesh with 65536 vertices from C++ code. It needs to be updated every few frames with values calculated in code, so it needs to be somewhat efficient.
I tried this, with no effect:
if (NewElement->GetStaticMeshComponent()->GetStaticMesh()->RenderData->LODResources.Num() > 0)
{
    FPositionVertexBuffer* VertexBuffer = &NewElement->GetStaticMeshComponent()->GetStaticMesh()->RenderData->LODResources[0].VertexBuffers.PositionVertexBuffer;
    if (VertexBuffer)
    {
        const int32 VertexCount = VertexBuffer->GetNumVertices();
        for (int32 Index = 0; Index < VertexCount; Index++)
        {
            VertexBuffer->VertexPosition(Index) += FVector(float(Index), float(100 * Index), float(10000 * Index));
        }
    }
}
I'd appreciate help with finding a working solution.
For now I'm looking for a simple solution, just to start with something. But I know that updating the mesh CPU-side is not the most efficient way, so maybe it would be easier/faster to calculate the position for every vertex and then pass it to the vertex shader? Or generate some pseudo-texture, upload it to the GPU, and use it in the vertex shader? Does anyone have an example of such a mechanism in UE?
Regards
Your code doesn't actually push any updates to the GPU. You're using a static mesh here, which isn't really intended to have its vertices modified at runtime, hence the "static" moniker. That's not to say you can't modify that data at runtime, but that's not what you're doing here: your code is only changing data CPU-side.
If you look through the various vertex buffers implemented in the engine code, you'll see that ultimately they all extend FRenderResource, which provides RHI-management functions, or FVertexBuffer, which is an FRenderResource containing an FBufferRHIRef field: the actual GPU-bound vertex buffer.
Because rendering in Unreal Engine is multithreaded, the engine uses the concept of scene proxies, which extend FPrimitiveSceneProxy. Each primitive type that exists on the game thread and needs to be rendered will have some form of FPrimitiveSceneProxy created for it, and will pass data and updates to its proxy in a thread-safe manner, usually by queuing rendering commands via ENQUEUE_RENDER_COMMAND(...), to which you pass a lambda of what should be executed when the rendering thread determines it's time to run it. The proxy contains the vertex and index buffers, and is where the "real" updates to your rendered geometry happen.
One example could be the following (excerpt taken from BaseDynamicMeshSceneProxy.h, FMeshRenderBufferSet::TransferVertexUpdateToGPU() function), which shows the render buffer collection in a scene proxy for a UDynamicMeshComponent pushing an update of its vertex positions to the GPU by copying its CPU-bound data directly into its GPU-bound vertex position buffer:
FPositionVertexBuffer& VertexBuffer = this->PositionVertexBuffer;
// Lock the GPU-side buffer for writing, copy the CPU-side vertex data into it, then unlock.
void* VertexBufferData = RHILockBuffer(VertexBuffer.VertexBufferRHI, 0, VertexBuffer.GetNumVertices() * VertexBuffer.GetStride(), RLM_WriteOnly);
FMemory::Memcpy(VertexBufferData, VertexBuffer.GetVertexData(), VertexBuffer.GetNumVertices() * VertexBuffer.GetStride());
RHIUnlockBuffer(VertexBuffer.VertexBufferRHI);
I won't provide a full sample here because, as you can see from everything described to this point, there is much more to it than a simple snippet of code. But I wanted to outline the overall concepts and patterns you'll need to understand, because if you're going to do this directly in your own code, they can be confusing when you first start digging into Unreal Engine's rendering code.
The best resource to help gain a solid understanding of the patterns the engine expects you to follow would be the official documentation found here: Unreal Engine Graphics Programming.
If you want to modify geometry at runtime, there are also other options available that make the process much easier than writing it completely yourself, such as the engine-provided Procedural Mesh Component plugin, the third-party RuntimeMeshComponent plugin, and, in later versions of Unreal Engine (4 and 5), the UDynamicMeshComponent (aka USimpleDynamicMeshComponent in earlier versions), which is part of the Interactive Tools Framework and in the most recent versions of the engine has become a core part of the GeometryFramework runtime module.
I hope this helps you on your journey. Runtime-modifiable geometry is tough to get started with, but it's definitely worth it.

How does the "needsCompositing" bit work?

I'm trying to better understand how the Flutter framework interprets the "needsCompositing" / "alwaysNeedsCompositing" bits.
When the needsCompositing bit is set on a render object, does every single ancestor render object up to the nearest repaint boundary also need compositing (i.e., its own composited layer)? Is this because any of those objects might, say, add a clip which may affect the newly composited child and in order to ensure that it does, a clip layer has to be used instead?
The part that seems surprising is that this would appear to add N new layers for N render objects just because one descendant needs compositing.
If this is true, I suppose this explains why you'd want to organize things into a shallow "repaint boundary sandwich."
needsCompositing just signals to the render object whether it should push a layer, which may or may not be true depending on what properties are set on the object; alwaysNeedsCompositing asserts it unconditionally.
It has no effect on ancestors: they can independently need or not need compositing, which in turn controls how the layer tree is constructed so that compositing happens on the engine side.
On an update, we do pump logic up the tree to make sure things get properly marked dirty (up to the nearest RepaintBoundary). However, individual render objects may respond to that by only pumping the data further up the tree and not setting their needsCompositing to true, if they never actually need compositing.

Maya Plugin attribute validation

I am trying to validate my custom MPxEmitterNode attributes.
I have force_min and force_max attributes that are double3-typed in Maya parlance, i.e. two objects each containing double[3] data.
I want to ensure that force_min is less than force_max for each of its three components. I'd like to do this by simply swapping the min and max around whenever someone enters a value on the attribute in the Attribute Editor, or calls MEL's setAttr for those attributes, and the value fails the "min < max" check.
I have tried setting up ATTRIBUTE_AFFECTS relationships between force_min, force_max, and their individual x, y, z component objects. That seems to cause a cyclic issue leading to Maya crashing. I have also tried editing the custom compute function of the derived MPxEmitterNode so that it swaps the force_min and force_max values. The force_* attributes are seemingly never computed in this case.
Any help would be much appreciated.
Generally the 'Maya' way to do this would be to let the output look wrong if the min and max are set incorrectly. You don't know who is going to set those attributes (it could be a connection or a script, and the values could even be reset between frames of an animation), so it's better to let the DAG evaluation flow through even if the result is nonsense. It's like setting a radius of zero on a sphere node: it's 'correct' even though it's wrong.
You can, however, swap the values inside your compute() method, which gives the same effect as swapping them without resetting the plug values themselves. Setting an input plug from inside compute() is a bad idea, because it introduces a loop into the flow of the DAG evaluation. The graph must stay acyclic (that's the "A" in DAG: Directed Acyclic Graph).

What's the strategy in game engines to perform secure state changes?

I created a run loop in OpenGL ES which is called by a CADisplayLink at 60 fps. AFAIK, CADisplayLink calls its target on a background thread.
I have about 100 state variables which are used by the run loop.
The problem: From the main thread, I want to change state variables which are used in the run loop to draw something. A frame must be drawn only after all state variables have been set to their target values.
I am afraid that at some point, when I have changed some state variables but am not yet done changing them all (in one big method in the same run loop iteration on the main thread), the CADisplayLink will kick in right in the middle of my update method, for example while the position of a geometric shape is half-set, and then draw garbage or crash.
Obviously, just making the properties synchronized or atomic won't help, because the updates are still not transactional. I think I need transactions.
My naive approach is this:
Instance variable read by run loop:
BOOL updatingState;
The run loop method will skip drawing if updatingState reads YES.
Then before starting to change state I set it to YES. And when everything is changed, I set it back to NO.
Now, of course, the problem: what if, while I am changing this, the run loop method is reading the values?
How do game engines deal with this problem? What kind of locking mechanisms do they have so the changing of the state variables can be finished before the next frame is going to be drawn?
You might find a read-copy-update strategy useful. One possible implementation is that each object actually contains two copies of the rendering parameters and an atomic flag is used to tell the rendering thread which to use. You will need to use a read memory barrier in the renderer to make sure that the flag is read before reading any of the parameters and a write memory barrier in the updater thread to make sure that all of the parameter updates are written before flipping the flag.
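A sketch of that idea in TypeScript, assuming a main thread and a render worker sharing memory (the names and the parameter layout are hypothetical; Atomics.load/Atomics.store stand in for the read and write barriers):
// Two banks of parameters plus one shared index word saying which bank is live.
const PARAM_COUNT = 4; // hypothetical number of per-object parameters
const flagBuf = new SharedArrayBuffer(4);
const bankBuf = new SharedArrayBuffer(2 * PARAM_COUNT * 8);
const liveBank = new Int32Array(flagBuf); // holds 0 or 1
const banks = [
  new Float64Array(bankBuf, 0, PARAM_COUNT),
  new Float64Array(bankBuf, PARAM_COUNT * 8, PARAM_COUNT)
];

// Updater thread: fill the inactive bank, then atomically flip the flag.
// (The updater must not rewrite a bank the renderer may still be reading,
// e.g. by publishing at most once per rendered frame.)
function publish(params: number[]): void {
  const inactive = 1 - Atomics.load(liveBank, 0);
  banks[inactive].set(params);
  Atomics.store(liveBank, 0, inactive); // earlier writes become visible first
}

// Renderer thread: read the flag first, then the parameters it points at.
function currentParams(): Float64Array {
  return banks[Atomics.load(liveBank, 0)];
}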
The usual way this is done is that all state updates happen at each run loop iteration, before the drawing is done. That is, the run loop schematically looks like this:
updateState();
draw();
With this model, the drawing only happens after a consistent state has been reached.
For this to work, you need a model where events such as key presses are polled for on each updateState() instead of being handled asynchronously, and a time measurement on each iteration to tell you how much time has elapsed since the last frame.
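For example, in a browser-style run loop the same idea looks like this (a hedged sketch; updateState and draw are stand-ins for your own logic):
// Input events are only queued as they arrive; they are applied inside
// updateState(), so draw() always sees a fully consistent state.
const pendingEvents: KeyboardEvent[] = [];
window.addEventListener("keydown", (e) => pendingEvents.push(e));

function updateState(events: KeyboardEvent[], dt: number): void {
  // ...apply all queued inputs and advance the simulation by dt...
}

function draw(): void {
  // ...render the now-consistent state...
}

let lastTime = performance.now();
function frame(now: number): void {
  const dt = (now - lastTime) / 1000; // seconds since the last frame
  lastTime = now;
  updateState(pendingEvents.splice(0), dt); // poll, don't react asynchronously
  draw();
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);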
I can't help you with how this is realized in the concrete case of iOS programming, though, as I don't know anything about that. But I hope I could point you in the right direction.
I think this is a common problem in concurrency, so there are several ways to do it:
1. Use an immutable state class to hold the state variables.
2. Use a locking mechanism (if an immutable class cannot be used) to protect the state variables.
3. Have multiple states which you can modify, but only one of which is "active." This allows you to reuse states, and it reduces copying and memory allocation.
Additionally, consider this situation:
Thread 1. Start drawing something.
Thread 1. Read 1/2 of the state 01 parameters (first state).
Thread 2. Swap out state 01 with state 02 (second state).
Thread 1. Reads the other 1/2 of state 02, but it's different from the state 01 parameters.
So the best option is not to allow updates of the state during drawing, and option 3 might be the best way to do that, because you simply pick up the latest state and draw it. Say you have two states: drawingState and nonDrawingState. In your draw function you always use drawingState to draw, while other threads modify nonDrawingState. Once you're done drawing, you swap the states and continue drawing with the latest state modifications.
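A sketch of that swap, with illustrative names (in a real multithreaded renderer the hasNewState flag would need to be atomic, as in the earlier sketch):
interface SceneState { x: number; y: number; /* ...the other state variables... */ }

let drawingState: SceneState = { x: 0, y: 0 };
let nonDrawingState: SceneState = { x: 0, y: 0 };
let hasNewState = false; // set once the updater has finished a full batch of changes

// Updater: mutate nonDrawingState freely, then mark it complete.
function commitUpdates(mutate: (s: SceneState) => void): void {
  mutate(nonDrawingState);
  hasNewState = true;
}

// Render loop: swap only at a frame boundary, never mid-draw.
function renderFrame(drawFn: (s: SceneState) => void): void {
  if (hasNewState) {
    [drawingState, nonDrawingState] = [nonDrawingState, drawingState];
    hasNewState = false;
    // (A real implementation would typically copy drawingState into
    // nonDrawingState here so the updater starts from the latest state.)
  }
  drawFn(drawingState);
}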

Loading Next Level

I am making a game in Unity. There are currently two levels. I count down 30 seconds, and when the timer reaches 0 I want to load the next level automatically. But when the next level is loaded, the game screen freezes and I cannot move anything. The function I use to load the next level follows below (it sits in a script carried by an empty game object that is not destroyed when a new level is loaded):
function loadNextlvl() {
    var cur_level = Application.loadedLevel;
    yield WaitForSeconds(5.0);
    Application.LoadLevel(cur_level + 1);
}
What should I do?
My work with Unity has been hobby-driven only, but any time I've used Application.LoadLevel I passed it a string with the level name rather than an index. I see from the API that it's overloaded to take an int as well, but for testing purposes, try calling it by name to see if that works.
Also, you need to tell Unity which levels you're going to be using. You can do this in the Build Settings, off the File menu.
Lastly, you can check Application.levelCount to make sure you're within the bounds of the level list.