What is the difference between glGenBuffers, glGenFramebuffers and glGenRenderbuffers? - iphone

The descriptions of them at http://www.khronos.org/opengles/sdk/docs/man/ are almost the same. The only difference is the name.

glGenBuffers creates regular buffer objects, e.g. for vertex data.
glGenFramebuffers creates framebuffer objects, which are primarily used as render targets for offscreen rendering.
glGenRenderbuffers creates renderbuffer objects, which are specifically used as attachments of framebuffer objects, for example for any depth testing required.

There's a difference between plain old buffers, framebuffers and renderbuffers (the latter two are used in offscreen rendering). The overall functionality of those "glGen__" functions is the same; they just generate names for different kinds of objects.
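For illustration, here is a minimal OpenGL ES 2.0 sketch (the vertices array and the width and height variables are assumed to exist already) showing the three kinds of objects and the targets they bind to:

GLuint vbo, fbo, rbo;

// A plain buffer object, e.g. for vertex data.
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

// A renderbuffer object, here used as a depth attachment.
glGenRenderbuffers(1, &rbo);
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);

// A framebuffer object: an offscreen render target that the renderbuffer
// (and usually a color texture) is attached to.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rbo);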

Related

Efficient way of editing vertex positions during runtime in Unreal Engine

I’m looking for a way to update the position of every vertex of a mesh with 65536 vertices from C++ code. The positions need to be updated every few frames with values calculated in code, so it needs to be reasonably efficient.
I tried this, with no effect:
if (NewElement->GetStaticMeshComponent()->GetStaticMesh()->RenderData->LODResources.Num() > 0)
{
    FPositionVertexBuffer* VertexBuffer = &NewElement->GetStaticMeshComponent()->GetStaticMesh()->RenderData->LODResources[0].VertexBuffers.PositionVertexBuffer;
    if (VertexBuffer)
    {
        const int32 VertexCount = VertexBuffer->GetNumVertices();
        for (int32 Index = 0; Index < VertexCount; Index++)
        {
            VertexBuffer->VertexPosition(Index) += FVector(float(Index), float(100 * Index), float(10000 * Index));
        }
    }
}
I'd appreciate help with finding a working solution.
For now, I'm looking for a simple solution, just to start with something. But I know that updating the mesh CPU-side is not the most efficient way, so maybe it would be easier/faster to calculate the position for every vertex and then pass it to the vertex shader? Or generate some pseudo-texture, upload it to the GPU and use it in the vertex shader? Does anyone have an example of such a mechanism in UE?
Regards
Your code doesn't actually push any updates to the GPU. You're using a static mesh here, which isn't really intended to have its vertices modified at runtime, hence the "static" moniker. That's not to say you can't modify that data at runtime, but that's not what you're doing here. Your code is only changing data CPU-side.
If you look through the various vertex buffers implemented in engine code, you'll see that ultimately they all extend FRenderResource, which provides RHI-management functions, or FVertexBuffer, which is itself an FRenderResource and contains an FBufferRHIRef field, the actual GPU-bound vertex buffer.
Because rendering in Unreal Engine is multithreaded, the engine uses the concept of scene proxies, which extend FPrimitiveSceneProxy. Each primitive type that exists on the game thread and needs to be rendered creates some form of FPrimitiveSceneProxy and passes data and updates to its proxy in a thread-safe manner, usually by queuing rendering commands via ENQUEUE_RENDER_COMMAND(...), to which you pass a lambda of what should be executed when the rendering thread decides it is time to run it. The proxy contains the vertex and index buffers, and it is where the "real" updates to your rendered geometry happen.
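As a rough sketch of that pattern (not drop-in code: UMyMeshComponent, FMyMeshSceneProxy and UpdateVertexPositions_RenderThread are made-up names for illustration), queuing a vertex-position update from the game thread could look something like this:

#include "RenderingThread.h" // ENQUEUE_RENDER_COMMAND

void UMyMeshComponent::UpdatePositions(TArray<FVector3f> NewPositions)
{
    // SceneProxy is the render-thread representation owned by the component.
    FMyMeshSceneProxy* Proxy = static_cast<FMyMeshSceneProxy*>(SceneProxy);
    if (Proxy == nullptr)
    {
        return;
    }

    ENQUEUE_RENDER_COMMAND(UpdateMeshPositionsCmd)(
        [Proxy, Positions = MoveTemp(NewPositions)](FRHICommandListImmediate& RHICmdList)
        {
            // Runs on the render thread: the proxy copies the new data into its
            // CPU-side buffer and uploads it to the RHI vertex buffer, e.g. with
            // RHILockBuffer / RHIUnlockBuffer as in the engine excerpt below.
            Proxy->UpdateVertexPositions_RenderThread(Positions);
        });
}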
One example could be the following (excerpt taken from BaseDynamicMeshSceneProxy.h, FMeshRenderBufferSet::TransferVertexUpdateToGPU() function), which shows the render buffer collection in a scene proxy for a UDynamicMeshComponent pushing an update of its vertex positions to the GPU by copying its CPU-bound data directly into its GPU-bound vertex position buffer:
FPositionVertexBuffer& VertexBuffer = this->PositionVertexBuffer;
void* VertexBufferData = RHILockBuffer(VertexBuffer.VertexBufferRHI, 0, VertexBuffer.GetNumVertices() * VertexBuffer.GetStride(), RLM_WriteOnly);
FMemory::Memcpy(VertexBufferData, VertexBuffer.GetVertexData(), VertexBuffer.GetNumVertices() * VertexBuffer.GetStride());
RHIUnlockBuffer(VertexBuffer.VertexBufferRHI);
I won't provide a full sample here because, as you can see from everything described so far, there is much more to it than a simple snippet of code. I wanted to outline the overall concepts and patterns you'll need to understand to achieve this, because if you're going to do it directly in your own code, you must understand them, and it can be a bit confusing when you first start digging into Unreal Engine's rendering code.
The best resource to help gain a solid understanding of the patterns the engine expects you to follow would be the official documentation found here: Unreal Engine Graphics Programming.
If you want to modify geometry at runtime, there are also other options available which will make the process much easier than trying to write it completely yourself, such as the engine-provided Procedural Mesh Component plugin, the third-party RuntimeMeshComponent plugin, and, in later versions of Unreal Engine (4 and 5), UDynamicMeshComponent (aka USimpleDynamicMeshComponent in earlier versions), which is part of the Interactive Tools Framework and, in the most recent versions of the engine, has become a core part of the GeometryFramework runtime module.
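For example, with the Procedural Mesh Component route a per-frame update could look roughly like this (a sketch only: ProcMesh, Positions and RunningTime are assumed members, section 0 is assumed to have been created earlier with CreateMeshSection, and the exact signature may vary by engine version):

#include "ProceduralMeshComponent.h"

void AMyDeformingActor::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);
    RunningTime += DeltaSeconds;

    // Deform the CPU-side copy of the positions.
    for (int32 Index = 0; Index < Positions.Num(); ++Index)
    {
        Positions[Index].Z = 100.0f * FMath::Sin(RunningTime + 0.01f * Index);
    }

    // Push the new positions to section 0; empty arrays leave the existing
    // normals/UVs/colors/tangents untouched.
    ProcMesh->UpdateMeshSection(0, Positions, TArray<FVector>(), TArray<FVector2D>(),
                                TArray<FColor>(), TArray<FProcMeshTangent>());
}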
I hope this helps you in your journey. Runtime-modifiable geometry is tough to get started with, but it's definitely worth it.

In WebGPU, can you reuse the same render pass in multiple frames?

In WebGPU you can create a render pass by defining its descriptor:
const renderPassDesc: GPURenderPassDescriptor = {
    colorAttachments: [
        {
            view: context.getCurrentTexture().createView(),
            loadValue: [0.2, 0.3, 0.5, 1],
            storeOp: "store"
        }
    ]
};
And then run it through the command encoder and start recording.
const commandEncoder = device.createCommandEncoder();
const renderPass = commandEncoder.beginRenderPass(renderPassDesc);
So, essentially, it appears that you need the current texture to start recording (i.e. without calling context.getCurrentTexture().createView() you can't create the descriptor, and without the descriptor you can't start recording). But the API seems to suggest that the texture can change every frame (note that this was the case even months ago, when the API was different and you would retrieve the texture from the swap chain). So, basically, it appears that you can't reuse render passes across different frames (unless, of course, you don't render to the swap chain and target an offscreen texture instead).
So, the question is: in WebGPU, can you reuse the same render pass in multiple frames?
Comparison with Vulkan
My question stems from the (little) exposure I had to Vulkan. In Vulkan, you can reuse recorded resources because there is a way to know upfront how many VkImage objects are in the swap chain; they will have 0-based indices such as 0, 1 and 2. I can't remember the exact syntax, but basically you can record 3 separate command buffers, one per VkImage, and reuse them across frames. All you have to do is query the index of the current VkImage in the render loop and retrieve the corresponding recorded command buffer.
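Roughly, the pattern I have in mind looks like this (from memory, so the names are illustrative rather than exact code from a real project):

// commandBuffers[] holds one pre-recorded VkCommandBuffer per swapchain image;
// device, swapchain, imageAvailableSemaphore and queue are assumed to have
// been created during initialization.
uint32_t imageIndex = 0;
vkAcquireNextImageKHR(device, swapchain, UINT64_MAX,
                      imageAvailableSemaphore, VK_NULL_HANDLE, &imageIndex);

VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
VkSubmitInfo submitInfo{};
submitInfo.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
submitInfo.waitSemaphoreCount = 1;
submitInfo.pWaitSemaphores = &imageAvailableSemaphore;
submitInfo.pWaitDstStageMask = &waitStage;
submitInfo.commandBufferCount = 1;
submitInfo.pCommandBuffers = &commandBuffers[imageIndex]; // reuse the recorded buffer
vkQueueSubmit(queue, 1, &submitInfo, VK_NULL_HANDLE);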
Looking at the specification for getCurrentTexture, it seems that there is no control over the number of "swap" textures at this time.
The texture is created (if it is null or has been destroyed) in the "allocate a new context texture" step; the note there states that:
If a previously presented texture from context matches the required criteria, its GPU memory may be re-used.
Each time the "update the rendering [of the] Document" step runs, if the current texture is not null and has not been destroyed, it will be presented, destroyed, and set to null.
Another note from the specs:
Developers can expect that the same GPUTexture object will be returned by every call to getCurrentTexture() made within the same frame (i.e. between invocations of Update the rendering) unless configure() is called.
All of this seems to indicate that you have to get the current texture each frame and create all the related objects as well.

Unity - use GetRawTextureData to change underlying RGB bytes without copying

I am trying to use OpenCV with Unity for image processing, and I am trying to make the data transfers between OpenCV and Unity code as efficient as possible.
Currently, I am able to create a new byte[] in C#, then load an image into these bytes in OpenCV, and then use texture.LoadRawTextureData(array) and texture.Apply() to show this texture in Unity.
However, the Unity documentation recommends using texture.GetRawTextureData() to get a reference to the NativeArray (the version of the function that returns byte[] makes a copy of the raw data) and then writing the data directly into this buffer (+ calling Apply()).
Unfortunately, the documentation on NativeArrays is rather scarce: how exactly do NativeArrays look in memory? They do have a ToArray() function, but this again makes a copy of the data. What I need is a byte[] array, which can be either RGB24 or RGBA32 (RGBA seems to be preferred; even though it is memory-inefficient if the texture is opaque, modern GPUs apparently do not support RGB24).
Is there any way to get a pointer to the beginning of the texture's buffer without making copies and calling LoadRawTextureData()? Or is the data in the Texture stored in a completely different format?
I had the same confusion over NativeArray. It looks like the CopyTo() method doesn't allocate memory. There is also a ToArray() method, which I'm certain does allocate.
I was able to work out this utility method which is working fine for a webcam feed.
private byte[] m_byteCache = null;

public byte[] GetRawTextureData(Texture2D texture)
{
    NativeArray<byte> nativeByteArray = texture.GetRawTextureData<byte>();
    if (m_byteCache?.Length != nativeByteArray.Length)
    {
        m_byteCache = new byte[nativeByteArray.Length];
    }
    nativeByteArray.CopyTo(m_byteCache);
    return m_byteCache;
}
ToArray() allocates a new array. CopyTo() doesn't allocate memory, but of course it copies the data.
But what I gather from the documentation is that you should be able to just access the NativeArray like a normal array to modify the memory and then call .Apply() on the corresponding Texture object. If you can make your C# OpenCV code write to it, that should do the trick.
The issue with NativeArray, I guess, is that you directly get the memory of whatever implementation your code runs on, so the exact byte representation could differ depending on the platform. Also, the memory becomes invalid as soon as the texture is gone.

Dynamic Array in Metal (API)?

I am writing an object recognition program using the Metal API. The problem is that I need an array list or dynamic array, but there is no dynamic array in Metal. Is there a way to declare one or to implement your own?
There is no way to do dynamic memory allocation inside Metal kernels (shaders). I'd just define more buffers on the CPU side and pass them to the shader (instead of creating dynamic arrays inside the shaders). Just make sure to change the 'storage mode' to 'private' for the buffers you want to use only in the shader for intermediate calculations. The 'private' mode means the buffer lives only on the GPU and the CPU does not have access to it (which can reduce overhead).
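As a rough sketch of that approach on the shader side (the kernel and buffer names are made up for illustration), the scratch buffer's size is fixed ahead of time on the CPU, and the buffer itself can be allocated with MTLResourceStorageModePrivate:

#include <metal_stdlib>
using namespace metal;

// Illustrative compute kernel: instead of a dynamic array, it receives a
// pre-allocated scratch buffer plus its capacity, both decided on the CPU.
kernel void process_features(device const float *input    [[buffer(0)]],
                             device float       *scratch  [[buffer(1)]],
                             constant uint      &count    [[buffer(2)]],
                             uint                gid      [[thread_position_in_grid]])
{
    if (gid >= count)
    {
        return;
    }
    // Placeholder work: store an intermediate result in the scratch buffer.
    scratch[gid] = input[gid] * input[gid];
}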

OpenGL ES - glImageProcessing - remove texture

I am using the glImageProcessing example from Apple to perform some filter operations. However, I would like to be able to load a new image into the texture.
Currently, the example loads the image with the line:
loadTexture("Image.png", &Input, &renderer);
(which I've modified to accept an actual UIImage):
loadTexture(image, &Input, &renderer);
However, in testing how to redraw a new image I tried implementing (in Imaging.c):
loadTexture(image, &Input, &renderer);
loadTexture(newImage, &Input, &renderer);
and the sample app crashes at the line:
CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(CGImage));
in Texture.c
I have also tried deleting the active texture by
loadTexture(image, &Input, &renderer);
glDeleteTextures(GL_TEXTURE_2D, 0);
loadTexture(newImage, &Input, &renderer);
which also fails.
Does anyone have any idea how to remove the image/texture from the OpenGL ES interface so that I can load a new image?
Note: in Texture.c, apple states "The caller of this function is responsible for deleting the GL texture object." I suppose this is what I am asking how to do. Apple doesn't seem to give any clues ;-)
Also note: I've seen this question posed many places, but no one seems to have an answer. I'm sure others will appreciate some help on this topic as well! Many thanks!
Cheers,
Brett
You're using glDeleteTextures() incorrectly in the second case. The first parameter to that function is how many textures you wish to delete, and the second is an array of texture names (or a pointer to a single texture name). You'll need to do something like the following:
glDeleteTextures(1, &textureName);
Where textureName is the name of the texture obtained at its creation. It looks like that value is stored within the texID component of the Image struct passed into loadTexture().
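Putting that together with the loadTexture() calls from your question, a minimal sketch (assuming the field is indeed named texID, as in Apple's sample) would be:

loadTexture(image, &Input, &renderer);    // loadTexture stores the GL texture name in Input.texID
glDeleteTextures(1, &Input.texID);        // count = 1, pointer to the texture name
loadTexture(newImage, &Input, &renderer); // now load the replacement image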
That doesn't fully explain the crash you see, which seems like a memory management issue with your input image (possibly an autoreleased object that is being discarded before you access its CGImage component).