I'm working with the Rendering Plugin sample for Unity.
Unity calls into the plugin and hands it a texture pointer, which is an OpenGL texture name (SetTextureFromUnity function for reference). I have another texture I'm creating and managing in my plugin. I would like to somehow get Unity's texture reference to use my texture instead of the one it hands to me.
Is it possible to make the Unity texture just point to my texture? I'm super rough with low level OpenGL.
Alternatively: is it possible to create a Texture2D from an OpenGL texture that already exists? Then I could just create my own texture and hand it back to Unity to wrap in a Texture2D.
I know I could copy data from my texture to the Unity texture, but that's incredibly slow and inefficient when the data for both is already on the GPU. I am looking into FBO copies, but all of this seems like overkill when I just want pointer A to point to the same thing that pointer B is pointing to (in theory, anyway)...
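For context, a purely GPU-side copy (no CPU round trip) would look roughly like this; a rough sketch assuming a desktop GL 4.3+ context or the ARB_copy_image extension, with srcTex, dstTex, width and height as placeholder names:
/* Sketch: copy one texture's contents into another entirely on the GPU.
   Both textures must have compatible formats and the region must fit in each. */
glCopyImageSubData(srcTex, GL_TEXTURE_2D, 0, 0, 0, 0,   /* source name, target, level, x, y, z */
                   dstTex, GL_TEXTURE_2D, 0, 0, 0, 0,   /* destination name, target, level, x, y, z */
                   width, height, 1);                   /* region size (depth = 1 for 2D) */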
/* One approach: cache uploads in a small hash table keyed by the image pointer;
   bind by table slot and only re-upload a slot when its occupant changes. */
#define SIZE 1024
#define MASK (SIZE - 1)

struct texture
{
    my_image_object *image;
};

struct texture table[SIZE];

void
bind_my_image_object(my_image_object *image)
{
    uintmax_t hash;
    size_t index;
    struct texture *texture;

    hash = my_pointer_hash_function((uintptr_t)image);
    index = hash & MASK;
    texture = table + index;

    /* The table index doubles as the GL texture name here, so the
       corresponding texture objects must already have been created. */
    glBindTexture(GL_TEXTURE_2D, index);
    if (texture->image != image)
    {
        your_texture_upload_routine(image);
        texture->image = image;
    }
}
Use a decent hash function. Shift the pointer value down so the typical object alignment doesn't feed into the hash, e.g. (uintptr_t)image >> 3 on a 64-bit PC. Use a table large enough that you don't thrash your textures every frame, and make the table size a power of 2 so that hash & MASK wraps properly.
General hash table advice applies. Your hash table may vary.
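For example, a pointer hash along those lines might look like this (the multiplier is just one common choice, not something from the code above):
/* Sketch: drop the low alignment bits, then multiply by a large odd constant
   (Fibonacci hashing) so neighboring pointers spread across the table. */
static uintmax_t
my_pointer_hash_function(uintptr_t p)
{
    return (uintmax_t)((p >> 3) * (uintptr_t)0x9E3779B97F4A7C15u);   /* 64-bit constant */
}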
Does Unity's renderer read the entire texture, or only the pixels the UVs overlap?
For example, in the following texture with the following UVs, only rows C, D, E and F are needed. Disregarding the extra storage space the rest of the texture occupies, are there any drawbacks to doing this?
Does the renderer read the entire texture or only the relevant pixels?
Unity would keep the whole texture in memory. Texture mapping is done in shaders.
That's why it's recommended to try to occupy as much UV space as possible. You can even go further and use the same texture for multiple objects.
Even though this only covers OpenGL, it is a good resource for understanding how all of this works: https://learnopengl.com/Getting-started/Textures
I have a server which renders a mesh using OpenGL with a custom frame buffer. In my fragment shader I write the gl_primitiveID into an output variable. Afterwards I call glReadPixels to read the IDs out. Then I know which triangles were rendered (and are therefore visible) and I send all those triangles to a client which runs on Unity. On this client I add the vertex and index data to a GameObject and it renders it without a problem. I get the exact same rendering result in Unity as I got with OpenGL, unless I start to zoom out.
Here are pictures of the mesh rendered with Unity:
My first thought was that I have different resolutions, but this is not the case. I have 1920*1080 on both server and client. I use the same view and projection matrices from the client on my server, so this also shouldn't be the problem. What could be the cause of this error?
In case you need it, here is some of the code I wrote.
Here is my vertex shader code:
#version 330 core
layout (location = 0) in vec3 position;
uniform mat4 view;
uniform mat4 projection;
void main()
{
    gl_Position = projection * view * vec4(position, 1.0f);
}
Here is my fragment shader code:
#version 330 core
layout(location = 0) out int primitiveID;
void main(void)
{
    primitiveID = gl_PrimitiveID + 1; // +1 because the first primitive is 0
}
and here is my getVisibleTriangles method:
std::set<int> getVisibleTriangles() {
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glReadPixels(0, 0, SCR_WIDTH, SCR_HEIGHT, GL_RED_INTEGER, GL_INT, &pixels[0]);

    std::set<int> visibleTriangles;
    // pixels is a byte buffer, so each GL_INT pixel occupies 4 bytes
    for (int i = 0; i < pixelsLength; i += 4) {
        int id = *(int*)&pixels[i];
        if (id != 0) {
            visibleTriangles.insert(id - 1); // id 0 is NO_PRIMITIVE
        }
    }
    return visibleTriangles;
}
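For reference, this only works because the custom framebuffer's color attachment uses an integer format; a minimal sketch of such an attachment (placeholder names, not the exact setup from this project):
/* Sketch of an integer color attachment (GL_R32I) that can receive the
   fragment shader's "out int primitiveID" and be read back with
   GL_RED_INTEGER / GL_INT. */
GLuint fbo, idTexture;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

glGenTextures(1, &idTexture);
glBindTexture(GL_TEXTURE_2D, idTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32I, SCR_WIDTH, SCR_HEIGHT, 0,
             GL_RED_INTEGER, GL_INT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);   /* integer textures need NEAREST */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, idTexture, 0);
/* A depth attachment would normally be added here as well. */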
Oh my god, I can't believe I made such a stupid mistake.
After all it WAS a resolution problem.
I didn't call glViewport (except when resizing the window). Apparently when creating a window with glfwCreateWindow, GLFW creates the window, but since the parameters are only hints and not hard constraints (as stated here: glfw reference) it is possible that they are not fulfilled exactly. I passed a desired size of 1920*1080 (which is also my resolution), but the drawing area did not actually get that size, because some space is also needed for the menu etc. So the server rendered at a lower resolution (to be exact, 1920*1061), which results in missing triangles on the client.
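A minimal sketch of the fix, assuming GLFW 3 (glfwGetFramebufferSize reports the drawable size the window actually got, which is what the viewport and the readback buffers should be based on):
/* Query the real framebuffer size instead of trusting the requested
   window size, then size the viewport from it. */
int fbWidth, fbHeight;
glfwGetFramebufferSize(window, &fbWidth, &fbHeight);
glViewport(0, 0, fbWidth, fbHeight);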
Before getting into the shader details for this problem, are you sure that the problem doesn't lie with the way the zoom out functionality has been implemented in Unity? It's just a hunch since I have seen it in older projects, but if the zoom in/out functionality works by actually moving the camera then the movement of the clipping planes will create those "holes" when the mesh surfaces go outside the range. Although the placement of some of those holes in the shared image makes me doubt that this is the case, but you never know.
If this happens to be the way the zoom function works then you can confirm this by looking at the editor mode while zooming out. It will display the position of the clipping planes of the camera in relation to the mesh.
I am making a Unity 2D RTS game and I thought of using one big texture for the tiled map (instead of a lot of textures, for memory reasons...).
The tiled map is supposed to be generated randomly at runtime, so I don't want to save a texture and upload it. I want the map to be generated and then built from a set of texture resources.
So, I have little tile textures of grass/forest/hills etc., and after I generate the map randomly, I need to draw those little textures onto my big map texture so I can use it as my map.
How can I draw a texture from my resources onto another texture? I saw there are only Get/SetPixel functions... so I could use those to copy all the pixels one by one to the big texture, but is there something easier?
Is my solution for the map OK? (Is it better than just creating a lot of texture tiles side by side? Is there another, better solution?)
The correct way to create a large tiled map would be to compose it from smaller, approximately-screen-sized chunks. Unity will correctly not draw the chunks that are off the screen.
As for your question about copying to a texture: I have not done this before in Unity, but this process is called Blitting, and there just happens to be a method in Unity called Graphics.Blit(). It takes a source texture and copies it into a destination texture, which sounds like exactly what you're looking for. However, it requires Unity Pro :(
There is also SetPixels(), but it sounds like this function does the processing on the CPU rather than the GPU, so it's going to be extremely slow/resource-intensive.
Well, after more searching I discovered the Get/SetPixels functions:
Texture2D sourceTex; // get it from somewhere
var pix = sourceTex.GetPixels(x, y, width, height); // get the block of pixels
var destTex = new Texture2D(width, height); // create new texture to copy the pixels to it
destTex.SetPixels(pix);
destTex.Apply(); // important to save changes
EDIT - To help clarify the question up top: I guess I'm looking for which sorting would perform better, sorting by program or sorting by textures? Will it matter? All my objects are in similar z space and all are stored in the same VBO. And if I don't switch shaders via glUseProgram, do I have to re-set attributes for each object?
Original Post:
This is sort of a two-part question. I'm trying to figure out how best to sort my 3D objects before drawing them, and which OpenGL calls have to be made for each glDrawElements versus which ones can be done once per screen refresh (or even just once). The purpose is, of course, speed. For my game let's assume that z front-to-back isn't much of an issue (most objects are at the same z), so I won't be sorting for z other than to draw all objects with transparency last.
Of course I don't want the sorting process to take longer than rendering unsorted.
Part 2 is: which OpenGL calls have to be made per glDrawElements, and which ones can be made only when the information changes? And does presentRenderbuffer wipe certain things out so that you have to re-issue them?
Most OpenGL 2 demos make every call for every object. Actually, most demos only draw one object. So in a 3D engine (like I'm writing) I want to avoid unnecessary redundant calls.
This is the order I was doing it (unsorted, unoptimized):
glUseProgram(glPrograms[useProgram]);
glDisable(GL_BLEND);
glEnable(GL_CULL_FACE);
Loop through objects {
    Do all matrix calcs
    Set uniforms (matrix, camera pos, light pos, light colors, material properties)
    Activate textures (x2)
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, texture0);
        glUniform1i(glUniforms[useProgram][U_textureSampler], 0);
    Bind VBOs
        glBindBuffer(GL_ARRAY_BUFFER, modelVertVBO);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, modelIndVBO);
    Set attributes (vertexpos, texcoord, norm, tan, bitan)
    glDrawElements(GL_TRIANGLES, models[modelToUse].indSize, GL_UNSIGNED_INT,
                   (void *)(models[modelToUse].indOffset * sizeof(GLuint)));
}
Of course that only worked when all objects used the same shader/program. In practice they won't.
3D objects are in an array with all the properties for each object: model id, shader id, texture ids, position, etc. So my idea was to do a fast simple sort to stack similar objects' index numbers in other arrays. Then draw the items in each of those arrays. I could sort by 3d model (object type), texture, or by shader. Many models share the same texture. Many models share the same shader. At this point I have 3 shaders. ALL OBJECTS share a single VBO.
Can I do it like this?
Bind the VBO - since all objects use the same one
Loop through object types {
    If the shader has changed
        glUseProgram
        Set attributes
    If the texture has changed
        glActiveTexture(s) - based on which program is active
    Loop through objects of that type {
        Do matrix calcs
        Set uniforms - based on which program is active
        glDrawElements
    }
}
EDIT - To be clear - I'm still drawing all objects, just in a different order to combine uses of shaders and/or textures so as to avoid binding and then rebinding again within one 'frame' of the game.
I'm currently getting a crash on a glDrawElements on the 2nd refresh, but I think that will be easy to find. I only include this fact because it leads me to think that binding a texture might not carry over to a second frame (or presentBuffers).
Is it going to be faster to avoid changing the shader, or to avoid changing the texture? Will attributes, the VBO, and the textures stay active across multiple glDrawElements calls? Across multiple presentBuffers calls?
Answering my own question.
First some context. I currently have 3 shaders and expect I'll end up with no more than 4 or 5. For example, I have a bump-map shader that uses a base texture and a normal texture. I also have a shader that doesn't use a base texture and instead uses a solid color for the object, but still has a normal texture. Then I have the opposite, a flat-lighting simple shader that uses a base texture only.
I have many different 3d models but all use the same VBO. And some 3d models use the same textures as others.
So in the definition of a 3d object I added a renderSort property that I can preset knowing what shader program it uses and what texture(s) it needs.
Then as I update objects and determine if they need to be drawn on screen, I also do a one pass simple sort on them based on the renderSort property of their 3d object type... I just toss the array index of the object in a 'bucket' array. I don't see having more than 10 of these buckets.
After the update and quick-sort I render.
The render iterates through the buckets, and inside that, through the objects in each bucket. Inside the inner loop I check whether the program has changed since the last object, and call glUseProgram only if it has. Same with textures: I only bind them if they're not currently bound. Then I update all the other uniforms and do the glDrawElements.
The previous way, unsorted: if there were 1000 objects it would call glUseProgram, bind the textures, bind the VBO, and set all the attributes 1000 times.
Now it only changes these things when it needs to. If it needs to 1000 times it will still do it 1000 times, but with the bucket sort it should only need to do it once per bucket. This way I prioritize drawing properly even if the objects aren't sorted perfectly.
Here's the code:
Sorting...
if (drawThisOne) {
    // if an object needs to be drawn - toss it in a sort bucket.
    // each itemType has a predetermined bucket number so that objects will be
    // grouped into rough program and texture groups
    int itemTypeID = allObjects[objectIndex].itemType;
    int bucket = itemTypes[itemTypeID].renderSort;
    sorted3dObjects[bucket][sorted3Counts[bucket]] = objectIndex;
    // increment the count for that bucket
    sorted3Counts[bucket]++;
}
Rendering...
// only do these once per cycle as all objects are in the same VBO
glBindBuffer(GL_ARRAY_BUFFER, modelVertVBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, modelIndVBO);
for (int bucket = 0; bucket < 10; bucket++) {
    // does this bucket have anything in it?
    if (sorted3Counts[bucket] > 0) {
        // if so, iterate through the items in that bucket
        for (int thisObject = 0; thisObject < sorted3Counts[bucket]; thisObject++) {
            // get the object index for this object in this bucket
            int objectIndex = sorted3dObjects[bucket][thisObject];
            int itemTypeID = allObjects[objectIndex].itemType;
            int modelToUse = itemTypes[itemTypeID].model;
            // switching to pseudocode...
            GLuint useProgram = itemTypes[itemTypeID].shader;
            if (Program changed or is not set) {
                glUseProgram(glPrograms[useProgram]);
                glDisable(GL_BLEND);
                glEnable(GL_CULL_FACE);
                currentProgram = useProgram;
                USE glVertexAttribPointer to set all attributes
            }
            // based on which program is active, set textures and program-specific uniforms
            switch (useProgram) { ....
                if (Texture changed or is not set) {
                    glActiveTexture(s)
                }
            }
            Matrix calculations
            glUniform - to set uniforms
            glDrawElements(GL_TRIANGLES, models[modelToUse].indSize, GL_UNSIGNED_INT,
                           (void *)(models[modelToUse].indOffset * sizeof(GLuint)));
        }
    }
}
I have managed to get a CVPixelBufferRef from an AVPlayer to feed pixel data that I can use to texture a 2D object. When my pixelbuffer has data in it I do:
CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(
    kCFAllocatorDefault,
    videoTextureCache_,
    pixelBuffer,        // this is a CVPixelBufferRef
    NULL,
    GL_TEXTURE_2D,
    GL_RGBA,
    frameWidth,
    frameHeight,
    GL_BGRA,
    GL_UNSIGNED_BYTE,
    0,
    &texture);
I would like to use this buffer to create a GL_TEXTURE_CUBE_MAP. My video frame data is actually 6 sections in one image (e.g. a cubestrip) that in total makes the sides of a cube. Any thoughts on a way to do this?
I had thought to just pretend my GL_TEXTURE_2D was a GL_TEXTURE_CUBE_MAP and replace the texture on my skybox with the texture generated by the code above, but this creates a distorted mess (as I suppose should be expected when trying to force a skybox to be textured with a GL_TEXTURE_2D).
The other idea was to set up unpacking using glPixelStorei and then read from the pixelbuffer:
glPixelStorei(GL_UNPACK_ROW_LENGTH, width);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, X);
glPixelStorei(GL_UNPACK_SKIP_ROWS, Y);
glTexImage2D(...,&pixelbuffer);
But unbelievably, GL_UNPACK_ROW_LENGTH is not supported in OpenGL ES 2.0 on iOS.
So, is there:
- Any way to split up the pixel data in my CVPixelBufferRef by indexing the buffer to some pixel subset before using it to make a texture?
- Any way to make 6 new GL_TEXTURE_2Ds as indexed subsets of the GL_TEXTURE_2D that is created by the code above?
- Any way to convert a GL_TEXTURE_2D to a valid GL_TEXTURE_CUBE_MAP? (e.g. GLKit has a Skybox effect that loads a GL_TEXTURE_CUBE_MAP from a single cubestrip file. It doesn't have a method to load a texture from memory, though, or I would be sorted.)
- Any other ideas?
If it were impossible any other way (which is unlikely, there probably is an alternate way -- so this is probably not the best answer & involves more work than necessary) here is a hack I'd try:
A cube map works by projecting the texture for each face from a point in the center of the geometry out toward each of the cube faces. So you could reproduce that behavior yourself: you could use Projective Texturing to make six draw calls, one for each face of your cube. Each time, you'd first draw the face you're interested in to the stencil buffer, then calculate the projection matrix for your texture (this technique is used a lot for 'spotlight' effects in games), then figure out the transform matrix required to augment the fragment shader's texture read so that, for each face, only the portion of the texture that corresponds to that face winds up within the (0..1) texture lookup range. If everything has gone right, anything outside the 0..1 range should be discarded by the stencil buffer, and you'd be left with a DIY cube map out of a TEXTURE_2D.
The above method is actually really similar to what I'm doing for an app right now, except I'm only using projective texturing to mask off & replace a small portion of the cube map. I need to pixel-match the edges of the small square I'm projecting so that it's seamlessly applied to the skybox, so that's why I feel confident that this method will actually reproduce the cube map behavior -- otherwise, pixel-matching wouldn't be possible.
Anyway, I hope you find a way to simply transition your 2D to CUBEMAP, because that would probably be much easier and cleaner.
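If a CPU copy per frame is acceptable, another option is to sidestep GL_UNPACK_ROW_LENGTH entirely and repack each face row by row before uploading it to the cube map. A rough sketch, where faceX/faceY/faceSize/faceIndex are placeholders describing where one face sits in the cubestrip, and a cube-map texture is assumed to already be generated and bound:
/* Hypothetical sketch: copy one face-sized sub-rectangle out of the locked
   CVPixelBufferRef into a tightly packed buffer, then upload it as a single
   cube-map face. Repeat for faceIndex 0..5. */
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
const uint8_t *base = (const uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);

GLubyte *face = (GLubyte *)malloc(faceSize * faceSize * 4);   /* BGRA = 4 bytes per pixel */
for (int row = 0; row < faceSize; row++) {
    memcpy(face + row * faceSize * 4,
           base + (faceY + row) * bytesPerRow + faceX * 4,
           faceSize * 4);
}
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + faceIndex, 0, GL_RGBA,
             faceSize, faceSize, 0, GL_BGRA, GL_UNSIGNED_BYTE, face);
free(face);

CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);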