In all the examples I've seen, these lines are used before drawing meshes:
glEnableClientState(GL10.GL_VERTEX_ARRAY);
glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
and sometimes glEnableClientState(GL10.GL_NORMAL_ARRAY);
And then these are always disabled again at the end of the draw call for each mesh.
I don't really understand what they actually do, and why you would want to disable them. I know that I probably need to turn them on if I'm drawing triangles from an array, using textures, and using lighting. But I don't know when I actually need to turn them off.
I presume it would be more efficient not to disable and re-enable these for each mesh in your scene if you don't have to. Can you just leave them on all the time? In what circumstances do you need to disable them?
I haven't been able to find any explanation of the actual meaning of these client states, so I don't know where I can safely leave them on or off in my code.
Can you just leave them on all the time?
Yes, if you want to, and if all your primitives use all the arrays you're enabling.
In what circumstances do you need to disable them?
So that they don't mess up the draw calls that come next.
For example, suppose you have a primitive that uses normals. You enable the array with a call to glEnableClientState(GL_NORMAL_ARRAY) and tell OpenGL where your normal data is through glNormalPointer(). If you don't disable GL_NORMAL_ARRAY afterwards, the next primitive you draw will use the same normal array as the previous one, which can have unwanted consequences if that primitive doesn't use normals at all.
Therefore, it's considered good practice to restore the OpenGL state once a primitive's drawing is done. That said, you can leave the arrays enabled if all your primitives use all the arrays you enable, exactly like I leave GL_TEXTURE_2D enabled for the entire time the application is running. I know I'll use textures frequently, so there's no reason to enable/disable it in every object's draw call; doing so would only hurt the application's performance.
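To make this concrete, here is a minimal sketch in C/C++ against the fixed-function pipeline (the same calls exist in OpenGL ES 1.x; on Android you would go through GL10 instead). The array names and vertex counts are placeholders:

    #include <GLES/gl.h>   // or <GL/gl.h> on desktop

    // Placeholder client-side arrays for two meshes.
    extern const GLfloat litVerts[], litNormals[];   // mesh that uses lighting
    extern const GLfloat plainVerts[];               // mesh without normals
    extern const GLsizei litCount, plainCount;       // vertex counts

    void drawScene()
    {
        // Every mesh needs positions, so this one can simply stay enabled.
        glEnableClientState(GL_VERTEX_ARRAY);

        // Mesh 1: uses normals.
        glEnableClientState(GL_NORMAL_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, litVerts);
        glNormalPointer(GL_FLOAT, 0, litNormals);
        glDrawArrays(GL_TRIANGLES, 0, litCount);
        // Disable again: the next mesh has no normal data, and leaving the
        // array enabled would make GL read stale normals for it.
        glDisableClientState(GL_NORMAL_ARRAY);

        // Mesh 2: positions only.
        glVertexPointer(3, GL_FLOAT, 0, plainVerts);
        glDrawArrays(GL_TRIANGLES, 0, plainCount);
    }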
glEnableClientState(GL_VERTEX_ARRAY)
If you enable it as above, OpenGL will take the vertex positions from the vertex array you supply (through glVertexPointer()). Otherwise OpenGL doesn't know which array it should use for the vertices, so it will display nothing.
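As a bare-bones illustration (the triangle data is made up), the enable call only does something in combination with a matching pointer call:

    const GLfloat triangle[] = {  0.0f,  1.0f, 0.0f,
                                 -1.0f, -1.0f, 0.0f,
                                  1.0f, -1.0f, 0.0f };

    glEnableClientState(GL_VERTEX_ARRAY);       // positions come from an array...
    glVertexPointer(3, GL_FLOAT, 0, triangle);  // ...namely this one
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);      // optional, see the answer above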
Related
I am using a single surface shader with a custom vertex function, and tried to use macros like UNITY_PASS_SHADOWCASTER to add pass-specific code to the shadow processing, for example moving the vertices away from the light source to fix self-shadowing. However, I discovered that doing so has weird effects on how the shadows are rendered on the object, and even on whether some of its pixels are displayed at all.
Eventually, I managed to find out that the ShadowCaster pass must be called at least twice even if there is a single light source: once with the virtual camera matching the light source, but also a second time when the shadow is to be applied to it. This is the call that controls the visibility of the shadows behind the object.
Now I have two questions:
What is this mode of execution called?
How do I make the code branch depending on which of these modes is executing? In other words, I want to move the vertices to a different position when casting the shadow, but leave them where they are when the shadows are applied to the object. At the moment, I am checking whether ObjSpaceLightDir matches ObjSpaceViewDir, but that doesn't sound like the best idea. Considering the shader pass is probably compiled only once, I suppose I would have to look for a runtime variable, but I am not sure whether there even is one...
I managed to find mentions of a ShadowCollector pass for older versions of Unity. Is this the same thing?
I am using Unity 2020.3.32f1 with the built-in render pipeline.
Each frame Unity generates an image. I want it to also create an additional array of ints, and every time it decides to write a new color to the generated image, it should write the ID of the object at the corresponding position in that int array.
In OpenGL I know this is pretty common and I found a lot of tutorials for this kind of thing: based on the depth map, you decide which ID should be written at each pixel of the helper array. But in Unity I am using a given shader and I didn't find a proper way to do just that. I would think there should be some built-in functionality for such a common problem.
My goal is to know, for every pixel on the screen, which object it belongs to.
Thanks.
In forward rendering, if you don't use it for another purpose, you could store the ID in the alpha channel of the back buffer (it would only be valid for opaque objects), which gives up to 256 IDs without HDR. In deferred rendering you could potentially use an unused channel of the G-buffer.
This is if you want to minimize overhead; otherwise you could have a more generic system that re-renders specific objects into a screen-space texture, with a very simple shader that just outputs the ID, into whatever format you need, using command buffers.
You'll want to make a custom shader that renders the default textures and colors to the main camera and renders an ID color to a RenderTexture through another camera.
Here's an example of how it works: Implementing Watering in my Farming Game!
I am working on a drone-based video surveillance project, in which I am required to implement object tracking. I have tried conventional approaches, but these seem to fail due to the non-static environment.
This is an example of what I would want to achieve. But it uses background subtraction, which is impossible with a non-static camera.
I have also tried feature based tracking using SURF features, but it fails for smaller objects and is prone to false positives.
What would be the best way to achieve the objective in this scenario?
Edit: An object can be anything within a defined region of interest. The object will usually be a person or a vehicle. The idea is that the user will draw a bounding box which defines the region of interest, and the drone then has to start tracking whatever is within this region of interest.
Tracking local features (like SURF) won't work in your case. Training a classifier (like Boosting with HAAR features) won't work either. Let me explain why.
Your object to track will be contained in a bounding box. Inside this bounding box there could be any object, not necessarily a person, a car, or whatever else you used to train your classifier.
Also, near the object, the bounding box will contain background clutter that changes as soon as your target object moves, even if the appearance of the object itself doesn't change.
Moreover, the appearance of your object may change (e.g. a person turns around or takes off a jacket, a vehicle catches a reflection of the sun, etc.), or the object may get partially or totally occluded for a while. So tracking local features is very likely to lose the tracked object very soon.
So the first problem is that you must deal with potentially many different objects to track, possibly unknown a priori, and you cannot train a classifier for each one of them.
The second problem is that you must follow an object whose appearance may change, so you need to update your model.
The third problem is that you need some logic that tells you that you lost the tracked object, and you need to detect it again in the scene.
So what to do? Well, you need a good long term tracker.
One of the best (to my knowledge) is Tracking-Learning-Detection (TLD) by Kalal et al. You can see a lot of example videos on the dedicated page, and you can see that it works pretty well with moving cameras, objects that change appearance, etc.
Luckily for us, OpenCV 3.0.0 has an implementation of TLD, and you can find a sample code here (there is also a Matlab + C implementation on the aforementioned site).
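For reference, this is roughly how the OpenCV 3.x tracking API is driven in C++ (the video path and initial bounding box below are placeholders; OpenCV 3.0 uses the Tracker::create("TLD") factory, while later 3.x releases expose cv::TrackerTLD::create() in the opencv_contrib tracking module):

    #include <opencv2/core.hpp>
    #include <opencv2/videoio.hpp>
    #include <opencv2/highgui.hpp>
    #include <opencv2/imgproc.hpp>
    #include <opencv2/tracking.hpp>   // opencv_contrib module

    int main()
    {
        cv::VideoCapture cap("drone_footage.mp4");   // placeholder input
        cv::Mat frame;
        cap >> frame;

        // Region of interest around the target (normally drawn by the user).
        cv::Rect2d roi(100, 80, 60, 60);

        cv::Ptr<cv::Tracker> tracker = cv::Tracker::create("TLD");
        tracker->init(frame, roi);

        while (cap.read(frame))
        {
            if (tracker->update(frame, roi))          // false means the target was lost
                cv::rectangle(frame, cv::Rect(roi), cv::Scalar(0, 255, 0), 2);
            cv::imshow("tracking", frame);
            if (cv::waitKey(1) == 27) break;          // Esc to quit
        }
        return 0;
    }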
The main drawback is that this method can be slow. You can test whether that's an issue for you. If so, you can downsample the video stream, upgrade your hardware, or switch to a faster tracking method, but this depends on your requirements and needs.
Good luck!
The simplest thing to try is frame differencing instead of background subtraction. Subtract the previous frame from the current frame, threshold the difference image to make it binary, and then use some morphology to clean up the noise. With this approach you typically only get the edges of the objects, but often that is enough for tracking.
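The function names mentioned next are from MATLAB's Computer Vision Toolbox; as a rough C++/OpenCV equivalent of this frame-differencing recipe (the threshold and kernel size are arbitrary placeholders):

    #include <opencv2/videoio.hpp>
    #include <opencv2/imgproc.hpp>
    #include <opencv2/highgui.hpp>

    int main()
    {
        cv::VideoCapture cap("drone_footage.mp4");   // placeholder input
        cv::Mat frame, gray, prev, diff, mask;
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));

        cap >> frame;
        cv::cvtColor(frame, prev, cv::COLOR_BGR2GRAY);

        while (cap.read(frame))
        {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            cv::absdiff(gray, prev, diff);                           // frame differencing
            cv::threshold(diff, mask, 25, 255, cv::THRESH_BINARY);   // binarize
            cv::morphologyEx(mask, mask, cv::MORPH_OPEN, kernel);    // remove speckle noise
            cv::morphologyEx(mask, mask, cv::MORPH_CLOSE, kernel);   // close small gaps

            cv::imshow("motion mask", mask);
            if (cv::waitKey(1) == 27) break;
            gray.copyTo(prev);
        }
        return 0;
    }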
You can also try to augment this approach using vision.PointTracker, which implements the KLT (Kanade-Lucas-Tomasi) point tracking algorithm.
Alternatively, you can try using dense optical flow. See opticalFlowLK, opticalFlowHS, and opticalFlowLKDoG.
My question does have a slight basis in GLSL, since that happens to be the shading language I know.
It's my opinion that shaders and the programmable graphics pipeline are a huge step up from the fixed-function pipeline. Shaders are excellent at applying effects and making 3D graphics look far more realistic. However, not every effect is meant to be applied to every scenario. For instance, I wouldn't want my flag-waving effect used across an entire scene. If that scene contains one flag, I want that flag to wave back and forth, and that's about it. I'd want a water effect applied only to water. You get the idea.
My question is what is the best way to implement this toggling of effects. The only way I can think of is to have a series of uniform variables and toggle/untoggle them before and after drawing something.
For instance,
(pseudocode)
toggle flag effect uniform
draw flag
untoggle flag effect uniform
Inside the shader code, it would check the value of these uniforms and act accordingly.
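On the application side, the pseudocode above would amount to something like this (the program handle, uniform name, and drawFlag() routine are hypothetical; the shader would read uFlagEffect and branch on it):

    // Assumes a linked GLSL program containing "uniform bool uFlagEffect;"
    glUseProgram(program);
    GLint flagLoc = glGetUniformLocation(program, "uFlagEffect");

    glUniform1i(flagLoc, GL_TRUE);    // toggle flag effect uniform
    drawFlag();                       // hypothetical draw routine
    glUniform1i(flagLoc, GL_FALSE);   // untoggle it for everything else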
EDIT: I understand one can have multiple shader programs and switch between them as needed, but would this actually be faster than the above method, or does it come with a serious performance overhead from moving all that data around on the GPU? It would seem that doing this multiple times per frame could be extremely costly.
I'd like to hear what people think the optimal draw calls are for Open GL ES (on the iphone).
Specifically I've read in many places that it is best to minimise the number of calls to glDrawArrays/glDrawElements - I think Apple say 10 should be the max in their recent WWDC presentation. As I understand it to do this you need to put all the vertices into one array if possible, so you only need to make the drawArrays call once.
But I am confused, because this surely means you can't use the translate, rotate, and scale functions, since they would apply across the whole geometry. Which is fine, except doesn't that mean you need to pre-calculate every vertex position yourself, rather than getting OpenGL to do it?
Also, doesn't it mean you can't use any of the fan/strip settings unless you just have a continuous shape?
These drawbacks make me think I'm not understanding something correctly, so I guess I'm looking for confirmation that I should:
Be trying to make an uber array of all triangles to draw.
Resign myself to the fact I'll have to work out all the vertex positions myself.
Forget about pushing and popping each thing to draw into its desired location
Is that what others do?
Thanks
Vast question; batching is always a matter of compromise.
The ideal structure for performance would be, as you mention, one single array containing all the triangles to draw.
Starting from there, we can start adding constraints:
One additional constraint is that having vertex indices in 16 bits saves bandwidth and memory, and is probably the fast path on your platform. So you could consider grouping triangles in chunks of 65536 vertices.
Then, if you want to switch the shader/material/glState used to draw geometry, you have no choice (*) but to emit one draw call per shader/material/glState. So you could consider grouping triangles by shaderID/materialID/glStateID.
Next, if you want to animate things, you have no choice (*) but to transmit your transform matrix to GL and then issue a draw call. So you could consider grouping triangles by 'transform group': for example, all static geometry together, and animated geometry that shares a common transform can be grouped too.
In these cases, you'd have to transform the vertices yourself (on the CPU) before merging the meshes together.
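A sketch of that pre-transform step, with a made-up vertex layout and a plain column-major 4x4 matrix:

    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Mat4 { float m[16]; };   // column-major, as GL expects

    // Transform one mesh's positions by its model matrix and append them
    // to the big shared vertex array used for the single draw call.
    void appendTransformed(const std::vector<Vec3>& meshVerts,
                           const Mat4& model,
                           std::vector<Vec3>& batchedVerts)
    {
        for (const Vec3& v : meshVerts)
        {
            Vec3 out;
            out.x = model.m[0] * v.x + model.m[4] * v.y + model.m[8]  * v.z + model.m[12];
            out.y = model.m[1] * v.x + model.m[5] * v.y + model.m[9]  * v.z + model.m[13];
            out.z = model.m[2] * v.x + model.m[6] * v.y + model.m[10] * v.z + model.m[14];
            batchedVerts.push_back(out);
        }
    }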
Regarding triangle strips, you can turn any mesh into strips, even if it has discontinuities in its topology, by introducing degenerate triangles. So this is a technique that always applies.
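Concretely, two strips can be chained by repeating the last index of the first strip and the first index of the second; the resulting zero-area triangles are rejected by the rasterizer. A sketch with 16-bit indices (depending on the strips' lengths, you may need one extra duplicated index to keep the winding order consistent):

    #include <vector>
    #include <cstdint>

    // Append 'next' to 'merged', inserting degenerate triangles so the
    // result can still be drawn as one GL_TRIANGLE_STRIP.
    void appendStrip(std::vector<uint16_t>& merged, const std::vector<uint16_t>& next)
    {
        if (!merged.empty() && !next.empty())
        {
            merged.push_back(merged.back());   // repeat last index of previous strip
            merged.push_back(next.front());    // repeat first index of next strip
        }
        merged.insert(merged.end(), next.begin(), next.end());
    }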
All in all, reducing draw calls is a game of compromises, some techniques might work well for a 3d model, while others may be more suited for other 3d models. IMHO, the key is to be creative and to carefully benchmark your application to see if your changes actually improve performance on your target platform.
HTH, cheers,
(*) actually there are techniques that let you reduce the number of draw calls even in these cases, such as:
texture atlases, which group different textures into a single one to avoid switching textures in GL, thus allowing you to limit draw calls;
(pseudo) hardware instancing, which allows shaders to fetch transforms from various sources so that mesh instances can be transformed in different ways;
...