I am making a script which can generate multiple objects in Pyglet. In this example (see link below) there are two pyramids in 3D space, but every triangle is recalculated in every frame. My aim is to make a swarm with a large number of pyramids flying around, but I can't seem to figure out how to implement vertex lists in a batch (assuming this is the fastest way to do it).
Do they need to be indexed, for example (batch.add_indexed(...))?
The standard pattern seems to be:
batch1 = pyglet.graphics.Batch()

then add vertices to batch1, and finally:

def on_draw():
    batch1.draw()
So how do I do the intermediate step, where pyramids are added to vertex lists? A final question: when would you suggest using multiple batches?
Thank you!
apfz
http://www.2shared.com/file/iXq7AOvg/pyramid_move.html
Just have a look at pyglet.sprite.Sprite._create_vertex_list for inspiration. There, the vertices for simple sprites (QUADS) are generated and added to a batch.
def _create_vertex_list(self):
    if self._subpixel:
        vertex_format = 'v2f/%s' % self._usage
    else:
        vertex_format = 'v2i/%s' % self._usage

    if self._batch is None:
        self._vertex_list = graphics.vertex_list(4,
            vertex_format, 'c4B', ('t3f', self._texture.tex_coords))
    else:
        self._vertex_list = self._batch.add(4, GL_QUADS, self._group,
            vertex_format, 'c4B', ('t3f', self._texture.tex_coords))

    self._update_position()
    self._update_color()
So the required function is Batch.add(...), which returns a vertex list that the batch will draw. Your vertices should only be recalculated when a pyramid changes its position, not at every draw call. Instead of v2f you need v3f for 3D coordinates, and of course you need GL_TRIANGLES instead of GL_QUADS. Here is an example of a torus rendered with pyglet.
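To make that concrete, here is a minimal sketch (untested, with made-up pyramid coordinates, and projection/camera setup omitted) of adding one pyramid's side faces to a batch once, then moving it by updating the vertex list in place instead of rebuilding it every frame:

import pyglet
from pyglet.gl import GL_TRIANGLES

window = pyglet.window.Window()
batch1 = pyglet.graphics.Batch()

# 4 side faces * 3 vertices each = 12 vertices, as a flat list of floats
apex = (0.0, 1.0, 0.0)
base = [(-1.0, 0.0, -1.0), (1.0, 0.0, -1.0), (1.0, 0.0, 1.0), (-1.0, 0.0, 1.0)]
verts = []
for a, b in zip(base, base[1:] + base[:1]):
    verts.extend(a + b + apex)          # one triangle per base edge

vlist = batch1.add(12, GL_TRIANGLES, None,
                   ('v3f/stream', verts),            # /stream: updated often
                   ('c3B/static', [255, 80, 0] * 12))

def move(dx, dy, dz):
    # shift the pyramid by rewriting the existing vertex list in place
    v = vlist.vertices
    for i in range(0, len(v), 3):
        v[i] += dx
        v[i + 1] += dy
        v[i + 2] += dz

@window.event
def on_draw():
    window.clear()
    batch1.draw()

With many pyramids you keep one vertex list per pyramid (all in the same batch) and only touch the lists of the pyramids that actually moved.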
I want to create a shader that can cover a surface with circles at many random positions.
The circles keep growing until the whole surface is covered with them.
Here is my first try with Amplify Shader Editor.
The problem is that I don't know how to make this shader create an array of "point makers" with random positions. I also want to control the circles from C#. For example:
point_maker = new point_maker[10];
point_maker[1].position = Vector2.one;
point_maker[1].scale = 1;
and so on...
Heads-up: that's probably not the way to do what you're looking for, as every pixel in your shader would need to loop over all your input points, while each of those pixels will be covered by at most one of them. It's a classic case where you should instead embrace the parallel nature of shaders. (The key word for me here is 'random', as in 'random-looking'.)
There are two distinct problems here: generating the circles, and masking them.
I would start by generating a grid out of your input space (most likely your UV coordinates, so I'll assume that from here) by taking the fractional part of the coordinates scaled by some value: UVs (usually) go from 0 to 1, so if you want 100 circles you'd multiply the coordinates by 10. You now have a grid of 100 UV cells, and in each cell you can do something similar to what you already have to generate a circle (tip: the dot product of a vector with itself gives the squared distance, which is much cheaper to compute).
You want some randomness, so you need to add an offset to the center of each circle. For that you need some sort of random function (there might be one in ASE, I can't remember, or you can make your own; there are plenty to be found online) that is unique per grid cell. To achieve this, feed the integer part of the scaled coordinate (what's left after removing the frac() part, i.e. the cell ID) into your hash/random function. You also need to limit the offset depending on the radius of the circle so it doesn't touch the sides of the cell. You can also overlay more than one layer of circles if you want more coverage (see the sketch after the next step).
The second step is to figure out whether you want to display those circles at all. For this you can make the drawing conditional on the distance from the center of the circle to an input coordinate you provide to the shader, against some threshold. (It doesn't have to be an 'if' condition per se; it could be clamping the value to the background color or something similar.)
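To illustrate the grid-plus-hash part (a Python/numpy sketch of the per-pixel math rather than ASE nodes; the sin-based hash is the usual shader trick, all names are made up, and radius here plays the role of the grow/mask threshold):

import numpy as np

def hash2(ix, iy):
    # cheap per-cell pseudo-random offsets in [0, 1), shader-style sin hash
    hx = np.sin(ix * 127.1 + iy * 311.7) * 43758.5453
    hy = np.sin(ix * 269.5 + iy * 183.3) * 43758.5453
    return hx - np.floor(hx), hy - np.floor(hy)

def circle_mask(u, v, cells=10, radius=0.3):
    # 1.0 inside a randomly offset circle in the pixel's grid cell, else 0.0
    su, sv = u * cells, v * cells
    iu, iv = np.floor(su), np.floor(sv)      # cell ID, drives the hash
    fu, fv = su - iu, sv - iv                # frac(): position within the cell
    ox, oy = hash2(iu, iv)
    # keep the center far enough from the cell borders for the given radius
    cx = radius + ox * (1.0 - 2.0 * radius)
    cy = radius + oy * (1.0 - 2.0 * radius)
    du, dv = fu - cx, fv - cy
    # dot(d, d) is the squared distance: no sqrt needed
    return np.where(du * du + dv * dv < radius * radius, 1.0, 0.0)

# evaluate over a 256x256 'render target'
v, u = np.mgrid[0.0:1.0:256j, 0.0:1.0:256j]
mask = circle_mask(u, v)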
I'm making a lot of assumptions about what you want to do here, and if you have stronger conditions on the point distribution you might be better off rendering quads to a render texture, for example, but that's a whole other topic :)
Given a mesh in Unity and C# (that itself was created in real time by merging simpler base meshes), how could we, during runtime*, turn it into a smooth, almost wrapped-in-cloth version of itself? Not quite a fully convex version, but more rounded: softening sharp edges, bridging deep gaps and so on. Ideally the surface would also look like what the "smoothing angle" normals setting produces on imported objects. Thanks!
Before & after sketch
*The mesh setup is made by people and its specifics are unknown beforehand. All its basic shape parts (before we merge them) are known, though. The base parts may also remain unmerged if that helps a solution, and it would be extra terrific if there were a runtime solution that could quickly apply the wrapper mesh even with base parts that change their transform over time, but a static one-time conversion would be great too.
(Some related keywords may be: marching cube algorithm & metaballs, skin above bones, meshfilter converting, smoothing shader, softening, vertices subdivision.)
There are many ways to get something similar so you can pick your preferred one:
Marching Cubes
This algorithm is easy to use, but the result always inherits its blocky style. If that's the look you want, use it. If you need something smoother and/or pixel-perfect, then look at the other options.
Ray Marching and Signed Distance Functions
This is quite an interesting technique that may give you a lot of control. You can represent your base parts with simple cube/cylinder/etc. distance equations and blend them together with simple math.
Here you can see some examples:
http://iquilezles.org/www/articles/distfunctions/distfunctions.htm
The best thing here is that it's very simple to set up: you don't even need to merge your base parts, you just push your data to the renderer. The downside is that it may get computationally heavy on the rendering side.
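To give a flavor of those equations (a minimal Python sketch of a sphere and box distance function plus the polynomial smooth-minimum blend from the article linked above; the function names are my own):

import numpy as np

def sd_sphere(p, r):
    # signed distance from point p to a sphere of radius r at the origin
    return np.linalg.norm(p) - r

def sd_box(p, b):
    # signed distance to an axis-aligned box with half-extents b
    q = np.abs(p) - b
    return np.linalg.norm(np.maximum(q, 0.0)) + min(np.max(q), 0.0)

def smooth_min(a, b, k=0.25):
    # polynomial smooth minimum: a union with a rounded seam instead of a crease
    h = np.clip(0.5 + 0.5 * (b - a) / k, 0.0, 1.0)
    return b * (1.0 - h) + a * h - k * h * (1.0 - h)

# distance from a sample point to the smoothly blended union of two parts
p = np.array([0.4, 0.2, 0.0])
d = smooth_min(sd_sphere(p, 0.5), sd_box(p, np.array([0.3, 0.3, 0.3])))

A raymarcher then walks each view ray by these distances until it hits the surface, so the blended shape never has to be converted into triangles at all.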
Old school mesh modifications
Here you have the most options, but it's also the most complicated. You start with your base parts, which don't carry much data by themselves, so you should probably join them into one mesh using a CSG union operation.
Having this mesh, you can compute neighbor data for your primitives:
for each vertex find triangles containing it.
for each vertex find edges containing it.
for each edge find triangles containing it.
etc.
With such data you may be able to do things like:
Find and cut some sharp vertex.
Find and cut some sharp edge.
Move the vertex to minimize angle between triangles/edges it creates.
and so on...
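For instance, finding a "sharp" edge can be as simple as comparing the normals of the two triangles that share it (a rough Python sketch, with hypothetical point arguments):

import numpy as np

def edge_sharpness(p0, p1, pa, pb):
    # angle (radians) between the normals of triangles (p0,p1,pa) and (p1,p0,pb)
    n1 = np.cross(p1 - p0, pa - p0)
    n2 = np.cross(p0 - p1, pb - p1)
    cos_a = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.arccos(np.clip(cos_a, -1.0, 1.0))  # 0 when coplanar, grows with the fold

Edges whose angle exceeds some threshold are then candidates for cutting or smoothing.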
There are really a lot of details that may or may not work for you; you just need to test some to see which gives the preferred results.
One simple thing I'd start with:
For each vertex find all vertices connected to it by any edge.
Compute average position of all those vertices.
Use some alpha parameter in the [0,1] range to blend between the initial vertex position and the averaged one.
Implement multiple iterations of this algorithm, with a parameter for the iteration count.
Experiment with alpha and the number of iterations.
With this approach you also have two distinct phases, computation and rendering, so doing it with animation may become too slow; but just rendering the mesh will be faster than in the ray marching approach.
Hope this helps.
EDIT:
Unfortunately I've never had such a need, so I don't have any sample code, but here is some pseudo-code that may help you:
You have your mesh:
Mesh mesh;
Array of vertex neighbors: for any vertex index N, triNeighbors[N] will store the indices of the other vertices connected to it by an edge:
List<HashSet<int>> triNeighbors = new List<HashSet<int>>();
int[] meshTriangles = mesh.triangles;

// iterate vertex indices per triangle and store neighbors
for( int i = 0; i < meshTriangles.Length; i += 3 ) {
    // three indices making a triangle
    int v0 = meshTriangles[i];
    int v1 = meshTriangles[i+1];
    int v2 = meshTriangles[i+2];

    // grow the list until it covers the largest vertex index seen so far
    int maxV = Mathf.Max( Mathf.Max( v0, v1 ), v2 );
    while( triNeighbors.Count <= maxV )
        triNeighbors.Add( new HashSet<int>() );

    // each vertex of the triangle neighbors the other two
    triNeighbors[v0].Add( v1 );
    triNeighbors[v0].Add( v2 );
    triNeighbors[v1].Add( v0 );
    triNeighbors[v1].Add( v2 );
    triNeighbors[v2].Add( v0 );
    triNeighbors[v2].Add( v1 );
}
Now, for any single vertex with index N, you can compute its new, averaged position like this:
int N = 0;                          // index of the vertex to smooth
Vector3[] verts = mesh.vertices;    // cache this: mesh.vertices copies the array on every access
Vector3 sum = Vector3.zero;
int counter = 0;

if( triNeighbors.Count > N && triNeighbors[N] != null )
{
    foreach( int V in triNeighbors[N] ) {
        sum += verts[ V ];
        counter++;
    }
    if( counter > 0 )
        sum /= counter;             // average position of the neighbors
}
There may be some bugs in this code since I've just made it up, but you should get the point.
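For completeness, here is the whole loop from the list above (neighbor averaging, alpha blend, multiple iterations) as a compact Python sketch on plain arrays rather than the Unity API:

import numpy as np

def laplacian_smooth(vertices, neighbors, alpha=0.5, iterations=3):
    # vertices: (N, 3) array; neighbors: list of neighbor index lists per vertex
    v = np.array(vertices, dtype=np.float64)
    for _ in range(iterations):
        avg = np.array([v[n].mean(axis=0) if n else v[i]
                        for i, n in enumerate(neighbors)])
        v = (1.0 - alpha) * v + alpha * avg   # blend toward the neighbor average
    return v

# a sharp spike: the middle vertex gets pulled toward its two neighbors
verts = [[0.0, 0.0, 0.0], [1.0, 5.0, 0.0], [2.0, 0.0, 0.0]]
nbrs = [[1], [0, 2], [1]]
print(laplacian_smooth(verts, nbrs))

Each pass pulls spiky vertices toward their neighbors, which is exactly the "wrapped in cloth" softening effect you described.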
I have to implement a basic tracking program in MATLAB that, given a set of frames from a video game, analyzes each one of them and then creates a bounding box around each object. I've used the function regionprops to obtain the coordinates of the bounding boxes for each object, and visualized them using the function rectangle, as follows:
for i = 1:size( frames,2 )
    CC{1,i} = findConnectedComponents( frames{1,i} );
    stats{1,i} = regionprops( 'struct',CC{1,i},'BoundingBox','Centroid' );
    imshow( frames{1,i} ), hold on
    for j = 1:size(stats{1,i},1)
        r = rectangle( 'Position',stats{1,i}(j).BoundingBox );
        r.FaceColor = [0 0.5 0.5 0.45];
    end
end
This works just fine, but I'd like to go one step further and be able to differentiate static objects from moving objects. I thought of using the centroid to check, for each object, whether it is in a different position in each frame (which would mean the object is moving), but each image has a different number of objects.
For example, if I try this on Space Invaders, when you kill an alien it disappears, so the number of objects is reduced. Also, each projectile is a separate object and there could be a different number of projectiles at different moments of the game.
So my question is: how could I classify the objects based on whether they move or not, and paint them in two different colors?
In the case of a consistent background, optical flow is ideal for you.
The basic idea is pretty simple: consider subtracting two consecutive frames, and use the result to get the flow vectors of the objects that moved between the frames.
You can look at the Lucas–Kanade method and the Horn–Schunck method.
Here is a link to a MATLAB implementation of the same.
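As a quick illustration of the frame-differencing idea (a Python/numpy/scipy sketch; in MATLAB the equivalent steps would be imabsdiff, a threshold, and bwlabel):

import numpy as np
from scipy import ndimage

def moving_object_mask(prev_frame, curr_frame, threshold=25):
    # label connected regions that changed between two consecutive grayscale frames
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    moved = diff > threshold              # pixels with a noticeable change
    labels, num = ndimage.label(moved)    # connected components of the motion mask
    return labels, num

# toy example: a 'sprite' that shifts two pixels to the right
prev_frame = np.zeros((64, 64), dtype=np.uint8)
curr_frame = np.zeros((64, 64), dtype=np.uint8)
prev_frame[10:20, 10:20] = 255
curr_frame[10:20, 12:22] = 255
labels, num = moving_object_mask(prev_frame, curr_frame)
print(num, "moving region(s)")

Any regionprops bounding box that overlaps a labeled motion region can then be drawn in the "moving" color, and the rest in the "static" one.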
I've designed an algorithm that matches corresponding lines seen from different positions of a robot.
Now I want to merge corresponding lines into one.
Does anyone know an algorithm for this purpose?
It seems like what you're trying to do is a mosaic, but restricted to 2D, or at least something similar that considers only extracted features. I'll go through the basic idea of how to do it (as I remember it, anyway):
1. You extract useful features in both images (your lines).
2. You do feature matching (your matching).
3. You extract relative positional information about your cameras from the matched features. This allows you to determine a transform between the two.
4. You transform one image into the other's perspective, or both into a different perspective.
Since you say you're working in a 2D plane, that's where you will want to transform to. If your scans can be considered to add no 3D distortion (always taken from the same height, facing perpendicular to the plane), then you only need to deal with 2D transformations.
To do what you call the merging of the lines, you need to perform steps 3 and 4 of the mosaic algorithm.
For step 3 you will need a robust approach to calculate your 2D transformation (rotation and translation) from one picture/scan to the other, probably something like least mean squares (or another approach for estimating parameters from multiple measurements).
For step 4 you take the calculated 2D transform, possibly composed with the transform calculated for the previous picture (not needed if you're matching from the composed image, a.k.a. the mosaic, to each new image instead of between sequential images), and apply it to the data it corresponds to. In your case that's probably just the 2D lines from the new scan (not a full image), which this global 2D transform takes to their position and orientation in the global map reference.
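As a sketch of the estimation step (Python/numpy; it assumes each pair of matched lines contributes matched 2D points such as endpoints or midpoints, and uses the SVD-based least-squares rigid fit often called the Kabsch/Procrustes method):

import numpy as np

def fit_rigid_2d(src, dst):
    # least-squares rotation R and translation t such that dst ~ src @ R.T + t
    # src, dst: (N, 2) arrays of matched 2D points from the two scans
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    # cross-covariance of the centered point sets, then SVD for the rotation
    U, _, Vt = np.linalg.svd((src - src_mean).T @ (dst - dst_mean))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a mirror solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_mean - src_mean @ R.T
    return R, t

# matched points from two scans differing by a 10 degree turn and a shift
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 2.0]])
dst = src @ R_true.T + np.array([0.5, -0.2])
R, t = fit_rigid_2d(src, dst)

Once the new scan's lines are mapped through (R, t), each pair of corresponding lines can be merged, for example by averaging the mapped endpoints.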
Hope this helps. Good Luck!
I have a simple task: I have 10,000 3D boxes, each with an x, y, z position, width, height, depth, rotation, and color. I want to throw them into a 3D space, visualize it, and let the user fly through it using the mouse. Is there an easy way to put this together?
One easy way of doing this using recent (v 3.2) OpenGL would be:
make an array with 8 vertices (the corners of a cube), give them coordinates on the unit cube, that is from (-1, -1, -1) to (1, 1, 1)
create a vertex buffer object
use glBufferData to get your array into the vertex buffer
bind the vertex buffer
create, set up, and bind any textures that you may want to use (skip this if you don't use textures)
create a vertex shader which applies a transform matrix that is read from "some source" (see below) according to the value of gl_InstanceID
compile the shader, link the program, bind the program
set up the instance transform data (see below) for all cube instances
depending on what method you use to communicate the transform data, you may draw everything in one batch, or use several batches
call glDrawElementsInstanced once per batch, with the instance count set to as many cubes as fit into that batch
if you use several batches, update the transform data in between
the vertex shader applies the transform in addition to the normal MVP stuff
To communicate the per-cube transform data, you have several alternatives, among them:
uniform buffer objects: you have a guaranteed minimum of 4096 values, i.e. 256 4x4 matrices, but you can query the actual limit
texture buffer objects: again you have a guaranteed minimum, here of 65536 values, i.e. 4096 4x4 matrices (but usually something much larger; my elderly card can do 128,000,000 values, so you should query the actual limit)
manually set uniforms for each batch; this does not need any "buffer" stuff, but is most probably somewhat slower
Alternatively: use pseudo-instancing, which will work even on hardware that does not support instancing directly. It is not as elegant and very slightly slower, but it does the job.
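A condensed sketch of the "manually set uniforms per batch" variant in Python with PyOpenGL (it assumes an OpenGL 3.3 core context is already current and that the cube's VAO and index buffer are bound as described in the steps above; all names are made up):

import numpy as np
from OpenGL.GL import (glGetUniformLocation, glUniformMatrix4fv,
                       glDrawElementsInstanced, GL_TRIANGLES,
                       GL_UNSIGNED_INT, GL_TRUE)

# vertex shader: each instance picks its own transform with gl_InstanceID
VERTEX_SRC = """
#version 330 core
layout(location = 0) in vec3 position;
uniform mat4 view_proj;
uniform mat4 models[256];     // one batch worth of per-cube transforms
void main() {
    gl_Position = view_proj * models[gl_InstanceID] * vec4(position, 1.0);
}
"""

BATCH = 256  # must match the uniform array size in the shader

def draw_boxes(program, model_matrices):
    # draw len(model_matrices) cubes, at most BATCH per instanced call
    loc = glGetUniformLocation(program, "models")
    for start in range(0, len(model_matrices), BATCH):
        chunk = np.asarray(model_matrices[start:start + BATCH],
                           dtype=np.float32)
        # GL_TRUE transposes on upload, since the numpy matrices are row-major
        glUniformMatrix4fv(loc, len(chunk), GL_TRUE, chunk)
        # 36 indices = 12 triangles = one cube, repeated len(chunk) times
        glDrawElementsInstanced(GL_TRIANGLES, 36, GL_UNSIGNED_INT, None,
                                len(chunk))

Swapping the uniform array for a UBO or a texture buffer only changes how models[] is fed; the draw call itself stays the same.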