I have already asked this question on scientific computing and wondered if this forum could offer alternatives.
I need to simulate the movement of a large number of agents undergoing soft-body deformation. The processes that govern the agents' movement are complex, so the simulation as a whole requires parallelisation.
The simulation needs to be visualised in 3D. As I will be running this simulation across many different nodes (MPI or even MPI+GPGPU), I do not want the visualisation to run in real time; rather, the simulation should output a video file after it has finished.
(I'm not looking for AAA video-game-quality graphics; the movement code will already take up enough CPU time, so I don't want to slow the application down further by adding heavyweight rendering code.)
I think there are three ways of producing such a video:
Write raw pixel information to BMPs and stitch them together - I have done this in 2D, but I don't know how this would work in 3D.
Use an offline analogue of OpenGL/Direct3D, rendering to a buffer instead of the screen.
Write some sort of telemetry data to a file, indicating each agent's position, deformation, etc. for each time interval, and after the simulation has finished use it as input to an OpenGL/Direct3D program (see the sketch at the end of this question).
This problem MUST have been solved before - there's plenty of visualisation in HPC
In summary: How does one easily render to a video in an offline manner (very basic graphics, not Toy Story - I just need 3D blobs) without impacting performance in a big way?
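For reference, the kind of per-timestep dump I have in mind for option 3 would be something like this (a NumPy sketch; the names and file layout are placeholders, not the actual code):

```python
import os
import numpy as np

def dump_frame(step, positions, deformations, out_dir="telemetry"):
    """positions: (N, 3) float array; deformations: (N,) float array
    (or whatever per-agent state the renderer needs)."""
    os.makedirs(out_dir, exist_ok=True)
    # Pack position and deformation into one (N, 4) float32 record per frame
    frame = np.hstack([positions, deformations[:, None]]).astype(np.float32)
    np.save(os.path.join(out_dir, "frame_%06d.npy" % step), frame)

# The offline viewer then np.load()s the frames in order and hands each one
# to the renderer, completely decoupled from the MPI run.
```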
My idea would be to store the different states/positions of the vertices as single frames of a vertex animation in a suitable file format. A suitable format would be COLLADA, an intermediate format for 3D scenes based on XML, so it can easily be parsed and written with general-purpose XML libraries. There are also special-purpose libraries for COLLADA, like COLLADA DOM and pycollada. The COLLADA file containing the vertex animation could then be rendered directly to a video file with the rendering software of your choice (3D Studio Max, Blender, Maya ...).
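As a rough illustration, here is a Python sketch using pycollada that writes one .dae file per simulation frame (a simplification of a true vertex animation; the write_frame helper and the frame naming are just placeholders):

```python
import numpy
from collada import Collada, geometry, material, scene, source

def write_frame(frame_index, vertices, triangles):
    """vertices: (N, 3) float array of deformed agent surface points for this
    timestep; triangles: (M, 3) int array of vertex indices."""
    mesh = Collada()

    # A plain grey material so the renderer has something to shade the blobs with
    effect = material.Effect("effect0", [], "phong", diffuse=(0.7, 0.7, 0.7))
    mat = material.Material("material0", "agentmaterial", effect)
    mesh.effects.append(effect)
    mesh.materials.append(mat)

    vert_src = source.FloatSource(
        "verts-array",
        numpy.asarray(vertices, dtype=numpy.float32).ravel(),
        ('X', 'Y', 'Z'))
    geom = geometry.Geometry(mesh, "geometry0", "agents", [vert_src])

    input_list = source.InputList()
    input_list.addInput(0, 'VERTEX', "#verts-array")

    triset = geom.createTriangleSet(
        numpy.asarray(triangles, dtype=numpy.int32).ravel(),
        input_list, "materialref")
    geom.primitives.append(triset)
    mesh.geometries.append(geom)

    matnode = scene.MaterialNode("materialref", mat, inputs=[])
    geomnode = scene.GeometryNode(geom, [matnode])
    node = scene.Node("node0", children=[geomnode])
    myscene = scene.Scene("myscene", [node])
    mesh.scenes.append(myscene)
    mesh.scene = myscene

    mesh.write("frame_%04d.dae" % frame_index)
```

Blender (for example) can then import the sequence and render it to video.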
I'm trying to figure out a good way to programmatically generate contours describing a 2D surface from a 3D STEP model. The application is generating NC code for a laser-cutting program from a 3D model.
Note: it's easy enough to do this in a wide variety of CAD systems. I am writing software that needs to do it automatically.
For example, this (a STEP model):
Needs to become this (a vector file, like an SVG or a DXF):
Perhaps the most obvious way of tackling the problem is to parse the STEP model and run some kind of algorithm to detect planes and select the largest as the cut surface, then generate the contour. Not a simple task!
I've also considered using a pre-existing SDK to render the model with an orthographic camera, capture a high-res image, and then operate on it to generate the appropriate contours. This method would work, but it would be CPU-heavy, and its accuracy would be limited to the pixel resolution of the rendered image - not ideal.
This is perhaps a long shot, but does anyone have thoughts about this? Cheers!
I would use a CAD library to load the STEP file (not a CAD API), look for the planar face with the highest number of edge curves in its face loop, and transform those curves onto the XY plane. Afterward, finding the 2D geometry's min/max for centering etc. would be pretty easy.
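As a sketch of that face-finding step, here is roughly what it could look like with pythonocc-core (my assumption for the "CAD library"; the calls are approximate, not a tested implementation, and the file name is made up):

```python
from OCC.Core.STEPControl import STEPControl_Reader
from OCC.Core.TopExp import TopExp_Explorer
from OCC.Core.TopAbs import TopAbs_FACE, TopAbs_EDGE
from OCC.Core.TopoDS import topods
from OCC.Core.BRepAdaptor import BRepAdaptor_Surface
from OCC.Core.GeomAbs import GeomAbs_Plane

reader = STEPControl_Reader()
reader.ReadFile("part.step")          # hypothetical file name
reader.TransferRoots()
shape = reader.OneShape()

best_face, best_edge_count = None, -1
face_exp = TopExp_Explorer(shape, TopAbs_FACE)
while face_exp.More():
    face = topods.Face(face_exp.Current())
    if BRepAdaptor_Surface(face).GetType() == GeomAbs_Plane:
        # count the edge curves bounding this planar face
        edge_exp = TopExp_Explorer(face, TopAbs_EDGE)
        n_edges = 0
        while edge_exp.More():
            n_edges += 1
            edge_exp.Next()
        if n_edges > best_edge_count:
            best_face, best_edge_count = face, n_edges
    face_exp.Next()

# best_face is the candidate cut surface; its edge curves can then be
# discretised and transformed onto the XY plane to produce the 2D contour.
```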
Depending on the programming language you are using, I would search Google for "CAD control" or "CAD component" combined with "STEP import".
I want to create a hologram that is captured with a Kinect and displayed on a HoloLens, but the process is very slow.
I use this tutorial to collect point cloud data, and this library to export my data as a 3D object in .obj format. The library that exports the .obj doesn't accept points, so I had to draw little triangles. I save the .obj, .png, and .mtl files on my local XAMPP server.
Next, I download the files with a Unity script and a WWW object. I also use the Runtime OBJ Importer from Unity's Asset Store to create a 3D object at runtime.
The last part is to deploy the Unity app to a HoloLens (I will do that next).
But before that:
The process is working but is very slow. I want the hologram to be fluid. A lot of time is wasted on:
taking depth and RGB data from the Kinect
exporting the data to .obj, .png, and .mtl files
downloading the files in Unity as frequently as possible
rendering the files
I am thinking of streaming, but does Unity need a complete .obj file to render? If I compress the .png to .jpg, will I gain some time?
Do you have any pointers to help me?
Currently the way your question is phrased is confusing: it's unclear whether you want to record point clouds that you later load and render in Unity, or whether you want to somehow stream the point cloud with the aligned RGB texture to Unity in close to real time.
Your initial attempts are using Processing.
In terms of recording data, I recommend using the SimpleOpenNI library which can record both depth and RGB data to an .oni file (see the RecorderPlay example).
Once you have a recording, you can loop through each frame and for each frame store the vertices to a file.
In terms of saving to .obj you'll need to convert the point cloud to a mesh (triangulate the vertices in 3D).
Another option would be to store the point cloud in a format like .ply.
You can find more info on writing a .ply file from Processing in this answer.
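For reference, a minimal ASCII .ply writer looks roughly like this (plain Python rather than Processing, and the points list is a stand-in for your depth-to-world output):

```python
def write_ply(path, points):
    """points: iterable of (x, y, z) tuples for one Kinect frame."""
    with open(path, "w") as f:
        f.write("ply\n")
        f.write("format ascii 1.0\n")
        f.write("element vertex %d\n" % len(points))
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write("%f %f %f\n" % (x, y, z))

write_ply("frame_0000.ply", [(0.0, 0.0, 1.2), (0.01, 0.0, 1.21)])
```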
In terms of streaming the data, this will be complicated:
if you stream all the vertices, that's a lot of data: up to 921,600 floats ((640 x 480 = 307,200) x 3)
if you stream both the depth (11-bit, 640x480) and RGB (8-bit, 640x480) images, that will be even more data.
One option might be to send only the vertices that have a valid depth, and to skip points overall (e.g. send every 3rd point). In terms of sending the data, you can try OSC.
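Roughly, the decimation idea looks like this (a Python sketch using the python-osc package as a stand-in; in Processing you'd do the same with oscP5, and the address, port, and chunk size here are made up):

```python
import numpy as np
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)

def send_frame(points):
    """points: (N, 3) array of world-space vertices from one Kinect frame."""
    points = np.asarray(points, dtype=np.float32)
    valid = points[points[:, 2] > 0]      # drop vertices with no depth
    decimated = valid[::3]                # keep every 3rd point
    flat = decimated.ravel()
    # UDP datagrams are small, so send the frame in chunks
    chunk = 300                           # floats per message (arbitrary)
    for i in range(0, len(flat), chunk):
        client.send_message("/points", flat[i:i + chunk].tolist())
```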
Once you get the points into Unity, you should be able to render them as a point cloud.
What would be ideal in terms of network performance is a codec (compressor/decompressor) for the depth data. I haven't used one thus far, but doing a quick search I see there are options like this one (very dated).
You'll need to do a bit of research to see what Kinect v1 depth-streaming libraries are already out there, then test and see what works best for your scenario.
Ideally, if the library is written in C#, there's a chance you'll be able to use it to decode the received data in Unity.
I have a 1000x1000x300 array, and I'd like to render it as voxels to explore the data set interactively in realtime.
Ideally I'd like something:
Supports large data sets at a reasonable framerate (probably GPU-accelerated)
Scriptable from Python (so I can change the data and re-render instantly, à la matplotlib)
Supports color voxels
A nice bonus would be if I could add buttons, support mouse clicks, etc from the display, to support my own interactive ways of working with the data.
I tried Mayavi, but it seemed to choke on the data set size. Maybe I did something wrong?
I'm currently using OsiriX Lite (radiology software), but I can't script it, it only supports monochrome data, it has a cap on array size due to being the Lite version, and most importantly it's very tedious to import and re-display data when I make changes to the data set.
I'm trying to implement a state-preserving particle system on the iPhone using OpenGL ES 2.0. By state-preserving, I mean that each particle is integrated forward in time, having a unique velocity and position vector that changes with time and cannot be calculated from the initial conditions at every rendering call.
Here's one possible way I can think of.
Setup particle initial conditions in VBO.
Integrate particles in vertex shader, write result to texture in fragment shader. (1st rendering call)
Copy data from texture to VBO.
Render particles from data in VBO. (2nd rendering call)
Repeat steps 2-4.
The only thing I don't know how to do efficiently is step 3. Do I have to go through the CPU? I wonder if it is possible to do this entirely on the GPU with OpenGL ES 2.0. Any hints are greatly appreciated!
I don't think this is possible without simply using glReadPixels -- ES2 doesn't have the same flexible buffer management that OpenGL has to allow you to copy buffer contents using the GPU (where, for example, you could copy data between the texture and the VBO, or simply use transform feedback, which is basically designed to do exactly what you want).
I think your only option if you need to use the GPU is to use glReadPixels to copy the framebuffer contents back out after rendering. You probably also want to check for and use EXT_color_buffer_float or a related extension if available, to make sure you have high-precision values (RGBA8 is probably not going to be sufficient for your particles). If you're intermixing this with normal rendering, you probably want to build in a bunch of buffering (wait a frame or two) so you don't stall the CPU waiting for the GPU (this would be especially bad on PowerVR since it buffers a whole frame before rendering).
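To make step 3 concrete, here is a rough desktop sketch of the readback-and-reupload round trip using PyOpenGL (not iPhone/ES 2.0 code; it assumes a current GL context, that the FBO holding the particle state is bound, and that particle_vbo and the texture size come from your setup):

```python
import numpy as np
from OpenGL.GL import (glReadPixels, glBindBuffer, glBufferSubData,
                       GL_RGBA, GL_FLOAT, GL_ARRAY_BUFFER)

def copy_particle_state_to_vbo(particle_vbo, tex_width, tex_height):
    # Pull the freshly rendered particle state (positions/velocities packed
    # into RGBA) back to the CPU; PyOpenGL returns a NumPy array here.
    pixels = glReadPixels(0, 0, tex_width, tex_height, GL_RGBA, GL_FLOAT)
    data = np.ascontiguousarray(pixels, dtype=np.float32).ravel()
    # ...and push it straight back into the VBO used by the second pass
    # (the VBO is assumed to have been allocated with at least data.nbytes).
    glBindBuffer(GL_ARRAY_BUFFER, particle_vbo)
    glBufferSubData(GL_ARRAY_BUFFER, 0, data.nbytes, data)
```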
ES 3.0 will have support for transform feedback, which doesn't help you now but hopefully gives you some hope for the future.
Also, if you are running on an ARM CPU, it seems like it'd be faster to use NEON to quickly update all your particles. It can be quite fast and will skip all the overhead you'd incur from the CPU+GPU method.
I am working on drawing large directed acyclic graphs in WebGL using the gwt-g3d library as per the technique shown here: http://www-graphics.stanford.edu/papers/h3/
At this point, I have a simple two-level graph rendering:
Performance is terrible -- it takes about 1.5-2 seconds to render this thing. I'm not an OpenGL expert, so here is the general approach I am taking. Maybe somebody can point out some optimizations that will get this rendering quicker.
I am astonished how long it takes to push the MODELVIEW matrix and buffers to the graphics card. This is where the lion's share of the time is wasted. Should I instead be doing MODELVIEW transformations in the vertex shader?
This leads me to believe that manipulating the MODELVIEW matrix and pushing it once for each node shouldn't be a bad practice, but the timings don't lie:
https://gamedev.stackexchange.com/questions/27042/translate-the-modelview-matrix-or-change-vertex-coordinates
Group nodes into larger chunks instead of rendering them separately. Do background caching of all geometry whose transformations will most likely not be modified, store it in one buffer, and render it in one call.
Another solution: store the nodes (box + line) in one buffer (you can store more than you need at the current time) and their transformations in a texture, then apply the transformations in the vertex shader based on the node index (as texture coordinates). It should be drastically faster.
To test for support, use this site. I have MAX_VERTEX_TEXTURE_IMAGE_UNITS = 4.
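As a language-agnostic illustration of the data layout for that texture approach (NumPy here rather than JavaScript, with made-up sizes):

```python
import numpy as np

num_nodes = 1024                       # however many boxes/lines you have
transforms = np.tile(np.eye(4, dtype=np.float32), (num_nodes, 1, 1))

# One texture row per node, four RGBA32F texels per row (one per matrix column);
# upload this num_nodes x 4 block with texImage2D on the WebGL side.
transform_texture = transforms.reshape(num_nodes, 4, 4)

# Each vertex carries the index of the node it belongs to, so the vertex shader
# can fetch the matrix at v = (node_index + 0.5) / num_nodes.
verts_per_node = 24                    # e.g. the box + line geometry of one node
node_index_attribute = np.repeat(np.arange(num_nodes, dtype=np.float32),
                                 verts_per_node)
```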
The best solution would be geometry instancing, but it currently isn't supported in WebGL.