I've written a GStreamer plugin based on GstGLFilter. I want to save the input texture in the "filter_texture" function, so that the saved texture can be combined with the next input texture. How can I save the input texture?
Related
I want to extract textures from a texture array so I can edit them (the array comes with an asset bought from the asset store, so I don't have the original textures).
The only way I can think of to do it is to use the TextureArray class to step through each texture and each mip level and send the data to a render texture.
Is there another way?
I have generated a CSV file from an OpenNI program that tracks skeleton joints.
The CSV file contains the frame number, confidence value, and xyz positions of each joint all separated by columns.
I'd like to plot the extracted points and display them in a video. I have done this before using MATLAB and generated movie files with it, but I was wondering what would be a good approach for creating a video file of these 3D joints in which a user can rotate the camera 360 degrees.
Should I continue to explore that through MATLAB? Is there a way I can do this through OpenNI? Any suggestions would be incredibly appreciated.
What's the source of the input data? e.g., CT scans, MRI, etc.
I have recorded a 3D video using a Fujifilm Finepix Real 3d w3 camera. The resulting video file is a single AVI, so the frames from both lenses must somehow be incorporated within the single file.
I now wish to read in the video into Matlab such that I have two image sequences, each corresponding to either the left lens or the right lens.
So far I have played back the AVI file using the Computer Vision Toolbox functions (vision.VideoFileReader, etc.); however, it ignores one of the lenses and plays back only a single lens's image sequence.
How do I access both image sequences within Matlab?
I'm an iPhone developer and I'm trying to get a 3D model that I created in Cinema 4D into an app I'm making. I have actually found a way to get the model in (by exporting it as a .dae or .obj and using a Python script), which works really well; however, I can't get the textures to come with it, and my script can only handle one texture.
Basically, I either need to create and export a UV map in Cinema 4D (but I have no idea how to do this), or I need to figure out a way to read multiple textures into my OpenGL ES app with a script or PowerVR (this is probably better).
Sorry for the noob questions, but I'm very new to the 3D world.
Cheers
I would recommend that you use Blender. Export your Cinema 4D model for Blender and use Blender to create UV maps.
You need to make seams and unwrap the model. After that, save a Targa template for your texture, paint your texture onto that template, and save it as PNG or JPG. Apply that texture image to your model in Blender. Now you can export a Wavefront OBJ file.
Use the OpenGLOBJLoader class to render your model on the iPhone.
And one more thing: you should invert the texture coordinates on the y axis (subtract them from 1) in order for your texture to render properly.
For example, if you have texture coordinates like this:
vt 0.800008 0.400000
vt 0.800008 0.150000
...
make sure that you have them inverted like this:
vt 0.800008 0.600000
vt 0.800008 0.850000
...
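If you load the OBJ yourself, the flip can also be done while parsing. Below is a minimal C sketch (assuming a simple hand-rolled parser with two-component vt lines, as in the example above, rather than any particular loader) that copies an OBJ to stdout with v replaced by 1 - v:

    #include <stdio.h>

    /* Minimal sketch: copy an OBJ file to stdout, rewriting the v (second)
     * texture coordinate of every "vt" line as 1 - v, as described above. */
    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s model.obj\n", argv[0]);
            return 1;
        }
        FILE *f = fopen(argv[1], "r");
        if (!f) {
            perror("fopen");
            return 1;
        }
        char line[256];
        float u, v;
        while (fgets(line, sizeof line, f)) {
            /* "vt" lines are rewritten; every other line is echoed unchanged. */
            if (sscanf(line, "vt %f %f", &u, &v) == 2)
                printf("vt %f %f\n", u, 1.0f - v);
            else
                fputs(line, stdout);
        }
        fclose(f);
        return 0;
    }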
I need a shader that starts with a given texture and then, on each frame of animation, operates on the shader's previous output plus other input.
How do I organize the framebuffers so that each frame has access to the output of the previous frame without having to move the buffer back and forth from the CPU?
OpenGL ES 2.0 has Framebuffer Objects (FBOs). With an FBO you can render directly into a texture, and you can then use that texture as input for your next iteration.
That's the only way of doing it. Use two FBOs and two textures, with each texture attached to its own FBO. Read from one texture and write into the other, then swap them, so that you read from the last written texture and write to the first. This is called "ping-pong" rendering.
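For reference, here is a minimal C sketch of that ping-pong setup in OpenGL ES 2.0. The texture size, the render_pass() function, and the shader it would bind are placeholders of my own, not part of any particular framework:

    #include <GLES2/gl2.h>

    /* Minimal ping-pong sketch (OpenGL ES 2.0): two textures, each attached to
     * its own FBO. Every frame we sample one texture and render into the other,
     * then swap. WIDTH, HEIGHT and render_pass() are placeholders. */
    #define WIDTH  512
    #define HEIGHT 512

    static GLuint fbo[2], tex[2];
    static int current = 0;   /* index of the texture holding the last result */

    static void init_ping_pong(void)
    {
        glGenFramebuffers(2, fbo);
        glGenTextures(2, tex);
        for (int i = 0; i < 2; i++) {
            glBindTexture(GL_TEXTURE_2D, tex[i]);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, WIDTH, HEIGHT, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, NULL);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glBindFramebuffer(GL_FRAMEBUFFER, fbo[i]);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                   GL_TEXTURE_2D, tex[i], 0);
        }
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }

    /* Placeholder: bind your shader program here and draw a full-screen quad
     * that samples src_texture as its input. */
    static void render_pass(GLuint src_texture)
    {
        (void)src_texture;
    }

    static void step(void)
    {
        int src = current;
        int dst = 1 - current;
        glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);  /* write into the other texture */
        glViewport(0, 0, WIDTH, HEIGHT);
        render_pass(tex[src]);                        /* read last frame's result */
        glBindFramebuffer(GL_FRAMEBUFFER, 0);         /* back to the default framebuffer */
        current = dst;                                /* swap roles for the next frame */
    }

Each call to step() renders into the texture that was not written the previous frame and then swaps the indices, so the intermediate results never leave the GPU. For the very first frame you would upload your starting texture into tex[current] (for example with glTexSubImage2D) before calling step().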