How to render an image in MATLAB off screen?

I have a set of planar convex polygons in 3D that I can render in MATLAB using patch. As part of my code, though, I need to efficiently render the patches to the image plane of a virtual camera in the 3D environment.
The algorithm I'm basing my code on used OpenGL ray tracing to render the image, but I don't know how I could do this in or from MATLAB.
How can I do this rendering quickly from within a MATLAB script?
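As background for the virtual-camera part: the geometric core of such a renderer is plain pinhole projection, sketched below in Python/NumPy (the focal length, principal point, and eye position are made-up illustrative values, not anything from the question). In MATLAB the equivalent matrix operations apply directly to the patch vertices; for actual off-screen rasterization, a commonly used trick is an invisible figure (`figure('Visible','off')`) written out with `print`.

```python
import numpy as np

def look_at(eye, target, up):
    """Build a world-to-camera rotation R and translation t (right-handed)."""
    f = target - eye; f = f / np.linalg.norm(f)        # forward
    s = np.cross(f, up); s = s / np.linalg.norm(s)     # right
    u = np.cross(s, f)                                 # true up
    R = np.stack([s, u, -f])                           # rows: camera axes
    t = -R @ eye
    return R, t

def project_points(pts, R, t, focal=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of Nx3 world points to pixel coordinates."""
    cam = pts @ R.T + t          # world frame -> camera frame
    z = -cam[:, 2]               # camera looks down -Z
    u = focal * cam[:, 0] / z + cx
    v = focal * cam[:, 1] / z + cy
    return np.stack([u, v], axis=1)

# A unit-square patch in the z=0 plane, seen by a camera at (0, 0, 5).
poly = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
R, t = look_at(np.array([0., 0., 5.]), np.zeros(3), np.array([0., 1., 0.]))
px = project_points(poly, R, t)
```

Once the polygon is in pixel coordinates, filling it (or letting an invisible figure rasterize it) gives the camera image.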

Related

MeshLab to displace a 3D mesh using a heightmap

I'd really love to know if and how MeshLab could displace a 3D mesh from a grayscale heightmap image.
I suppose I could settle for baking texture to vertex colours and then displacing with that method?
I feel as though this is the only thing that MeshLab cannot do...
If not MeshLab, then perhaps some other command-line utility that could displace a custom mesh from a custom image?
Anything I could use to automate that process?
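Whatever tool ends up doing it, the underlying operation is simple: sample the grayscale image at each vertex and push the vertex along its normal by that amount. Here is a minimal sketch in Python/NumPy for the easy case of a regular grid whose normals are all +Z (the function name and the one-sample-per-vertex layout are illustrative assumptions, not any tool's API):

```python
import numpy as np

def displace_grid(heightmap, scale=1.0):
    """Displace a regular grid mesh along +Z by a grayscale heightmap.

    heightmap : (H, W) array of values in [0, 1], one sample per vertex.
    Returns an (H*W, 3) vertex array. For an arbitrary mesh you would
    instead sample the image at each vertex's UV coordinate and push
    the vertex along its own normal.
    """
    h, w = heightmap.shape
    ys, xs = np.mgrid[0:h, 0:w]
    verts = np.stack([xs.ravel().astype(float),
                      ys.ravel().astype(float),
                      scale * heightmap.ravel()], axis=1)
    return verts

hm = np.array([[0.0, 0.5],
               [1.0, 0.25]])
v = displace_grid(hm, scale=2.0)
```

Baking the texture to per-vertex values first, as suggested above, amounts to the same thing: the bake does the image sampling, and the displacement step only needs the per-vertex scalar.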

How to export Unity particles as static 3D Mesh or 3D Image

I have some particle systems that draw their trails as emissive circuit trees:
Actually, I made a script that "pauses" their simulation; however, I would like to turn them into a mesh or a 3D image (maybe an inverted cubemap shader?).
Does anyone know how I could achieve that?
There is no built-in or Asset Store support for exporting a "frame" of particles. The Shuriken particle shader is proprietary.
There are alternative particle rendering libraries that may let you do what you want.
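If you roll it yourself, the ingredients are there: Unity's `ParticleSystem.GetParticles` can read the live particle state, and "freezing" that snapshot into a static mesh mostly means emitting one quad (billboard) per particle. The sketch below shows that mesh-building step in Python/NumPy as a language-neutral illustration (the function name and data layout are assumptions, not Unity API); a real exporter would additionally orient each quad toward the camera and copy particle color into vertex colors.

```python
import numpy as np

def bake_billboards(positions, sizes):
    """Turn a particle snapshot into a static quad mesh.

    One XY-plane quad per particle: 4 vertices and 2 triangles each.
    positions : (N, 3) particle centers; sizes : N particle diameters.
    """
    verts, tris = [], []
    corners = np.array([[-1, -1, 0], [1, -1, 0], [1, 1, 0], [-1, 1, 0]], float)
    for i, (p, s) in enumerate(zip(positions, sizes)):
        base = 4 * i
        verts.extend(p + 0.5 * s * corners)     # scaled quad around the particle
        tris += [[base, base + 1, base + 2], [base, base + 2, base + 3]]
    return np.array(verts), np.array(tris)

pos = np.array([[0., 0., 0.], [5., 0., 0.]])
v, t = bake_billboards(pos, sizes=[1.0, 2.0])
```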

VR distortion correction methods

I’ve read this article http://www.gamasutra.com/blogs/BrianKehrer/20160125/264161/VR_Distortion_Correction_using_Vertex_Displacement.php
about distortion correction with vertex displacement in VR. It also says a few words about other ways of doing distortion correction. I use Unity for my experiments (and am trying to modify the Fibrum SDK, but that doesn't matter for my question, since I only want to understand how these methods work in general).
As I mentioned, there are three ways of doing this correction:
1. Using a pixel-based shader.
2. Projecting the render target onto a warped mesh and rendering the final output to the screen.
3. The vertex-displacement method.
I understand how the pixel-based shader works. However, I don't understand the others.
The first question is about projecting the render target onto a warped mesh. As I understand it, I should first render the image from the game cameras to two tessellated quads (one for each eye), then apply a correction shader to these quads, and then draw the quads in front of the main camera. But I'm afraid I'm wrong.
The second one is about vertex displacement. Should I simply apply a shader (one that translates the vertex coordinates of every object from world space into inverse-lens-distorted screen space, i.e. lens space) to the camera?
p.s. Sorry for my terrible English, I just want to understand how it works.
For the third method (vertex displacement), yes, that's exactly what you do. However, you must be careful because this is a non-linear transformation, and it won't be properly interpolated between vertices. You need your meshes to be reasonably tessellated for this technique to work properly. Otherwise, long edges may be displayed as distorted curves, and you can potentially have z-fighting issues too. Here you can see a good description of the technique.
For the warped distortion mesh, this is how it goes. You render the scene, without distortion, to a render texture. Usually, this texture is bigger than your real screen resolution to compensate for the effects of distortion on the apparent resolution. Then you create a tessellated quad, distort its vertices, and render it to the screen using that texture. Since the vertices are distorted, the image is distorted with them.
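Both answers revolve around the same radial lens-distortion function; the warped-mesh approach bakes it into the quad's vertices once, while the vertex-displacement approach applies its inverse per vertex in the shader. A minimal sketch of the warp, with illustrative (not real-headset) coefficients:

```python
import numpy as np

def distort(uv, k1=0.22, k2=0.24):
    """Radial lens-distortion model: r' = r * (1 + k1*r^2 + k2*r^4).

    uv : (N, 2) vertex positions centered on the lens axis.
    k1, k2 are illustrative values; real ones come from the HMD specs.
    """
    r2 = np.sum(uv**2, axis=1, keepdims=True)
    return uv * (1.0 + k1 * r2 + k2 * r2**2)

# A coarse 5x5 tessellated quad: warp its vertices once, offline.
g = np.linspace(-1.0, 1.0, 5)
quad = np.array([[x, y] for y in g for x in g])
warped = distort(quad)
```

Note how the center vertex is left in place while the corners move outward; rendering the undistorted texture onto this warped grid produces the pre-distorted image the lens then straightens out.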

Can I use Matlab simulink 3D Animation as a 3D CG viewer?

Simulink 3D Animation is a toolbox for Simulink. I read its documentation and understood that you can load popular 3D CG data into it and view it, at least statically, with some programming in MATLAB.
Assume I have loaded some 3D object into Simulink 3D Animation successfully. Can I then rotate the 3D object or do other standard operations on it without programming in Simulink 3D Animation or MATLAB? For example, I would expect it to have rotate buttons that let me rotate the 3D object.
As a second, minor question: can you use Simulink 3D Animation when you have only MATLAB but not Simulink?
Thank you in advance.
Yes, despite the name, Simulink 3D Animation works with MATLAB only, without Simulink (see System requirements).
For the rest, I would go with @thewaywewalk's suggestion and try it out and/or watch some videos or webinars.
This is an example of using the Simulink 3D Animation toolbox for object detection and tracking with an unmanned aerial vehicle.
PAPER: https://ieeexplore.ieee.org/document/9373417
CODE: https://github.com/gsilano/MAT-Fly

Quartz 2D / OpenGL / Cocos2D image distortion on iPhone by moving vertices for a 2.5D iPhone game

We are trying to achieve the following in an iphone game:
Using 2D PNG files, set up a scene that seems 3D. As the user moves the device, the individual PNG files would warp/distort accordingly to give the effect of depth.
Example of a scene: an empty room with 5 walls and a chair in the middle = 6 PNG files, layered.
We have successfully accomplished this using native functions like skew and scale. By applying transformations to the various walls and the chair as the device is tilted or moved, the walls skew/scale/translate. However, the problem is that since we are using 6 PNG files, the edges don't meet as we move the device. We need a new solution using a real engine.
Question:
We are thinking that, instead of applying skew/scale transformations, if we were given the freedom to move the vertices of the rectangular images, we could precisely distort the images and keep all the edges 100% aligned.
What is the best framework to do this in the LEAST amount of time? Are we going about this the correct way?
You should be able to achieve this effect (at least in regards to the perspective being applied to the walls) using Core Animation layers and appropriate 3-D transforms.
A good example of constructing a scene like this can be found in the example John Blackburn provides here. He shows how to set up layers to represent the walls in a maze by applying the appropriate rotation and translation to them, then gives the scene perspective by using the trick of altering the m34 component of the CATransform3D for the scene.
I'm not sure how well your flat chair would look using something like this, but certainly you can get your walls to have a nice perspective to them. Using layers and Core Animation would let you pull off what you want using far less code than implementing this using OpenGL ES.
Altering the camera angle is as simple as rotating the scene in response to shifts in the orientation of the device.
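The m34 trick mentioned above is compact enough to show numerically. Core Animation multiplies row vectors by a `CATransform3D`, and setting m34 to -1/d (d is the eye distance, often around 500 points) divides each point's coordinates by a w that grows as the point recedes. A sketch of just that math in Python/NumPy (the helper names are illustrative, not Core Animation API):

```python
import numpy as np

def perspective_layer_transform(d=500.0):
    """CATransform3D-style perspective: identity with m34 = -1/d.

    With row-vector convention (p' = p @ m), w becomes 1 - z/d, so
    points with negative z (away from the viewer) shrink and points
    with positive z (toward the viewer) grow after the w divide.
    """
    m = np.eye(4)
    m[2, 3] = -1.0 / d
    return m

def apply(m, p):
    x, y, z = p
    v = np.array([x, y, z, 1.0]) @ m
    return v[:3] / v[3]          # homogeneous divide

m = perspective_layer_transform(d=500.0)
near = apply(m, (100.0, 0.0, 250.0))   # z > 0: toward the viewer
far  = apply(m, (100.0, 0.0, -250.0))  # z < 0: away from the viewer
```

Rotating a wall layer about the y-axis first and then pushing its points through this transform is exactly what gives the receding edge of the wall its foreshortening.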
If you're going to the effort of warping textures as they would be warped in a 3D scene, then why not let the graphics hardware do the hard work for you by mapping the textures to 3D polygons, then changing your projection or moving polygons around?
I doubt you could do it faster by restricting yourself to 2D transformations; the hardware is geared up to do 3x3 (well, 4x4 homogeneous) matrix multiplication.