Draw spheroid in ParaView?

Does anyone know how to draw a spheroid in ParaView?
What I have tried:
making something using the "Sources" menu:
there is a Sphere source, but I couldn't find a way to change it into a spheroid
there is a Programmable Source, but none of the available VTK data types can produce a spheroid, and the one that presumably can, vtkParametricEllipsoid, is not available
reading the manual
There is also a Python console in ParaView, so maybe someone knows more about that.

In case anyone cares...
It was much easier than I assumed.
At the top right of the Properties panel there is a small "gear" button that toggles advanced object properties. After creating the sphere, show these advanced properties and use the Scale input boxes to define the X, Y, and Z radii.
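The same result can be scripted in the Python console mentioned in the question. A minimal sketch using the paraview.simple API (the radii and resolutions below are illustrative): create a Sphere source and scale it with a Transform filter, which scales the actual geometry rather than just the display:

from paraview.simple import *

# Unit sphere with a reasonably fine tessellation
sphere = Sphere(Radius=1.0, ThetaResolution=64, PhiResolution=64)

# Scale it into a spheroid with semi-axes 2, 1, 1 along X, Y, Z
spheroid = Transform(Input=sphere)
spheroid.Transform.Scale = [2.0, 1.0, 1.0]

Show(spheroid)
Render()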

Related

How to create a triangular support in CadQuery?

I need to create a right angle with a triangular support, as seen in the example below (which I created in Blender). I know how to create the right angle, but I don't really know how to create the triangular support. The only way I could think of would be to create a 2D polygon forming the outer face of the object where the support is, extrude it to the thickness of the support, and then extrude the rest to complete the object. This, however, seems clumsy and against the idea of CQ, where the code should be (kind of) similar to how a human would describe such an object.
Is it possible to create the right angle first and then add the support? How?
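One way this could look, sketched under assumptions (all dimensions below are made up): build the angle first as an L-profile, then sketch the triangular gusset against the inner corner and union it on.

import cadquery as cq

# All dimensions are illustrative assumptions
L = 40.0   # leg length
T = 5.0    # plate thickness
W = 20.0   # bracket depth
G = 4.0    # support (gusset) thickness

# The right angle first: an L-shaped profile on the XZ plane, extruded to depth W
angle = (
    cq.Workplane("XZ")
    .polyline([(0, 0), (L, 0), (L, T), (T, T), (T, L), (0, L)])
    .close()
    .extrude(W)
)

# Then the support: a triangle against the inner corner, extruded to
# thickness G and centered across the bracket depth
support = (
    cq.Workplane("XZ", origin=(0, -(W - G) / 2, 0))
    .polyline([(T, T), (T + 0.6 * L, T), (T, T + 0.6 * L)])
    .close()
    .extrude(G)
)

result = angle.union(support)

The steps read roughly the way a person would describe the part: first the angle, then the support glued into its corner.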

How can you freely transform objects with the HoloLens?

We want to be able to freely transform objects with the HoloLens. We are currently using the BoundingBox, which scales all three axes of the object uniformly. Our goal is to stretch the object and scale each axis on its own.
Is there an alternative to the BoundingBox, or did we miss some kind of setting that allows just that?
Example video of how the solution should look: https://www.youtube.com/watch?v=DJGGofLSdB8
You can reuse the BoundingBox.cs script and modify some of its code to recalculate the scale value and implement free stretching.
The code from line 1381 to line 1401 computes the scaling transform for the bounding box based on the position of the grab pointer, and the variable newScala at line 1387 in this script is the parameter used to create the new transform with the scale of each axis. In summary, this approach reuses most of the existing code and needs only minor changes to implement your idea.
You can now set a non-uniform scale in the new BoundsControl script, which supersedes the old BoundingBox. This option allows you to freely transform objects on any axis.
To do this, change the "Scale Behavior" property to "Non Uniform Scale"; it is under "Scale Handles Configuration" of the BoundsControl script.
https://microsoft.github.io/MixedRealityToolkit-Unity/Documentation/README_BoundsControl.html#scale-handles-configuration

Geodesic on Matlab PLY Surface Mesh

I have a CT scan of a heart, and I am designing a device that rests on top of it, so getting the right lengths for certain features is important. The CT scan was segmented in MeshLab, and my advisor gave me code that uses PLY_IO to read the .ply file exported from MeshLab. From this, I have a map of the surface: surf(Map.X, Map.Y, Map.Z) displays the 3D model. What I would ideally want is to select points graphically in the figure window and have Matlab either tell me what the points are or let me draw a geodesic line on the surface and determine its length. Question: does anyone have an idea how I could do this in a simple way?
Ultimately, just drawing on the figure might be OK too if I can get it into the right orientation. Ideally, though, I would select the start and end points, and Matlab would then draw a geodesic on the surface whose length I can later measure. I'm willing to do some programming for this, but hopefully there's already something out there that you might know about.
One way to interactively extract points on a surface is to use datacursormode. Here's a simple example of how to get two points:
surf(peaks);                        % example surface; use your Map data here
dcm_obj = datacursormode(gcf);      % data cursor manager for the current figure
set(dcm_obj, 'DisplayStyle', 'datatip', ...
    'SnapToDataVertex', 'off', 'Enable', 'on')
disp('Select first point then press any key')
pause
c_info{1} = getCursorInfo(dcm_obj); % struct whose Position field holds the point
disp('Select second point then press any key')
pause
c_info{2} = getCursorInfo(dcm_obj);
Note that if you (or the user) change modes (e.g. by clicking the rotate button) in order to select the point, you will have to switch back to data cursor mode before you can move the data cursor again.
You should now have c_info{1}.Position and c_info{2}.Position, which are two points on the surface. Calculating the geodesic is another matter - have a look on the File Exchange and see if there's already something around that will do the job for the type of data you have.
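If a Python detour is acceptable, a rough first approximation of the geodesic is the shortest path along mesh edges, which slightly overestimates the true surface distance. A sketch assuming you can export the vertex array V (n-by-3) and face array F (m-by-3, zero-based indices) from the PLY, and that the two selected positions have been mapped to their nearest vertex indices:

import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def edge_path_length(V, F, start, end):
    # Unique undirected edges of the triangle mesh
    e = np.vstack([F[:, [0, 1]], F[:, [1, 2]], F[:, [2, 0]]])
    e = np.unique(np.sort(e, axis=1), axis=0)
    # Weight each edge by its Euclidean length
    w = np.linalg.norm(V[e[:, 0]] - V[e[:, 1]], axis=1)
    g = coo_matrix((w, (e[:, 0], e[:, 1])), shape=(len(V), len(V)))
    # Shortest path along edges from the start vertex to every vertex
    dist = dijkstra(g, directed=False, indices=start)
    return dist[end]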

How to flip a VRML file (.wrl), or flip the VR model in Matlab

I have a fairly complex VRML model of a prosthetic right hand in a .wrl file (3 megabytes), which I am manipulating (animating according to commands) in Matlab. I'd like to make a mirror image (horizontal flip) of the file so that it becomes a left hand. I don't mind whether I use a free program to process the file (which I imagine should just involve mirroring all the horizontal coordinates) or a Matlab command that can flip a VR model, but I haven't been able to find a solution. There's nothing else in the "world", so everything within the file can be flipped.
There are named Transforms in the file, and I need them to keep their names because those joints get animated; it's not a problem if I have to change the sign of the rotations to get things moving in the correct mirrored direction.
I'm just looking for a simple and free solution.
Thanks!
I wouldn't suggest doing this manually. The easiest solution would be to export your model to .max or any other format that you can open with a 3D modeling program (3ds Max, Deep Exploration, Maya, etc.). Once you have a file that one of these programs can open, the job becomes much easier: usually you can do it through the interface by dragging or clicking buttons rather than manually going through the coordinate values :)
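If you do want to script the naive coordinate flip mentioned in the question, here is a rough Python sketch; the file names are hypothetical, and it assumes a plain VRML97 text file whose point [...] lists hold 3D vertex coordinates. Note that mirroring also reverses face winding and normal directions, which this sketch does not fix, so the mirrored model may render inside-out until coordIndex order and Normal vectors are corrected too. Named Transforms are left untouched.

import re

NUM = r"(-?[\d.eE+-]+)"
TRIPLE = re.compile(NUM + r"\s+" + NUM + r"\s+" + NUM)

def flip_triple(m):
    # Negate the X component of an "x y z" vertex triple
    x, y, z = m.groups()
    return f"{-float(x):g} {y} {z}"

def flip_point_block(m):
    return "point [" + TRIPLE.sub(flip_triple, m.group(1)) + "]"

with open("right_hand.wrl") as f:   # hypothetical input file
    text = f.read()

# Mirror every point [...] coordinate list in the file
text = re.sub(r"point\s*\[(.*?)\]", flip_point_block, text, flags=re.S)

with open("left_hand.wrl", "w") as f:
    f.write(text)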

Not able to calibrate camera view to 3D Model

I am developing an app that uses LK for tracking and POSIT for pose estimation. I can obtain the rotation matrix and projection matrix and am able to track perfectly, but I am not able to translate the 3D object properly: the object does not fit into the place where it should.
Can someone help me with this?
Check these links; they may give you some ideas.
http://computer-vision-talks.com/2011/11/pose-estimation-problem/
http://www.morethantechnical.com/2010/11/10/20-lines-ar-in-opencv-wcode/
You must also check whether the intrinsic camera parameters are correct. Even a small error in estimating the field of view can cause trouble when trying to reconstruct 3D space, and from your description it sounds like the problem is bad fov (field of view) angles.
You can try to measure them, or feed half or double the value to your algorithm.
There are two conventions for fov: the half-angle (from the image center to the top or left edge) and the full angle (from bottom to top, respectively from left to right). Maybe you just mixed them up, using the full angle instead of the half, or vice versa.
Maybe you can show us how you build a transformation matrix from R and T components?
Remember that the cv::solvePnP function finds the object pose in a 3D space where the camera sits at (0;0;0), i.e. the object-to-camera transform. For almost all rendering cases you need to invert it to get the camera pose: R' = Rᵀ, T' = -RᵀT.
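A minimal sketch of that composition and inversion (cv2.solvePnP and cv2.Rodrigues are the real OpenCV calls; the point data and intrinsics K below are illustrative placeholders):

import cv2
import numpy as np

# Four coplanar model points and their measured image projections
object_points = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]],
                         dtype=np.float64)
image_points = np.array([[320, 240], [420, 250], [430, 350], [315, 340]],
                        dtype=np.float64)

# Intrinsics: fx, fy encode the field of view -- the usual culprit when
# the model refuses to sit where it should
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)

R, _ = cv2.Rodrigues(rvec)   # 3x3 rotation matrix from the Rodrigues vector

M = np.eye(4)                # object -> camera transform
M[:3, :3] = R
M[:3, 3] = tvec.ravel()

M_inv = np.eye(4)            # camera -> object: R' = R^T, T' = -R^T T
M_inv[:3, :3] = R.T
M_inv[:3, 3] = -(R.T @ tvec).ravel()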