Can you please give me some ideas about how to design a data structure in C# (3.0) that represents 3D data, something similar to a cube? For example, stock data to be viewed by time and location.
A simple working example, or even a link, would do.
I doubt this is what you're looking for, but since a CUBE has three identical dimensions it can be represented with a single integer.
int CUBE = 4; // A 4x4x4 cube
Stock data has more than three dimensions (if you must call them that) and each is unique.
Is this homework?
How about:
struct StockTickData
{
    public string Symbol;
    public decimal Price;
    public DateTime When;
    public string Where;
}
I'm not sure you really need "3D" here.
Erm, well, taking your question to mean one thing, I would suggest something like:
class Cube
{
    private double size;

    public void SetSize(double value)
    {
        if (value < 0)
        {
            value = -value; // makes sure we have a positive size
        }
        this.size = value;
    }

    public double GetSize()
    {
        return this.size;
    }

    public double GetVolume()
    {
        return this.size * this.size * this.size;
    }
}
But you may also mean a 3D array, which is an array of arrays of arrays.
Off the top of my head, you might have the innermost array hold three elements, representing the x, y, z values of a vertex. You would then have an array of these vertex arrays, let's say three again, which would be a triangle. Then you have an array of these triangles to make an object.
Though here is a situation where object-oriented programming will make it simpler to develop. Make a vertex class with three integers and functions to control the single vertex. Then make a triangle class which has three 'vertex' properties and functions to control the triangle, such as rotating around one vertex. Then another class for an object that can have an array of triangles.
Let me know if you want me to expand on or clarify any of this.
Your cube needs the following properties:
1) A location coordinate, most likely a vector of 3 floats describing the XYZ position.
2) The dimensions of your cube, again a vector of 3 floats describing its length, width and depth.
3) The orientation of your cube, again a vector of 3 floats describing its yaw, pitch and roll angles.
Basically, a 3x3 matrix is enough to represent a cube:
[X Y Z]
[L W D]
[Y P R]
These 3 vectors are the minimum, and sufficient, to describe a cube in 3D space and perform various operations on it. Operations like rotation, stretching and moving are performed using matrices. The DirectX/Direct3D documentation has lots of info on this kind of stuff, if this is what you are looking for. Any basic 3D gamedev book will do as well.
I am trying to load an STL file into MATLAB and manipulate it, but I can't find the best way to do it.
What I am trying to do is import an STL file of a hand tool and rotate the 3D image by giving it roll, pitch and yaw angles. The whole system will involve a live readout from an IMU which calculates these angles (I am going to use a 9-axis IMU, the 9250, and hope to incorporate spatial movement into this, but that's progress for another day). These angles will feed into a function which alters the orientation of the model made from the STL, to show in real time how the body is moving. It is important to note that the body is rigid, so no points can move relative to each other (simplifying the problem).
Currently I have not got far, but I have modelled the STL fixed in space:
model = createpde(3);                    % PDE model container for a 3-equation system
importGeometry(model,'Test_model.stl');  % attach the STL geometry to the model
pdegplot(model);                         % plot the imported geometry
This will plot the STL file. The model is made up of a certain number of faces and vertices which can be plotted, but I cannot see a way of manipulating these. I figure there should be some way of converting this to a matrix of x, y, z points which I can multiply by a rotation matrix to give a new position rotated by the three angles:
Rx = rotx(psi);   % rotation about the x-axis (rotx/roty/rotz take angles in degrees)
Ry = roty(theta); % rotation about the y-axis
Rz = rotz(phi);   % rotation about the z-axis
R = Rx*Ry*Rz;     % combined rotation matrix
Then I would multiply the model by this and update the plot.
I will also need a way of offsetting all points by certain values, to be able to change the point of rotation (where the IMU is placed). I figure that once I get the coordinates in a matrix, I can offset them all by certain values in each of the x, y and z directions.
Can anyone help with this? I have been looking for similar projects but have not been able to find anything with good code explanations as of yet. The way I am proposing is only my idea; if there is an easier method then please say. Thanks!
I do not have comment privileges, so this may not seem like a complete answer.
I've done exactly this type of thing in MATLAB for other research, but I had to write my own data parsers, as I do not have any toolboxes (or importGeometry() didn't exist at the time). An STL is structured as a list of triangles, each with a normal and three vertices. I'd ask you: after importing the STL, what is the data format? An array of positions, a struct, or an object? Also, what software was used to make it? The gmsh format is easier to work with, as it gives you a reduced list of points and lists of connections between them based on which simplex contains the points.
If the output of importGeometry is a struct with the full data set, then you will have repeated data and need to (1) parse the struct, (2) delete duplicates, (3) stack the results in a 3-by-N or N-by-3 matrix, then operate on this result with the rotation matrix and update the plots, as in the sketch below.
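In newer releases (R2018b and later), stlread returns a triangulation object directly, which already has the duplicates removed and makes those steps straightforward. A minimal sketch along those lines, assuming the file name from the question and placeholder angles in degrees:
TR = stlread('Test_model.stl');   % triangulation: unique vertices plus connectivity
P = TR.Points;                    % N-by-3 matrix of vertex coordinates
psi = 10; theta = 20; phi = 30;   % example angles (degrees), as rotx/roty/rotz expect
R = rotx(psi)*roty(theta)*rotz(phi);
pivot = mean(P, 1);               % placeholder pivot; replace with the IMU position
Pr = (R*(P - pivot).').' + pivot; % rotate about the pivot, then shift back
trisurf(TR.ConnectivityList, Pr(:,1), Pr(:,2), Pr(:,3)); % re-plot, same connectivity
axis equal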
You haven't really asked a specific question but I hope that my comments were helpful.
I have a segmented image. I wish to extract the middle pixel(s) of each segmented region. The goal is to extract the mean color at the middle pixel.
The following diagram illustrates what I mean by 'middle pixel':
The alternative middle pixels are also acceptable.
What algorithms/functions are available in Matlab to achieve something similar to this? Thanks.
If I'm understanding what you want correctly, you're looking for the centroid. MATLAB has the regionprops function, which measures the properties of separate binary objects, as long as the objects are not touching.
You can use the Centroid property. Assuming your image is stored in im and is binary, something like this will do:
out = regionprops(im, 'Centroid');
The output will be a structure array of N elements where N corresponds to the total number of objects found in the image. To access the ith object's centroid, simply do:
cen = out(i).Centroid;
If you wish to collect all centroids and place them in, say, an N x 2 numeric array, something like this would work:
out = reshape([out.Centroid], 2, []).';
Each row would be the centroid of an object found in the image. Take note that an object is considered to be a blob of white pixels that are connected to each other.
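To then pull the mean colour at those locations, here is a small end-to-end sketch; the image and the binarization step are stand-ins for your actual segmentation:
rgb = imread('peppers.png');                 % any RGB image shipped with MATLAB
im = imbinarize(rgb2gray(rgb));              % stand-in for your segmentation mask
s = regionprops(im, 'Centroid');
cen = round(reshape([s.Centroid], 2, []).'); % N x 2, one [x y] row per object
for i = 1:size(cen, 1)
    c = squeeze(rgb(cen(i,2), cen(i,1), :)); % image indexing is (row, col), i.e. (y, x)
    fprintf('Object %d colour: R=%d G=%d B=%d\n', i, c(1), c(2), c(3));
end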
I am trying to compute 3D coordinates from several pairs of two-view images.
First, I used the MATLAB function estimateFundamentalMatrix() to get the fundamental matrix of the matched points (more than 8 of them), which is:
F1 =[-0.000000221102386 0.000000127212463 -0.003908602702784
-0.000000703461004 -0.000000008125894 -0.010618266198273
0.003811584026121 0.012887141181108 0.999845683961494]
And my camera, which took these two pictures, was pre-calibrated with the intrinsic matrix:
K = [12636.6659110566, 0, 2541.60550098958
0, 12643.3249022486, 1952.06628069233
0, 0, 1]
From this information I then computed the essential matrix using:
E = K'*F*K
With the SVD method, I finally got the camera projection matrices:
P1 = K*[ I | 0 ]
and
P2 = K*[ R | t ]
Where R and t are:
R = [ 0.657061402787646 -0.419110137500056 -0.626591577992727
-0.352566614260743 -0.905543541110692 0.235982367268031
-0.666308558758964 0.0658603659069099 -0.742761951588233]
t = [-0.940150699101422
0.320030970080146
0.117033504470591]
I know there should be 4 possible solutions. However, my computed 3D coordinates seem to be incorrect.
I used the camera to take pictures of a FLAT object with marked points. I matched the points by hand (which means there should be no obvious mistakes in the raw data). But the result turned out to be a surface with a little bit of bending.
I guess this might be because the pictures were not corrected for lens distortion (but actually I remember that I did correct them).
I just want to know whether this method of solving the 3D reconstruction problem is right, especially when we already know the camera intrinsic matrix.
Edit by JCraft, Aug. 4: I have redone the process and got some pictures showing the problem; I will write another question with details and post the link.
Edit by JCraft, Aug. 4: I have posted a new question: Calibrated camera get matched points for 3D reconstruction, ideal test failed. And @Schorsch, I really appreciate your help formatting my question. I will try to learn how to write inputs on SO and also try to improve my grammar. Thanks!
If you only have the fundamental matrix and the intrinsics, you can only get a reconstruction up to scale. That is, your translation vector t is in some unknown units. You can get the 3D points in real units in several ways:
You need to have some reference points in the world with known distances between them. This way you can compute their coordinates in your unknown units and calculate the scale factor to convert your unknown units into real units (see the sketch after this list).
You need to know the extrinsics of each camera relative to a common coordinate system. For example, you can have a checkerboard calibration pattern somewhere in your scene that you can detect and compute extrinsics from. See this example. By the way, if you know the extrinsics, you can compute the Fundamental matrix and the camera projection matrices directly, without having to match points.
You can do stereo calibration to estimate the R and the t between the cameras, which would also give you the Fundamental and the Essential matrices. See this example.
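For the first option, a minimal sketch of the rescaling step (all names here are assumptions: X is the 3-by-N reconstructed point set, and its first two columns correspond to world points a known distance apart):
d_world = 100;                   % known real-world distance, e.g. in mm
d_recon = norm(X(:,1) - X(:,2)); % the same distance in reconstruction units
s = d_world / d_recon;           % scale factor from unknown units to mm
X_metric = s * X;                % all 3D points, now in mm
t_metric = s * t;                % the translation vector rescales the same way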
Flat objects are critical surfaces; it is not possible to achieve your goal from them. Try adding two (or more) points off the plane (see Hartley and Zisserman, or another text on the matter, if you are still interested).
I am implementing the algorithm for Photometric Stereo where I have already calculated the normals from a set of images with different light directions.
How can I plot the normal vector field in matlab? I have a matrix of normals of size (N x 3).
I'm afraid you have left out a step. You need to retrieve the depth map from the surface normals, and then you can start plotting. To see how to do this, you can check out section 4 of the following paper:
http://www.wisdom.weizmann.ac.il/~vision/photostereo/Photometric%20Stereo%20with%20General%20Unknown%20Lighting%20-%20BasriJacobsKemelmacher_ijcv06.pdf
There are other resources on the web too; I don't know of any built-in function in any Matlab library, but I don't have the Computer Vision toolbox, so who knows?
I suspect you are looking for quiver3.
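For example, a minimal sketch, assuming the N x 3 matrix is called normals and that the normals were computed on an h-by-w image grid with h*w == N:
[Xg, Yg] = meshgrid(1:w, 1:h);            % pixel grid the normals live on
Nx = reshape(normals(:,1), h, w);
Ny = reshape(normals(:,2), h, w);
Nz = reshape(normals(:,3), h, w);
quiver3(Xg, Yg, zeros(h, w), Nx, Ny, Nz); % arrows anchored on the z = 0 plane
axis equal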
You need to present the normals field as a gradient field; then you can use MATLAB's quiver function. In the gradient field, the previously normalized triple {pn, qn, rn} of the data is presented in such a way that its third component is always equal to one (at least in theory).
I mean that with rn = 1, or should I now say, with R = 1, you actually need only the {P, Q} components to present the contents of the gradient field with the ordinary 2D quiver function. Thus, the gradient vector is something quite different and distinct from the normals field, because, pointwise:
P = pn/sqrt(pn^2 + qn^2 + rn^2), and Q = qn/sqrt(pn^2 + qn^2 + rn^2).
However, you don't need to bother with double for loops running over the X and Y directions, because the pointwise calculations of the gradient field from the normals vectorize directly:
P = pn./(pn.^2 + qn.^2 + rn.^2).^(1/2); and Q = qn./(pn.^2 + qn.^2 + rn.^2).^(1/2);
You can see as well:
http://www.mathworks.com/matlabcentral/fileexchange/authors/126090/
Briefly, the gradient field always represents the slopes in the X and Y directions while descending exactly one height unit along the Z axis of the 3D surface retrieved with, for instance, a Photometric Stereo algorithm. That is why the third component in the quiver visualization is always equal to one (i.e. R = 1) and is practically irrelevant.
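As a sketch following the formulas above (pn, qn, rn are assumed to be the normal components already reshaped onto an h-by-w grid):
len = (pn.^2 + qn.^2 + rn.^2).^(1/2); % pointwise norm of the normals
P = pn ./ len;
Q = qn ./ len;
quiver(P, Q);                         % 2D arrows; the implicit third component R = 1 is dropped
axis equal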
Last month I posted some code for the simplest Photometric Stereo methods on the MathWorks web pages, having finally had some time to tidy up the MATLAB code I had produced so far.
It's common to work with polygons whose vertices are sorted CW or CCW and stored in vectors (2x1 or 1x2 matrices). However, how do you represent polygons with holes in vectors?
I'm going to apply various processes to these polygons, so I want a representation I can work with easily and efficiently (i.e. how should I store this kind of polygon in my program in order to ease my algorithms?).
The polygons are 2D and I'm programming in MATLAB.
EDIT 1: I'm going to calculate the visibility graph of these polygons (with or without holes).
As others have mentioned, a polygon with holes can be represented as an exterior boundary, plus zero or more interior boundaries, all of which are mutually nonoverlapping*. If you use nonzero winding number to determine inside/outside, be sure to specify your interior boundaries in the opposite direction as the exterior boundaries (counterclockwise for exterior and clockwise for interior, or vice-versa) so that the contour integrals are zero inside the holes.
FYI, this kind of definition/representation has been formalized in the OpenGIS Simple Features Specification (PDF).
As far as representation:
I'd probably have a cell array of K N-by-2 matrices, where the first element in the cell array is the exterior boundary, and the remaining elements (if any) are the interior boundaries. I would use a cell array because there may not be the same number of points on each boundary (see the sketch below).
*nonoverlapping = except at individual points, e.g. a diamond inscribed in a square.
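A minimal sketch of that representation, with made-up coordinates: a unit square with a counterclockwise exterior and one clockwise hole, each boundary an N-by-2 matrix. MATLAB's inpolygon also accepts this convention directly if the loops are joined with NaN separators:
exterior = [0 0; 1 0; 1 1; 0 1];                     % counterclockwise
hole = [0.25 0.25; 0.25 0.75; 0.75 0.75; 0.75 0.25]; % clockwise
poly = {exterior, hole};                             % first cell = exterior boundary
xv = [exterior(:,1); NaN; hole(:,1)];                % NaN separates the loops
yv = [exterior(:,2); NaN; hole(:,2)];
in = inpolygon(0.5, 0.5, xv, yv)                     % returns 0: the centre lies in the hole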
You can break a polygon with a hole in it into two shapes without a hole. When you're doing contour integration in a complex plane, you can create a "cut" from one edge of the polygon that brings you to the edge of the hole; integrate around one side of the hole and back; then traverse around the other side for the second polygon. You end up with two path integrals along each cut that cancel each other out.
"visibility graph" - is this for a radiation view factor calculation with shading? Or a ray-tracing graphics algorithm?
A polygon, plus a list of polygonal holes. Just be sure the various polygons don't intersect.
What do you plan to do with this thing?
It sounds like each hole is just a polygon inside the polygon itself. Perhaps you could store a vector like you describe for the outer polygon, then a vector of more polygon vectors for the holes.
Presumably you'll want to have a tree structure if you want this to be as generic as possible (i.e. polygons with polygonal holes that have polygons inside them with holes inside that, ...). Matlab isn't really great at representing tree structures efficiently, but here's one idea...
Have a struct-array of polygons.
Each polygon is a struct with two fields, 'corners', and 'children'.
The 'corners' field contains a matrix of (x,y) coordinates of the corners, accessed as "data(polyIdx).corners(:,cornerIdx)".
The 'children' field is a struct-array of polygons.
Here's an example of some code to make a triangle with bogus children that are holes (they aren't really valid, though, because they will likely overlap):
polygon = struct;
npoints = 3;
polygon.corners = rand(2, npoints);  % random triangle corners
polygon.children = struct;
nchildren = 5;
for c = 1:nchildren
    polygon.children(c).corners = rand(2, npoints); % each child is another triangle
    polygon.children(c).children = struct;          % leaves get empty children
end
You could continue to recursively define children that alternate between creating holes and filling them.
What exactly do you mean by "a visibility graph"?
Two "full" poligons, two states possible, either +1 or -1.
If you're representing a hole, you've got one with state +1 and one with state -1, which represents a hole, resulting in state 0.
If you've got overlapping polygons, you'll end up with resultant state >1. Then you can calculate the borders of a new polygon.
If you've got two polygons with holes that intersect, then first calculate the state of a new polygon which consists of outer borders of the two old ones, then deal with holes.
Anyways, ... I think you get the general principle.
I have no idea how to do it in MATLAB; I've used it only marginally so far, and even then only for very simple things.
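For illustration, the point-state idea above might be sketched in MATLAB like this (boundary coordinates made up):
outer = [0 0; 4 0; 4 4; 0 4]; % solid polygon contributes +1
hole = [1 1; 1 3; 3 3; 3 1];  % hole contributes -1
pt = [2 2];                   % query point
state = inpolygon(pt(1), pt(2), outer(:,1), outer(:,2)) ...
      - inpolygon(pt(1), pt(2), hole(:,1), hole(:,2));
% state == 0 here: the point lies in the hole, i.e. outside the material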