Creating patterns in matlab / octave to show Moiré patterns

How can I create radial images like this (see images below)?
My goal is to control the number of radial arms and their thickness, along with the angle at which they are created. I'm trying to create patterns that will show me different Moiré patterns when overlapped and turned / animated in Octave / MATLAB.
PS: I'm using Octave 3.8.1
I've tried the code here, but it doesn't give me fine control over all of the following parameters: number of radial arms, angle, and thickness. Also it needs the image package, which I'm trying to avoid.
http://www.mathworks.com/matlabcentral/answers/uploaded_files/20287/moire_pattern.m

As I see it, the two approaches worth investigating first are equations and patches.
You could, for instance, generate a generic equation for an arm with parameters to control the rotation angle and the shape of the curve. You could then plot it at each of a given number of rotation angles, with varying line widths (a plot property, not an equation parameter). The equation probably wouldn't look pretty, as you'd be best off specifying it parametrically (in terms of a third variable) or in polar coordinates, and then translating it to Cartesian coordinates for the plot commands.
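A minimal sketch of the equation approach, assuming a simple spiral-shaped arm (narm, twist and the line width are illustrative values, not anything from the question):
% One polar arm, replotted at each rotation angle (all parameters illustrative)
narm  = 8;                          % number of radial arms
twist = pi;                         % how far each arm curls around
r  = linspace(0, 1, 200);
th = twist * r;                     % spiral centreline in polar coordinates
figure; hold on; axis equal off
for k = 0:narm-1
    a = 2*pi*k/narm;                % rotation angle for arm k
    % polar -> cartesian, with the rotation folded into the angle
    plot(r .* cos(th + a), r .* sin(th + a), 'k', 'LineWidth', 3);
end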
With patches you'd compute the outline of the arm (as opposed to the centreline), and you'd probably find it convenient to generate the patch for one arm and then transform it for each rotation. The transform is a one-liner with the appropriate rotation matrix, and the expression you use to generate the arm needn't be nearly as complex, since it wouldn't need to handle the rotation. A quadratic might even do at a push.
Another advantage of patches is that, having generated an arm and rotated it around, you could also flip it and generate the figure with the opposite sense for very little extra code.
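Here is a minimal sketch of the patch approach, under the same caveat (the wedge shape of the arm and all parameter values are illustrative):
narm  = 12;                          % number of radial arms
thick = 0.15;                        % angular half-width of each arm
twist = 2*pi/3;                      % how far each arm curls around
r  = linspace(0.05, 1, 50)';         % radial samples (column vector)
th = twist * r;                      % centreline angle as a function of radius
% arm outline: out along one edge, back along the other
xa = [r .* cos(th - thick); flipud(r .* cos(th + thick))];
ya = [r .* sin(th - thick); flipud(r .* sin(th + thick))];
figure; hold on; axis equal off
for k = 0:narm-1
    a = 2*pi*k/narm;
    R = [cos(a) -sin(a); sin(a) cos(a)];   % rotation matrix for arm k
    p = R * [xa'; ya'];
    patch(p(1,:), p(2,:), 'k', 'EdgeColor', 'none');
end
Flipping the sense of the whole figure is then just a matter of negating twist (or mirroring the patch coordinates).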

Related

Align 2 sets of 3D points in Unity using Singular Value Decomposition (SVD)

I wanted to translate a set of reference points on a contour to a set of corresponding target points. There are 8 points in total on each contour.
To calculate the rotation & translation vector, I was using the Math.Net Numerics library to perform the SVD calculation. The idea came from this URL (pages 3-7):
But somehow I noticed that the transformation done using the result of the SVD calculation seems inaccurate. The result is shown below:
The transform is supposed to move the reference points as close as possible to the target points, but as highlighted, it moves far away from the target point.
In addition, I also did a simple test where I calculated the centroid of both contours and took the difference (TargetCentroid - RefCentroid = translation vector). The final transformation result is the same as going through the SVD.
Did I do something wrong? Can anyone suggest a better solution to transform the reference points to the target points?
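For reference, a minimal MATLAB sketch of the standard SVD-based rigid alignment the question describes (the Kabsch algorithm); P and Q are hypothetical N-by-3 matrices of corresponding reference and target points:
cP = mean(P, 1);  cQ = mean(Q, 1);        % centroids
Pc = P - repmat(cP, size(P,1), 1);        % centred point sets
Qc = Q - repmat(cQ, size(Q,1), 1);
H  = Pc' * Qc;                            % 3x3 cross-covariance matrix
[U, S, V] = svd(H);
d = sign(det(V * U'));                    % guard against reflections
R = V * diag([1 1 d]) * U';               % rotation
t = cQ' - R * cP';                        % translation
Paligned = (R * P' + repmat(t, 1, size(P,1)))';   % P mapped onto Q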
Edit:
1. Garment transformation from reference model to various target models
This seems like an overcomplicated solution to the problem.
If you have the target points, you can just Lerp the given points to their corresponding target points.
Or, if the target is the same mesh but at a different scale and rotation, as in the picture, you can just Lerp the transform values (scale and rotation, respectively) without needing to go over all the points individually.
Using Vector3.Lerp
Edit:
Additionally, lerping will cause all the points to reach their targets at the same time, which is, in most cases, the desired behavior.
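For illustration, in MATLAB notation (to match the rest of this thread; Vector3.Lerp does the same per component), lerping a whole point set is a one-liner; refPoints, targetPoints and t are hypothetical names:
moved = (1 - t) * refPoints + t * targetPoints;   % t in [0,1]; every row arrives at t = 1 together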

Matlab computing wrong surface normals?

I have a big FEM model from which I can get the "surface" of the model, i.e. the elements and vertices that define the surface of that FEM model. For plotting purposes (nice plots are always a win!) I want to plot it nicely. My approach is just to use
lungs.Vertices=vtx;    % patch expects a 'Vertices' field (not 'Vertex')
lungs.Faces=fcs;
patch(lungs,'facecolor','r','edgecolor','none')
NOTE: I need edgecolor none, as this is 4D data and different FEM models have different triangulations; if the edges are plotted the user will get dizzy.
However, this outputs everything in a plain red color, which is not nice (it cannot show the complexity of the figure, which is a pair of lungs, for the attentive to details).
Therefore I decided to use lighting:
camlight; camlight(-80,-10); lighting phong;
But again, this is not entirely correct. It actually seems that the patch normals are not computed correctly by Matlab.
My supposition is that the patches are not always defined counter-clockwise, and therefore some normals point in the wrong direction. However, that is not straightforward to check.
Has anyone had a similar problem, or a hint of how I should approach this problem in order to get a nice surface plotted here?
EDIT
Just for the sake of plotting, here is the result obtained with @magnetometer's answer:
If your model gives you outward-oriented normals, you can re-sort the faces of your model so that Matlab can properly calculate its own normals. The following function works if you have triangular faces and outward-oriented normals:
function [FaceCor,nnew]=SortFaces(Faces,Normals,Vertices)
% Re-sort triangle vertex order so that the implied face normals agree
% with the supplied outward-oriented normals.
FaceCor=Faces;
nnew=Normals*0;                          % preallocate corrected normals
for jj=1:size(Faces,1)
    v1=Vertices(Faces(jj,3),:)-Vertices(Faces(jj,2),:);
    v2=Vertices(Faces(jj,2),:)-Vertices(Faces(jj,1),:);
    nvek=cross(v2,v1);                   % normal implied by the vertex order
    nvek=nvek/norm(nvek);
    nnew(jj,:)=nvek;
    if dot(nvek,Normals(jj,:))<0         % points inward: swap winding order
        FaceCor(jj,:)=[Faces(jj,3) Faces(jj,2) Faces(jj,1)];
        nnew(jj,:)=-nvek;
    end
end
If your FEM model doesn't give you outward-directed normals, one way would be to reconstruct the surface using e.g. a crust algorithm, which gives you outward-directed normals or correctly oriented patches.
EDIT:
As you don't have normals, the only solution that comes to my mind is to reconstruct the surface. This implementation of the crust algorithm has worked well for me in the past. All you need to do is:
[FacesNew,NormalsNew]=MyRobustCrust(Vertices);
If I remember correctly, FacesNew is not yet oriented counterclockwise, but you can use the SortFaces function I posted above to correct this, as you now have correctly oriented face normals, i.e. run:
[FaceCor,~]=SortFaces(FacesNew,NormalsNew,Vertices)
If you use Matlab's reducepatch (e.g. reducedmodel=reducepatch(fullmodel,reduction); ) to reduce the number of vertices, you will have to reconstruct the surface again, as reducepatch doesn't seem to keep the correct orientation of the patches.

Force Calculation at a Point Within a Vector Field, and then Reacting to that Force

So, this is going to be pretty hard for me to explain, or try to detail, since I only think I know what I'm asking; I could be asking it with bad wording, so please bear with me and ask questions if need be.
Currently I have a 3D vector field being plotted, which corresponds to 40 levels of wind vectors in a 3D space. These are plotted as 3D levels and then stacked on top of each other using a dummy altitude for now (we're debating how to do the pressure-to-altitude conversion most accurately; not to worry here). The goal is to start at a point within the vector space, model that point as a particle that can experience physics, and iteratively move through the vector field reacting to the forces, thus creating a trajectory of sorts through the vector field.
Currently I'm trying to whip up code that would allow me to start at a point within this field, calculate the forces that the particle would feel at that point, and then establish a resultant force vector that would indicate the next path of movement through the vector space.
Right now I'm stuck on the theoretical aspects of the code, as I'm trying to think through how the particle would feel vectors at a distance.
Any suggestions on ways to attack this problem within MATLAB, or relevant equations to use?
In order to run my code, you'll need read_grib.r4 and to compile that mex file. Here is a link to a zip with the code and the required files:
https://www.dropbox.com/s/uodvixdff764frq/WindSim_StackOverflow_Files.zip
I would try to interpolate the wind vector from the adjacent ones. You seem to have a regular grid, so that should be no problem. (You can use interp3 for this.)
Afterwards, you can use any differential-equation solver for your problem, as you basically have a field of gradients and an initial value. Forward Euler would be the simplest, but it needs a small step size. (N.B.: your field should be a gradient field.)
You can read about this on Wikipedia: http://en.wikipedia.org/wiki/Vector_field#Flow_curves
In response to comment #1:
Yes. In a regular grid, any (arbitrarily chosen) point will have eight surrounding grid points. interp3 will do a trilinear interpolation to determine an interpolated gradient vector.
If you use forward Euler, you then move a small distance in that direction, interpolate the gradient there, take a small step in that new direction, and so on. Two things happen (see the sketch after this list):
You get a series of points that lie on a streamline and thus form the trajectory of a particle moving along the field.
You get larger errors the further you move and the larger the step size is. Use a small step size or a better solver (Runge-Kutta comes to mind).
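A minimal sketch of that loop, assuming the field is given on meshgrid-style arrays X, Y, Z with wind components U, V, W (all names and values below are illustrative):
p = [x0 y0 z0];                 % hypothetical start point
h = 10;                         % step size; keep it small for forward Euler
traj = zeros(nSteps, 3);        % nSteps: how many steps to trace
for k = 1:nSteps
    u = interp3(X, Y, Z, U, p(1), p(2), p(3));   % trilinear interpolation
    v = interp3(X, Y, Z, V, p(1), p(2), p(3));
    w = interp3(X, Y, Z, W, p(1), p(2), p(3));
    p = p + h * [u v w];        % Euler step along the local wind vector
    traj(k, :) = p;             % collect the streamline points
end
plot3(traj(:,1), traj(:,2), traj(:,3));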
If all you want is plotting, then the streamline function might help.
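Something along these lines, using the same hypothetical grid arrays as above:
streamline(X, Y, Z, U, V, W, x0, y0, z0);   % built-in streamline tracing from the start point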

Ambiguity in DCM to Quaternion conversion using the default Simulink library block

I am simulating a system where I need a Direction Cosine Matrix to quaternion conversion. I use the default DCM to Quaternion conversion block available in Simulink. However, at some points of the simulation, the output quaternion components reverse sign.
Unfortunately I cannot attach the plot image.
Though this is mathematically correct, I desire a smooth change. Any idea on how to avoid this and get a smooth curve for the quaternion?
Update 1:
http://tinypic.com/view.php?pic=33dayap&s=6
Above is the simulated plot. The first plot shows the output quaternion; the second shows the Direction Cosine Matrix. As you can see, even though the DCM components change smoothly, the quaternion changes sign abruptly.
The problem arises from the double covering property of quaternions: two unit quaternions correspond to every rotation. At some point, according to some rule, the Matlab implementation switches from one quaternion to the other. There is not much you can do about it.
A messy workaround would be to write your own rotation-matrix-to-quaternion conversion and, of the two possibilities, pick the representation that is closer to the previous one, hence avoiding the sudden jumps.
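A sketch of that workaround in MATLAB; q is the current 1-by-4 quaternion and qPrev the previously emitted one (hypothetical names):
if dot(q, qPrev) < 0
    q = -q;         % switch to the antipodal quaternion, which is the same rotation
end
qPrev = q;          % remember the choice for the next sample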
Plotting the quaternions is typically not needed in practical applications. Most likely you are rotating an object or vector. If you plot that object or vector (or some projections of it), you won't get any sudden jumps even if there are jumps in the representation of the rotation. Another benefit of plotting the projections of the rotated object is that these plots are usually much easier to interpret than the quaternions. I don't know whether it makes sense in your application; it worked beautifully in mine.

How to detect curves in a binary image?

I have a binary image and I want to detect/trace curves in that image. I don't know anything about them (coordinates, angle, etc.). Can anyone guide me on how I should start? Suppose I have this image:
I want to separate out curves from other lines. I am only interested in curved lines and their parameters. I want to store the information about the curves (in an array) to use afterwards.
It really depends on what you mean by "curve".
If you want to simply identify each discrete collection of pixels as a "curve", you could use a connected-components algorithm. Each component would correspond to a collection of pixels. You could then apply some test to determine linearity or some other feature of the component.
If you're looking for straight lines, circular curves, or any other parametric curve, you could use the Hough transform to detect those elements in the image.
The best approach is really going to depend on which curves you're looking for, and what information you need about the curves.
reference links:
Circular Hough Transform Demo
A Brief Description of the Application of the Hough Transform for Detecting Circles in Computer Images
A method for detection of circular arcs based on the Hough transform
Google goodness
Since you already seem to have a good binary image, it might be easiest to just separate the different connected components of the image and then calculate their parameters.
First, you can do the separation by scanning through the image; when you encounter a black pixel, you can apply a standard flood-fill algorithm to find all the pixels in the shape. If you have the Matlab Image Processing Toolbox, you can use the bwconncomp and bwselect functions for this. If your shapes are not fully connected, you might apply a morphological closing operation to the image to connect them.
After you have segmented out the different shapes, you can filter out the curves by testing how much they deviate from a line. You can do this simply by picking the endpoints of the shape and calculating how far the other points are from the line defined by those endpoints. If this value exceeds some maximum, you have a curve instead of a line.
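A rough sketch of that test, assuming the Image Processing Toolbox's bwconncomp and a hypothetical pixel threshold maxDev (the endpoint picking below is naive and only for illustration):
cc = bwconncomp(bw);                          % bw: binary image, shapes = true
for k = 1:cc.NumObjects
    [r, c] = ind2sub(size(bw), cc.PixelIdxList{k});
    p1 = [c(1) r(1)];  p2 = [c(end) r(end)];  % crude "endpoints"
    d  = (p2 - p1) / norm(p2 - p1);           % unit vector along the chord
    rel  = [c - p1(1), r - p1(2)];
    dist = abs(rel(:,1) * d(2) - rel(:,2) * d(1));   % perpendicular distances
    isCurve = max(dist) > maxDev;             % far from the chord => curve
end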
Another approach would be to measure the ratio of the object's length to the distance between its endpoints. This ratio would be near 1 for lines and larger for curves and wiggly shapes.
If your shapes contain corners, which you wish to separate from curves, you might inspect the directional gradient along your curves. Segment the shape, pick a set of equidistant points from it, and for each point calculate the angle to the previous point and to the next point. If the difference between the angles is too high, you do not have a smooth curve but an angled shape; see the sketch below.
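A sketch of that angle test; xs and ys are hypothetical ordered, roughly equidistant samples along the shape:
ang  = atan2(diff(ys), diff(xs));      % direction between consecutive samples
turn = abs(diff(unwrap(ang)));         % turning angle at each interior sample
hasCorner = any(turn > pi/4);          % hypothetical "too sharp" threshold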
Possible difficulties in implementation include thick lines, which you can solve with a skeleton transformation. For Matlab implementations of skeletonization and finding curve endpoints, see the Matlab Image Processing Toolbox documentation.
1) Read a book on Image Analysis.
2) Scan for a black pixel; when found, look for neighbouring pixels that are also black, store their locations, then make them white. This collects the points of one object and removes it from the image. Just keep repeating this until no black pixels remain.
If you want to separate the curves from the straight lines, try line fitting and then compute the coefficient of correlation. Similar algorithms are available for curves, and the correlation tells you how close the points are to the idealised shape.
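As a sketch, with hypothetical pixel coordinate vectors xPix and yPix for one object (note that a perfectly vertical line degenerates here):
Rmat = corrcoef(xPix, yPix);           % 2-by-2 correlation matrix
isLine = abs(Rmat(1,2)) > 0.99;        % near +/-1 for straight lines (threshold illustrative)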
There is also another possible solution using chain codes.
Understanding Freeman chain codes for OCR
The chain code basically assigns a value between 1-8 (or 0 to 7) to each pixel, saying at which position in its 8-connected neighbourhood its connected predecessor lies. Thus, as mentioned in Hackworth's suggestion, one performs connected-component labeling and then calculates the chain code for each component curve. Looking at the distribution and the gradient of the chain codes, one can distinguish easily between lines and curves. The problem with the method, though, arises with oscillating curves, in which case the gradient is less useful and one depends on the clustering of the chain codes.
I'm no computer vision expert, but I think you could detect lines/curves in binary images relatively easily using some basic edge-detection algorithms (e.g. a Sobel filter).