I have a text file with length and orientation of lines. I wish to plot rose diagrams of the orientations at length intervals of 2000m. My lengths go from 98m to 18000m. I do not use MATLAB often - only for very simple things such as plotting a rose diagram of the entire region. I am really lost when it comes to loops.
This is what I have for the entire region, but I want it broken up into 10 plots. I can do this piece by piece, but that will take me quite a while since I have to do this for several text files.
length=faults(:,4);
theta=faults(:,3);
radians=pi*theta/180;
rose (radians,60);
view(90,-90)
Thanks heaps!
EDIT: To better clarify: I wish to extract the lines with lengths between 0-2000, 2000-4000, 4000-6000, etc., and for each of these intervals plot the orientations. Thanks
The best approach would be to use a for loop; see the MathWorks documentation on Flow Control. I'm not sure what your faults variable is, so I cannot give a complete example. Also, what do you need the variable length for? (Note that it shadows the built-in length function.) Anyway, this is roughly how you could proceed with the for loop:
thetas = ...; % one column of orientations per length interval
for i = 1:size(thetas,2)
    theta   = thetas(:,i);
    radians = pi*theta/180;   % degrees to radians
    figure;                   % new figure per interval, so plots are not overwritten
    rose(radians, 60);
end
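To get the interval-based split you describe, here is a minimal sketch (assuming faults is an N-by-4 matrix as in your snippet, with orientations in degrees in column 3 and lengths in metres in column 4):
faultLen = faults(:,4);                 % avoid shadowing the built-in length()
theta    = faults(:,3);
edges    = 0:2000:20000;                % 10 intervals covering 98 m to 18000 m
figure;
for k = 1:numel(edges)-1
    inBin = faultLen >= edges(k) & faultLen < edges(k+1);  % faults in this interval
    subplot(2, 5, k);
    rose(pi*theta(inBin)/180, 60);      % orientations in radians, 60 bins
    view(90, -90);
    title(sprintf('%d-%d m', edges(k), edges(k+1)));
end
If you would rather have each interval in its own window, replace the subplot call with a figure call inside the loop.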
I have to plot several graphics with subplot within one figure, but I do not know in advance how many will be plotted, because it depends on the data introduced by the user (hence a for loop). To place a subplot I need to know the number of columns and rows. Is there any option in matlab so that matlab works out the best number of columns and rows to display the plots, given the total number of subplots? I mean subplot(a,b,c) => best a,b combination knowing c.
At the moment I ask the user to enter the number of columns and rows, but that is a bit uncomfortable. Maybe an algorithm that decomposes the total number of subplots into the two most nearly equal factors would work, but I think that is quite difficult to code. If it's too complex I'll stick with my current way, but I was curious about it.
I don't think there is a built-in solution; what has been working well for me in practice is the following approach, where I try to get a nearly square arrangement with a few more columns than rows (because of wider-than-tall screen aspect ratio):
nRows = floor(sqrt(nPlots));
nCols = ceil(nPlots/nRows);
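For example, with a user-determined nPlots (the loop body here is just a placeholder):
nPlots = 7;                        % comes from the user's data in practice
nRows  = floor(sqrt(nPlots));
nCols  = ceil(nPlots/nRows);
figure;
for k = 1:nPlots
    subplot(nRows, nCols, k);
    plot(rand(10,1));              % placeholder for the real data
end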
I want to make a plot that discontinues at one point using Matlab.
This is what the plot looks like using scatter:
However, I would like the plot to be a smooth curve rather than scattered dots. If I use plot, it gives me:
I don't want the vertical line.
I think I can break the function manually into two pieces and draw them separately on one figure, but the problem is that I don't know where the breaking point is beforehand.
Is there a good solution to this? Thanks.
To find the jump in the data, you can search for the place where the magnitude of the discrete derivative is largest:
[~,ind] = max(abs(diff(y)));   % index just before the jump
One way to plot the function would be to set that point to NaN and plot the function as usual:
y(ind) = NaN;
plot(x,y);
This comes with the disadvantage of losing a data point. To avoid this, you could add a data point with value NaN in the middle:
% assumes x and y are row vectors; insert an extra NaN point at the jump
xn = [x(1:ind), mean([x(ind),x(ind+1)]), x(ind+1:end)];
yn = [y(1:ind), NaN, y(ind+1:end)];
plot(xn,yn);
Another solution would be to split the vectors for the plot:
plot(x(1:ind),y(1:ind),'-b', x(ind+1:end),y(ind+1:end),'-b')
All of the ways so far handle just one jump. To handle an arbitrary number of jumps in the function, one would need some knowledge of how large those jumps will be, or how many jumps there are; the solution would be similar, though.
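For instance, if you know a threshold that separates real jumps from ordinary steps, a sketch of the multi-jump case (the threshold here is an assumed, data-dependent value you would tune) would be:
threshold = 1;                                % assumption: tune to your data
jumpInd   = find(abs(diff(y)) > threshold);   % index just before each jump
yn = y;
yn(jumpInd) = NaN;                            % break the line at every jump
plot(x, yn);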
You should iterate through your data and find the index with the largest distance between two consecutive points, break your array at that index into two separate arrays, and plot them separately.
Here is the given system I want to plot and obtain the vector field and the angles they make with the x axis. I want to find the index of a closed curve.
I know how to do this theoretically by choosing convenient points and seeing what the vector looks like at each of them. I can also always use the arctangent of the ratio of the vector components to compute the angles. However, I am having trouble trying to code it. Please don't mark me down if the question is unclear; I am asking it the way I understand it. I am new to matlab. Can someone point me in the right direction, please?
This is a pretty hard challenge for someone new to matlab; I would recommend taking on some smaller challenges first to get used to matlab's conventions.
That said, Matlab is all about numerical solutions so, unless you want to go down the symbolic maths route (and in that case I would probably opt for Mathematica instead), your first task is to decide on the limits and granularity of your simulated space, then define them so you can apply your system of equations to it.
There are lots of ways of doing this - some more efficient - but for ease of understanding I propose this:
Define the axes individually first
xpts = -10:0.1:10;
ypts = -10:0.1:10;
tpts = 0:0.01:10;
The a:b:c syntax gives you the lower limit (a), the upper limit (c) and the spacing (b), so you'll get 201 points for the x. You could use the linspace notation if that suits you better, look it up by typing doc linspace into the matlab console.
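For reference, the equivalent linspace call for the x axis would be:
xpts = linspace(-10, 10, 201);   % same points as -10:0.1:10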
Now you can create a grid of your coordinate points. You actually end up with three 3d matrices, one holding the x-coords of your space and the others holding the y and t. They look redundant, but it's worth it because you can use matrix operations on them.
[XX, YY, TT] = meshgrid(xpts, ypts, tpts);
From here on you can perform whatever operations you like on those matrices. So to compute x^2.y you could do
x2y = XX.^2 .* YY;
remembering that you'll get a 3d matrix out of it and all the slices in the third dimension (corresponding to t) will be the same.
Some notes
Matlab has a good builtin help system. You can type 'help functionname' to get a quick reminder in the console or 'doc functionname' to open the help browser for details and examples. They really are very good, they'll help enormously.
I used XX and YY because that's just my preference, but I avoid single-letter variable names as a general rule. You don't have to.
Matrix multiplication is the default so if you try to do XX*YY you won't get the answer you expect! To do element-wise multiplication use the .* operator instead. This will do a11 = b11*c11, a12 = b12*c12, ...
To raise each element of the matrix to a given power use .^ rather than ^, for similar reasons; likewise ./ for division.
You have to make sure your matrices are the correct size for your operations. To do elementwise operations on matrices they have to be the same size. To do matrix operations they have to follow the matrix rules on sizing, as will the output. You will find the size() function handy for debugging.
Plotting vector fields can be done with quiver. To plot the components separately you have more options: surf, contour and others. Look up the help docs and they will link to similar types. The plot family are mainly about lines so they aren't much help for fields without creative use of the markers, colours and alpha.
To plot the curve, or any other contour, you don't have to test the values of a matrix - it won't work well anyway because of the granularity - you can use the contour plot with specific contour values.
Solving systems of dynamic equations is completely possible, but you will be doing a numeric simulation and your results will again be subject to the granularity of your grid. If you have closed form solutions, like your phi expression, they may be easier to work with conceptually but harder to get working in matlab.
This kind of problem is tractable in matlab, but it involves some non-basic usage that is pretty hard to follow until you've got your head around Matlab's syntax. I would advise starting with a 2d grid instead:
[XX, YY] = meshgrid(xpts, ypts);
and compute some functions of it, like x^2.y or x^2 - y^2. Get used to plotting them using quiver, or plotting the components separately in intensity maps or surfaces.
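A minimal 2d warm-up along those lines (the field x^2 - y^2 is just an example) might be:
xpts = -10:0.1:10;
ypts = -10:0.1:10;
[XX, YY] = meshgrid(xpts, ypts);
FF = XX.^2 - YY.^2;                     % example scalar field
[FX, FY] = gradient(FF, 0.1);           % its gradient as a vector field
s = 1:10:numel(xpts);                   % subsample so the arrows stay readable
figure;
quiver(XX(s,s), YY(s,s), FX(s,s), FY(s,s));
hold on;
contour(XX, YY, FF, [-20 0 20]);        % contours at specific values
hold off;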
I am trying to build receiver operating characteristic (ROC) curves to evaluate the discriminating ability of my classifier to correctly classify diseased and non-diseased subjects.
I understand that the closer the curve follows the left-hand border and then the top border of the ROC space, the more accurate the test. My experiments gave me quite a desirable value for the area under the curve (AUC), i.e. 0.86458. However, the ROC curve (in which I included the cut-off points for tracing purposes) seems quite strange, as it gave me straight lines, as below:
... and not the curve I expected and normally see in references, like this:
Does it have something to do with the number of observations used (in this case I only have 50 samples)? Or is this just fine as long as the AUC value is high and the 'curve' comes above the 45-degree diagonal of the ROC space? I would be glad if someone could share their thoughts about it. Thank you!
By the way, I used the perfcurve() function in matlab:
% ROC comparison between the proposed approach and the baseline
[X1,Y1,T1,auc1,OPTROCPT1,SUBY1,SUBYNAMES1] = perfcurve(testLabel,predLabel_prop,1);
[X2,Y2,T2,auc2,OPTROCPT2,SUBY2,SUBYNAMES2] = perfcurve(testLabel,predLabel_base,1);
figure;
plot(X1,Y1,'-r*',X2,Y2,'--ko');
legend('proposed approach','baseline','Location','east');
xlabel('False positive rate'); ylabel('True positive rate')
title('ROC comparison of the proposed approach and the baseline')
% [] concatenation keeps the trailing space that strcat would strip
text(0.6,0.3,{'* - proposed method',['Area Under Curve = ',...
    num2str(auc1)]},'EdgeColor','r');
text(0.6,0.15,{'o - baseline',['Area Under Curve = ',num2str(auc2)]},'EdgeColor','k');
You probably have too little data.
Your curve indicates your data set has 13 negative and 5 positive examples (in your test set?).
Furthermore, all but 4 have exactly the same score (maybe 0)? Or is that your cut-off?
Given this small sample size, I would not accept the hypothesis that your proposed method is better than the baseline, but rather the alternative: the two methods perform equally well. The difference of 0.04 is much too small for this tiny sample size; the results are virtually identical. Any variation within the cut-off area (the diagonal part) can be much larger than this 0.04. On a different run, with a different test set, the results may well be the other way around.
The shape of your curve is just a result of the high explanatory power of your model and the limited number of observations (e.g. take a look at the example here: http://nl.mathworks.com/help/stats/perfcurve.html).
In my project I have huge surfaces of 20,000 points computed by an algorithm. This algorithm sometimes has an error, computing 1 or more points in a small area incorrectly.
This error cannot be fixed in the algorithm itself, but needs to be detected afterwards.
The error can be seen in the next figure:
As you can see, there is a wrongly computed point that not only breaks the otherwise homogeneous surface, but also destroys the aesthetics of the plot (which is also important in the project).
Sometimes it can be more than one point, in general no more than 5 or 6. The error is always in the Z axis, so there is no need to check X and Y.
I have been racking my brain to find a reasonably generic algorithm to detect these points.
I thought of maybe taking patches of the surface, averaging the Z, and then detecting the points that fall outside the variance... but I don't think that will always work.
Any ideas?
NOTE: I don't want someone to write code for me, just an idea.
PS: relevant code for the above image:
[x,y] = meshgrid(-2:.07:2);
Z = x.*exp(-x.^2-y.^2);
subplot(1,2,1)
surf(x,y,Z,gradient(Z))            % the clean surface
subplot(1,2,2)
Z(35,35) = Z(35,35)+0.3;           % simulate one wrongly computed point
surf(x,y,Z,gradient(Z))
The standard trick is to use a Laplacian, looking for the largest outliers. (This is not unlike what Mohsen proposed in his answer, but is actually a bit easier.) You could probably even do it with conv2, so it would be pretty efficient.
I could offer a few ways to implement the idea. A simple one is to use my gridfit tool, found on the File Exchange. (Gridfit essentially uses a Laplacian for its smoothing operation.) Fit the surface with all points included, then look for the single point that was perturbed the most by the fit. Exclude it, then rerun the fit, again looking for the largest outlier. (With gridfit, you can use weights to give points a zero weight, a simple way to exclude a point or list of points.) When the largest perturbation that was needed is small enough, you can decide to stop the process. A nice thing is gridfit will also impute new values for the outliers, filling in all of the holes.
A second approach is to use the Laplacian directly, in more of a filtering approach. Here, you simply compute a value at each point that is the average of each neighbor to the left, right, above, and below. The single value that is most largely in disagreement with its computed average is replaced with a new value. Or, you can use a weighted average of the new value with the old one there. Again, iterate until the process does not generate anything larger than some tolerance. (This is the basis of an old outlier detection and correction scheme that I recall from the Fortran IMSL libraries, but probably dates back to roughly 30 years ago.)
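A minimal sketch of that second, filtering-style approach using conv2 (the tolerance here is an assumption you would tune to your data):
K   = [0 1 0; 1 -4 1; 0 1 0];                % discrete Laplacian stencil
lap = conv2(Z, K, 'same');
lap([1 end],:) = 0;  lap(:,[1 end]) = 0;     % ignore border effects
tol = 0.05;                                  % assumed tolerance
[worst, idx] = max(abs(lap(:)));
while worst > tol
    [i, j] = ind2sub(size(Z), idx);          % worst offender
    % replace it by the average of its four neighbours, then re-check
    Z(i,j) = ( Z(i-1,j) + Z(i+1,j) + Z(i,j-1) + Z(i,j+1) ) / 4;
    lap = conv2(Z, K, 'same');
    lap([1 end],:) = 0;  lap(:,[1 end]) = 0;
    [worst, idx] = max(abs(lap(:)));
end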
Since your function seems to vary smoothly, these abrupt changes can be detected by looking at the derivatives. You can:
Take the derivative in one direction
Calculate mean and standard deviation of derivative
Find the outliers by looking for points that are further from the mean than a certain multiple of the standard deviation.
Here is the code
U = diff(Z);                             % discrete derivative down each column
V = (U - mean(U(:))) / std(U(:));        % express deviations in standard deviations
surf(x(2:end,:), y(2:end,:), V)          % visualize V to choose a threshold
V = [zeros(1,size(V,2)); V];             % pad so V has the same size as Z
V(abs(V) < 10) = 0;                      % keep only deviations beyond 10 sigma
V = sign(V);                             % +1 at the rising edge, -1 at the falling edge
W = cumsum(V);                           % nonzero exactly at the bad points
[I, J] = find(W);
outliers = [I, J];
For your example you get this plot for V, with a peak at around 21.7 while the second-highest peak is at around 1.9528, so a threshold of 10 is reasonable.
and running the code returns
outliers =
    35    35
The cumsum is needed for the cases where you have a patch of adjacent points that are all incorrect.