I am plotting a bar plot with standard deviations in MATLAB. The data are as follows:
y = [0.776 0.707 1.269; 0.749 0.755 1.168; 0.813 0.734 1.270; 0.845 0.844 1.286];
std_dev = [0.01 0.055 0.052;0.067 0.119 0.106;0.036 0.077 0.060; 0.029 0.055 0.051];
I am using the following code:
figure
hold on
bar(y)
errorbar(y,std_dev,'.')
But the standard deviation bars are not placed at the correct positions.
If all the bars can have the same color, you can flatten the data into a single row and insert zeros as spacers between the groups:
x=1:15;
y = [0.776 0.707 1.269 0 0.749 0.755 1.168 0 0.813 0.734 1.270 0 0.845 0.844 1.286];
std_dev = [0.01 0.055 0.052 0 0.067 0.119 0.106 0 0.036 0.077 0.060 0 0.029 0.055 0.051];
figure
hold on
bar(x,y)
errorbar(y,std_dev,'.')
XTickLabel={'1'; '2'; '3'; '4'};
XTick=2:4:15;
set(gca, 'XTick', XTick);
set(gca, 'XTickLabel', XTickLabel);
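If you would rather keep the default grouped-bar colors, a minimal alternative sketch (assuming MATLAB R2019b or later, where Bar objects expose an XEndPoints property) places each error bar at the center of its own bar:
y = [0.776 0.707 1.269; 0.749 0.755 1.168; 0.813 0.734 1.270; 0.845 0.844 1.286];
std_dev = [0.01 0.055 0.052; 0.067 0.119 0.106; 0.036 0.077 0.060; 0.029 0.055 0.051];
figure
hb = bar(y); % one Bar object per column (series)
hold on
for k = 1:numel(hb)
    % XEndPoints gives the x-position of each bar in the k-th series
    errorbar(hb(k).XEndPoints, y(:,k).', std_dev(:,k).', '.k')
end
hold off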
I am trying to generate a three-phase saturation (ternary) plot in MATLAB.
I found the code below, and it generates something similar but not exactly the same.
As simple as it looks, I don't know how to adjust the values to match the plot in the figure, since I don't have specific values.
Even a way to generate color shading that changes from one edge to another would be fine.
Any suggestions, please?
% Main file for ternary plot
close all; clear all; clc;
A = [...
1.000 0.000 0.000
0.000 1.000 0.000
0.000 0.000 1.000
0.330 0.330 0.340
0.340 0.000 0.660
0.000 0.340 0.660
0.000 0.160 0.840
0.160 0.000 0.840
0.000 0.153 0.847
];
l=length(A);
% A(l+1,:)=[1 0 0 6];
% A(l+2,:)=[0 1 0 30];
% A(l+3,:)=[0 0 1 1];
% ... and the GPR velocity
% v=0.29./sqrt(A(:,4));
data = [...
0.0
0.0
0.0
0.419
0.273
0.090
0.014
0.010
0.00
];
v = data;
figure;
% Plot the data
% First set the colormap (can't be done afterwards)
colormap(jet)
[hg,htick,hcb]=tersurf(A(:,1),A(:,2),A(:,3),v);
% Add the labels
hlabels=terlabel('Gas','Water','Oil');
set(hg(:,3),'color','m')
set(hg(:,2),'color','c')
set(hg(:,1),'color','y')
%-- Modify the labels
set(hlabels,'fontsize',12)
set(hlabels(3),'color','m')
set(hlabels(2),'color','c')
set(hlabels(1),'color','y')
%-- Modify the tick labels
set(htick(:,1),'color','y','linewidth',3)
set(htick(:,2),'color','c','linewidth',3)
set(htick(:,3),'color','m','linewidth',3)
%-- Change the colorbar
set(hcb,'xcolor','w','ycolor','w')
%-- Modify the figure color
set(gcf,'color',[0 0 0.3])
%-- Change some defaults
set(gcf,'paperpositionmode','auto','inverthardcopy','off')
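If all you need is color shading that varies from one edge (or corner) of the triangle to another, a minimal sketch that does not rely on the tersurf/terlabel helpers is to draw the triangle as a single patch with interpolated vertex colors; the corner coordinates and the three color values below are placeholder assumptions, not values from your data:
figure
verts = [0 0; 1 0; 0.5 sqrt(3)/2]; % triangle corners (e.g. Gas, Water, Oil)
cdata = [0; 0.5; 1];               % one color value per corner (placeholder values)
patch('Faces', [1 2 3], 'Vertices', verts, ...
      'FaceVertexCData', cdata, 'FaceColor', 'interp', 'EdgeColor', 'k')
colormap(jet); colorbar
axis equal off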
I have a straight line with the following data
xinter=[1.13 1.36 1.62 1.81 2.00 2.30 2.61 2.83 3.05 3.39]
yinter=[0.10 0.25 0.40 0.50 0.60 0.75 0.90 1.00 1.10 1.25]
and I want to find its intersection with a curve interpolated from the data below:
a50= [0.77 0.73 0.77 0.85 0.91 0.97 1.05 1.23 1.43 1.53 1.62 1.71 1.89 2.12 2.42];
a25= [0.51 0.60 0.70 0.80 0.85 0.90 0.96 1.09 1.23 1.30 1.36 1.41 1.53 1.67];
vel25=[0.43 0.35 0.30 0.27 0.25 0.24 0.22 0.21 0.22 0.24 0.25 0.27 0.30 0.35];
vel50=[0.68 0.57 0.49 0.43 0.40 0.38 0.36 0.34 0.36 0.38 0.40 0.43 0.49 0.57 0.68 ];
% back up original data, just for final plot
bkp_a50 = a50 ; bkp_vel50 = vel50 ;
% make second x vector monotonic
istart = find( diff(a50)>0 , 1 , 'first') ;
a50(1:istart-1) = [] ;
vel50(1:istart-1) = [] ;
% prepare a 3rd dimension vector (T = 25 for the first curve, 40 for the second)
T = [repmat(25,size(a25)) ; repmat(40,size(a50)) ] ;
% merge all observations together
A = [ a25 ; a50] ;
V = [vel25 ; vel50] ;
% find the minimum domain on which data can be interpolated
% (anything outside of that will return NaN)
Astart = max( [min(a25) min(a50)] ) ;
Astop = min( [max(a25) max(a50)] ) ;
% use the function 'griddata'
[TI,AI] = meshgrid( 25:40 , linspace(Astart,Astop,10) ) ;
VI = griddata(T,A,V,TI,AI) ;
% plot all the intermediate curves
%plot(AI,VI)
hold on
% the original curves
%plot(a25,vel25,'--k','linewidth',2)
%plot(bkp_a50,bkp_vel50,'--k','linewidth',2)
% Highlight the curve at T = 40
c30 = find( TI(1,:) == 40 ) ;
plot(AI(:,c30),VI(:,c30),'--r','linewidth',2)
xinter=[1.13 1.36 1.62 1.81 2.00 2.30 2.61 2.83 3.05 3.39]
yinter=[0.10 0.25 0.40 0.50 0.60 0.75 0.90 1.00 1.10 1.25]
x1inter=(AI(:,c30))';
y1inter=(VI(:,c30))';
yy2 = interp1(xinter, yinter, x1inter,'spline')
plot(xinter,yinter, '--k','linewidth',2)
idx = find((y1inter - yy2) < eps, 1); %// Index of coordinate in array
px = x1inter(idx)
py = y1inter(idx)
plot(px, py, 'ro', 'MarkerSize', 18)
But the result is wrong when I modify x1inter.
You can use piecewise polynomial curve fitting and the fzero function to find the intersection point:
pp1 = pchip(xinter,yinter); % Curve 1
pp2 = pchip(AI(:,c30),VI(:,c30)); % Curve 2
fun = @(x) ppval(pp1,x) - ppval(pp2,x); % Difference between the two curves
xzero = fzero(fun,mean(xinter)) % intersection x value
yzero = ppval(pp1,xzero)
plot(xzero, yzero, 'bo', 'MarkerSize', 18)
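If fzero does not converge from the single starting guess mean(xinter), you can instead pass it a bracketing interval over which fun changes sign, e.g. fzero(fun, [min(xinter) max(xinter)]), provided the two curves actually cross inside that range.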
import weka.core.Instances.*
filename = 'C:\Users\Girish\Documents\MATLAB\DRESDEN_NSC.csv';
loader = weka.core.converters.CSVLoader();
loader.setFile(java.io.File(filename));
data = loader.getDataSet();
data.setClassIndex(data.numAttributes()-1);
%% classification
classifier = weka.classifiers.trees.J48();
classifier.setOptions( weka.core.Utils.splitOptions('-C 0.25 -M 2') );
classifier.buildClassifier(data);
classifier.toString()
ev = weka.classifiers.Evaluation(data);
v(1) = java.lang.String('-t');
v(2) = java.lang.String(filename);
v(3) = java.lang.String('-split-percentage');
v(4) = java.lang.String('66');
prm = cat(1,v(1:4));
ev.evaluateModel(classifier, prm)
Result:
Time taken to build model: 0.04 seconds
Time taken to test model on training split: 0.01 seconds
=== Error on training split ===
Correctly Classified Instances 767 99.2238 %
Incorrectly Classified Instances 6 0.7762 %
Kappa statistic 0.9882
Mean absolute error 0.0087
Root mean squared error 0.0658
Relative absolute error 1.9717 %
Root relative squared error 14.042 %
Total Number of Instances 773
=== Detailed Accuracy By Class ===
TP Rate FP Rate Precision Recall F-Measure MCC ROC Area PRC Area Class
0.994 0.009 0.987 0.994 0.990 0.984 0.999 0.999 Nikon
1.000 0.000 1.000 1.000 1.000 1.000 1.000 1.000 Sony
0.981 0.004 0.990 0.981 0.985 0.980 0.999 0.997 Canon
Weighted Avg. 0.992 0.004 0.992 0.992 0.992 0.988 1.000 0.999
=== Confusion Matrix ===
a b c <-- classified as
306 0 2 | a = Nikon
0 258 0 | b = Sony
4 0 203 | c = Canon
=== Error on test split ===
Correctly Classified Instances 358 89.9497 %
Incorrectly Classified Instances 40 10.0503 %
Kappa statistic 0.8482
Mean absolute error 0.0656
Root mean squared error 0.2464
Relative absolute error 14.8485 %
Root relative squared error 52.2626 %
Total Number of Instances 398
=== Detailed Accuracy By Class ===
TP Rate FP Rate Precision Recall F-Measure MCC ROC Area PRC Area Class
0.885 0.089 0.842 0.885 0.863 0.787 0.908 0.832 Nikon
0.993 0.000 1.000 0.993 0.997 0.995 0.997 0.996 Sony
0.796 0.060 0.841 0.796 0.818 0.749 0.897 0.744 Canon
Weighted Avg. 0.899 0.048 0.900 0.899 0.899 0.853 0.938 0.867
=== Confusion Matrix ===
a b c <-- classified as
123 0 16 | a = Nikon
0 145 1 | b = Sony
23 0 90 | c = Canon
import weka.core.Instances.*
filename = 'C:\Users\Girish\Documents\MATLAB\DRESDEN_NSC.csv';
loader = weka.core.converters.CSVLoader();
loader.setFile(java.io.File(filename));
data = loader.getDataSet();
data.setClassIndex(data.numAttributes()-1);
%% classification
classifier = weka.classifiers.trees.J48();
classifier.setOptions( weka.core.Utils.splitOptions('-C 0.1 -M 1') );
classifier.buildClassifier(data);
classifier.toString()
ev = weka.classifiers.Evaluation(data);
v(1) = java.lang.String('-t');
v(2) = java.lang.String(filename);
v(3) = java.lang.String('-split-percentage');
v(4) = java.lang.String('66');
prm = cat(1,v(1:4));
ev.evaluateModel(classifier, prm)
Result:
Time taken to build model: 0.04 seconds
Time taken to test model on training split: 0 seconds
=== Error on training split ===
Correctly Classified Instances 767 99.2238 %
Incorrectly Classified Instances 6 0.7762 %
Kappa statistic 0.9882
Mean absolute error 0.0087
Root mean squared error 0.0658
Relative absolute error 1.9717 %
Root relative squared error 14.042 %
Total Number of Instances 773
=== Detailed Accuracy By Class ===
TP Rate FP Rate Precision Recall F-Measure MCC ROC Area PRC Area Class
0.994 0.009 0.987 0.994 0.990 0.984 0.999 0.999 Nikon
1.000 0.000 1.000 1.000 1.000 1.000 1.000 1.000 Sony
0.981 0.004 0.990 0.981 0.985 0.980 0.999 0.997 Canon
Weighted Avg. 0.992 0.004 0.992 0.992 0.992 0.988 1.000 0.999
=== Confusion Matrix ===
a b c <-- classified as
306 0 2 | a = Nikon
0 258 0 | b = Sony
4 0 203 | c = Canon
=== Error on test split ===
Correctly Classified Instances 358 89.9497 %
Incorrectly Classified Instances 40 10.0503 %
Kappa statistic 0.8482
Mean absolute error 0.0656
Root mean squared error 0.2464
Relative absolute error 14.8485 %
Root relative squared error 52.2626 %
Total Number of Instances 398
=== Detailed Accuracy By Class ===
TP Rate FP Rate Precision Recall F-Measure MCC ROC Area PRC Area Class
0.885 0.089 0.842 0.885 0.863 0.787 0.908 0.832 Nikon
0.993 0.000 1.000 0.993 0.997 0.995 0.997 0.996 Sony
0.796 0.060 0.841 0.796 0.818 0.749 0.897 0.744 Canon
Weighted Avg. 0.899 0.048 0.900 0.899 0.899 0.853 0.938 0.867
=== Confusion Matrix ===
a b c <-- classified as
123 0 16 | a = Nikon
0 145 1 | b = Sony
23 0 90 | c = Canon
I get the same result with both option strings, and it is also the result for the default options (-C 0.25 -M 2) of the J48 classifier.
Please help! I have been stuck on this for a long time and have tried different approaches, but nothing has worked for me.
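As a first diagnostic (a sketch, assuming the Weka classes are on the MATLAB Java class path exactly as in your snippets), it may help to print the options the classifier object actually holds before and after training, to confirm that setOptions took effect and was not silently reset:
classifier = weka.classifiers.trees.J48();
classifier.setOptions( weka.core.Utils.splitOptions('-C 0.1 -M 1') );
disp(cell(classifier.getOptions())) % options as set
classifier.buildClassifier(data);
disp(cell(classifier.getOptions())) % options after training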
Using MATLAB, I have a matrix (data) and am plotting using imagesc(data) to produce a heatmap:
data = [1 1 1 1 1 1 1 1 1 1; 1 1.04 1.04 1.04 1.03 1 1.01 1.01 1.03 1.01; 1.36 1.3 1.25 1.2 1.15 1.1 1.2 1.13 1.07 1.11; 3.65 3.16 2.94 2.68 2.39 2.22 2.17 1.95 1.79 1.81; 5.91 5.75 5.47 5.3 4.98 4.79 4.62 4.55 4.38 4.19; 6 6 5.99 5.83 5.49 5.33 5.14 4.94 4.77 4.74];
imagesc(data)
Is there a way to 'smooth' the pixels in order to produce something more like this:
interp2 may be of use here. Use the data as key points, then create a finer grid of points that span the same width and height and interpolate in between the key points.
Something like this:
%// Define your data
data = [1 1 1 1 1 1 1 1 1 1; 1 1.04 1.04 1.04 1.03 1 1.01 1.01 1.03 1.01; 1.36 1.3 1.25 1.2 1.15 1.1 1.2 1.13 1.07 1.11; 3.65 3.16 2.94 2.68 2.39 2.22 2.17 1.95 1.79 1.81; 5.91 5.75 5.47 5.3 4.98 4.79 4.62 4.55 4.38 4.19; 6 6 5.99 5.83 5.49 5.33 5.14 4.94 4.77 4.74];
%// Define integer grid of coordinates for the above data
[X,Y] = meshgrid(1:size(data,2), 1:size(data,1));
%// Define a finer grid of points
[X2,Y2] = meshgrid(1:0.01:size(data,2), 1:0.01:size(data,1));
%// Interpolate the data and show the output
outData = interp2(X, Y, data, X2, Y2, 'linear');
imagesc(outData);
%// Cosmetic changes for the axes
set(gca, 'XTick', linspace(1,size(X2,2),size(X,2)));
set(gca, 'YTick', linspace(1,size(X2,1),size(X,1)));
set(gca, 'XTickLabel', 1:size(X,2));
set(gca, 'YTickLabel', 1:size(X,1));
%// Add colour bar
colorbar;
The axis-formatting code at the end is required because defining the finer grid increases the pixel dimensions of the image, so the axes need to be relabelled to map back to the original coordinates.
We get this:
Small Note
I'm using MATLAB R2014a, where the default colour map is jet. If you're using R2014b or later, the default colour map is parula, so you won't get the same colour distribution as mine, but you will get the smoothness you desire.
You can also use the pcolor(data) function with shading interp to make the color resolution smooth. No need to interpolate the data before plotting.
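For example, a minimal sketch of that approach (note that pcolor does not render the last row and column of the matrix, so the displayed area is one cell smaller than with imagesc):
figure
pcolor(data)   % data is the same 6-by-10 matrix defined above
shading interp % interpolate colors between grid points
colorbar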
Consider two curves, for example:
x = [1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20];
y1 = [0 0 -0.3 -0.8 -1.1 -1 -0.5 1 1.1 1 -0.3 -0.8 -1.1 -1 -0.5 0.1 0.05 0 0 0];
y2 = [0 -0.2 -0.3 -0.8 -2 1 2.8 2.4 1.5 1.1 2.3 -0.4 -0.2 1 1.1 1.2 1.3 0.5 -0.1 0];
I'd like to write a generalized algorithm that takes in x, y1, and y2, and scales y1 by a global scale factor, f, such that the new value of y2-y1 is as close as possible to 0. That is, y2-f*y1 is as close to 0 as possible.
How can I do this?
Try this:
% Create a function that you want to minimize
func = @(f, y1, y2) abs(sum(y2 - f*y1));
% Your example data
x = [1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20];
y1 = [0 0 -0.3 -0.8 -1.1 -1 -0.5 1 1.1 1 -0.3 -0.8 -1.1 -1 -0.5 0.1 0.05 0 0 0];
y2 = [0 -0.2 -0.3 -0.8 -2 1 2.8 2.4 1.5 1.1 2.3 -0.4 -0.2 1 1.1 1.2 1.3 0.5 -0.1 0];
% Plot the before
figure()
plot(x, y2); hold all;
plot(x, y1)
% Find the optimum scale factor
f_start = 0; % May want a different starting point
f = fminsearch(@(f) func(f, y1, y2), f_start);
disp(['Scale factor = ' num2str(f)]) % print to the output
% Plot the after (scaled data)
figure()
plot(x, y2); hold all;
plot(x, f*y1)
For more information see the docs on anonymous functions and fminsearch (see example #2).
EDIT
Here is the output of the above script:
Scale factor = -2.9398
Before
After
As you can see, the difference between the functions is minimized (the area where y1 is greater than y2 is about the same as the area where y1 is less than y2). If you want the lines to match up as closely as possible, then you need to modify the minimization function like so:
func = @(f, y1, y2) sum(abs(y2 - f*y1));
I had to modify the test data for this case, as the original data appears to be lined up optimally already.
y1 = [0 0 -0.3 -0.8 -1.1 -1 -0.5 1 1.1 1 -0.3 -0.8 -1.1 -1 -0.5 0.1 0.05 0 0 0];
y2 = -2*y1 +1;
which gives the following output:
Scale factor = -2.9091
Before
After
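As an aside, if "as close as possible to 0" is measured in the least-squares sense, i.e. minimizing sum((y2 - f*y1).^2), the optimal scale factor has a closed form and no iterative search is needed:
f_ls = dot(y1, y2) / dot(y1, y1); % least-squares scale factor (requires y1 not all zeros)
disp(['Least-squares scale factor = ' num2str(f_ls)])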