Three-phase diagram plot in MATLAB?
I am trying to generate a three-phase saturation plot in MATLAB.
I found the code below and it generates something similar, but not exactly the same.
As simple as it looks, I don't know how to adjust the values to match the plot in the figure, since I don't have specific values.
Even a way to generate color shading that changes from one edge to another would be fine.
Any suggestions, please?
% Main file for ternary plot
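% Note: tersurf and terlabel are not built-in MATLAB functions; they come from a
% ternary-plot package (e.g. the "Ternary Plots" submission on the File Exchange)
% that must be on the MATLAB path for this script to run.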
close all; clear all; clc;
A = [...
1.000 0.000 0.000
0.000 1.000 0.000
0.000 0.000 1.000
0.330 0.330 0.340
0.340 0.000 0.660
0.000 0.340 0.660
0.000 0.160 0.840
0.160 0.000 0.840
0.000 0.153 0.847
];
l=length(A);
% A(l+1,:)=[1 0 0 6];
% A(l+2,:)=[0 1 0 30];
% A(l+3,:)=[0 0 1 1];
% ... and the GPR velocity
% v=0.29./sqrt(A(:,4));
data = [...
0.0
0.0
0.0
0.419
0.273
0.090
0.014
0.010
0.00
];
v = data;
figure;
% Plot the data
% First set the colormap (can't be done afterwards)
colormap(jet)
[hg,htick,hcb]=tersurf(A(:,1),A(:,2),A(:,3),v);
% Add the labels
hlabels=terlabel('Gas','Water','Oil');
set(hg(:,3),'color','m')
set(hg(:,2),'color','c')
set(hg(:,1),'color','y')
%-- Modify the labels
set(hlabels,'fontsize',12)
set(hlabels(3),'color','m')
set(hlabels(2),'color','c')
set(hlabels(1),'color','y')
%-- Modify the tick labels
set(htick(:,1),'color','y','linewidth',3)
set(htick(:,2),'color','c','linewidth',3)
set(htick(:,3),'color','m','linewidth',3)
%-- Change the colorbar
set(hcb,'xcolor','w','ycolor','w')
%-- Modify the figure color
set(gcf,'color',[0 0 0.3])
%-- Change some defaults
set(gcf,'paperpositionmode','auto','inverthardcopy','off')
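If the main goal is the smooth shading that changes from one corner/edge of the triangle to another, that part does not depend on tersurf or on the data values at all: the triangle can be drawn as a single patch with interpolated vertex colors. The sketch below is only an illustration; the corner colors and label positions are arbitrary assumptions, not values taken from the original figure.

% Minimal sketch: ternary triangle with colors interpolated between the corners
% (pure MATLAB, no extra toolbox required). Corner colors are illustrative only.
corners      = [0 0; 1 0; 0.5 sqrt(3)/2];   % triangle vertices in 2-D
cornerColors = [1 1 0; 0 1 1; 1 0 1];       % one RGB color per corner
figure('Color','w');
patch('Vertices',corners, 'Faces',[1 2 3], ...
      'FaceVertexCData',cornerColors, ...
      'FaceColor','interp', ...             % interpolate the colors across the face
      'EdgeColor','k');
axis equal off
text(0,   -0.05, 'Gas');                    % approximate corner labels
text(1,   -0.05, 'Water');
text(0.5, sqrt(3)/2 + 0.05, 'Oil');

FaceVertexCData can also hold one scalar value per vertex instead of RGB triplets; with 'FaceColor','interp' those values are then mapped through the current colormap, which is closer to what tersurf does with the data vector v.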
Related
Extended Kalman filter for TDoA 3D positioning
I am trying to use the Extended Kalman filter to locate my mobile device in space using the TDoA method. The problem I have is that my code doesn't converge to the ground-truth position. Sometimes the matrices P, Xh, and X have complex values; is that OK? Do I have to modify something or add stop conditions? Part of my code is below:
%% initialisation
% Covariance matrix of process noise
Q = [0.01 0 0; 0 0.01 0; 0 0 0.01];
% Covariance matrix of measurement noise
R = [0.001 0.0005 0.0005; 0.0005 0.001 0.0005; 0.0005 0.0005 0.001];
% System dynamics
A = [1 0 0; 0 1 0; 0 0 1];
% Assumed initial conditions
Xh(:,1) = [0 0 0]';
B1 = [2 2 3; 1.5 2 3; 2.5 2 3; 2 1.5 3; 2 2.5 3];   % positions of the 5 beacons installed in the ceiling
% Initial value of the covariance of the estimation error
P(:,:,1) = [0 0 0; 0 0 0; 0 0 0];
Xb = [2 2 2];
X(:,1) = Xb;
Z(:,1) = Dd20(:,1);   % Dd20 is a 3-by-n matrix of TDoA measurements
for n = 1:2000
    % PROCESS AND OBSERVATION PROCESS WITH GAUSSIAN NOISE
    X(:,n+1) = A*X(:,n) + [sqrt(((Q(1,1))*randn(1))); sqrt((Q(2,2))*randn(1)); sqrt((Q(3,3))*randn(1))];   % state process with generated process noise
    Z(:,n+1) = Z(:,n) + [sqrt(R(1,1))*randn(1); sqrt(R(1,1))*randn(1); sqrt(R(1,1))*randn(1)];   % measurement matrix
    hsn(:,n+1) = [(sqrt(((X(1,n+1)-B1(5,1))^2+(X(2,n+1)-B1(5,2))^2+(X(3,n+1)-B1(5,3))^2))-(sqrt(((X(1,n+1)-B1(1,1))^2+(X(2,n+1)-B1(2,1))^2+(X(3,n+1)-B1(1,3))^2))));
        (sqrt(((X(1,n+1)-B1(2,1))^2+(X(2,n+1)-B1(2,2))^2+(X(3,n+1)-B1(2,3))^2))-(sqrt(((X(1,n+1)-B1(1,1))^2+(X(2,n+1)-B1(2,1))^2+(X(3,n+1)-B1(1,3))^2))));
        (sqrt(((X(1,n+1)-B1(4,1))^2+(X(2,n+1)-B1(4,2))^2+(X(3,n+1)-B1(4,3))^2))-(sqrt(((X(1,n+1)-B1(1,1))^2+(X(2,n+1)-B1(2,1))^2+(X(3,n+1)-B1(1,3))^2))))] + [sqrt(((R(1,1))*randn(1))); sqrt(((R(1,1))*randn(1))); sqrt(((R(1,1))*randn(1)))];
    % Prediction of the next state
    Xh(:,n+1) = A*Xh(:,n);            % a priori estimate
    P(:,:,n+1) = A*P(:,:,n)*A' + Q;   % a priori error covariance
    % CORRECTION EQUATIONS
    % Jacobian matrix
    H(:,:,n+1) = [(Xh(1,n+1)-B1(5,1))/((sqrt(((Xh(1,n+1)-B1(5,1)^2)+(Xh(2,n+1)-B1(5,2)^2)+(Xh(3,n+1)-B1(5,3)^2))))-(Xh(1,n+1)-B1(5,1))/(sqrt(((Xh(1,n+1)-B1(1,1)^2)+(Xh(2,n+1)-B1(1,2)^2)+(Xh(3,n+1)-B1(1,3)^2))))), (Xh(2,n+1)-B1(5,2))/((sqrt(((Xh(1,n+1)-B1(5,1)^2)+(Xh(2,n+1)-B1(5,2)^2)+(Xh(3,n+1)-B1(5,3)^2))))-(Xh(2,n+1)-B1(5,2))/(sqrt(((Xh(1,n+1)-B1(1,1)^2)+(Xh(2,n+1)-B1(1,2)^2)+(Xh(3,n+1)-B1(1,3)^2))))), (Xh(3,n+1)-B1(5,3))/((sqrt(((Xh(1,n+1)-B1(5,1)^2)+(Xh(2,n+1)-B1(5,2)^2)+(Xh(3,n+1)-B1(5,3)^2))))-(Xh(3,n+1)-B1(5,3))/(sqrt(((Xh(1,n+1)-B1(1,1)^2)+(Xh(2,n+1)-B1(1,2)^2)+(Xh(3,n+1)-B1(1,3)^2)))));
        (Xh(1,n+1)-B1(2,1))/(sqrt(((Xh(1,n+1)-B1(2,1)^2)+(Xh(2,n+1)-B1(2,2)^2)+(Xh(3,n+1)-B1(2,3)^2))))-(Xh(1,n+1)-B1(1,1))/((sqrt(((Xh(1,n+1)-B1(1,1)^2)+(Xh(2,n+1)-B1(1,2)^2)+(Xh(3,n+1)-B1(1,3)^2))))), (Xh(2,n+1)-B1(2,2))/(sqrt(((Xh(1,n+1)-B1(2,1)^2)+(Xh(2,n+1)-B1(2,2)^2)+(Xh(3,n+1)-B1(2,3)^2))))-(Xh(2,n+1)-B1(1,2))/((sqrt(((Xh(1,n+1)-B1(1,1)^2)+(Xh(2,n+1)-B1(1,2)^2)+(Xh(3,n+1)-B1(1,3)^2))))), (Xh(3,n+1)-B1(2,3))/(sqrt(((Xh(1,n+1)-B1(2,1)^2)+(Xh(2,n+1)-B1(2,2)^2)+(Xh(3,n+1)-B1(2,3)^2))))-(Xh(3,n+1)-B1(1,3))/((sqrt(((Xh(1,n+1)-B1(1,1)^2)+(Xh(2,n+1)-B1(1,2)^2)+(Xh(3,n+1)-B1(1,3)^2)))));
        (Xh(1,n+1)-B1(4,1))/(sqrt(((Xh(1,n+1)-B1(4,1)^2)+(Xh(2,n+1)-B1(4,2)^2)+(Xh(3,n+1)-B1(4,3)^2))))-(Xh(1,n+1)-B1(1,1))/((sqrt(((Xh(1,n+1)-B1(1,1)^2)+(Xh(2,n+1)-B1(1,2)^2)+(Xh(3,n+1)-B1(1,3)^2))))), (Xh(2,n+1)-B1(4,2))/(sqrt(((Xh(1,n+1)-B1(4,1)^2)+(Xh(2,n+1)-B1(4,2)^2)+(Xh(3,n+1)-B1(4,3)^2))))-(Xh(2,n+1)-B1(1,2))/((sqrt(((Xh(1,n+1)-B1(1,1)^2)+(Xh(2,n+1)-B1(1,2)^2)+(Xh(3,n+1)-B1(1,3)^2))))), (Xh(3,n+1)-B1(4,3))/(sqrt(((Xh(1,n+1)-B1(4,1)^2)+(Xh(2,n+1)-B1(4,2)^2)+(Xh(3,n+1)-B1(4,3)^2))))-(Xh(3,n+1)-B1(1,3))/((sqrt(((Xh(1,n+1)-B1(1,1)^2)+(Xh(2,n+1)-B1(1,2)^2)+(Xh(3,n+1)-B1(1,3)^2))))); ];
    % THIS SUBROUTINE COMPUTES THE KALMAN GAIN
    K(:,:,n+1) = P(:,:,n+1)*H(:,:,n+1)'*(R+H(:,:,n+1)*P(:,:,n+1)*H(:,:,n+1)')^(-1);
    Inov = Z(:,n+1)-hsn(:,n);                  % INNOVATION
    Xh(:,n+1) = Xh(:,n+1) + K(:,:,n+1)*Inov;   % computes the final estimate
    P(:,:,n+1) = (eye(3)-K(:,:,n+1)*H(:,:,n+1))*P(:,:,n+1);   % computes the covariance of the estimation error
end
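A note on the complex values mentioned above: they are not expected in a correctly working EKF, and in this code they most likely come from expressions of the form sqrt(Q(1,1)*randn(1)), which is complex whenever randn returns a negative sample. Zero-mean Gaussian noise with variance Q(i,i) is usually generated by scaling randn with the standard deviation instead. A minimal sketch (the reuse in the state update is shown as a comment only):

% Sketch: per-axis process noise with variance Q(i,i).
% sqrt(Q(1,1)*randn(1)) can be complex; sqrt(Q(1,1))*randn(1) stays real.
Q = diag([0.01 0.01 0.01]);        % process noise covariance (diagonal here)
w = sqrt(diag(Q)) .* randn(3,1);   % zero-mean Gaussian noise, std = sqrt(Q(i,i))
% X(:,n+1) = A*X(:,n) + w;         % state propagation with real-valued noise

The same applies to the measurement noise terms built from sqrt(R(1,1)*randn(1)) in the hsn line.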
Bar plot with standard deviation
I am plotting a bar plot with standard deviation in MATLAB. The data are the following:
y = [0.776 0.707 1.269; 0.749 0.755 1.168; 0.813 0.734 1.270; 0.845 0.844 1.286];
std_dev = [0.01 0.055 0.052; 0.067 0.119 0.106; 0.036 0.077 0.060; 0.029 0.055 0.051];
I am using the following code:
figure
hold on
bar(y)
errorbar(y,std_dev,'.')
But I am not getting the standard deviation bars in the correct positions.
If all the bars have the same color:
x = 1:15;
y = [0.776 0.707 1.269 0 0.749 0.755 1.168 0 0.813 0.734 1.270 0 0.845 0.844 1.286];
std_dev = [0.01 0.055 0.052 0 0.067 0.119 0.106 0 0.036 0.077 0.060 0 0.029 0.055 0.051];
figure
hold on
bar(x,y)
errorbar(y,std_dev,'.')
XTickLabel = {'1'; '2'; '3'; '4'};
XTick = 2:4:15;
set(gca, 'XTick', XTick);
set(gca, 'XTickLabel', XTickLabel);
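If you do want to keep the default grouped colors instead, newer MATLAB releases (R2019b or later) expose the XEndPoints property on the objects returned by bar, which gives the x-position of every individual bar. A sketch, assuming the y and std_dev matrices from the question:

y = [0.776 0.707 1.269; 0.749 0.755 1.168; 0.813 0.734 1.270; 0.845 0.844 1.286];
std_dev = [0.01 0.055 0.052; 0.067 0.119 0.106; 0.036 0.077 0.060; 0.029 0.055 0.051];
figure
hold on
hb = bar(y);                          % grouped bars, one Bar object per column
for k = 1:numel(hb)                   % one errorbar call per series
    errorbar(hb(k).XEndPoints, y(:,k).', std_dev(:,k).', 'k.');   % XEndPoints needs R2019b+
end
hold off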
classifier.setOptions( weka.core.Utils.splitOptions(...) ) takes only the default values even when other values are provided in MATLAB
import weka.core.Instances.*
filename = 'C:\Users\Girish\Documents\MATLAB\DRESDEN_NSC.csv';
loader = weka.core.converters.CSVLoader();
loader.setFile(java.io.File(filename));
data = loader.getDataSet();
data.setClassIndex(data.numAttributes()-1);

%% classification
classifier = weka.classifiers.trees.J48();
classifier.setOptions( weka.core.Utils.splitOptions('-C 0.25 -M 2') );
classifier.buildClassifier(data);
classifier.toString()

ev = weka.classifiers.Evaluation(data);
v(1) = java.lang.String('-t');
v(2) = java.lang.String(filename);
v(3) = java.lang.String('-split-percentage');
v(4) = java.lang.String('66');
prm = cat(1,v(1:4));
ev.evaluateModel(classifier, prm)
Result:
Time taken to build model: 0.04 seconds
Time taken to test model on training split: 0.01 seconds

=== Error on training split ===

Correctly Classified Instances         767               99.2238 %
Incorrectly Classified Instances         6                0.7762 %
Kappa statistic                          0.9882
Mean absolute error                      0.0087
Root mean squared error                  0.0658
Relative absolute error                  1.9717 %
Root relative squared error             14.042  %
Total Number of Instances              773

=== Detailed Accuracy By Class ===

               TP Rate  FP Rate  Precision  Recall  F-Measure  MCC    ROC Area  PRC Area  Class
               0.994    0.009    0.987      0.994   0.990      0.984  0.999     0.999     Nikon
               1.000    0.000    1.000      1.000   1.000      1.000  1.000     1.000     Sony
               0.981    0.004    0.990      0.981   0.985      0.980  0.999     0.997     Canon
Weighted Avg.  0.992    0.004    0.992      0.992   0.992      0.988  1.000     0.999

=== Confusion Matrix ===

   a   b   c   <-- classified as
 306   0   2 |   a = Nikon
   0 258   0 |   b = Sony
   4   0 203 |   c = Canon

=== Error on test split ===

Correctly Classified Instances         358               89.9497 %
Incorrectly Classified Instances        40               10.0503 %
Kappa statistic                          0.8482
Mean absolute error                      0.0656
Root mean squared error                  0.2464
Relative absolute error                 14.8485 %
Root relative squared error             52.2626 %
Total Number of Instances              398

=== Detailed Accuracy By Class ===

               TP Rate  FP Rate  Precision  Recall  F-Measure  MCC    ROC Area  PRC Area  Class
               0.885    0.089    0.842      0.885   0.863      0.787  0.908     0.832     Nikon
               0.993    0.000    1.000      0.993   0.997      0.995  0.997     0.996     Sony
               0.796    0.060    0.841      0.796   0.818      0.749  0.897     0.744     Canon
Weighted Avg.  0.899    0.048    0.900      0.899   0.899      0.853  0.938     0.867

=== Confusion Matrix ===

   a   b   c   <-- classified as
 123   0  16 |   a = Nikon
   0 145   1 |   b = Sony
  23   0  90 |   c = Canon
And with the other options:
import weka.core.Instances.*
filename = 'C:\Users\Girish\Documents\MATLAB\DRESDEN_NSC.csv';
loader = weka.core.converters.CSVLoader();
loader.setFile(java.io.File(filename));
data = loader.getDataSet();
data.setClassIndex(data.numAttributes()-1);

%% classification
classifier = weka.classifiers.trees.J48();
classifier.setOptions( weka.core.Utils.splitOptions('-C 0.1 -M 1') );
classifier.buildClassifier(data);
classifier.toString()

ev = weka.classifiers.Evaluation(data);
v(1) = java.lang.String('-t');
v(2) = java.lang.String(filename);
v(3) = java.lang.String('-split-percentage');
v(4) = java.lang.String('66');
prm = cat(1,v(1:4));
ev.evaluateModel(classifier, prm)
Result:
Time taken to build model: 0.04 seconds
Time taken to test model on training split: 0 seconds

=== Error on training split ===

Correctly Classified Instances         767               99.2238 %
Incorrectly Classified Instances         6                0.7762 %
Kappa statistic                          0.9882
Mean absolute error                      0.0087
Root mean squared error                  0.0658
Relative absolute error                  1.9717 %
Root relative squared error             14.042  %
Total Number of Instances              773

=== Detailed Accuracy By Class ===

               TP Rate  FP Rate  Precision  Recall  F-Measure  MCC    ROC Area  PRC Area  Class
               0.994    0.009    0.987      0.994   0.990      0.984  0.999     0.999     Nikon
               1.000    0.000    1.000      1.000   1.000      1.000  1.000     1.000     Sony
               0.981    0.004    0.990      0.981   0.985      0.980  0.999     0.997     Canon
Weighted Avg.  0.992    0.004    0.992      0.992   0.992      0.988  1.000     0.999

=== Confusion Matrix ===

   a   b   c   <-- classified as
 306   0   2 |   a = Nikon
   0 258   0 |   b = Sony
   4   0 203 |   c = Canon

=== Error on test split ===

Correctly Classified Instances         358               89.9497 %
Incorrectly Classified Instances        40               10.0503 %
Kappa statistic                          0.8482
Mean absolute error                      0.0656
Root mean squared error                  0.2464
Relative absolute error                 14.8485 %
Root relative squared error             52.2626 %
Total Number of Instances              398

=== Detailed Accuracy By Class ===

               TP Rate  FP Rate  Precision  Recall  F-Measure  MCC    ROC Area  PRC Area  Class
               0.885    0.089    0.842      0.885   0.863      0.787  0.908     0.832     Nikon
               0.993    0.000    1.000      0.993   0.997      0.995  0.997     0.996     Sony
               0.796    0.060    0.841      0.796   0.818      0.749  0.897     0.744     Canon
Weighted Avg.  0.899    0.048    0.900      0.899   0.899      0.853  0.938     0.867

=== Confusion Matrix ===

   a   b   c   <-- classified as
 123   0  16 |   a = Nikon
   0 145   1 |   b = Sony
  23   0  90 |   c = Canon
I get the same result with both option settings, and it is the result for the default options (i.e. -C 0.25 -M 2) of the J48 classifier. Please help! I have been stuck here for a long time and have tried different approaches, but nothing has worked for me.
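One thing worth checking before digging deeper is whether the options were actually applied to the classifier object at all; J48 implements Weka's OptionHandler interface, so the options can be read back after setOptions. This is only a diagnostic sketch, not a fix:

classifier = weka.classifiers.trees.J48();
classifier.setOptions( weka.core.Utils.splitOptions('-C 0.1 -M 1') );
% Read the options back; if this prints '-C 0.1 -M 1', setOptions worked
% and the problem lies in how the evaluation is run, not in the classifier.
disp(char(weka.core.Utils.joinOptions(classifier.getOptions())))
disp(classifier.getConfidenceFactor())   % should show 0.1 rather than the default 0.25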
Find the exact number of true positives in Weka
In WEKA, I can easily find the TP rate and the total number of correctly classified instances from the confusion matrix, but is there any way to see the exact number of TP and/or TN? And do you know any way to find these values in MATLAB-ANFIS?
Since you are mentioning MATLAB, I'm assuming you are using the Java API to the Weka library to programmatically build classifiers. In that case, you can evaluate the model using the weka.classifiers.Evaluation class, which provides all sorts of statistics. Assuming you already have the weka.jar file on the Java class path (see the javaaddpath function), here is an example in MATLAB:
%# data
fName = 'C:\Program Files\Weka-3-7\data\iris.arff';
loader = weka.core.converters.ArffLoader();
loader.setFile( java.io.File(fName) );
data = loader.getDataSet();
data.setClassIndex( data.numAttributes()-1 );

%# classifier
classifier = weka.classifiers.trees.J48();
classifier.setOptions( weka.core.Utils.splitOptions('-C 0.25 -M 2') );
classifier.buildClassifier( data );

%# evaluation
evl = weka.classifiers.Evaluation(data);
pred = evl.evaluateModel(classifier, data, {''});

%# display
disp(classifier.toString())
disp(evl.toSummaryString())
disp(evl.toClassDetailsString())
disp(evl.toMatrixString())

%# confusion matrix and other stats
cm = evl.confusionMatrix();

%# number of TP/TN/FP/FN with respect to class=1 (Iris-versicolor)
tp = evl.numTruePositives(1);
tn = evl.numTrueNegatives(1);
fp = evl.numFalsePositives(1);
fn = evl.numFalseNegatives(1);

%# class=XX is a zero-based index which maps to the following class values
classValues = arrayfun(@(k)char(data.classAttribute.value(k-1)), ...
    1:data.classAttribute.numValues, 'Uniform',false);
The output:
J48 pruned tree
------------------

petalwidth <= 0.6: Iris-setosa (50.0)
petalwidth > 0.6
|   petalwidth <= 1.7
|   |   petallength <= 4.9: Iris-versicolor (48.0/1.0)
|   |   petallength > 4.9
|   |   |   petalwidth <= 1.5: Iris-virginica (3.0)
|   |   |   petalwidth > 1.5: Iris-versicolor (3.0/1.0)
|   petalwidth > 1.7: Iris-virginica (46.0/1.0)

Number of Leaves  : 5
Size of the tree : 9

Correctly Classified Instances         147               98      %
Incorrectly Classified Instances         3                2      %
Kappa statistic                          0.97
Mean absolute error                      0.0233
Root mean squared error                  0.108
Relative absolute error                  5.2482 %
Root relative squared error             22.9089 %
Coverage of cases (0.95 level)          98.6667 %
Mean rel. region size (0.95 level)      34      %
Total Number of Instances              150

=== Detailed Accuracy By Class ===

               TP Rate  FP Rate  Precision  Recall  F-Measure  MCC    ROC Area  PRC Area  Class
               1.000    0.000    1.000      1.000   1.000      1.000  1.000     1.000     Iris-setosa
               0.980    0.020    0.961      0.961   0.961      0.955  0.990     0.969     Iris-versicolor
               0.960    0.010    0.980      0.980   0.980      0.955  0.990     0.970     Iris-virginica
Weighted Avg.  0.980    0.010    0.980      0.980   0.980      0.970  0.993     0.980

=== Confusion Matrix ===

  a  b  c   <-- classified as
 50  0  0 |  a = Iris-setosa
  0 49  1 |  b = Iris-versicolor
  0  2 48 |  c = Iris-virginica
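For the MATLAB-ANFIS part of the question, the same counts can be computed from any pair of actual and predicted label vectors without Weka. A sketch using confusionmat from the Statistics and Machine Learning Toolbox; the label vectors below are placeholders for illustration:

actual    = [1 1 2 2 3 3].';                     % true class labels (illustrative)
predicted = [1 2 2 2 3 1].';                     % e.g. rounded ANFIS outputs (illustrative)
[cm, order] = confusionmat(actual, predicted);   % rows = actual class, columns = predicted class
k  = 1;                                          % index into 'order' of the class treated as positive
tp = cm(k,k);                                    % true positives for that class
fp = sum(cm(:,k)) - tp;                          % predicted as class k, actually something else
fn = sum(cm(k,:)) - tp;                          % actually class k, predicted as something else
tn = sum(cm(:)) - tp - fp - fn;                  % everything else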
X-axis scaling with MATLAB plotting
My data is sparse, therefore when I plot my graph I get the following result. As you can see, the first x-axis tick starts at 500 (s), but most of my data is around 30 (s). Can I change the scaling of the x-axis?
How about this?
X = [1 3 6 10 25 30 235 678 1248];
Y = [0.4 0.45 0.5 0.55 0.6 0.65 0.7 0.8 0.9];
plot(X,Y,'-b.')
figure
semilogx(X,Y,'-b.')
I see the following output:
If you want to display data from 0 to 30 s only, you can either plot only those points, like this:
idcs = Xdata < 30;                     %# find indices where X is less than 30 s
plot(Xdata(idcs), Ydata(idcs), 'b');   %# plot only these data
or you can just set the x-limits on the axes:
plot(Xdata, Ydata, 'b');               %# plot everything
set(gca, 'XLim', [0 30]);              %# limit the display on the x-axis
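If you prefer to keep the plot call unchanged and only change how the existing axes are scaled, the semilogx idea from the other answer can also be applied after the fact through the axes XScale property. A minimal sketch, using the same Xdata/Ydata as above:

plot(Xdata, Ydata, 'b');        %# plot everything on the default linear axes
set(gca, 'XScale', 'log');      %# switch the x-axis to logarithmic scaling
% set(gca, 'XLim', [1 30]);     %# optionally also restrict the visible range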