I have code that loads a cell array and converts it to a matrix.
This matrix currently shows 4 digits after the decimal point, for example:
0 5 15 1 51.9000 3.4000
0 5 15 1 51.9000 3.4000
0 5 15 1 51.9000 3.4000
How can I change all of the rows so that they show only 2 digits after the decimal point?
Please note that I want to change the matrix itself, not just print it in the command window!
If you want to see it in the command window/editor for debugging purposes, use bank format:
format bank;
Example:
A = [51.213123 6.132434];
format bank
disp(A)
will result in:
   51.21    6.13
Also, you can use sprintf to format the values as a string:
A = [51.900 3.4000];
disp(sprintf('%2.2f ',A));
To print the whole matrix with 2 decimals per value, you can build the format string with repmat:
x = [0 5 15 1 51.9000 3.4000
0 5 15 1 51.9000 3.4000
0 5 15 1 51.9000 3.4000];
fprintf([repmat('%.2f ',1,size(x,2)) '\n'], x')
0.00 5.00 15.00 1.00 51.90 3.40
0.00 5.00 15.00 1.00 51.90 3.40
0.00 5.00 15.00 1.00 51.90 3.40
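All of the above only change how the values are displayed, not what is stored. If the goal is to change the matrix itself, as the question asks, round the elements; a minimal sketch (round(A,2) needs MATLAB R2014b or newer, the scaling variant works in older versions and in Octave):
A = [0 5 15 1 51.9000 3.4000];
A = round(A, 2);          % round every element to 2 decimal places
A = round(A * 100) / 100; % equivalent for older MATLAB versions / Octave
Keep in mind that the stored doubles are only approximations of two-decimal values; with format short the command window will still display 51.9000.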
How can one compute a weighted median in KDB?
I can see that there is a function med for a simple median, but I could not find something like wmed analogous to wavg.
Thank you very much for your help!
For values v and weights w, the obvious med v where w gobbles memory for larger values of w, because where replicates each value by its weight.
Instead, sort w into ascending order of v and look for where the cumulative sums reach half the total sum.
q)show v:10?100
17 23 12 66 36 37 44 28 20 30
q)show w:.001*10?1000
0.418 0.126 0.077 0.829 0.503 0.12 0.71 0.506 0.804 0.012
q)med v where "j"$w*1000
36f
q)w iasc v / sort w into ascending order of v
0.077 0.418 0.804 0.126 0.506 0.012 0.503 0.12 0.71 0.829
q)0.5 1*(sum;sums)#\:w iasc v / half the sum and cumulative sums of w
2.0525
0.077 0.495 1.299 1.425 1.931 1.943 2.446 2.566 3.276 4.105
q).[>]0.5 1*(sum;sums)#\:w iasc v / compared
1111110000b
q)v i sum .[>]0.5 1*(sum;sums)#\:w i:iasc v / weighted median
36
q)\ts:1000 med v where "j"$w*1000
18 132192
q)\ts:1000 v i sum .[>]0.5 1*(sum;sums)#\:w i:iasc v
2 2576
q)wmed:{x i sum .[>]0.5 1*(sum;sums)#\:y i:iasc x}
Some vector techniques worth noticing:
Applying two functions with Each Left ((sum;sums)#\:) and using Apply (.) with an operator on the result, rather than setting a variable, e.g. (0.5*sum yi)>sums yi:y i, or defining an inner lambda {sums[x]<0.5*sum x}y i
Grading one list with iasc to sort another
Multiple mappings through juxtaposition: v i sum ..
You could effectively weight the median by duplicating (using where):
q)med 10 34 23 123 5 56 where 4 1 1 1 1 1
10f
q)med 10 34 23 123 5 56 where 1 1 1 1 1 4
56f
q)med 10 34 23 123 5 56 where 1 2 1 3 2 1
34f
If your weights are percentages (e.g. 0.15 0.10 0.20 0.30 0.25), then convert them to equivalent whole/counting numbers:
q)med 1 2 3 4 5 where "i"$100*0.15 0.10 0.20 0.30 0.25
4f
I have a csv file with experiment results that goes like this:
64 4 8 1 1 2 1 ttt 62391 4055430 333 0.0001 10 161 108 288 0
64 4 8 1 1 2 1 ttt 60966 3962810 322 0.0001 10 164 112 295 0
64 4 8 1 1 2 1 ttt 61530 3999475 325 0.0001 10 162 112 291 0
64 4 8 1 1 2 1 ttt 61430 4054428 332 0.0001 10 158 110 286 0
64 4 8 1 1 2 1 ttt 63891 4152938 339 0.0001 9 149 109 274 0
64 4 32 1 1 2 1 ttt 63699 4204182 345 0.0001 4 43 179 240 0
64 4 32 1 1 2 1 ttt 63326 4116218 336 0.0001 4 45 183 248 0
64 4 32 1 1 2 1 ttt 62654 4135211 340 0.0001 4 48 178 248 0
64 4 32 1 1 2 1 ttt 63192 4107506 339 0.0001 4 49 175 245 0
64 4 32 1 1 2 1 ttt 62707 4138666 345 0.0001 4 46 179 245 0
64 4 64 1 1 2 1 ttt 60968 3962929 323 0.0001 4 46 191 256 0
64 4 64 1 1 2 1 ttt 58765 3819787 305 0.0001 4 50 196 267 0
64 4 64 1 1 2 1 ttt 58946 3831499 308 0.0001 5 52 187 260 0
64 4 64 1 1 2 1 ttt 60646 3942047 321 0.0001 4 47 187 254 0
64 4 64 1 1 2 1 ttt 59723 3882044 311 0.0001 4 46 201 269 0
64 8 8 1 1 2 1 ttt 63414 4185382 382 0.0001 33 517 109 643 0
64 8 8 1 1 2 1 ttt 62429 4057899 372 0.0001 33 538 110 667 0
64 8 8 1 1 2 1 ttt 60622 3940452 384 0.0001 33 556 115 689 0
64 8 8 1 1 2 1 ttt 64433 4188192 369 0.0001 33 519 110 644 0
My goal is to be able to plot various combinations (chosen per chart) of the columns before the "ttt" column, together with the average and standard deviation of chosen columns after "ttt", grouping the rows by the values of the before-"ttt" columns.
Is this possible in gnuplot, and if so, how? If not, do you have any alternative suggestions for my problem?
Here is a completely revised and more general version.
Since you want to filter by 3 columns, you need 3 properties to distinguish the data in the plot; these could be, for example, color, x-position, and pointtype. What the script basically does:
Generates random data for testing (take your file instead)
$Data looks like this:
8 64 57773 0
4 32 64721 2
8 32 56757 1
4 16 56226 2
8 8 56055 1
8 64 59874 0
8 32 58733 0
4 16 55525 2
8 32 58869 0
8 64 64470 0
4 32 60930 1
8 64 57073 2
...
The variables ColX, ColC, ColP, and ColS define which columns are taken for x-position, color, pointtype, and statistics.
Find the unique values of ColX, ColC, and ColP (check help smooth frequency) and put them into the datablocks $ColX, $ColC, and $ColP.
Put the unique values into the arrays ArrX, ArrC, and ArrP.
Loop over all possible combinations, do statistics on ColS, and write the results to $Data2, adding 3 columns at the beginning for color, x-position, and pointtype.
$Data2 looks like this:
1 1 1 0 8 4 61639.4 2788.4
1 1 2 0 8 8 59282.1 2740.2
1 2 1 0 16 4 59372.3 2808.6
1 2 2 0 16 8 60502.3 2825.0
1 3 1 0 32 4 59850.7 2603.8
1 3 2 0 32 8 60617.7 1979.8
1 4 1 0 64 4 60399.4 3273.6
1 4 2 0 64 8 59930.7 2919.8
2 1 1 1 8 4 59172.6 2288.2
2 1 2 1 8 8 58992.2 2888.0
2 2 1 1 16 4 59350.1 2364.6
2 2 2 1 16 8 61034.0 2368.5
2 3 1 1 32 4 59920.8 2867.6
2 3 2 1 32 8 59711.9 3464.2
2 4 1 1 64 4 60936.7 3439.7
2 4 2 1 64 8 61078.7 2349.3
3 1 1 2 8 4 58976.0 2376.3
3 1 2 2 8 8 61731.5 1635.7
3 2 1 2 16 4 58276.0 2101.7
3 2 2 2 16 8 58594.5 3358.5
3 3 1 2 32 4 60471.5 3737.6
3 3 2 2 32 8 59909.1 2024.0
3 4 1 2 64 4 62044.2 1446.7
3 4 2 2 64 8 60454.0 3215.1
Finally, plot the data. I couldn't figure out how the yerror plotting style works properly together with variable pointtypes, so instead I split it into two plot commands: one with vectors and one with points. The third one (keyentry) is just there to get an empty line in the legend, and the fourth one puts the pointtypes into the legend.
I hope you can figure out all the other details and adapt it to your data.
Code:
### grouped statistics on filtered (unsorted) data
reset session
set colorsequence classic
# generate some random test data
rand1(n) = 2**(int(rand(0)*2)+2) # values 4,8
rand2(n) = 2**(int(rand(0)*4)+3) # values 8,16,32,64
rand3(n) = int(rand(0)*10000)+55000 # values 55000 to 65000
rand4(n) = int(rand(0)*3) # values 0,1,2
set print $Data
do for [i=1:200] {
print sprintf("% 3d% 4d% 7d% 3d", rand1(0), rand2(0), rand3(0), rand4(0))
}
set print
print $Data # (just for test purpose)
ColX = 2 # column for x
ColC = 4 # column for color
ColP = 1 # column for pointtype
ColS = 3 # column for statistics
# get unique values of the columns
set table $ColX
plot $Data u (column(ColX)) smooth freq
unset table
set table $ColC
plot $Data u (column(ColC)) smooth freq
unset table
set table $ColP
plot $Data u (column(ColP)) smooth freq
unset table
# put unique values into arrays
set table $Dummy
array ArrX[|$ColX|-6] # gnuplot creates 6 extra lines
array ArrC[|$ColC|-6]
array ArrP[|$ColP|-6]
plot $ColX u (ArrX[$0+1]=$1)
plot $ColC u (ArrC[$0+1]=$1)
plot $ColP u (ArrP[$0+1]=$1)
unset table
print ArrX, ArrC, ArrP # just for test purpose
# define filter function
Filter(c,x,p) = ArrX[x]==column(ColX) && ArrC[c]==column(ColC) && \
ArrP[p]==column(ColP) ? column(ColS) : NaN
# loop all values and do statistics, write data into $Data2
set print $Data2
do for [c=1:|ArrC|] {
    do for [x=1:|ArrX|] {
        do for [p=1:|ArrP|] {
            undef var STATS*
            stats $Data u (Filter(c,x,p)) nooutput
            if (exists('STATS_mean') && exists('STATS_stddev')) {
                print sprintf("% 3d% 3d% 3d% 3d% 3d% 3d% 9.1f % 7.1f", c, x, p, ArrC[c], ArrX[x], ArrP[p], STATS_mean, STATS_stddev)
            }
        }
    }
    print ""; print ""
}
set print
# print $Data2 # just for testing purpose
set xlabel sprintf("Column %d", ColX)
set ylabel sprintf("Column %d", ColS)
set xrange[0.5:|ArrX|+1]
set xtics () # remove all xtics
do for [x=1:|ArrX|] { set xtics add (sprintf("%d",ArrX[x]) x)} # set xtics "manually"
# function for x position and offsets,
# actually not dependent on 'n' but to shorten plot command
# columns in $Data2: 1=color, 2=x, 3=pointtype
width = 0.5 # float number!
step = width/(|ArrC|-1)
PosX(n) = column(2) - width/2.0 + step*(column(1)-1) + (column(3)-1)*step*0.3
plot \
for [c=1:|ArrC|] $Data2 u (PosX(0)):($7-$8):(0):(2*$8) index c-1 w vectors \
heads size 0.04,90 lw 2 lc c ti sprintf("%g",ArrC[c]),\
for [c=1:|ArrC|] '' u (PosX(0)):7:($3*2+4):(c) index c-1 w p ps 1.5 pt var lc var not, \
keyentry w p ps 0 ti "\n", \
for [p=1:|ArrP|] '' u (0):(NaN) w p pt p*2+4 ps 1.5 lc rgb "black" ti sprintf("%g",ArrP[p])
### end of code
Result: (resulting plot image not reproduced here)
I do not think gnuplot can produce exactly what you are asking for in a single plot command. I will show you two alternatives in the hope that one or both is a useful starting point.
Alternative 1: standard boxplot
spacing = 1.0
width = 0.25
unset key
set xlabel "Column 3"
set ylabel "Column 9"
plot 'data' using (spacing):9:(width):3 with boxplot lw 2
This collects points based on the value in column 3 and for each such value it produces a boxplot. This is a widely used method of showing the distribution of point values in a category, but it is a quartile analysis not a display of mean + standard deviation.
Alternative 2: calculate mean and standard deviation for categories known in advance
The boxplot analysis has the advantage that you do not need to know in advance what values may be present in column 3. Gnuplot can calculate mean and standard deviation based on a column 3 value, but you need to specify in advance what that value is. Here is a set of commands tailored to the specific example file you provided. It calculates, but does not plot, the requested categorical mean and standard deviation. You can use these numbers to construct a plot, but that will require additional commands. You could, for example, save the values for each category in a new file, or array, or datablock and then go back and plot these together.
col3entry = "8 32 64"
do for [i in col3entry] {
    stats "data" using ($3 == real(i) ? $9 : NaN) name "Condition".i nooutput
    print i, ": ", value("Condition".i."_mean"), value("Condition".i."_stddev")
}
output:
8: 62345.1111111111 1259.34784220021
32: 63115.6 392.552977316438
64: 59809.6 881.583711283279
I am able to plot a trisurf chart, but surf does not work.
What am I doing wrong?
pkg load statistics;
figure (1,'name','Matrix Map');
colormap('hot');
t = dlmread('C:\Map3D.csv');
tx =t(:,1);ty=t(:,2);tz=t(:,3);
tri = delaunay(tx,ty);
handle = surf(tx,ty,tz); #This does NOT work
#handle = trisurf(tri,tx,ty,tz); #This does work
error: surface: rows (Z) must be the same as length (Y) and columns (Z) must be the same as length (X)
My data is in a CSV (commas not shown here):
1 2 -0.32
2 2 0.33
3 2 0.39
4 2 0.09
5 2 0.14
1 2.5 -0.19
2 2.5 0.13
3 2.5 0.15
4 2.5 0.24
5 2.5 0.33
1 3 0.06
2 3 0.44
3 3 0.36
4 3 0.45
5 3 0.51
1 3.5 0.72
2 3.5 0.79
3 3.5 0.98
4 3.5 0.47
5 3.5 0.55
1 4 0.61
2 4 0.13
3 4 0.44
4 4 0.47
5 4 0.58
1 4.5 0.85
The surf error message is different in Matlab and in Octave.
Error message from Matlab:
Z must be a matrix, not a scalar or vector.
The problem is pretty clear here, since you specified Z (tz in your code) as a vector.
Error message from Octave:
surface: rows (Z) must be the same as length (Y) and columns (Z) must be the same as length (X)
This is where you go wrong: in your example, columns (Z) = 1 but length (X) = 26, so that is the mistake.
One consequence of this is that with surf you cannot have "holes" or undefined points on your grid. In your case you have an X-grid from 1 to 5 and a Y-grid from 2 to 4.5, but the point with coordinates (2, 4.5) is not defined.
@Luis Mendo: Matlab and Octave do allow the prototype surf(matrix_x, matrix_y, matrix_z), but the third argument matrix_z still has to be a matrix (not a scalar or vector). Apparently, a matrix with only one row or column is not considered a matrix here.
To solve the issue, I suggest something like:
tx = 1:5; % tx is a vector of length 5
ty = 2:0.5:4.5; % ty is a vector of length 6
tz = [-0.32 0.33 0.39 0.09 0.14;
-0.19 0.13 0.15 0.24 0.33;
0.06 0.44 0.36 0.45 0.51;
0.72 0.79 0.98 0.47 0.55;
0.61 0.13 0.44 0.47 0.58;
0.85 0. 0. 0. 0.]; % tz is a matrix of size 6*5
surf(tx,ty,tz);
Note that I had to invent some values for the points where your grid was not defined; I put 0., but you can change it to whatever value you prefer.
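If your CSV covered the full rectangular grid (every x/y combination present exactly once), you could also build the Z matrix from the three columns instead of typing it in by hand. A minimal Octave/Matlab sketch, assuming the rows are ordered as in the question (x varying fastest within each y):
t  = dlmread('C:\Map3D.csv');   % columns: x, y, z
tx = unique(t(:,1));            % grid values along x
ty = unique(t(:,2));            % grid values along y
% reshape the z column into a length(ty)-by-length(tx) matrix;
% this only works if every (x,y) pair occurs exactly once
tz = reshape(t(:,3), numel(tx), numel(ty)).';
surf(tx, ty, tz);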
EDIT: To help clarify my question, I'm looking to get a fit to the following data:
I can get a fit using the cftool function, but using a least squares approach doesn't make sense with my binary data. Just to illustrate...
So, my goal is to fit this data using the fmincon function.
ORIGINAL POST:
I have data from a movement control experiment in which participants were timed while they performed a task and given a score (failure or success) based on their performance. As you might expect, we assume participants will make fewer errors when they have more time to perform the task.
I'm trying to fit a function to this data using fmincon, but I get the error "Error using fmincon (line 609): Supplied objective function must return a scalar value."
I don't understand a) what this means, or b) how I can fix it.
I provide some sample data and code below. Any help greatly appreciated.
%Example Data:
time = [12.16 11.81 12.32 11.87 12.37 12.51 12.63 12.09 11.25 ...
7.73 8.18 9.49 10.29 8.88 9.46 10.12 9.76 9.99 10.08 ...
7.48 7.88 7.81 6.7 7.68 8.05 8.23 7.84 8.52 7.7 ...
6.26 6.12 6.19 6.49 6.25 6.51 6 6.79 5.89 5.93 3.97 4.91 4.78 4.43 ...
3.82 4.72 4.72 4.31 4.81 4.32 3.62 3.71 4.29 3.46 3.9 3.73 4.15 ...
3.92 3.8 3.4 3.7 2.91 2.84 2.7 2.83 2.46 3.19 3.44 2.67 3.49 2.71 ...
3.17 2.97 2.76 2.71 2.88 2.52 2.86 2.83 2.64 2.02 2.37 2.38 ...
2.53 3.03 2.61 2.59 2.59 2.44 2.73];
error = [0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 1 1 0 0 0 0 1 1 1 1 1 1 0 0 0 0 1 1 1 0 1 0 1 0 1 1 0 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1];
%Code:
% initial parameters - a corresponds to params(1), b corresponds to params(2)
a = 3.0;
b = -0.01;
LL = @(params) 1/1+params(1)*(log(time).^params(2));
LL([a b]);
pOpt = fmincon(LL,[a b],[],[]);
The mistake comes from the function LL, which returns as many values as there are elements in time.
To use fmincon properly, you need an objective function that returns a single scalar value.
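For example, if LL(params) is intended to be the predicted probability of an error at each time, one common way to obtain a scalar objective is to minimize the negative log-likelihood of the observed 0/1 outcomes. This is only a sketch of that idea, not necessarily the model you want, and it assumes the predicted probabilities stay strictly between 0 and 1 over the search region (note the element-wise 1./):
% predicted probability of an error for each trial (element-wise operations)
p = @(params) 1 ./ (1 + params(1) * (log(time) .^ params(2)));
% scalar negative log-likelihood of the observed binary outcomes in 'error'
NLL = @(params) -sum(error .* log(p(params)) + (1 - error) .* log(1 - p(params)));
pOpt = fmincon(NLL, [a b], [], []);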
I believe logistic regression would fit your data and purposes nicely. In that case, why not simply use Matlab's built-in function for multinomial logistic regression?
B = mnrfit(time,error)
Regarding your function LL, are you sure you have entered it correctly and are not missing a parenthesis?
LL = @(params) 1/(1+params(1)*(log(time).^params(2)));
Without the parentheses, your function is equivalent to 1 + a*log(time).^b, because 1/1 is evaluated first.
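If you do go the logistic-regression route, note that glmfit (also in the Statistics Toolbox) accepts 0/1 responses directly; a hedged sketch, using log(time) as the predictor purely for illustration:
% logistic regression of the 0/1 outcomes on log(time); needs column vectors
b = glmfit(log(time)', error', 'binomial', 'link', 'logit');
% predicted probability of an error for each observed time
pHat = glmval(b, log(time)', 'logit');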
I have two matrices output from two Matlab scripts, and I'd like to write both results into different columns of the same txt file produced by my GUI.
Could you please help me?
I tried some different methods (creating a cell array, or using fprintf with arrays of different sizes) and found that @GameOfThrows's approach really works.
I implemented it this way:
x = [1 2 3 4 5];
y = [10 20 30 40 50 60 70 80 90];
[m,i] = max([numel(x) numel(y)]);
% pad the shorter vector with NaN so both have the same length
if i == 1
    y(end+1:numel(x)) = NaN;
else
    x(end+1:numel(y)) = NaN;
end
a = [x; y];   % stack as rows; fprintf walks through a column by column
fileID = fopen('data1.txt','w');
fprintf(fileID,'%6.2f %12.2f\r\n',a);
fclose(fileID);
My data1.txt:
1.00 10.00
2.00 20.00
3.00 30.00
4.00 40.00
5.00 50.00
NaN 60.00
NaN 70.00
NaN 80.00
NaN 90.00