How to multiply subsets of a table? - matlab

I have two tables in a figure. One table is called H and the other is called C. Both are 4-by-3 tables.
The user enters a value into each of two edit boxes, for example:
A = *value*
B = *value*
Then the user enters data into the H table, using only the first 2 rows. Say this is the data:
     ALPHA   BETA   GAMMA
H1
H2
H3
H4
The user then wants to multiply the row H1 by A and the row H2 by B, like this:
C1 = [(ALPHA VALUE)*A  (BETA VALUE)*A  (GAMMA VALUE)*A]
C2 = [(ALPHA VALUE)*B  (BETA VALUE)*B  (GAMMA VALUE)*B]
Then the user wants to display the answer in the C table, which should look like this:
     ALPHA       BETA        GAMMA
C1   NEW VALUE   NEW VALUE   NEW VALUE
C2   NEW VALUE   NEW VALUE   NEW VALUE
C3
C4
How can I code this? I have already tried the code below, but it fails. Can anyone help me, please?
H = cell2mat(get(handles.Mytable3,'Data'));
cost1 = str2num(get(handles.input2_editText,'String'));
cost2 = str2num(get(handles.input3_editText,'String'));
H1 = H(1,:)*cost1;
H2 = H(2,:)*cost2;
H = mat2cell([H1 H2]);
cost = get(H,'Data');
set(handles.Mytable2,'Data',cost)

Try replacing the last three lines with:
H = num2cell([H1; H2]);   % vertical concatenation: C1 and C2 become rows 1 and 2 (2x3 cell)
set(handles.Mytable2,'Data',H)
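Putting it together, a minimal sketch of the whole callback, assuming the handle names from the question and that the C table's Data is a cell array (if it is numeric, drop the num2cell calls); rows C3 and C4 are left untouched:
H = cell2mat(get(handles.Mytable3, 'Data'));     % 4x3 numeric matrix
A = str2double(get(handles.input2_editText, 'String'));
B = str2double(get(handles.input3_editText, 'String'));
C = get(handles.Mytable2, 'Data');               % existing Data of the C table
C(1,:) = num2cell(H(1,:) * A);                   % C1 = H1 scaled by A
C(2,:) = num2cell(H(2,:) * B);                   % C2 = H2 scaled by B
set(handles.Mytable2, 'Data', C);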


How do I stack an array variable and plot three of these stacked array variables on a barh?

% S
a1 = [2015/07/23 2015/11/25 2016/01/20];
b1 = [2011/06/22 2014/10/14 2015/03/01];
c1 = [2012/04/16 2013/06/23 2015/04/08];
d1 = [2013/09/15 2014/01/19 2016/09/13];
e1 = [2015/04/01 2016/04/04 2018/08/04];
% H
a2 = [2012/07/23 2015/06/25 2016/05/20];
b2 = [2009/06/22 2014/09/14 2015/11/01];
c2 = [2006/04/16 2013/12/23 2015/06/08];
d2 = [2008/09/15 2014/05/19 2016/02/13];
e2 = [2011/04/01 2016/05/04 2018/03/04];
% HS
a3 = [2009/07/23 2010/06/25 2018/02/20];
b3 = [2011/06/22 2014/07/14 2016/09/01];
c3 = [2013/04/16 2016/09/23 2019/05/08];
d3 = [2013/09/15 2018/05/19 2019/06/13];
e3 = [2014/04/01 2019/01/04 2019/12/04];
% T
t = [1 2 3 4 5];
dates = [a1 a2 a3; b1 b2 b3; c1 c2 c3; d1 d2 d3; e1 e2 e3];
% Plotted
figure
barh(t, dates, 'hist')
title('Script')
xlabel('Time')
ylabel('Tail')
legend({'S','H','HS'})
legend('Location', 'southoutside')
legend('Orientation','horizontal')
If you plot this, you will notice that there are nine bars associated with each 't'; there should be only three per t, as laid out in the 'dates' variable. How do I stack a1, a2, a3, then b1, b2, b3, ..., and e1, e2, e3 individually to accomplish this?
My script result: (figure omitted)
What I want the output to look like: (figure omitted)
Notes:
1. The y axis contains the 5 different elements 't'.
2. The x axis should contain the date elements 'dates'.
3. When you plot these values, there are 9 bars; there should be three per 't'.
4. On the x axis, I would like to have the dates represented.
5. I would like to eventually be able to create a user-prompted system that allows people to enter dates for a corresponding array, and have that date be stacked onto the chart.
The following bit of code does part of what I ask for, but with the bars stacked vertically rather than horizontally, and it also takes different user inputs.
https://www.mathworks.com/matlabcentral/fileexchange/32884-plot-groups-of-stacked-bars
You are very close to a solution. Download that file from the MATLAB File Exchange, open it, and replace bar with barh; then you have your own horizontally stacked bar plot.
Your input does not fit the expected format: the function expects a 3-D matrix. A minor change to your code:
dates = [cat(3,a1,a2,a3); cat(3,b1,b2,b3); cat(3,c1,c2,c3); cat(3,d1,d2,d3); cat(3,e1,e2,e3)];   % 5x3x3 array
plotBarHStackGroups(dates, t)
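One caveat, unrelated to the stacking: literals such as 2015/07/23 are evaluated by MATLAB as chained division (2015/07/23 is about 12.5), so the x values above are not actually dates. A sketch of one way to build real serial date numbers for the same triples, assuming the rest of the data is rewritten the same way:
% datenum takes [year month day] rows and returns serial date numbers
a1 = datenum([2015 07 23; 2015 11 25; 2016 01 20])';   % 1x3 row of dates
% ... repeat for b1..e3, then call datetick('x') to format the axis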

error with an assignment A(I) = B, the number of elements in B and I must be the same

I have a data set saved as .mat, and I am trying to solve a system of non-linear equations for the unknown variables Ga and Ta. I'm using fsolve, and the relevant part of the code is:
function F = msabase(x)
load ('matlab.mat');
Ta = x(1);
Ga = x(2);
util_a = exp(lamda.*(alpha_a - cost - w.*log(Ga)));
util_t = exp(lamda.*( - 2.5 - w.*log(2*0.80)));
F(1) = Ga - c0.*(1.+c1.*(Ta./cap).^c2).*d;   % line 7: the assignment that errors
F(2) = Ta - sum.*(util_a/(util_a+util_t));
In each row of the data set, the values for all the other variables (lamda, alpha_a, cost, etc.) are given. On line 7 of the code above, I get the error "In an assignment A(I) = B, the number of elements in B and I must be the same".
I can't understand why, because it should be an element-by-element operation.
You are getting that error because you are trying to assign a vector / matrix of elements to a single slot in F. There is a dimension mismatch because you are trying to map more than one value into a single space in F, and that's ultimately why you are getting the error.
One suggestion I have is to either use cell arrays or create a 2D matrix that stores your values. If you prefer the cell array approach, each cell stores the desired calculation like so:
F = cell(2,1);
F{1} = Ga - c0.*(1.+c1.*(Ta./cap).^c2).*d;
F{2} = Ta - sum.*(util_a/(util_a+util_t));
Then to access the right slot, do either F{1} or F{2}. If you want the 2D matrix approach, you can concatenate both calculations into a single matrix by doing this:
F = [Ga - c0.*(1.+c1.*(Ta./cap).^c2).*d; Ta - sum.*(util_a/(util_a+util_t))];
This is assuming that each result is a single row vector, and so this will produce a 2D matrix where each row is the desired result. I'm not sure what size each computation is, and so to make things consistent, I'll make sure that both lines of code are row vectors:
F1 = Ga - c0.*(1.+c1.*(Ta./cap).^c2).*d;
F2 = Ta - sum.*(util_a/(util_a+util_t));
F = [F1(:).'; F2(:).'];
Try preallocating F before assignment. If you know that F is a 2-by-1 vector, insert F = zeros(2,1) somewhere before line 7. If you know nothing about the dimensions of F, initialise it as an empty matrix and append to it:
F = [];
F = [F; (Ga - c0.*(1.+c1.*(Ta./cap).^c2).*d)];
F = [F; (Ta - sum.*(util_a/(util_a+util_t)))];
Beware that MATLAB is not particularly efficient at appending vectors/matrices, so preallocate if possible.
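Separately, note that / in the original F(2) is matrix right-division, which is not element-wise; for the element-by-element ratio the question intends, ./ is needed. A sketch (remember that sum here is a variable loaded from matlab.mat, shadowing the built-in):
ratio = util_a ./ (util_a + util_t);   % element-wise ratio, one value per data row
F2 = Ta - sum .* ratio;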

Iteration of matrix-vector multiplication which stores specific index-positions

I need to solve a minimum-distance problem; some of the work that has already been tried is described in a linked question.
I have four elements: two column vectors, alpha of dim (p x 1) and beta of dim (q x 1). In this case p = q = 50, giving two column vectors of dim (50x1) each. They are defined as follows:
alpha = 0:0.05:2;
beta = 0:0.05:2;
and I have two matrices: L1 and L2.
L1 is composed of three column-vectors of dimension (kx1) each.
L2 is composed of three column-vectors of dimension (mx1) each.
In this case, they have equal size, meaning that k = m = 1000 giving: L1 and L2 of dim (1000x3) each. The values of these matrices are predefined.
They have, nevertheless, the following structure:
L1(kx3) = [t1(kx1) t2(kx1) t3(kx1)];
L2(mx3) = [t1(mx1) t2(mx1) t3(mx1)];
The min. distance problem I need to solve is given (mathematically) as follows:
d = min( (x - (alpha_p*t1_k - beta_q*t1_m))^2 + (y - (alpha_p*t2_k - beta_q*t2_m))^2 + (z - (alpha_p*t3_k - beta_q*t3_m))^2 )
the values x,y,z are three fixed constants.
My problem
I need to develop an iteration that gives me back the index positions of the combination of alpha, beta, L1 and L2 that fulfills the min-distance problem above.
I hope the formulation of the problem is clear; I have been very careful with the index notation. In case it is still unclear, the index ranges are:
for alpha: p = 1,...,50
for beta: q = 1,...,50
for L1 (t1, t2, t3): k = 1,...,1000
for L2 (t1, t2, t3): m = 1,...,1000
And I need to find the index of p, index of q, index of k and index of m which gives me the min. distance to the point x,y,z.
Thanks in advance for your help!
I don't know your values, so I wasn't able to check my code. I am using loops because they are the most obvious solution. I'm pretty sure that someone from the bsxfun brigade ( ;-D ) will find a shorter/more effective solution.
alpha = 0:0.05:2;
beta  = 0:0.05:2;
% L1 (k x 3) and L2 (m x 3) are assumed to be predefined:
% L1 = [t1 t2 t3]; L2 = [t1 t2 t3];
idx_smallest_d = [1,1,1,1];
smallest_d = (x - (alpha(1)*L1(1,1) - beta(1)*L2(1,1)))^2 + ...
             (y - (alpha(1)*L1(1,2) - beta(1)*L2(1,2)))^2 + ...
             (z - (alpha(1)*L1(1,3) - beta(1)*L2(1,3)))^2;
% check every (p,q,k,m) combination for the minimum distance
for p = 1:numel(alpha)          % numel guards against alpha not having exactly 50 entries
    for q = 1:numel(beta)
        for k = 1:size(L1,1)
            for m = 1:size(L2,1)
                d = (x - (alpha(p)*L1(k,1) - beta(q)*L2(m,1)))^2 + ...
                    (y - (alpha(p)*L1(k,2) - beta(q)*L2(m,2)))^2 + ...
                    (z - (alpha(p)*L1(k,3) - beta(q)*L2(m,3)))^2;
                if d < smallest_d
                    smallest_d = d;
                    idx_smallest_d = [p,q,k,m];
                end
            end
        end
    end
end
What I am doing is predefining the smallest distance as the distance of the first combination and then checking, for each combination, whether the distance is smaller than the previous shortest distance.
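For the bsxfun brigade, a partially vectorized sketch that removes the two innermost loops; it assumes MATLAB R2016b or newer for implicit expansion (use bsxfun on older versions) and that x, y, z, alpha, beta, L1 and L2 are defined as above. Each (p,q) iteration builds a k-by-m matrix of squared distances and searches it in one go:
target = reshape([x y z], 1, 1, 3);           % 1x1x3
best_d   = inf;
best_idx = [1, 1, 1, 1];                      % [p, q, k, m]
for p = 1:numel(alpha)
    for q = 1:numel(beta)
        A = reshape(alpha(p)*L1, [], 1, 3);   % k x 1 x 3
        B = reshape(beta(q)*L2, 1, [], 3);    % 1 x m x 3
        D = sum((target - (A - B)).^2, 3);    % k x m squared distances
        [dmin, lin] = min(D(:));
        if dmin < best_d
            best_d = dmin;
            [k, m] = ind2sub(size(D), lin);
            best_idx = [p, q, k, m];
        end
    end
end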

Denormalize results of curve fit on normalized data

I am fitting an exponential decay function with lsqcurvefit in MATLAB. To do this I first normalize my data, because the variables differ by several orders of magnitude. However, I'm not sure how to denormalize my fitted parameters.
My fitting model is s = O + A * exp(-t/T), where t and s are known, t is on the order of 10^-3, and s is on the order of 10^5. So I subtract their means from them and divide them by their standard deviations. My goal is to find the A, O and T that, at the given times t, best reproduce s. However, I don't know how to denormalize the resulting A, O and T.
Does somebody know how to do this? I only found this question on SO about normalisation, but it does not really address the same problem.
When you normalize, you must record the means and standard deviations for each of your features. Then you can easily use those values to denormalize.
e.g.
A = [1 4 7 2 9]';
B = [100 475 989 177 399]';
So you could just normalize right away:
An = (A - mean(A)) / std(A)
but then you can't get back to the original A. So first save the means and stds.
Am = mean(A); Bm = mean(B);
As = std(A); Bs = std(B);
An = (A - Am)/As;
Bn = (B - Bm)/Bs;
now do whatever processing you want and then to denormalize:
Ad = An*As + Am;
Bd = Bn*Bs + Bm;
I'm sure you can see that that's going to be an issue if you have a lot of features (i.e. you have to type out code for each feature, what a mission!), so let's assume your data is arranged as a matrix, data, where each sample is a row and each column is a feature. Now you can do it like this:
data = [A, B]
means = mean(data);
stds = std(data);
datanorm = bsxfun(@rdivide, bsxfun(@minus, data, means), stds);
%// Do processing on datanorm
datadenorm = bsxfun(@plus, bsxfun(@times, datanorm, stds), means);
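On MATLAB R2016b and newer, implicit expansion makes the bsxfun calls unnecessary; an equivalent sketch:
datanorm   = (data - means) ./ stds;     % implicit expansion, R2016b+
datadenorm = datanorm .* stds + means;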
EDIT:
After you have fit your model parameters (A, O and T) using normalized t and f, your model will expect normalized inputs and produce normalized outputs. So to use it, you should first normalize t and then denormalize f.
That is, find a new f by running the model on a normalized new t: f(tn), where tn = (t - tm)/ts, tm is the mean of your training (or fitting) t set, and ts its std. Then, to get f back at its correct magnitude, you must denormalize only f, so the full solution is
f(tn)*fs + fm
So once again, all you need to do is save the mean and std you used to normalize.
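For this particular model the normalization can also be undone analytically. Substituting tn = (t - tm)/ts and sn = (s - sm)/ss into the fitted model sn = On + An*exp(-tn/Tn) and expanding gives closed forms for the original parameters; a sketch, where On, An and Tn (illustrative names) are the parameters fitted on the normalized data:
% s = O + A*exp(-t/T) recovered from the normalized fit
T = ts * Tn;                      % time constant rescales with the t std
O = sm + ss * On;                 % offset absorbs the s mean and std
A = ss * An * exp(tm/(ts*Tn));    % amplitude absorbs the shift in t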

MATLAB save multiple outputs from a function many times over

In MATLAB, I am trying to create a matrix of the outputs of the built-in function [r, p] = corr(X1,Y1); after using this function on multiple X's and Y's. Then, I would like to consolidate all of the r and p into their respective matrices, R and P. For example, I can do this easily if I only call one output from corr:
R = [corr(X1,Y1), corr(X2,Y2); ...
     corr(X3,Y3), corr(X4,Y4)];
as corr returns the r value by default. Is there a way to achieve this for p as well? Below is the long way that I do it; I'm just wondering whether there is a shorter and easier method like the one above.
First find each r and p:
[r1, p1] = corr(X1,Y1);
[r2, p2] = corr(X2,Y2);
[r3, p3] = corr(X3,Y3);
....
Then combine them into the matrix:
R = [r1 r2; ...
     r3 r4; ...
     ...];
P = [p1 p2; ...
     p3 p4; ...
     ...];
Thanks.
You can try something along the lines of
for i = 1:n
    [R(:,end+1), P(:,end+1)] = corr(X(:,i), Y(:,i));
end
Just make sure that R(:,1) and P(:,1) are sized correctly.
Assigning R(:,end+1) and P(:,end+1) will grow R and P automatically, without your having to combine them from temporary variables by hand.
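If you know the sizes in advance, preallocating avoids the repeated growth; a sketch, assuming X and Y are matrices whose i-th columns are the paired samples (so each corr call returns scalars):
n = size(X, 2);
R = zeros(1, n);                  % scalar r per column pair
P = zeros(1, n);                  % scalar p-value per column pair
for i = 1:n
    [R(i), P(i)] = corr(X(:,i), Y(:,i));
end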