Convert data to fuzzy data - matlab

I am a beginner. I have a matrix in MATLAB and I want to convert the matrix's numbers to fuzzy numbers and use these fuzzy numbers as my function's inputs. How can I do this?
Is it correct to convert the numbers to doubles between 0 and 1 by dividing them by 1000, like this?
[256,12;3,56] ---> [0.256,0.12;0.003,0.056]
But what should I do for numbers that are already doubles?

What do you mean by a fuzzy number? As far as I know, MATLAB uses ordinary real numbers as the inputs to a fuzzy system. Fuzzifiers then map those real numbers onto the membership functions, and the fuzzy logic decides how the output is selected, and so on.
On the other hand, if you just want to rescale the numbers into the range [-1 1] or [0 1], that has nothing to do with fuzziness.
To map a value from the range [0 1] to [a b], use this line of code:
r = a + (b-a)*z;
where z is in the range [0 1] and r is in the range [a b].
For example, mapping z = 0.5 from [0 1] to the range [0 10] gives:
r = 0 + (10-0)*0.5 = 5
To go the other way, from [a b] to [0 1], use:
z = (r - a)/(b-a);
so if r = 5 in the range [0 10], then z = 0.5 in the range [0 1].
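Applied to a whole matrix, a minimal sketch (using the example matrix from the question, with its own minimum and maximum as a and b):
M = [256 12; 3 56];        % example matrix from the question
a = min(M(:));             % smallest value in M
b = max(M(:));             % largest value in M
Z = (M - a) / (b - a);     % every entry of Z now lies in [0 1]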
In addition, for actual fuzzy arithmetic (Fuzzy Logic Toolbox), try something like this:
point_n = 101; % Determines MF's resolution
min_x = -20; max_x = 20; % Universe is [min_x, max_x]
x = linspace(min_x, max_x, point_n)';
A = trapmf(x, [-10 -2 1 3]); % Trapezoidal fuzzy set A
B = gaussmf(x, [2 5]); % Gaussian fuzzy set B
C1 = fuzarith(x, A, B, 'sum');
subplot(2,1,1);
plot(x, A, 'b--', x, B, 'm:', x, C1, 'c');
title('fuzzy addition A+B');
C2 = fuzarith(x, A, B, 'sub');
subplot(2,1,2);
plot(x, A, 'b--', x, B, 'm:', x, C2, 'c');
title('fuzzy subtraction A-B');
C3 = fuzarith(x, A, B, 'prod'); % fuzzy multiplication A*B (not plotted above)
That's how you perform fuzzy arithmetic. According to MathWorks:
Using interval arithmetic, C = fuzarith(X, A, B, operator) returns a fuzzy set C as the result of applying the function represented by the string, operator, which performs a binary operation on the sampled convex fuzzy sets A and B. The elements of A and B are derived from convex functions of the sampled universe, X:
A, B, and X are vectors of the same dimension.
operator is one of the following strings: 'sum', 'sub', 'prod', or 'div'.
The returned fuzzy set C is a column vector with the same length as X.
Finally, you can perform fuzzy inference calculations using the evalfis function in MATLAB. The inputs and outputs of this function are real numbers as well:
fismat = readfis('tipper');
out = evalfis([2 1; 4 9],fismat)
This syntax generates the response
out =
7.0169
19.6810

Related

Matlab - Fit a Curve with Constrained Parameters

For an (x,y) dataset, suppose we have a curve given by an expression in a, b, c, etc., such as f='a*exp(b*x)+c', to be fitted as cfit=fit(x,y,f).
Suppose we also have a set of constraints such as b>0, c+b>a/2. How should I use the fit command in this case?
While you could set a lower bound to enforce b>0, I don't think it is possible to properly enforce c+b>a/2 with fit(). But ultimately every fitting problem can also be regarded as a "minimize the distance from the curve to the data" problem, so fmincon() can be used to achieve your goal:
%some sample x values
xdata = rand(1000,1);
%some parameters a,b,c
a = 2;
b = 3;
c = 4;
%resulting y values + some noise
ydata=a*exp(b*xdata)+c+rand(1000,1)*10-5;
plot(xdata,ydata,'o')
%function to minimize. It returns the sum of squared distances between the model and the data.
fun = @(coefs) sum((coefs(1)*exp(coefs(2).*xdata)+coefs(3)-ydata).^2);
%nonlinear constraint to enforce c+b>a/2, which is the same as -(c+b-a/2)<0
nonlcon = @(coefs) deal(-(coefs(3)+coefs(2)-coefs(1)/2), 0);
% lower bounds to enforce b>0
lb = [-inf 0 -inf];
%starting values
x0 = [1 1 1];
%finally find the coefficients (which should approximately be the values of a, b and c)
coefs = fmincon(fun,x0,[],[],[],[],lb,[],nonlcon)
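To sanity-check the result, you can overlay the fitted curve on the data; a small follow-up sketch reusing the coefs found above (xdata lies in [0 1] here):
xs = linspace(0, 1, 200);                              % evaluation grid
hold on
plot(xs, coefs(1)*exp(coefs(2)*xs) + coefs(3), 'r')    % fitted curve in red
hold off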
For constraints that are just numeric values, such as b > 0, you can use the 'Lower' and 'Upper' bounds arguments to specify those. For more complex relationships, like c+b>a/2, you'll have to take an approach like James suggests, setting the function output to a high value like flintmax to generate a large error. For example, let's say I define my function like this:
function y = my_fcn(a, b, c, x)
if (c+b > a/2)
y = a.*exp(b.*x)+c;
else
y = flintmax().*ones(size(x));
end
end
I can create a set of noisy test data as follows:
a = 4;
b = 2;
c = 1;
x = (0:0.01:2).';
y = my_fcn(a, b, c, x) + 40.*(rand(size(x))-0.5);
And then fit a curve (note you have to use an anonymous function, since a function handle won't work for some reason):
params = fit(x, y, @(a, b, c, x) my_fcn(a, b, c, x), ...
'StartPoint', [1 1 1], ... % Starting guesses for [a b c]
'Lower', [-Inf 0 -Inf]); % Set bound for 'b'
params =
General model:
params(x) = my_fcn(a,b,c,x)
Coefficients (with 95% confidence bounds):
a = 4.297 (2.985, 5.609)
b = 1.958 (1.802, 2.113)
c = 0.1908 (-4.061, 4.442)
Note that the fitted values are close to the original values, but don't match exactly due to the noise. We can visualize the fit like so:
plot(x, y);
hold on;
plot(x, my_fcn(params.a, params.b, params.c, x), 'r');
One simplistic method is to have the fitted function return a very large value, with resulting very large error, if the parameter values are outside of the constraints. This "brick wall" method is not optimal and will cause problems when the fitted parameter values are close to the boundary conditions. It is worth a try because it is quick to implement and can work in simple cases. Take care to start with initial parameter values within the boundary limits.

Using MATLAB's regress like polyfit

I have:
x = [1970:1:2000]
y = [data]
size(x) = [30,1]
size(y) = [30,1]
I want:
% Yl = kx + m, where
[k,m] = polyfit(x,y,1)
For some reason I have to use regress for this. Using k = regress(x,y) gives some totally random value that I have no idea where it comes from. How do I do it?
The number of outputs you get in "k" is dependent on the size of the input X, so you will not get both m and k just by putting in your x and y straight. From the docs:
b = regress(y,X) returns a p-by-1 vector b of coefficient estimates for a multilinear regression of the responses in y on the predictors in X. X is an n-by-p matrix of p predictors at each of n observations. y is an n-by-1 vector of observed responses.
It is not exactly stated, but the example in the help docs using the carsmall inbuilt dataset shows you how to set this up. For your case, you'd want:
X = [ones(size(x)) x]; % make sure this is 30 x 2
b = regress(y,X); % y should be 30 x 1, b should be 2 x 1
b(1) should then be your m, and b(2) your k.
regress can also provide additional outputs, such as confidence intervals, residuals, statistics such as r-squared, etc. The input remains the same, you'd just change the outputs:
[b,bint,r,rint,stats] = regress(y,X);
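If you then want the fitted line itself, it is just X*b; a small follow-up sketch reusing X and b from above (with x and y as column vectors):
yhat = X*b;                      % fitted values, yhat = m + k*x
plot(x, y, 'o', x, yhat, '-')    % data and regression line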

Getting the N-dimensional product of vectors

I am trying to write code to get the 'N-dimensional product' of vectors. So for example, if I have 2 vectors of length L, x & y, then the '2-dimensional product' is simply the regular vector product, R=x*y', so that each entry of R, R(i,j) is the product of the i'th element of x and the j'th element of y, aka R(i,j)=x(i)*y(j).
The problem is how to elegantly generalize this in MATLAB for arbitrary dimensions. That is, if I had 3 vectors, x, y, z, I want the 3-dimensional array R such that R(i,j,k)=x(i)*y(j)*z(k).
Same thing for 4 vectors, x1,x2,x3,x4: R(i1,i2,i3,i4)=x1(i1)*x2(i2)*x3(i3)*x4(i4), etc...
Also, I do NOT know the number of dimensions beforehand. The code must be able to handle an arbitrary number of input vectors, and the number of input vectors corresponds to the dimensionality of the final answer.
Is there any easy matlab trick to do this and avoid going through each element of R specifically?
Thanks!
I think by "regular vector product" you mean outer product.
In any case, you can use the ndgrid function. I like this more than using bsxfun as it's a little more straightforward.
% make some vectors
w = 1:10;
x = w+1;
y = x+1;
z = y+1;
vecs = {w,x,y,z};
nvecs = length(vecs);
[grids{1:nvecs}] = ndgrid(vecs{:});
R = grids{1};
for i=2:nvecs
R = R .* grids{i};
end;
% Check results
for i=1:10
for j=1:10
for k=1:10
for l=1:10
V(i,j,k,l) = R(i,j,k,l) == w(i)*x(j)*y(k)*z(l);
end;
end;
end;
end;
all(V(:))
ans = 1
The built-in function bsxfun is a fast utility that should be able to help. It is designed to apply a two-input function element-wise to two inputs with mismatching dimensions: singleton dimensions are expanded, and non-singleton dimensions need to match. (It sounds confusing, but once grok'd it is useful in many ways.)
As I understand your problem, you can adjust the dimension shape of each vector to define the dimension that it should be defined across. Then use nested bsxfun calls to perform the multiplication.
Example code follows:
%Some inputs, N-by-1 vectors
x = [1; 3; 9];
y = [1; 2; 4];
z = [1; 5];
%The computation you describe, using nested BSXFUN calls
bsxfun(@times, bsxfun(@times, ... %Nested BSXFUN calls, 1 per dimension
x, ... % First argument, in dimension 1
permute(y,2:-1:1) ) , ... % Second argument, permuted to dimension 2
permute(z,3:-1:1) ) % Third argument, permuted to dimension 3
%Result
% ans(:,:,1) =
% 1 2 4
% 3 6 12
% 9 18 36
% ans(:,:,2) =
% 5 10 20
% 15 30 60
% 45 90 180
To handle an arbitrary number of dimensions, this can be expanded using a recursive or loop construct. The loop would look something like this:
allInputs = {[1; 3; 9], [1; 2; 4], [1; 5]};
accumulatedResult = allInputs {1};
for ix = 2:length(allInputs)
accumulatedResult = bsxfun(@times, ...
accumulatedResult, ...
permute(allInputs{ix},ix:-1:1));
end
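On newer MATLAB versions (R2016b and later), implicit expansion lets you drop bsxfun entirely and use .* directly; a sketch of the same loop under that assumption:
allInputs = {[1; 3; 9], [1; 2; 4], [1; 5]};
R = allInputs{1};
for ix = 2:length(allInputs)
    % put the ix-th vector along dimension ix, then let implicit
    % expansion perform the outer product
    R = R .* permute(allInputs{ix}, ix:-1:1);
end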

Polynomial fit matlab with some constraints on the coefficients

I have data that I should interpolate with a function which must be of the following kind:
f(x) = a*x^4 + b*x^2 + c
with a > 0 and b ≤ 0. Unfortunately, MATLAB's polyfit does not allow any constraints on the coefficients of the polynomial. Does anybody know if there is a MATLAB function to do this? Otherwise, how can I implement it?
Thank you very much in advance,
Elisabetta
You can try using fminsearch or fminunc, defining your objective function manually.
Alternatively, you can define your problem slightly different:
f(x) = a^2*x^4 - b^2*x^2 + c
Now, the new a and b can be optimized for without constraints, while ensuring that the final a and b you are looking for are positive (negative resp.).
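A minimal sketch of that reparametrization with fminsearch, assuming x and y are your data vectors; squaring the parameters guarantees the recovered a is non-negative and the recovered b is non-positive:
% model with built-in sign constraints: a = p(1)^2 >= 0, b = -p(2)^2 <= 0
model = @(p, x) p(1)^2*x.^4 - p(2)^2*x.^2 + p(3);
sse = @(p) sum((model(p, x) - y).^2);      % sum of squared errors
p = fminsearch(sse, [1 1 1]);              % unconstrained search
a = p(1)^2; b = -p(2)^2; c = p(3);         % recover the constrained coefficients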
Without constraints, the problem can be written and solved as a simple linear system:
% Your design matrix ([4 2 0] are the powers of the polynomial)
A = bsxfun(@power, your_X_data(:), [4 2 0]);
% Best estimate for the coefficients, [a b c], found by
% solving A*[a b c]' = y in a least-squares sense
abc = A\your_Y_data(:)
Those constraints will of course automatically be satisfied iff that constrained model indeed underlies your data. For example,
% some example factors
a = +23.9;
b = -15.75;
c = 4;
% Your model
f = @(x, F) F(1)*x.^4 + F(2)*x.^2 + F(3);
% generate some noisy XY data
x = -1:0.01:1;
y = f(x, [a b c]) + randn(size(x));
% Best unconstrained estimate a, b and c from the data
A = bsxfun(@power, x(:), [4 2 0]);
abc = A\y(:);
% Plot results
plot(x,y, 'b'), hold on
plot(x, f(x, abc), 'r')
xlabel('x (nodes)'), ylabel('y (data)')
However, if you impose constraints on data that are not accurately described by that constrained model, things might go wrong:
% Note: same data, but flipped signs
a = -23.9;
b = +15.75;
c = 4;
f = @(x, F) F(1)*x.^4 + F(2)*x.^2 + F(3);
% generate some noisy XY data
x = -1:0.01:1;
y = f(x, [a b c]) + randn(size(x));
% Estimate a, b and c from the data, Forcing a>0 and b<0
abc = fmincon(@(Y) sum((f(x,Y)-y).^2), [0 0 0], [-1 0 0; 0 +1 0; 0 0 0], zeros(3,1));
% Plot results
plot(x,y, 'b'), hold on
plot(x, f(x, abc), 'r')
xlabel('x (nodes)'), ylabel('y (data)')
(this solution has a == 0, indicative of an incorrect model choice).
If the exact equality of a == 0 is a problem: there is of course no difference if you set a == eps(0). Numerically, this will not be noticeable for real-world data, but it's nonzero nonetheless.
Anyway, I have a suspicion that your model is not well chosen and the constraints are a "fix" to get everything to work, or your data should actually be unbiased/rescaled before trying to make any fit, or that some similar preconditions apply (I've often seen people do this sort of thing, so yes, I'm a bit biased in this respect :).
So...what are the real reasons behind those constraints?
If you have the Curve Fitting Toolbox, then fit does allow setting constraints via the 'Upper' and 'Lower' options. For 'poly4' the coefficients run from the x^4 term down to the constant, so pinning the x^3 and x terms to zero and applying a > 0, b <= 0 gives something like:
M = fit(x, f, 'poly4', 'Upper', [Inf, 0, 0, 0, Inf], 'Lower', [0, 0, -Inf, 0, -Inf]);
Note: use Inf or -Inf to leave a particular coefficient unconstrained.
This will give a cfit object with the relevant coefficients. You can access these using for example M.p1 for the x^4 term. Alternatively you can evaluate the function at whatever points you want using feval.
I think you can do a similar thing using lsqcurvefit in the optimization toolbox as well.
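For reference, a rough lsqcurvefit sketch (Optimization Toolbox) with bound constraints, assuming x and y hold your data as column vectors; since this model only has the x^4, x^2 and constant terms, the bounds express a > 0 and b <= 0 directly:
model = @(p, x) p(1)*x.^4 + p(2)*x.^2 + p(3);   % p = [a b c]
p0 = [1 -1 0];                                   % starting guess
lb = [0, -Inf, -Inf];                            % a >= 0
ub = [Inf, 0, Inf];                              % b <= 0
p = lsqcurvefit(model, p0, x(:), y(:), lb, ub);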

How to find a fitting function for [n x n] matrix values

Given a 100x100 matrix where each element represents a function value in space, I would like to find parameter values A, B, C, D, E for a function f(x, y) = A + B*x + C*y + D*x^2 + E*y^2 that best fits the given matrix values, where x is the row index and y is the column index.
To illustrate the aim on a smaller example, let's say we have a 3x3 matrix T:
T = [0.1 0.2 0.1; 0.8, 0.6, 0.5; 0.1, 0, 1]
in this case f(1,1) = 0.1 and f(3,2)= 0.
Concretely the matrix values for which I would like to find a fitting function (surface) are displayed in the image below:
I would be very thankful if anyone suggested a way to find the 3D function that fits (best) the given matrix.
Edit
Is it possible to find a fit directly, or is it necessary (or better) to first represent the data as a matrix [X, Y, f(X,Y)]:
vals = [];
for i = 1:100
    for j = 1:100
        if T(i,j) ~= 0
            vals = [vals; i, j, T(i,j)];
        end
    end
end
These guys seemed to have done it in one line:
http://www.mathworks.com/matlabcentral/newsreader/view_thread/134076
x = % vector of x values
y = % vector of y values
z = % f(x,y)
V = [ones(size(x)) x y x.^2 x.*y y.^2]; % x, y, z must be column vectors; drop the x.*y column if no cross term is wanted
a = V \ z;
From the help page:
If A is a rectangular m-by-n matrix with m ~= n, and B is a column vector with m elements or a matrix with m rows, then A\B returns a least-squares solution to the system of equations A*x= B.
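Putting it together for the full matrix, a sketch assuming T holds the values, with the row index as x and the column index as y; ndgrid avoids the explicit loops from the edit, and reshaping recovers the fitted surface:
[X, Y] = ndgrid(1:size(T,1), 1:size(T,2));   % X(i,j) = row i, Y(i,j) = column j
x = X(:);  y = Y(:);  z = T(:);              % one sample per matrix entry
keep = z ~= 0;                               % optionally skip empty entries, as in the edit
V = [ones(size(x)) x y x.^2 x.*y y.^2];      % design matrix, one column per term
a = V(keep,:) \ z(keep);                     % least-squares coefficients
Zfit = reshape(V*a, size(T));                % fitted surface on the full grid
surf(Zfit)                                   % visualize the fit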