I have to find the value of x that minimizes the norm of C*exp(2*x) - d subject to x >= 0. I am trying to solve this problem in MATLAB. Both C and d are defined as vectors of dimension 1x180. Does anyone have a hint or suggestion on how to solve this using the lsqnonneg function (or some other technique) in MATLAB? If it were just x instead of exp(2*x), it could easily be solved with lsqnonneg! But because of the exponential term I am not able to proceed further. I would appreciate any suggestions.
First some math:
Define y = exp(2*x); then the constraint x >= 0 is equivalent to y >= 1. An equivalent minimization problem is:
minimize(over y) norm(c.*y - d)
subject to y >= 1
There are a number of ways to do this in MATLAB. One cool way is to use CVX; if you download the CVX convex optimization package, the code would be:
n = 180;
cvx_begin
variable y(n)
minimize(norm(c(:) .* y - d(:)))   % reshape c and d to columns so they match the column variable y
subject to
1 <= y
cvx_end
Then of course you can do x = log(y) / 2 to get the final value of x.
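If you don't have CVX, the same bound-constrained least-squares problem can also be solved with lsqlin from the Optimization Toolbox (a minimal sketch, assuming c and d are the data vectors from the question reshaped into columns):
n = 180;
c = c(:);  d = d(:);                                     % make sure both are columns
y = lsqlin(diag(c), d, [], [], [], [], ones(n,1), []);   % minimize ||diag(c)*y - d|| subject to y >= 1
x = log(y) / 2;                                          % map back to the original variable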
Suppose I have an equation that looks like this: x = y + z*x.
I have the values for y and z, and I want to calculate x. How would I do this in MATLAB without having to solve for x on paper?
Probably a simple equation, but I am new. Thank you!
I tried just writing the equation out and coding disp(x), but of course it did not work.
The function fzero() will give you a numeric answer if you move all of one side of the equation over to the other side, so the first side is equal to zero.
y = 4;
z = 2;
% Move everything to one side so the expression is zero at the solution
zero_left_side = @(x) (y + z*x) - x;
initial_guess = 1;
x_solution = fzero(zero_left_side, initial_guess);
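Since this particular equation is linear, you can also sanity-check the numeric result against the closed-form solution x = y/(1 - z), valid as long as z is not 1:
x_exact = y / (1 - z);        % x = y + z*x  =>  x*(1 - z) = y
disp([x_solution, x_exact])   % both should be -4 for y = 4, z = 2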
I'm trying to find the line which best fits the data. I use the code below, but now I want the data placed into an array sorted so that the points closest to the line come first. How can I do this? Also, is polyfit the correct function to use for this?
x=[1,2,2.5,4,5];
y=[1,-1,-.9,-2,1.5];
n=1;
p = polyfit(x,y,n)
f = polyval(p,x);
plot(x,y,'o',x,f,'-')
PS: I'm using Octave 4.0, which is similar to MATLAB.
You can first compute the error between the real value y and the predicted value f
err = abs(y-f);
Then sort the error vector
[val, idx] = sort(err);
And use the sorted indexes to have your y values sorted
y2 = y(idx);
Now y2 has the same values as y, but with the ones closest to the fitted line first.
Do the same for x to compute x2, so that the correspondence between x2 and y2 is preserved:
x2 = x(idx);
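Putting the pieces together with the original script, the whole thing looks like this:
x = [1, 2, 2.5, 4, 5];
y = [1, -1, -.9, -2, 1.5];
p = polyfit(x, y, 1);            % fit a first-degree polynomial (a line)
f = polyval(p, x);               % evaluate the fit at the data points
[~, idx] = sort(abs(y - f));     % indices ordered by distance from the line
x2 = x(idx);                     % data reordered so the best-fitting points come first
y2 = y(idx);
plot(x, y, 'o', x, f, '-')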
Sembei Norimaki did a good job of explaining your primary question, so I will look at your secondary question: is polyfit the right function?
The least-squares best fit line is the line that minimizes the sum of squared errors between the model and the data.
If it must be a "line", we could use polyfit, which fits a polynomial. Of course, a "line" can be defined as a first-degree polynomial, and first-degree polynomials have some properties that make them easy to deal with. The first-order polynomial (or linear) equation you are looking for has this form:
y = mx + b
where y is your dependent variable and x is your independent variable. So the challenge is this: find the m and b such that the modeled y is as close to the actual y as possible. As it turns out, the squared error of a linear fit is convex in m and b, meaning it has a single minimum. To calculate this minimum, it is simplest to combine the bias (intercept) column and the x vector as follows:
Xcombined = [x.' ones(length(x),1)];
then use the normal equation, derived from minimizing the squared error:
beta = (Xcombined.'*Xcombined) \ (Xcombined.'*y.')   % same as inv(X'*X)*X'*y, but numerically better
Great, now our line is defined as y = Xcombined*beta. To draw the line, simply sample over some range of x and include the column of ones for the b term:
xrange = (0:.1:5).';
Xplot = [xrange ones(length(xrange),1)];
Yplot = Xplot*beta;
plot(xrange, Yplot);
So why does polyfit work so poorly? Well, I can't say for sure, but my hypothesis is that you need to transpose your x and y vectors. I would guess that would give you a much more reasonable line.
x = x.';
y = y.';
then try
p = polyfit(x,y,n)
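If everything is consistent, this should agree with the beta obtained from the normal equation above (polyfit returns the slope first and the intercept second, the same ordering as beta):
disp([p(:) beta])   % column 1: polyfit coefficients, column 2: normal-equation solution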
I hope this helps. A wise man once told me (and as I learn every day), don't trust an algorithm you do not understand!
Here's some test code that may help someone else dealing with linear regression and least squares
%https://youtu.be/m8FDX1nALSE matlab code
%https://youtu.be/1C3olrs1CUw good video to work out by hand if you want to test
function [a0 a1] = rtlinreg(x,y)
x=x(:);
y=y(:);
n=length(x);
a1 = (n*sum(x.*y) - sum(x)*sum(y))/(n*sum(x.^2) - (sum(x))^2); %a1 this is the slope of linear model
a0 = mean(y) - a1*mean(x); %a0 is the y-intercept
end
x=[65,65,62,67,69,65,61,67]'
y=[105,125,110,120,140,135,95,130]'
[a0 a1] = rtlinreg(x,y); %a1 is the slope of linear model, a0 is the y-intercept
x_model =min(x):.001:max(x);
y_model = a0 + a1.*x_model; %y=-186.47 +4.70x
plot(x,y,'x',x_model,y_model)
While doing the research for my paper, I ran into a painful problem:
I want to calculate the covariance matrix using a specific, given mean rather than the sample mean.
Could I implement this with some simple function in MATLAB?
Any suggestions are welcome!
If you have an n-by-k matrix X and a 1-by-k vector u, you could do:
X_demeaned = X - ones(n,1) * u;               % subtract u from every row of X
COV_X = X_demeaned' * X_demeaned / (n - 1);   % covariance about the supplied mean u
Typically u is the sample mean: u = mean(X), but if your particular problem gives you special knowledge about the true population mean, it would make sense to use that for u instead.
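As a quick sanity check, if u is set to the sample mean the result should reproduce MATLAB's built-in cov (a minimal sketch with random data):
n = 100;  k = 3;
X = randn(n, k);
u = mean(X);                                  % sample mean; substitute your known mean here
X_demeaned = X - ones(n,1) * u;
COV_X = X_demeaned' * X_demeaned / (n - 1);
disp(norm(COV_X - cov(X)))                    % should be essentially zero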
Anyway, that's what I think you're asking!
I have an unconstrained quadratic optimization problem: I have to find u that minimizes norm(u^H * A_k * u - d_k), where A_k is a 4x4 matrix (k = 1, 2, ..., 180), u is a 4x1 vector, and H denotes the Hermitian transpose.
There are several functions in MATLAB to solve optimization problems, but I am not able to figure out which method I need for my problem. I would highly appreciate it if anyone could provide a hint or suggestion for solving this in MATLAB.
Math preliminaries:
If A is symmetric and positive definite, then A defines a norm. This norm, called the A-norm, is written ||x||_A = sqrt(x'*A*x). Your problem, minimize (over x) |x'*A*x - d|, is equivalent to searching for a vector x whose squared A-norm, x'*A*x, equals d. Such a vector is trivial to find by simply scaling any non-zero vector up or down until it has the appropriate magnitude.
Pick an arbitrary y (it can't be all zero), e.g. y = [1; zeros(n-1, 1)]; or y = rand(n, 1);
Solve for a scalar c such that x = c*y and x'*A*x = d. Substituting, we get c^2 * (y'*A*y) = d, hence c = sqrt(d / (y'*A*y)).
Trivial procedure to find an answer:
y = [1; zeros(n-1, 1)];   % any arbitrary y will do as long as it is not all zero
c = sqrt(d / (y'*A*y))
x = c * y
x now solves your problem. No toolbox required!
options = optimoptions('fminunc','GradObj','on','TolFun',1e-9,'TolX',1e-9);
x = fminunc(@(x)qfun(x,A,d),ones(4,1),options);   % start from a non-zero point; x = 0 is a stationary point of the squared objective
with
function [y,g] = qfun(x,A,d)
y = x'*A*x - d;        % residual
g = 2*y*(A+A')*x;      % gradient of (x'*A*x - d)^2
y = y*y;               % the objective is the squared residual
end
Seems to work.
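For example, a quick test sketch (using a random symmetric positive definite A and an arbitrary target d, just to check that the returned x satisfies x'*A*x ≈ d; options and qfun are as defined above):
A = randn(4); A = A*A' + eye(4);   % random symmetric positive definite test matrix
d = 3;
x = fminunc(@(x)qfun(x,A,d), ones(4,1), options);
disp(x'*A*x - d)                   % should be close to zero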
This is an evolution of the question asked by Immo here
We have a function f(x,y), and we are trying to find the value of x (call it x*) at which f(x,y) = 0 for a given Y, where Y is a large vector of values.
xstar = arrayfun(@(i) fzero(@(x) minme(y(i),x), 1), 1:numel(y))   % x* is written xstar, since * is not valid in a MATLAB variable name
What if I now want to use the x* solution from the arrayfun to find the zero of a second equation, let's say minme2, which also depends on the same large vector of values Y?
minme2 = @(y, xstar, x) x - Y + xstar + xstar ./ (const1 * (const2 - xstar))
Moreover, I would like fzero to search over an interval rather than start from an initial guess. Instead of 1, I would like the solutions to be found in the interval:
x0 = [0, min(const1, xstar)];
I would appreciate answers on how I can solve minme2, as I am currently facing errors.
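One way to structure this is to wrap the interval-based fzero call in a second arrayfun (a sketch only: minme2, const1, and const2 are the placeholders from the question, xstar is the vector returned by the first arrayfun, the capital Y inside minme2 is assumed to mean the scalar y(i), and fzero with a two-element bracket requires the function to change sign between the endpoints, otherwise it errors):
minme2 = @(y, xs, x) x - y + xs + xs ./ (const1 * (const2 - xs));
x2star = arrayfun(@(i) fzero(@(x) minme2(y(i), xstar(i), x), ...
                             [0, min(const1, xstar(i))]), ...
                  1:numel(y));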