I'm using Octave 3.8.1, which works like MATLAB.
I have an array of thousands of values; I've only included three groupings as an example below:
(amp1=0.2; freq1=3; phase1=1; is an example of one grouping)
t=0;
amp1=0.2; freq1=3; phase1=1; %1st grouping
amp2=1.4; freq2=2; phase2=1.7; %2nd grouping
amp3=0.8; freq3=5; phase3=1.5; %3rd grouping
The Octave/MATLAB code below solves for Y so I can plug it back into the equation to check values, as well as calculate values not located in the array.
clear all
t=0;
Y=0;
a1=[.2,3,1;1.4,2,1.7;.8,5,1.5]   % each row is one grouping: [amp, freq, phase]
for kk=1:size(a1,1)              % loop over the n groupings (rows)
Y=Y+a1(kk,1)*cos(a1(kk,2)*t+a1(kk,3))
kk
end
Y
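(For reference, the same sum can also be computed without the loop; a one-line equivalent given the a1 and t above:)
Y = sum(a1(:,1) .* cos(a1(:,2)*t + a1(:,3)))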
PS: I'm not trying to solve for Y, since it's already solved for; I'm trying to solve for phase.
The formulas below are used to calculate phase, but I'm not sure how to put them into a for loop that will work on an array of n groupings:
How would I write the equation / for loop for finding the phase if I want to find freq=2.5 and amp=.23 and the phase is unknown? I've looked online, and it may require writing nonlinear equations, which I'm not sure how to convert my problem into.
phase1_test=acos(Y/amp1-amp3*cos(2*freq3*pi*t+phase3)/amp1-amp2*cos(2*freq2*pi*t+phase2)/amp1)-2*freq1*pi*t
phase2_test=acos(Y/amp2-amp3*cos(2*freq3*pi*t+phase3)/amp2-amp1*cos(2*freq1*pi*t+phase1)/amp2)-2*freq2*pi*t
phase3_test=acos(Y/amp3-amp2*cos(2*freq2*pi*t+phase2)/amp3-amp1*cos(2*freq1*pi*t+phase1)/amp3)-2*freq3*pi*t
I would like to do a check / calculate phases when given freq and amp values.
I know I have to use a for loop, but how do I convert the phase equation into a for loop so it will work on n groupings in an array and calculate values not found in the array?
Basically, I would be given an array of n groupings plus freq=2.5 and amp=.23, and I would use the formula to calculate phase. Note: freq will not always be in the array, which is why I'm trying to calculate the phase using a formula.
OK, I think I finally understand your question:
You are trying to find a set of phase1, phase2, ..., phaseN such that equations like the ones you describe are satisfied.
You know how to find Y, and you supply values for freq and amp.
In MATLAB, such a problem would be solved using, for example, fsolve; but let's look at your problem step by step.
For simplicity, let me re-write your equations for phase1, phase2, and phase3. For example, your first equation, the one for phase1, would read
amp1*cos(2*freq1*pi*t + phase1) + amp2*cos(2*freq2*pi*t + phase2) + amp3*cos(2*freq3*pi*t + phase3) - Y = 0
Note that the ampX (X is a placeholder for 1, 2, 3) are given, pi is a constant, t is given via Y (I think), and the freqX are given.
Hence, you are, in fact, dealing with a non-linear vector equation of the form
F(phase) = 0
where F is a multi-dimensional (vector) function taking a multi-dimensional (vector) input variable phase (comprised of phase1, phase2,..., phaseN). And you are looking for the set of phaseX, where all of the components of your vector function F are zero. N.B. F is a shorthand for your functions. Therefore, the first component of F, called f1, for example, is
f1 = amp1*cos(phase1+...) + amp2*cos(phase2+...) + amp3*cos(phase3+...) - Y = 0.
Hence, f1 is a one-dimensional function of phase1, phase2, and phase3.
The technical term for what you are trying to do is finding a zero (a root) of a nonlinear vector function. In MATLAB, there are several approaches.
For a one-dimensional function, you can use fzero, which is explained at http://www.mathworks.com/help/matlab/ref/fzero.html?refresh=true
For a multi-dimensional (vector) function like yours, I would look into using fsolve, which is part of MATLAB's Optimization Toolbox (Octave provides a compatible fsolve in its core distribution). The function fsolve is explained at http://www.mathworks.com/help/optim/ug/fsolve.html
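To make that concrete, here is a minimal fsolve sketch. It is an illustration only, with assumptions: to pin down N phases you need N equations, so I use the value of Y at N sample times; the times and "measured" data below are manufactured from known phases so that a solution exists.
amp  = [0.2 1.4 0.8];               % from your three groupings
freq = [3   2   5  ];
t    = [0 0.1 0.2];                 % assumed sample times (N unknowns need N equations)
truePhase = [1 1.7 1.5];            % used only to manufacture consistent test data
Yval = arrayfun(@(k) sum(amp .* cos(2*pi*freq*t(k) + truePhase)), 1:3);
F = @(phase) arrayfun(@(k) sum(amp .* cos(2*pi*freq*t(k) + phase)) - Yval(k), 1:3);
phase0 = [0.5 1.0 1.0];             % starting guess
phase = fsolve(F, phase0)           % a root of F; phases are only unique up to sign/2*pi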
If you know an approximate solution for your phases, you may also look into iterative, local methods.
In particular, I would recommend you look into Newton's method, which allows you to find a solution to your system of equations F. Wikipedia has a good explanation of Newton's method at https://en.wikipedia.org/wiki/Newton%27s_method . Newton iterations are very simple to implement, and you should find a lot of resources online. You will have to compute the derivative of your function F with respect to each of your variables phaseX, which is very simple since you're only dealing with cos() functions. For starters, have a look at the one-dimensional Newton iteration method in Matlab at http://www.math.colostate.edu/~gerhard/classes/331/lab/newton.html .
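For a flavor of it, here is a minimal 1-D Newton iteration in Octave/MATLAB (my own toy example, solving 0.2*cos(p) = 0.1 for p):
f  = @(p) 0.2*cos(p) - 0.1;    % function whose zero we want
fp = @(p) -0.2*sin(p);         % its derivative
p = 1;                         % starting guess
for k = 1:20
  step = f(p)/fp(p);           % Newton step: p <- p - f(p)/f'(p)
  p = p - step;
  if abs(step) < 1e-12, break; end
end
p                              % converges to pi/3, approx 1.0472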
Finally, if you want to dig deeper, I found a textbook on this topic from the Society for Industrial and Applied Mathematics: https://www.siam.org/books/textbooks/fr16_book.pdf .
As you can see, this is a very large field; Newton's method should be able to help you out, though.
Good luck!
I want to minimize a function like the one below:
f(x) = sum from i=1 to n-1 of [ 100*(x_{i+1} - x_i^2)^2 + (1 - x_i)^2 ]
Here, n can be 5, 10, 50, etc. I want to use MATLAB, and I want to solve this problem with gradient descent and a quasi-Newton method with the BFGS update, along with a backtracking line search. I am a novice in MATLAB. Can anyone help, please? I can find a solution for a similar problem at this link: https://www.mathworks.com/help/optim/ug/unconstrained-nonlinear-optimization-algorithms.html .
But I really don't know how to create a vector-valued function in MATLAB (in my case the input x can be an n-dimensional vector).
You will have to make quite a leap to get where you want to be -- may I suggest going through some basic tutorial first in order to digest basic MATLAB syntax and concepts? Another useful read is the very basic example of unconstrained optimization in the documentation. However, the answer to your question touches only basic syntax, so we can go through it quickly nevertheless.
The absolute minimum to invoke the unconstrained nonlinear optimization algorithms of the Optimization Toolbox is the formulation of an objective function. That function is supposed to return the function value f of your function at any given point x, and in your case it reads
function f = objfun(x)
f = sum(100 * (x(2:end) - x(1:end-1).^2).^2 + (1 - x(1:end-1)).^2);
end
Notice that
we select the individual components of the x vector by indexing, and that
the .^ operator squares its operand elementwise.
For simplicity, save this function to a file objfun.m in your current working directory, so that you have it available from the command window.
Now all you have to do is call the appropriate optimization algorithm, say, the quasi-Newton method, from the command window:
n = 10; % Use n variables
options = optimoptions(@fminunc,'Algorithm','quasi-newton'); % Use the quasi-Newton method
x0 = rand(n,1); % Random starting guess
[x,fval,exitflag] = fminunc(@objfun, x0, options); % Solve!
fprintf('Final objval=%.2e, exitflag=%d\n', fval, exitflag);
On my machine I see that the algorithm converges:
Local minimum found.
Optimization completed because the size of the gradient is less than
the default value of the optimality tolerance.
Final objval=5.57e-11, exitflag=1
I am trying to evaluate a triple integral using the 'trapz' command in MATLAB. My integrand is function of gamma1,gamma2,s... I want to evaluate the integral numerically and not symbolically.
syms gamma1; syms gamma2; syms s
f=gamma1*gamma2*exp(-s*10);
x= -20:0.3:20;                      % grid for s
y=linspace(0.1,100000,length(x));   % grid for gamma1
z=linspace(0.1,100000,length(x));   % grid for gamma2
[s,gamma1,gamma2] = ndgrid(x,y,z);  % numeric grids now shadow the symbolic variables
mat=eval(f).* (gamma2>gamma1); %MY QUESTION IS HERE
out = trapz(x,trapz(y,trapz(z,mat,3),2),1);
My question is that I have a condition on my integrand: it should be evaluated ONLY when gamma2 > gamma1. Is my code above correct, i.e., is the way I add the logical statement correct?
Yes, the logic is correct: mat will have zeros where the condition gamma2>gamma1 fails.
You can test correctness yourself by using an example where you know the answer already. For instance, integrating f=gamma1*gamma2*s over [0,2] in each variable gives 8 (with paper and pencil), and if the condition gamma2>gamma1 is added, the result is halved (by symmetry), so it becomes 4. And indeed, your code returns approximately 4 for this function.
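Here is that check as a concrete sketch (my own illustration, using 201 points per dimension over [0,2]):
x = linspace(0, 2, 201);  y = x;  z = x;            % grids for s, gamma1, gamma2
[s, gamma1, gamma2] = ndgrid(x, y, z);
mat = gamma1 .* gamma2 .* s .* (gamma2 > gamma1);   % the condition zeroes half the domain
out = trapz(x, trapz(y, trapz(z, mat, 3), 2), 1)    % approximately 4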
As an aside: you may want to reconsider using the same number of sample points in every dimension, given the disparity of the interval lengths (about 40 by 100,000 by 100,000).
I'm trying to teach myself how to use MATLAB for solving state-space systems. I have what seems to be a pretty straightforward system, but I have been unable to find any decent straightforward examples for a novice thus far.
I'd like a simple walk-through of how to translate the system into MATLAB, what variables to set, and how to solve for about 50(?) seconds (from t=0 to 50, or any value really).
I'd like to use ode45, since it's based on an explicit Runge-Kutta (4,5) pair.
Here's the 2nd-order equation:
θ'' + 0.03*|θ'|*θ' + 4*pi^2*sin(θ) = 0
The state-space form:
x_1' = x_2
x_2' = -4*pi^2*sin(x_1) - 0.03*|x_2|*x_2
where x_1 = θ and x_2 = θ'
Initial conditions: θ(0) = pi/9 rad, θ'(0) = 0; step size h = 1.
You need a derivative function, which, given the current state of the system and the current time, returns the derivative of all of the state variables. Generally this function is of the form
function xDash=derivative(t,x)
where xDash is a vector with the derivative of each element, and x is a vector of the state variables. If your variables are called x_1, x_2, etc., it's a good idea to put x_1 in x(1), and so on. Then you need a formula for the derivative of each state variable in terms of the other state variables; for example, you could have xDash_1=x_1-x_2, which you would code as xDash(1)=x(1)-x(2). Hopefully that clears something up.
For your example, the derivative function will look like
function xDash=derivative(t,x)
xDash=zeros(2,1);
xDash(1)=x(2);                                  % x_1' = x_2
xDash(2)=-4*pi^2*sin(x(1))-0.03*abs(x(2))*x(2); % x_2' = -4*pi^2*sin(x_1) - 0.03*|x_2|*x_2
end
and you would solve the system using
[T,X]=ode45(@derivative,0:50,[pi/9 0]);
This gives output at t=0,1,2,...,50.
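To inspect the result, note that the first column of X holds θ and the second holds θ'; for example:
plot(T, X(:,1));                    % theta as a function of time
xlabel('t'); ylabel('theta (rad)');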
I'm trying to compute a rather ugly integral using MATLAB. What I'm having a problem with, though, is a part where I multiply a very big number (>10^300) by a very small number (<10^-300). MATLAB returns Inf for this, even though the result should be in the range of 0-0.0005. This is what I have:
besselFunction = #(u)besseli(qb,2*sqrt(lambda*(theta + mu)).*u);
exponentFunction = @(u)exp(-u.*(lambda + theta + mu));
where qb = 5, lambda = 12, theta = 10, mu = 3. And what I want to find is
besselFunction(u)*exponentFunction(u)
for all real values of u. The problem is that whenever u>28, it is evaluated as Inf. I've heard of, and tried, the MATLAB function vpa, but it doesn't seem to work well when I want to use functions...
Any tips will be appreciated at this point!
I'd use logarithms.
Let x = Bessel function of u and y = x*exp(-u) (simpler than your equation, but similar).
Since log(v*w) = log(v) + log(w), we have log(y) = log(x) + log(exp(-u))
This simplifies to
log(y) = log(x) - u
This will be better behaved numerically.
The other key is not to evaluate the Bessel function (which turns into a huge number) and then pass it to a math function to get the log. It is better to write your own routine that returns the logarithm of the Bessel function directly. Look at a reference like Abramowitz and Stegun to find one.
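An alternative worth noting (my addition, not part of the suggestion above): besseli in MATLAB and Octave accepts a third argument that returns an exponentially scaled result, besseli(nu,z,1) = besseli(nu,z)*exp(-abs(real(z))), which lets you combine the large and small exponents analytically:
qb = 5; lambda = 12; theta = 10; mu = 3;
c = 2*sqrt(lambda*(theta + mu));     % coefficient inside the Bessel argument
% besseli(qb, c*u, 1) equals besseli(qb, c*u) .* exp(-c*u) for real u >= 0, so:
f = @(u) besseli(qb, c*u, 1) .* exp((c - lambda - theta - mu).*u);
f(30)                                % finite, where the unscaled product overflows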
If you are doing an integration, consider using Gauss-Laguerre quadrature instead. The basic idea is that for integrands of the form exp(-x)*f(x), the integral from 0 to inf can be approximated as sum(w(X).*f(X)), where the values of X are the zeros of a Laguerre polynomial and w(X) are specific weights (see the Wikipedia article). Sort of like a very advanced Simpson's rule. Since your integrand already has an exp(-x) part, it is particularly well suited.
To find the roots of the polynomial, there is a function on MATLAB Central called LaguerrePoly, and from there it is pretty straightforward to compute the weights.
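If you prefer not to depend on File Exchange code, the nodes and weights can also be computed with the Golub-Welsch approach, i.e. from the eigendecomposition of the tridiagonal Jacobi matrix of the Laguerre recurrence (a sketch, my own addition):
n = 20;                             % number of quadrature points
a = 2*(0:n-1) + 1;                  % diagonal of the Jacobi matrix
b = 1:n-1;                          % off-diagonal
J = diag(a) + diag(b,1) + diag(b,-1);
[V, D] = eig(J);
X = diag(D);                        % nodes: zeros of the nth Laguerre polynomial
W = V(1,:)'.^2;                     % weights (the zeroth moment of exp(-x) is 1)
% integral from 0 to inf of exp(-x)*f(x) dx is approx sum(W .* f(X))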
I'm trying to do some parameter estimation and want to choose parameter estimates that minimize the square error in a predicted equation over about 30 variables. If the equation were linear, I would just compute the 30 partial derivatives, set them all to zero, and use a linear-equation solver. But unfortunately the equation is nonlinear and so are its derivatives.
If the equation were over a single variable, I would just use Newton's method (also known as Newton-Raphson). The Web is rich in examples and code to implement Newton's method for functions of a single variable.
Given that I have about 30 variables, how can I program a numeric solution to this problem using Newton's method? I have the equation in closed form and can compute the first and second derivatives, but I don't know quite how to proceed from there. I have found a large number of treatments on the web, but they quickly get into heavy matrix notation. I've found something moderately helpful on Wikipedia, but I'm having trouble translating it into code.
Where I'm worried about breaking down is in the matrix algebra and matrix inversions. I can invert a matrix with a linear-equation solver but I'm worried about getting the right rows and columns, avoiding transposition errors, and so on.
To be quite concrete:
I want to work with tables mapping variables to their values. I can write a function that, given such a table as an argument, returns the square error. I can also create functions that return a partial derivative with respect to any given variable.
I have a reasonable starting estimate for the values in the table, so I'm not worried about convergence.
I'm not sure how to write the loop that uses an estimate (table of value for each variable), the function, and a table of partial-derivative functions to produce a new estimate.
That last is what I'd like help with. Any direct help or pointers to good sources will be warmly appreciated.
Edit: Since I have the first and second derivatives in closed form, I would like to take advantage of them and avoid more slowly converging methods like simplex searches.
The Numerical Recipes link was most helpful. I wound up symbolically differentiating my error estimate to produce 30 partial derivatives, then used Newton's method to set them all to zero. Here are the highlights of the code:
__doc.findzero = [[function(functions, partials, point, [epsilon, steps]) returns table, boolean
Where
point is a table mapping variable names to real numbers
(a point in N-dimensional space)
functions is a list of functions, each of which takes a table like
point as an argument
partials is a list of tables; partials[i].x is the partial derivative
of functions[i] with respect to 'x'
epsilon is a number that says how close to zero we're trying to get
steps is max number of steps to take (defaults to infinity)
result is a table like 'point', boolean that says 'converged'
]]
-- See Numerical Recipes in C, Section 9.6 [http://www.nrbook.com/a/bookcpdf.php]
function findzero(functions, partials, point, epsilon, steps)
  epsilon = epsilon or 1.0e-6
  steps = steps or 1/0  -- default: unlimited steps
  assert(#functions > 0)
  assert(table.numpairs(partials[1]) == #functions,
         'number of functions not equal to number of variables')
  local equations = { }
  repeat
    if Linf(functions, point) <= epsilon then
      return point, true
    end
    -- build the linearized system: F_i(point) + sum over x of dF_i/dx * delta_x = 0
    for i = 1, #functions do
      local F = functions[i](point)
      local zero = F
      for x, partial in pairs(partials[i]) do
        zero = zero + lineq.var(x) * partial(point)
      end
      equations[i] = lineq.eqn(zero, 0)
    end
    -- solve for the Newton step delta and update the point
    local delta = table.map(lineq.tonumber, lineq.solve(equations, {}).answers)
    point = table.map(function(v, x) return v + delta[x] end, point)
    steps = steps - 1
  until steps <= 0
  return point, false
end
function Linf(functions, point)
  -- distance using L-infinity norm
  assert(#functions > 0)
  local max = 0
  for i = 1, #functions do
    local z = functions[i](point)
    max = math.max(max, math.abs(z))
  end
  return max
end
You might be able to find what you need at the Numerical Recipes in C web page; there is a free version available online. Here (PDF) is the chapter containing the Newton-Raphson method implemented in C. You may also want to look at what is available at Netlib (LINPACK et al.).
As an alternative to Newton's method, the Nelder-Mead simplex method is well suited to this problem and is referenced in Numerical Recipes in C.
Rob
You are asking for a function-minimization algorithm. There are two main classes: local and global. Your problem is least squares, so both local and global minimization algorithms should converge to the same unique solution. Local minimization is far more efficient than global, so select that.
There are many local minimization algorithms, but one particularly well suited to least-squares problems is Levenberg-Marquardt. If you don't have such a solver to hand (e.g. from MINPACK), then you can probably get away with Newton's method:
x <- x - (hessian x)^-1 * grad x
where you compute the inverse Hessian applied to the gradient using a linear solver, rather than forming the inverse explicitly.
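In MATLAB-style code, one bare-bones Newton loop might read as follows (a sketch; gradFun and hessFun are hypothetical handles wrapping your closed-form first and second derivatives, and x0 is your starting estimate):
x = x0;                          % x0: starting estimate (column vector of ~30 entries)
for k = 1:100
  g = gradFun(x);                % hypothetical: gradient of the squared error at x
  if norm(g) < 1e-8, break; end  % converged: all partials near zero
  H = hessFun(x);                % hypothetical: Hessian of second partials at x
  x = x - H \ g;                 % backslash solves H*step = g; never form inv(H)
end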
Since you already have the partial derivatives, how about a general gradient-descent approach?
Maybe you think you have a good-enough solution, but for me, the easiest way to think about this is to understand it in the 1-variable case first, and then extend it to the matrix case.
In the 1-variable case, if you divide the first derivative by the second derivative, you get the (negative) step size to your next trial point, i.e., -V/A.
In the N-variable case, the first derivative is a vector and the second derivative is a matrix (the Hessian). You multiply the derivative vector by the inverse of the second derivative, and the result is the negative step vector to your next trial point, i.e., -V*(1/A).
I assume you can get the 2nd-derivative Hessian matrix. You will need a routine to invert it. There are plenty of these around in various linear algebra packages, and they are quite fast.
(For readers who are not familiar with this idea, suppose the two variables are x and y, and the surface is v(x,y). Then the first derivative is the vector:
V = [ dv/dx, dv/dy ]
and the second derivative is the matrix:
A = [ dV/dx ]
    [ dV/dy ]
or:
A = [ d(dv/dx)/dx, d(dv/dy)/dx ]
    [ d(dv/dx)/dy, d(dv/dy)/dy ]
or:
A = [ d^2v/dx^2,  d^2v/dydx ]
    [ d^2v/dxdy,  d^2v/dy^2 ]
which is symmetric.)
If the surface is parabolic (constant 2nd derivative), it will get to the answer in one step. On the other hand, if the 2nd derivative is far from constant, you could encounter oscillation. Cutting each step in half (or by some fraction) should make it stable.
If N == 1, you'll see that it does the same thing as in the 1-variable case.
Good luck.
Added: You wanted code:
// Pseudocode: the Calculate*/GetMatrixInverse/Vector* routines are assumed helpers.
double X[N];
// Set X to initial estimate
bool done = false;   // set once the step S is small enough
while (!done) {
    double V[N];     // 1st derivative "velocity" vector
    double A[N*N];   // 2nd derivative "acceleration" matrix
    double A1[N*N];  // inverse of A
    double S[N];     // step vector
    CalculateFirstDerivative(V, X);
    CalculateSecondDerivative(A, X);
    // A1 = 1/A
    GetMatrixInverse(A, A1);
    // S = V*(1/A)
    VectorTimesMatrix(V, A1, S);
    // if S is small enough, set done = true
    // X -= S
    VectorMinusVector(X, S, X);
}
My suggestion is to use a stochastic optimizer, e.g., a particle swarm method.
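For instance, with MATLAB's Global Optimization Toolbox (a sketch; sqErr is a hypothetical handle returning your squared error for a 30-element parameter vector, and the bounds are assumptions):
nvars = 30;                          % number of parameters
lb = -10*ones(1,nvars);              % assumed lower bounds
ub =  10*ones(1,nvars);              % assumed upper bounds
[p, fval] = particleswarm(@sqErr, nvars, lb, ub);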