Say I want to find the value of a Stieltjes integral f(x) dg(x) from a to b. In other words, integrate f(x) with respect to g(x). I know the variable and function values, and I'm looking for a numerical result.
Is there a standard function in MATLAB that does this? I've been calculating it somewhat manually with the rectangle method; would any MATLAB function be faster and/or more accurate?
I haven't had much experience with Matlab, and I can't find the solution in the documentation. Any help would be appreciated! :)
There is no built-in function for this, but if you have the derivative of either function you can use quad (or other members of the quad family, or the newer integral). If you have the derivative of g(x), then
integral(a,b) f(x) dg(x) = integral(a,b) f(x) g'(x) dx    (if g'(x) is bounded)
and if you have the derivative of f(x) you can use integration by parts to get
integral(a,b) f(x) dg(x) = f(b)g(b) - f(a)g(a) - integral(a,b) f'(x) g(x) dx
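For example, a minimal sketch demonstrating both routes with integral (f and g here are illustrative stand-ins, not from the question):
% Stieltjes integral of f with respect to g on [a,b], using example
% functions f(x) = sin(x) and g(x) = x^2 -- substitute your own.
f      = @(x) sin(x);
fprime = @(x) cos(x);
g      = @(x) x.^2;
gprime = @(x) 2*x;
a = 0; b = 1;
% Route 1: use g'(x) directly
I1 = integral(@(x) f(x).*gprime(x), a, b);
% Route 2: integration by parts, using f'(x)
I2 = f(b)*g(b) - f(a)*g(a) - integral(@(x) fprime(x).*g(x), a, b);
% I1 and I2 agree up to the quadrature tolerance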
I have to fit data points (results of measurements) with an exponential function in MATLAB. My professor asked me to use only
fminsearch
polyval
polyfit
one of them, or a combination. I have to find the parameter values a and b that fit the data.
These are the lines I wrote:
x=[1:10:70]
y=[0:10:70]
x=[12.5,11.8,10.8,10.9,6.5,6.2,6.1,5.423,4.625]
y=[0,0.61,1.3,1.4,14.9,18.5,20.1,29.7,58.2]
xlabel('Conductivité')
ylabel('Inductance')
The function has the form a*e^(-b*x) + c.
Well, polyfit and polyval are only useful for working with polynomials, so you would have to write a minimization problem of the form min(f(pars)) and solve it with fminsearch.
functionToMinimize = @(pars, x, y) norm(pars(1).*exp(-pars(2).*x) - y);  % residual norm of the model a*exp(-b*x)
targetFunctionForFminsearch = @(pars) functionToMinimize(pars, x, y);    % bind the data; fminsearch varies pars only
minPars = fminsearch(targetFunctionForFminsearch, [0, 1])
Read up on anonymous functions and the use of vector norms if you have questions how to construct such a minimization problem.
Your code also has some flaws. Why are you defining x and y twice when you only want to use the actual measured data?
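If you keep only the actual measurements, a quick way to sanity-check the result is to plot the fitted curve over the data. A sketch (the starting guess [100, 0.3] is an illustrative choice, not tuned):
% Fit using only the measured data from the question
x = [12.5,11.8,10.8,10.9,6.5,6.2,6.1,5.423,4.625];
y = [0,0.61,1.3,1.4,14.9,18.5,20.1,29.7,58.2];
minPars = fminsearch(@(p) norm(p(1).*exp(-p(2).*x) - y), [100, 0.3]);
% Overlay the fitted curve on the measurements
xi = linspace(min(x), max(x), 200);
plot(x, y, 'o', xi, minPars(1).*exp(-minPars(2).*xi), '-')
xlabel('Conductivité'); ylabel('Inductance')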
I want to minimize the following function:
f(x) = sum_{i=1}^{n-1} [ 100*(x_{i+1} - x_i^2)^2 + (1 - x_i)^2 ]
Here, n can be 5, 10, 50, etc. I want to use MATLAB to solve this problem with gradient descent and with a quasi-Newton method using the BFGS update, together with a backtracking line search. I am a novice in MATLAB. Can anyone help, please? I can find a solution for a similar problem at this link: https://www.mathworks.com/help/optim/ug/unconstrained-nonlinear-optimization-algorithms.html .
But I really don't know how to create a function of a vector in MATLAB (in my case the input x can be an n-dimensional vector).
You will have to make quite a leap to get where you want to be -- may I suggest going through a basic tutorial first, in order to digest basic MATLAB syntax and concepts? Another useful read is the very basic example of unconstrained optimization in the documentation. However, the answer to your question touches only basic syntax, so we can go through it quickly nevertheless.
The absolute minimum to invoke the unconstrained nonlinear optimization algorithms of the Optimization Toolbox is the formulation of an objective function. That function is supposed to return the function value f of your function at any given point x, and in your case it reads
function f = objfun(x)
% Generalized Rosenbrock function of the n-vector x
f = sum(100 * (x(2:end) - x(1:end-1).^2).^2 + (1 - x(1:end-1)).^2);
end
Notice that
we select the individual components of the x vector by indexing, and that
the .^ operator performs elementwise exponentiation (here, elementwise squaring).
For simplicity, save this function to a file objfun.m in your current working directory, so that you have it available from the command window.
Now all you have to do is call the appropriate optimization algorithm, say, the quasi-Newton method, from the command window:
n = 10; % Use n variables
options = optimoptions(@fminunc,'Algorithm','quasi-newton'); % Use the quasi-Newton method
x0 = rand(n,1); % Random starting guess
[x,fval,exitflag] = fminunc(@objfun, x0, options); % Solve!
fprintf('Final objval=%.2e, exitflag=%d\n', fval, exitflag);
On my machine I see that the algorithm converges:
Local minimum found.
Optimization completed because the size of the gradient is less than
the default value of the optimality tolerance.
Final objval=5.57e-11, exitflag=1
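Note that fminunc's 'quasi-newton' algorithm already uses a BFGS update by default, which covers that part of your question. If you also want a hand-written gradient descent with backtracking line search, here is a minimal sketch using the analytic gradient of objfun (the Armijo constant and shrink factor of 0.5 are illustrative choices; save this as graddescent.m next to objfun.m):
function x = graddescent(x0)
% Gradient descent with backtracking (Armijo) line search on objfun
x = x0;
for iter = 1:10000
    g = objgrad(x);
    if norm(g) < 1e-6, break; end    % stop when the gradient is small
    fx = objfun(x);
    t = 1;                           % initial step length
    while objfun(x - t*g) > fx - 0.5*t*(g'*g)
        t = 0.5*t;                   % shrink until sufficient decrease holds
    end
    x = x - t*g;
end
end

function g = objgrad(x)
% Analytic gradient of objfun, derived term by term from the sum
n = numel(x);
g = zeros(n,1);
g(1:n-1) = -400*x(1:n-1).*(x(2:n) - x(1:n-1).^2) - 2*(1 - x(1:n-1));
g(2:n)   = g(2:n) + 200*(x(2:n) - x(1:n-1).^2);
end
Called as x = graddescent(rand(n,1)), this reaches the same minimizer, though typically far more slowly than the quasi-Newton run above.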
I would like to solve the following problem:
x* = argmin f( max_{j=1,...,J} g_j(x)),
where f and g_j are known smooth, convex functions, and x is a scalar. I also have analytical gradients of f and g_j. Is it possible to use fminunc (with the Newton method) or another solver in MATLAB to find the solution? Obviously max_{j=1,...,J} g_j(x) is not differentiable everywhere (unless there is a k such that g_k(x) >= g_j(x) for all j), but it is sub-differentiable. Therefore, I cannot use fminunc directly, as it requires the gradient of the objective function. If I can use fminunc, how would I proceed?
As an example, consider f(x) = x, J = 2, g_1(x) = (x-1)^2, g_2(x) = (x+1)^2. Then max(g_1, g_2) is the upper envelope of g_1 and g_2. The minimum is located at x=0.
Note: The function fminimax is close to what I want. fminimax solves
x* = argmin max_{j=1,...,J} g_j(x).
I have one potential solution to my problem. We can replace the maximum function with a smoothed or soft maximum; see this article by John Cook. In particular, we can approximate max(x,y) with the function g(x,y;k) = log(exp(kx)+exp(ky))/k (k is a tuning parameter). Now the objective function is differentiable, so the Newton method can be applied (i.e., fminunc).
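A sketch of this idea for the example above (f is the identity, so the smoothed objective is just the soft maximum of g_1 and g_2; k = 100 is an illustrative choice, and the log-sum-exp is rewritten in a numerically stable form to avoid overflow):
k = 100;                                     % tuning parameter; larger = closer to the true max
% Stable soft maximum: log(exp(k*u)+exp(k*v))/k = max(u,v) + log1p(exp(-k*|u-v|))/k
softmax2 = @(u,v) max(u,v) + log1p(exp(-k*abs(u-v)))/k;
obj = @(x) softmax2((x-1).^2, (x+1).^2);     % f(x) = x is the identity here
xstar = fminunc(obj, 3)                      % converges near the true minimizer x = 0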
I'm new to MATLAB. I don't understand how to create a Dirac delta function and then shift it using the Symbolic Math Toolbox.
syms t
x = dirac(t)
Why can't I see the Dirac delta function using ezplot(x,[-10,10]), for example?
As others have noted, the Dirac delta function is not a true function, but a generalized function. The help for dirac indicates this:
dirac(X) is not a function in the strict sense, but rather a
distribution with int(dirac(x-a)*f(x),-inf,inf) = f(a) and
diff(heaviside(x),x) = dirac(x).
Strictly speaking, it's impossible for Matlab to plot the Dirac delta function in the normal way because part of it extends to infinity. However, there are numerous workarounds if you want a visualization. A simple one is to use the stem plot function and the > operator to convert the one Inf value to something finite. This produces a unit impulse function (or Kronecker delta):
t = -10:10;
x = dirac(t) > 0;
stem(t,x)
If t and x already exist as symbolic variables/expressions rather than numeric ones you can use subs:
syms t
x = dirac(t);
t2 = -10:10;
x2 = subs(x,t,t2)>0;
stem(t2, x2)
You can write your own plot routine if you want something that looks different. Using ezplot is not likely to work as it doesn't offer as much control.
First, I hadn't met ezplot before; I had to read up on it. For things that are functionals, like your x, it's handy, but you still have to realize it gives you exactly what it promises: a plot.
If you had the job of plotting the Dirac delta function, how would you go about doing it correctly? You can't. You must find a convention for annotating your plot with the information that there is a single, isolated, infinite point in it.
A line plot is hence unsuitable for anything but smooth functions ("smooth" being a well-defined term), and the Dirac delta definitely isn't among them. You would typically use a vertical line or an arrow to denote the point where your functional is nonzero.
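If you want to roll your own visualization following that convention, a sketch (purely illustrative; the height 1 only marks the location of the impulse, not a value):
t = -10:0.01:10;
plot(t, zeros(size(t)))                  % the delta is 0 everywhere except t = 0
hold on
plot([0 0], [0 1], 'LineWidth', 2)       % vertical line marking the impulse
plot(0, 1, '^', 'MarkerFaceColor', 'b')  % arrowhead-style marker, by convention
ylim([-0.2 1.5]); hold off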
My question refers to the Matlab symbolic toolbox.
I'm trying to differentiate a symbolic function that is a function of another symbolic function. Say that I have an unspecified function x = x(y(theta)). I'd like to take the derivative of x with respect to theta: dx/dtheta = dx/dy * dy/dtheta.
In Matlab I write
syms theta y(theta);
x=sym('x(y(theta))');
diff(x,theta)
The answer I get is 0. I really cannot figure out what is wrong with the code.
Any help is greatly appreciated. Thanks!
Differentiating f(g(y)):
syms x y
f = symfun(sym('f(x)'), x)   % abstract function f of x
g = symfun(sym('g(y)'), y)   % abstract function g of y
diff(f(g(y)), y)             % applies the chain rule: D(f)(g(y))*diff(g(y), y)
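Note that sym('f(x)') may no longer be accepted on newer MATLAB releases. There, a sketch matching the original x(y(theta)) setup would declare the abstract functions directly:
syms theta f(t) y(t)   % declare abstract functions f and y of a dummy variable t
x = f(y(theta));       % x = f(y(theta)), with both f and y unspecified
diff(x, theta)         % yields the chain rule: D(f)(y(theta)) * D(y)(theta)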