How to pass a function to pdepe initial condition function - matlab

I have created a function from sets of points using the Curve Fitting Toolbox. I used the generate-code feature, which produced a function called createFit(a,b), where a and b are the sets of points used for interpolation. As a result, createFit returns my interpolated function.
Now I want to use this function as u0 (the initial condition) of my PDE (I am using pdepe to solve it). To do that, I need to invoke createFit inside the function where u0 is established, which is not a problem, since I have access to it. The problem is that I cannot pass a and b as parameters to that function. I tried making them global, but it did not work. How can I do that?

From the pdepe documentation:
Parameterizing Functions explains how to provide additional parameters to the functions pdefun, icfun, or bcfun, if necessary.
Essentially, use nested functions or anonymous functions to handle extra parameters.
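Concretely, a minimal sketch of the anonymous-function approach (the names pdefun, bcfun, xmesh, and tspan are placeholders for your own setup): build the interpolant once from a and b, then wrap it in a handle passed to pdepe as icfun. No globals are needed, because the handle captures the workspace variables at the moment it is created.

```matlab
% Build the interpolant once; a and b are captured by the handle below.
fitFcn = createFit(a, b);

% icfun must map a position x to the initial value u0(x).
icfun = @(x) fitFcn(x);

% Pass the handle to pdepe alongside your PDE and boundary functions.
sol = pdepe(m, @pdefun, icfun, @bcfun, xmesh, tspan);
```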

Related

Matlab Use ode23 ode solver with variable

I would like to use the ode23 solver for systems. My system has three equations, and the first one depends on a variable 'z' which is declared in the code above.
HereĀ“s the sample code:
clc; clear;
z=1;
function Fv=funsys(t,Y,z);
Fv(1,1)=2*Y(1)+Y(2)+5*Y(3)+exp(-2*t) + z;
Fv(2,1)=-3*Y(1)-2*Y(2)-8*Y(3)+2*exp(-2*t)-cos(3*t);
Fv(3,1)=3*Y(1)+3*Y(2)+2*Y(3)+cos(3*t);
end
[tv,Yv]=ode23('funsys',[0 pi/2],[1;-1;0]);
I get the error that the variable 'z' is not declared.
So, is there any possibility to solve this?
(The function 'funsys' needs to be dependent on 'z')
Yes, have a look at Pass Extra Parameters to ODE Function in the documentation:
you can pass in extra parameters by defining them outside the function
and passing them in when you specify the function handle
In your case, it would look something like this:
[tv,Yv]=ode23(@(t,y) funsys(t,y,z),[0 pi/2],[1;-1;0]);
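Putting the whole thing together as one runnable script (local functions at the end of a script require R2016b or later; in older releases, put funsys in its own file funsys.m):

```matlab
clc; clear;
z = 1;

% The anonymous handle fixes z and exposes the two-argument
% signature (t, Y) that ode23 expects.
[tv, Yv] = ode23(@(t, Y) funsys(t, Y, z), [0 pi/2], [1; -1; 0]);

function Fv = funsys(t, Y, z)
Fv(1,1) =  2*Y(1) +   Y(2) + 5*Y(3) + exp(-2*t) + z;
Fv(2,1) = -3*Y(1) - 2*Y(2) - 8*Y(3) + 2*exp(-2*t) - cos(3*t);
Fv(3,1) =  3*Y(1) + 3*Y(2) + 2*Y(3) + cos(3*t);
end
```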

Spark (Scala) - how to optimize objective function parameters

I have function f going from R2 to R which takes 2 parameters (a and b) and returns a scalar.
I would like to use an optimizer to estimate the values of a and b for which the value returned by f is maximized (or minimized, I can work with -f).
I have looked into the LBFGS optimizer from mllib, see:
the doc at https://spark.apache.org/docs/2.1.0/api/scala/index.html#org.apache.spark.mllib.optimization.LBFGS and https://spark.apache.org/docs/2.1.0/api/scala/index.html#org.apache.spark.mllib.optimization.LBFGS$
an example for logistic regression at https://spark.apache.org/docs/2.1.0/mllib-optimization.html
My issue is that I am not sure I fully understand how this optimizer works.
The optimizers I have seen before in Python and R usually expect the following: an implementation of the objective function, a set of initial values for its parameters, and optionally additional arguments for the objective function, boundaries for the domain within which the parameters should be searched, and so on.
Usually, the optimizer invokes the function iteratively, starting from the initial parameters provided by the user, and calculates a gradient until the value returned by the objective function (the loss) has converged. It then returns the best set of parameters and the corresponding value of the objective function. Pretty standard stuff.
In this case, I see org.apache.spark.mllib.optimization.LBFGS.runLBFGS expects to be given an RDD of labeled data and a gradient.
What is this data RDD argument the optimizer is expecting?
Is the argument gradient an implementation of the gradient of the objective function?
If I am to code my own gradient for my own objective function, how should the loss be calculated (e.g., as the ratio of the value returned by the objective function at iteration n to that at iteration n-1)?
What is the argument initialWeights? Is it an array containing the initial values of the parameters to be optimized?
Ideally, would you be able to provide a very simple code example showing how a simple objective function can be optimized using org.apache.spark.mllib.optimization.LBFGS.runLBFGS?
Finally, could Breeze be an interesting alternative? https://github.com/scalanlp/breeze/blob/master/math/src/main/scala/breeze/optimize/package.scala
Thanks!
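For the Breeze alternative mentioned above, a minimal sketch (a toy objective, not the asker's f): Breeze's LBFGS takes a DiffFunction that returns both the value and the gradient at a point, plus an initial weight vector, which plays the role of initialWeights.

```scala
import breeze.linalg.DenseVector
import breeze.optimize.{DiffFunction, LBFGS}

// Toy objective f(a, b) = (a - 1)^2 + (b + 2)^2, minimized at (1, -2).
val f = new DiffFunction[DenseVector[Double]] {
  def calculate(x: DenseVector[Double]): (Double, DenseVector[Double]) = {
    val value = math.pow(x(0) - 1, 2) + math.pow(x(1) + 2, 2)
    val gradient = DenseVector(2 * (x(0) - 1), 2 * (x(1) + 2))
    (value, gradient)
  }
}

val lbfgs = new LBFGS[DenseVector[Double]](maxIter = 100, m = 5)
val optimum = lbfgs.minimize(f, DenseVector.zeros[Double](2))
```

Unlike mllib's runLBFGS, which is geared toward loss functions computed over an RDD of labeled examples, Breeze optimizes an arbitrary differentiable function, which appears to match this use case more directly.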

Syms Function overwrite Matlab

I'm working on a script that uses sym functions within a loop, because of the way my functions are defined. I also need to use their derivatives. I cannot just write down the explicit MATLAB function for each one, so defining each individual function and its derivatives by hand is not an option.
The code is this:
function [out]=sym_script(n)
syms x;
out=[];
for i=1:n
Function=sin(x)+i*x;
out=[out Some_operation(Function,vec)];
end
(This is a minimal example; the actual sym function is more complicated.) The problem is that MATLAB seems to be unable to overwrite Function if it is syms.
I have tried the script in Matlab 2015a for pc and mac and get the same error in both.
Never mind, the trouble was inside another function I called in the loop. It had a variable named "diff", which conflicted with MATLAB's diff() function that I use to calculate derivatives.
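The pitfall generalizes: assigning to a name that matches a built-in shadows that function for the rest of the workspace's lifetime. A minimal illustration (hypothetical session):

```matlab
syms x
diff(sin(x))    % calls the built-in: returns cos(x)

diff = 5;       % a variable named 'diff' now shadows the function;
                % diff(sin(x)) would fail here, because MATLAB tries
                % to index the variable 'diff' with a sym argument

clear diff      % removes the variable; the built-in works again
```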

Solving ODEs inside a Subsystem in Simulink

I'm trying to figure out how to solve a system of ODEs inside a subsystem in a Simulink model. Basically, each call to this subsystem, which happens at each tick of the fixed-step simulation clock, entails solving the ODEs. So there is effectively a separate "clock" for the subsystem.
I have an M-file that implements the function for the system of ODEs, and currently a MATLAB Function block that calls it. It needs a lot of parameters, which I can get from the base workspace (via evalin, after declaring coder.extrinsic('evalin') at the beginning). But I'm not allowed to define function_handle objects or nested functions to parameterize the function used by the ode* solvers. I think that if I can solve the ODEs in this block, my problem is solved, but those restrictions are ruining it.
I'd appreciate if you have any ideas of how to accomplish this. I welcome different approaches.
Thank you.
EDIT
A simple example is given below. It attempts to solve the van der Pol equation by changing the mu parameter randomly. This is the main idea I have at the moment, which doesn't work because of the problems mentioned above.
This is the main model with the subsystem:
This is the subsystem:
This is the MATLAB Function block implementation (note that there's an error at the @ symbol, since defining function_handle objects isn't allowed):
Just use the MATLAB Function block as a wrapper. Put the bulk of your code into a "standard" MATLAB function (i.e. one callable from MATLAB, not the MATLAB Function block) and call that function (after defining it as coder.extrinsic) from the MATLAB Function block.
This will be a bit more complex than Phil Goddard's solution. The advantage is that it will allow you to generate standalone code, if necessary, whereas extrinsic functions are not compatible with standalone code generation.
The functions ode23 and ode45 are supported for code generation as of MATLAB R2014b so this applies if your MATLAB release is at least that new. Supposing so, the main limitation you are seeing is that anonymous functions are not supported for code generation.
Simulate Anonymous Function Parameters with Persistent Variables
However, these parameterized anonymous functions can be simulated using a normal function with a persistent variable. To simulate your function with the parameter mu, make a MATLAB file odefcn.m:
function x = odefcn(t,y)
%#codegen
persistent mu;
if isempty(mu)
% Adjust based on actual size, type and complexity
mu = 0;
end
if ischar(t) && strcmp(t,'set')
% Syntax to set parameter
mu = y;
else
x = [y(2); mu*(1-y(1)^2)*y(2)-y(1)];
end
Then in the MATLAB Function Block, use:
function y = fcn(mu)
%#codegen
% Set parameter
odefcn('set',mu);
% Solve ODE
[~,Y] = ode45(@odefcn,[0, 20], [2; 0]);
y = Y(end,1);
That should work both for simulation and for code generation. You can just add more arguments to odefcn if you need more parameters.

Matlab, SCIP and Opti Toolbox

I am using the Opti Toolbox, a free optimization toolbox for Matlab. I am solving a Mixed Integer Nonlinear Program, a MINLP. Inside the Opti Toolbox, the MINLP solver used is SCIP.
I define my own objective as a separate function (fun argument in Opti), and this function needs to call other matlab functions which take double arguments.
The problem is that whenever Opti invokes my function to evaluate the objective, it first calls it with a vector of 'scipvar' objects and then calls it again with a vector of 'double' values. My objective function does not work with the scipvar objects; it returns an error.
I tried (just for testing) setting the output of my function to something fixed when the input type is 'scipvar', and to the real computation when the type is 'double'. This doesn't work: changing the fixed value actually changes the final optimal value.
I basically need to convert a scipvar object to double, is this possible? Or is there any other alternative?
Thank you.
Ok, so after enlightenment by J. Currie, an Opti toolbox developer, I understood the cause of the problem above.
The first call to the objective with a vector of scipvar variables is actually a parser sweeping the objective function to see whether it can be mapped to something SCIP can handle. I reimplemented the objective function to use only the methods allowed for scipvar objects, obtained by typing methods(scipvar) in MATLAB:
abs, display, dot, exp, log, log10, minus, mpower, mrdivide, mtimes, norm, plus, power, prod, rdivide, scipvar, sqrt, sum, times, uminus
Once the objective could be parsed by SCIP, my problem worked fine.
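As an illustration of the kind of rewrite involved (a hypothetical objective, not the poster's): a formula built only from the overloaded scipvar methods above parses cleanly under the sweep, while anything outside that list breaks it.

```matlab
% Parses under the scipvar sweep: only overloaded operations are used
% (power, times, plus, minus, abs, exp, log, sum).
fun = @(x) sum(x.^2) - log(1 + abs(x(1))) + exp(-x(2));

% Would NOT parse: max is not among the scipvar methods.
% fun = @(x) max(x(1), 0);
```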