I have been given the task of creating a MATLAB program, and I was looking into the "switch" and "if" statements for it.
The user will define any two of the variables, and MATLAB will then solve for the remaining one. Normally I would use "if" to account for the different scenarios, but as the number of variables in the equation grows, the number of branches (and lines of code) grows with it.
Updated with a demonstration:
% Mach Number after Shockwaves
M2=sqrt((((gamma-1).*M1.^2)+2)./(2*gamma.*M1.^2-(gamma-1)));
% Temperature Ratio
TR=((2*gamma.*M1.^2-(gamma-1)).*(((gamma-1).*M1.^2)+2))./(((gamma+1)^2).*M1.^2);
% Pressure Ratio
PR=(2*gamma.*M1.^2-(gamma-1))./(gamma+1);
% Density Ratio
rhoR=((gamma+1).*M1.^2)./(((gamma-1).*M1.^2)+2);
% Stagnation Pressure Ratio Before Shockwaves
P0R=(((1+0.2.*M2.^2)./(1+0.2.*M1.^2)).^(1.4/0.1)).*((2*gamma.*M1.^2-(gamma-1))./(gamma+1));
% Stagnation Pressure Ratio After Shockwaves
P1R=((1+0.2.*M2.^2).^(1.4/0.4)).*((2*gamma.*M1.^2-(gamma-1))./(gamma+1));
Is there any alternative? Also, the MATLAB installation on my campus has no Symbolic Math Toolbox, so it would be best to avoid it. I am at my wit's end, because I am sure there is a simple way to handle such a scenario.
Further Updated with the exact equation:
The program should be able to do the following: say the user enters "TR" and "gamma"; MATLAB should then find "M1". This result would then be carried into the subsequent equations, giving me "M2", "PR", "rhoR", "P0R" and "P1R". Also, I realised that MATLAB reads a script from top to bottom, so is there any way to account for this?
As we already wrote in the comments, what you are trying to do is very complicated and rather infeasible without the Symbolic Math Toolbox.
I have written a hacky solution which only accounts for the case:
2 out of the 3 variables {TR, gamma, M1} are given, the third one is then automatically calculated. These 3 variables can then be used to solve the remaining equations.
This solution assumes that you have access to the Symbolic Math Toolbox at least once; you will not need it afterwards when you use the generated code.
We first generate MATLAB functions based on the symbolic expression for each of the following cases:
{TR, gamma} given, M1 missing
{TR, M1} given, gamma missing
{gamma, M1} given, TR missing
This results in 6 m-files to be written, sol_TR.m, cond_TR.m, etc.
syms gamma M1 TR;
assume(gamma, 'real');
assume(gamma > 0);
assume(M1, 'real');
assume(M1 > 0);
assume(TR, 'real');
assume(TR > 0);
eq = TR ==(((gamma - 1)*M1^2 + 2)*(2*gamma*M1^2 - gamma + 1))/(M1^2*(gamma + 1)^2);
vars = {gamma, M1, TR};
num_vars = size(vars,2);
for i=1:num_vars
current_var = vars{i};
[sol, ~, cond] = solve(eq,current_var, 'ReturnConditions', true);
matlabFunction(sol, 'File', sprintf('sol_%s',char(current_var)),'Vars', vars, 'Optimize', false);
matlabFunction(cond, 'File', sprintf('cond_%s',char(current_var)),'Vars', vars, 'Optimize', false);
end
These functions can then be used to calculate the missing variable:
function [input_vector] = calc_third(varname_1, var_value_1, varname_2, var_value_2)
varnames = {'gamma', 'M1', 'TR'};
num_vars = size(varnames,2);
var_index = 1:num_vars;
var_name_map = containers.Map(varnames,var_index);
input_vector = zeros(1,num_vars);
input_vector(var_index == var_name_map(varname_1)) = var_value_1;
input_vector(var_index == var_name_map(varname_2)) = var_value_2;
var_index(var_index == var_name_map(varname_1)) = [];
var_index(var_index == var_name_map(varname_2)) = [];
sol_func = sprintf('sol_%s(input_vector(1),input_vector(2),input_vector(3))', varnames{var_index});
cond_func = sprintf('cond_%s(input_vector(1),input_vector(2),input_vector(3))', varnames{var_index});
result = dot(eval(sol_func), eval(cond_func));
input_vector(var_index)= result;
end
Example run:
>> calc_third('gamma', 0.5, 'TR', 100)
ans =
0.5000 0.0669 100.0000
You could of course build upon that solution and create a symbolic system of equations which incorporates all 8 of your variables. You would then have to generate 28 functions and select the appropriate ones based on the given input variables.
However, I would not recommend that route. Try to get the symbolic toolbox where you need it; this should help you avoid a lot of headache.
You could use it then like this:
function [] = calc_third(varname_1, var_value_1, varname_2, var_value_2)
gamma = sym('gamma');
M1 = sym('M1');
TR = sym('TR');
eq = TR ==(((gamma - 1)*M1^2 + 2)*(2*gamma*M1^2 - gamma + 1))/(M1^2*(gamma + 1)^2);
subs_eq = (subs(eq,[sym(varname_1), sym(varname_2)],[var_value_1,var_value_2]));
missing_var = symvar(subs_eq)
solve(subs_eq,missing_var)
end
Sample run:
>> calc_third('gamma', 0.5, 'TR', 100)
missing_var =
M1
ans =
(2*2^(1/2))/(3*88609^(1/2) + 893)^(1/2)
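If you need a numeric value, double() converts the symbolic result; for this example it agrees with the 0.0669 returned by the generated-code variant above (my own check):
>> double(ans)
ans =
0.0669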
I am currently working on an assignment where I need to create two different controllers in Matlab/Simulink for a robotic exoskeleton leg. The idea behind this is to compare both of them and see which controller is better at assisting a human wearing it. I am having a lot of trouble putting specific equations into a Matlab function block to then run in Simulink to get results for an AFO (adaptive frequency oscillator). The link has the equations I'm trying to put in and the following is the code I have so far:
function [pos_AFO, vel_AFO, acc_AFO, offset, omega, phi, ampl, phi1] = LHip(theta, eps, nu, dt, AFO_on)
t = 0;
% syms j
% M = 6;
% j = sym('j', [1 M]);
if t == 0
omega = 3*pi/2;
theta = 0;
phi = pi/2;
ampl = 0;
else
omega = omega*(t-1) + dt*(eps*offset*cos(phi1));
theta = theta*(t-1) + dt*(nu*offset);
phi = phi*(t-1) + dt*(omega + eps*offset*cos(phi*core(t-1)));
phi1 = phi*(t-1) + dt*(omega + eps*offset*cos(phi*core(t-1)));
ampl = ampl*(t-1) + dt*(nu*offset*sin(phi));
offset = theta - theta*(t-1) - sym(ampl*sin(phi), [1 M]);
end
pos_AFO = (theta*(t-1) + symsum(ampl*(t-1)*sin(phi* (t-1))))*AFO_on; %symsum needs input argument for index M and range
vel_AFO = diff(pos_AFO)*AFO_on;
acc_AFO = diff(vel_AFO)*AFO_on;
end
https://www.pastepic.xyz/image/pg4mP
Essentially, I don't know how to do the subscripts, sigma, or the (t+1) function. Any help is appreciated as this is due next week
You are looking to find the result of an adaptive process, so your algorithm needs to consider time as it progresses. There is no (t-1) operator as such; it is just mathematical notation telling you that you need to reuse an old value to calculate a new value.
omega_old=0;
theta_old=0;
% initialize the rest of your variables
for t = 1:N
    omega(t) = omega_old + % here is the rest of your omega calculation
    theta(t) = theta_old + % ...
    % more code .....
    % remember your old values for the next iteration
    omega_old = omega(t);
    theta_old = theta(t);
end
I think you forgot to apply the modulo operation to phi judging by the original formula you linked. As a general rule, design your code in small pieces, make sure the output of each piece makes sense and then combine all pieces and make sure the overall result is correct.
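Regarding the modulo: it can be applied as one extra line at the end of each loop iteration (a minimal sketch; I am assuming the usual 2*pi phase convention here, so check it against the formula you linked):
phi(t) = mod(phi(t), 2*pi); % keep the phase bounded instead of letting it grow without limit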
I have 16 non-linear equations which are independent of each other, i.e. they are not a system of equations. One way is to create 16 separate subroutines and use fsolve on each, which is what I generally do.
But I need to reduce the number of subroutines from 16 to one. Let me try to give an example of what I'm doing so far:
u01 = .001;....u016 = .001;
options=optimset('Display','notify','MaxFunEvals',10^7,'TolX',1e-,'TolFun',1e-6,'MaxIter',10^5);
u1 = fsolve(@polsim1,u01,options);
..
..
u16 = fsolve(@polsim16,u016,options);
So, in the above example I have 16 subroutines, polsim1 through polsim16, where each contains one non-linear equation to solve for a u. This method is very cumbersome and messy. I need to do it in one subroutine. I believe I need to use an index n = 1 to 16, but I'm not sure how or where to use it.
Just create an array of functions and initial guesses, as shown in the first example of the documentation for fsolve.
% Array of equations, nx1 in size for n equations.
% Ensure that the input (x) is indexed in each equation, e.g. x(1) in eqn 1
polsim = @(x) [x(1).^2 - 1
2*x(2) + 3];
% Array of initial guesses, corresponding element-wise to eqn array
u0 = [0.01; 0.01];
% Solver options
options = optimset('Display','notify','MaxFunEvals',10^7,'TolX',1e-5,'TolFun',1e-6,'MaxIter',10^5);
% Solve
u = fsolve(polsim, u0, options);
>> u = [1.000; -1.500] % As expected for the equations in polsim and the initial guesses
You should rewrite your code to use vectors/cell arrays as follows:
% Write the initial point as a vector
u0(1) = .001;
..
u0(16) = .001;
% Write the equations as a cell array
polsim{1} = @polsim1;
..
polsim{16} = @polsim16;
options=optimset('Display','notify','MaxFunEvals',10^7,'TolX',1e-1,'TolFun',1e-6,'MaxIter',10^5);
u = zeros(size(u0)); % Allocate output space for efficiency
% loop over all the equations
for i=1:length(u0)
u(i) = fzero(polsim{i},u0(i),options);
end
Note that I used fzero instead of fsolve to improve the efficiency, see this post for more information.
Trick to automatically construct vector/cell array (Deprecated)
For completeness, I should mention that you can initialise u0 and polsim automatically using eval:
for i=1:16
eval(['u0(', num2str(i), ') = u0', num2str(i),';']);
eval(['polsim{', num2str(i), '} = polsim', num2str(i),';']);
end
Note that I do not recommend this method. It is much better to directly define them as a vector/cell array.
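For illustration, defining them directly could look like this (a minimal sketch; the two example equations are placeholders of mine, not your actual polsim equations):
u0 = 0.001 * ones(1, 16); % initial guesses
polsim = cell(1, 16); % one scalar equation per cell
polsim{1} = @(u) u.^2 - 1; % placeholder equation
polsim{2} = @(u) 2*u + 3; % placeholder equation
% ... and so on up to polsim{16} with your own expressions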
I wanted to compute a finite difference with respect to the change of the function in Matlab. In other words
f(x+e_i) - f(x)
is what I want to compute. Note that it's very similar to the first-order numerical partial derivative (a forward difference in this case):
(f(x+e_i) - f(x)) / (e_i)
Currently I am using for loops to compute it but it seems that Matlab is much slower than I thought. I am doing it as follows:
function [ dU ] = numerical_gradient(W,f,eps)
%compute gradient or finite difference update numerically
[D1, D2] = size(W);
dU = zeros(D1, D2);
for d1=1:D1
for d2=1:D2
e = zeros([D1,D2]);
e(d1,d2) = eps;
f_e1 = f(W+e);
f_e2 = f(W-e);
%numerical_derivative = (f_e1 - f_e2)/(2*eps);
%dU(d1,d2) = numerical_derivative
numerical_difference = f_e1 - f_e2;
dU(d1,d2) = numerical_difference;
end
end
it seems that it's really difficult to vectorize the above code, because the numerical differences follow the definition of the gradient and the partial derivatives, which is:
df_dW = [ ..., df_dWi, ...]
where df_dWi assumes the other coordinates are fixed and only considers the change of the variable Wi. Thus, I can't just change all the coordinates at once.
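Written out per entry, what the loop above computes is (in my notation, with E(d1,d2) being the matrix that is eps at entry (d1,d2) and zero everywhere else):
dU(d1,d2) = f(W + E(d1,d2)) - f(W - E(d1,d2))
which, divided by 2*eps, would be the central-difference approximation shown in the commented-out line.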
Is there a better way to do this? My intuition tells me that the best way would be to implement this not in MATLAB but in some other language, say C, and then have MATLAB call that library. Is that true? Does it mean that the best solution is some MATLAB library that does this for me?
I did see:
https://www.mathworks.com/matlabcentral/answers/332414-what-is-the-quickest-way-to-find-a-gradient-or-finite-difference-in-matlab-of-a-real-function-in-hig
but unfortunately, it computes exact derivatives, which isn't what I am looking for. I am explicitly looking for differences or "bad approximation" to the gradient.
Since it seems this code is not easy to vectorize (in fact my intuition tells me it's not possible), my only other idea is to implement this finite difference function in C and then have MATLAB call that C function. Is this a good idea? Does anyone know how to do this?
I did try reading the following:
https://www.mathworks.com/help/matlab/matlab_external/standalone-example.html
but it was too difficult for me to understand, because I have no idea what a MEX file is, whether I need an arrayProduct.c file as well as a mex.h file, whether I also need a MATLAB file, etc. If there were a way to simply download a working example with all the functions they suggest there, plus some instructions to compile it, that would be super helpful. But just reading the HTML article like that, it is impossible for me to infer what they want me to do.
For the sake of completeness, it seems reddit has some comments on this in its discussion:
https://www.reddit.com/r/matlab/comments/623m7i/how_does_one_compute_a_single_finite_differences/
Here is a more efficient way of doing so:
function [ vNumericalGrad ] = CalcNumericalGradient( hInputFunc, vInputPoint, epsVal )
numElmnts = size(vInputPoint, 1);
vNumericalGrad = zeros([numElmnts, 1]);
refVal = hInputFunc(vInputPoint);
for ii = 1:numElmnts
% Set the perturbation vector
refInVal = vInputPoint(ii);
vInputPoint(ii) = refInVal + epsVal;
% Compute Numerical Gradient
vNumericalGrad(ii) = (hInputFunc(vInputPoint) - refVal) / epsVal;
% Reset the perturbation vector
vInputPoint(ii) = refInVal;
end
end
This code allocates less memory.
The performance of the above code will be completely dominated by the speed of hInputFunc.
The small tricks compared to the original code are:
No memory reallocation of e at each iteration.
Instead of the vector addition W + e, there are 2 assignments into the array.
Halving the number of calls to hInputFunc() by computing the reference value outside the loop (this only works for forward / backward differences).
This will probably be very close to C code, unless you can implement the function which computes the value (hInputFunc) more efficiently in C.
A full implementation can be found in StackOverflow Q44984132 Repository (It was Posted in StackOverflow Q44984132).
See CalcFunGrad( vX, hObjFun, difMode, epsVal ).
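For reference, a call to the CalcNumericalGradient function above could look like this (a minimal sketch; the quadratic test function and the values are my own illustration, not part of the original code):
hF = @(v) sum(v.^2); % simple test function, analytical gradient is 2*v
vX = [1; 2; 3]; % column vector, as the function expects
grad = CalcNumericalGradient(hF, vX, 1e-6); % forward-difference approximation of [2; 4; 6]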
A far better approach (numerically more stable, no issue of choosing the perturbation hyperparameter, accurate up to machine precision) is to use algorithmic/automatic differentiation. For this you need the MATLAB Deep Learning Toolbox. Then you can use dlgradient to compute the gradient. Below you will find the source code corresponding to your example.
Most importantly, you can examine the error and observe that the deviation of the automatic approach from the analytical solution is indeed machine precision, while for the finite difference approach (I chose second-order central differences) the error is orders of magnitude higher. For 100 points and a range of [-10, 10] these errors are somewhat tolerable, but if you play a bit with Rand_Max and n_points you will observe that the errors become larger and larger.
Error of algorithmic / automatic diff. is: 1.4755528111219851e-14
Error of finite difference diff. is: 1.9999999999348703e-01 for perturbation 1.0000000000000001e-01
Error of finite difference diff. is: 1.9999999632850161e-03 for perturbation 1.0000000000000000e-02
Error of finite difference diff. is: 1.9999905867860374e-05 for perturbation 1.0000000000000000e-03
Error of finite difference diff. is: 1.9664569947425062e-07 for perturbation 1.0000000000000000e-04
Error of finite difference diff. is: 1.0537897883625319e-07 for perturbation 1.0000000000000001e-05
Error of finite difference diff. is: 1.5469326944467290e-06 for perturbation 9.9999999999999995e-07
Error of finite difference diff. is: 1.3322061696937969e-05 for perturbation 9.9999999999999995e-08
Error of finite difference diff. is: 1.7059535957436630e-04 for perturbation 1.0000000000000000e-08
Error of finite difference diff. is: 4.9702408787320664e-04 for perturbation 1.0000000000000001e-09
Source Code:
f2.m
function y = f2(x)
x1 = x(:, 1);
x2 = x(:, 2);
x3 = x(:, 3);
y = x1.^2 + 2*x2.^2 + 2*x3.^3 + 2*x1.*x2 + 2*x2.*x3;
f2_grad_analytic.m:
function grad = f2_grad_analytic(x)
x1 = x(:, 1);
x2 = x(:, 2);
x3 = x(:, 3);
grad(:, 1) = 2*x1 + 2*x2;
grad(:, 2) = 4*x2 + 2*x1 + 2 * x3;
grad(:, 3) = 6*x3.^2 + 2*x2;
f2_grad_AD.m:
function grad = f2_grad_AD(x)
x1 = x(:, 1);
x2 = x(:, 2);
x3 = x(:, 3);
y = x1.^2 + 2*x2.^2 + 2*x3.^3 + 2*x1.*x2 + 2*x2.*x3;
grad = dlgradient(y, x);
CalcNumericalGradient.m:
function NumericalGrad = CalcNumericalGradient(InputPoints, eps)
% (Central, second order accurate FD)
NumericalGrad = zeros(size(InputPoints) );
for i = 1:size(InputPoints, 2)
perturb = zeros(size(InputPoints));
perturb(:, i) = eps;
NumericalGrad(:, i) = (f2(InputPoints + perturb) - f2(InputPoints - perturb)) / (2 * eps);
end
main.m:
clear;
close all;
clc;
n_points = 100;
Rand_Max = 20;
x_test_FD = rand(n_points, 3) * Rand_Max - Rand_Max/2;
% Calculate analytical solution
grad_analytic = f2_grad_analytic(x_test_FD);
grad_AD = zeros(n_points, 3);
for i = 1:n_points
x_test_dl = dlarray(x_test_FD(i,:) );
grad_AD(i,:) = dlfeval(@f2_grad_AD, x_test_dl);
end
Err_AD = norm(grad_AD - grad_analytic);
fprintf("Error of algorithmic / automatic diff. is: %.16e\n", Err_AD);
eps_range = [1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 1e-7, 1e-8, 1e-9];
for i = 1:length(eps_range)
eps = eps_range(i);
grad_FD = CalcNumericalGradient(x_test_FD, eps);
Err_FD = norm(grad_FD - grad_analytic);
fprintf("Error of finite difference diff. is: %.16e for perturbation %.16e\n", Err_FD, eps);
end
I have trained a neural network in Matlab (Using the neural network toolbox). Now I would like to export the calculated weights and biases to another platform (PHP) in order to make calculations with them. Is there a way to create a function or equation to do this?
I found this related question: Equation that compute a Neural Network in Matlab.
Is there a way to do what I want and port the results of my NN (29 inputs, 10 hidden layers, 1 output) to PHP?
Yes, the net properties also referenced in the other question are simple matrices:
W1=net.IW{1,1};
W2=net.LW{2,1};
b1=net.b{1,1};
b2=net.b{2,1};
So you can write them to a file, say, as comma-separated-values.
csvwrite('W1.csv',W1)
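The remaining matrices can be exported the same way (same csvwrite call; the file names here are just my choice):
csvwrite('W2.csv',W2)
csvwrite('b1.csv',b1)
csvwrite('b2.csv',b2)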
Then, in PHP read this data and convert or use it as you like.
<?php
if (($handle = fopen("W1.csv", "r")) !== FALSE) {
    $W1 = array();  // weight matrix exported from MATLAB above
    while (($row = fgetcsv($handle, 1000, ",")) !== FALSE) {
        $W1[] = $row;
    }
    fclose($handle);
}
?>
Then, to process the weights, you can use the formula from the other question, replacing the tansig function, which is calculated according to:
tansig(n) = 2/(1+exp(-2*n)) - 1
This is mathematically equivalent to tanh(n),
which exists in PHP as well.
source: http://dali.feld.cvut.cz/ucebna/matlab/toolbox/nnet/tansig.html
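A quick numerical check of that equivalence in MATLAB (an illustrative snippet of mine, not part of the original answer):
n = linspace(-5, 5, 101);
max(abs((2 ./ (1 + exp(-2*n)) - 1) - tanh(n))) % on the order of 1e-16, i.e. machine precision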
Transferring all of these is pretty trivial. You will need to:
First, write the code for the matrix multiplication, which is a pretty simple pair of nested for loops.
Second, observe that, according to the MATLAB documentation, tansig(n) = 2/(1+exp(-2*n))-1. I'm pretty sure that PHP has exp (and if not, it has a pretty simple polynomial expansion which you can write yourself).
Read, understand and apply Jasper van den Bosch's excellent answer to your question.
Hence the solution becomes (after correcting all wrong parts)
Here I am giving a solution in MATLAB, but if you have a tanh() function, you can easily convert it to any programming language. For PHP, the tanh() function exists: php tanh(). This is just to show the fields of the network object and the operations you need.
Assume you have a trained ann (network object) that you want to export
Assume that the name of the trained ann is trained_ann
Here is the script for exporting and testing.
Testing script compares original network result with my_ann_evaluation() result
% Export IT
exported_ann_structure = my_ann_exporter(trained_ann);
% Run and Compare
% Works only for single INPUT vector
% Please extend it to MATRIX version by yourself
input = [12 3 5 100];
res1 = trained_ann(input')';
res2 = my_ann_evaluation(exported_ann_structure, input')';
where you need the following two functions
First my_ann_exporter:
function [ my_ann_structure ] = my_ann_exporter(trained_netw)
% Just for extracting as Structure object
my_ann_structure.input_ymax = trained_netw.inputs{1}.processSettings{1}.ymax;
my_ann_structure.input_ymin = trained_netw.inputs{1}.processSettings{1}.ymin;
my_ann_structure.input_xmax = trained_netw.inputs{1}.processSettings{1}.xmax;
my_ann_structure.input_xmin = trained_netw.inputs{1}.processSettings{1}.xmin;
my_ann_structure.IW = trained_netw.IW{1};
my_ann_structure.b1 = trained_netw.b{1};
my_ann_structure.LW = trained_netw.LW{2};
my_ann_structure.b2 = trained_netw.b{2};
my_ann_structure.output_ymax = trained_netw.outputs{2}.processSettings{1}.ymax;
my_ann_structure.output_ymin = trained_netw.outputs{2}.processSettings{1}.ymin;
my_ann_structure.output_xmax = trained_netw.outputs{2}.processSettings{1}.xmax;
my_ann_structure.output_xmin = trained_netw.outputs{2}.processSettings{1}.xmin;
end
Second my_ann_evaluation:
function [ res ] = my_ann_evaluation(my_ann_structure, input)
% Works with only single INPUT vector
% Matrix version can be implemented
ymax = my_ann_structure.input_ymax;
ymin = my_ann_structure.input_ymin;
xmax = my_ann_structure.input_xmax;
xmin = my_ann_structure.input_xmin;
input_preprocessed = (ymax-ymin) * (input-xmin) ./ (xmax-xmin) + ymin;
% Pass it through the ANN matrix multiplication
y1 = tanh(my_ann_structure.IW * input_preprocessed + my_ann_structure.b1);
y2 = my_ann_structure.LW * y1 + my_ann_structure.b2;
ymax = my_ann_structure.output_ymax;
ymin = my_ann_structure.output_ymin;
xmax = my_ann_structure.output_xmax;
xmin = my_ann_structure.output_xmin;
res = (y2-ymin) .* (xmax-xmin) /(ymax-ymin) + xmin;
end
The code in question is here:
function k = whileloop(odefun,args)
...
while (sign(costheta) == originalsign)
y=y(:) + odefun(0,y(:),vars,param)*(dt); % Line 4
costheta = dot(y-normpt,normvec);
k = k + 1;
end
...
end
and to clarify, odefun is F1.m, an m-file of mine. I pass it into the function that contains this while-loop. It's something like whileloop(@F1,args). Line 4 in the code block above is the Euler method.
The reason I'm using a while-loop is because I want to trigger upon the vector "y" crossing a plane defined by a point, "normpt", and the vector normal to the plane, "normvec".
Is there an easy change to this code that will speed it up dramatically? Should I attempt learning how to make mex files instead (for a speed increase)?
Edit:
Here is a rushed attempt at an example of what one could test with. I have not debugged it; it is just to give you an idea:
%Save the following 3 lines in an m-file named "F1.m"
function ydot = F1(placeholder1,y,placeholder2,placeholder3)
ydot = y/10;
end
%Run the following:
dt = 1.5e-12 %I do not know about this. You will have to experiment.
y0 = [.1,.1,.1];
normpt = [3,3,3];
normvec = [1,1,1];
originalsign = sign(dot(y0-normpt,normvec));
costheta = originalsign;
y = y0;
k = 0;
while (sign(costheta) == originalsign)
y=y(:) + F1(0,y(:),0,0)*(dt); % Line 4
costheta = dot(y-normpt,normvec);
k = k + 1;
end
disp(k);
dt should be sufficiently small that it takes hundreds of thousands of iterations to trigger.
Assume I must use the Euler method. I have a stochastic differential equation with state-dependent noise if you are curious as to why I tell you to take such an assumption.
I would focus on your actual ODE integration. The fewer steps you have to take, the faster the loop will run. I would only worry about the speed of the sign check after you've optimized the actual integration method.
It looks like you're using the first-order explicit Euler method. Have you tried a higher-order integrator or an implicit method? Often you can increase the time step significantly.
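As an illustration of what a higher-order explicit step could look like, here is a Heun (improved Euler) version of the example loop above (a sketch only, reusing the example F1 and variables defined earlier; whether a higher-order deterministic scheme is appropriate for your stochastic equation is a separate question):
while (sign(costheta) == originalsign)
    k1 = F1(0, y(:), 0, 0); % slope at the current point
    k2 = F1(0, y(:) + dt*k1, 0, 0); % slope at the Euler predictor
    y = y(:) + (dt/2)*(k1 + k2); % second-order update
    costheta = dot(y - normpt, normvec);
    k = k + 1;
end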