Define Nonconstant Boundary Conditions "partially" on an edge - MATLAB

I need to define Dirichlet and Neumann conditions in a plane stress problem, but only on part of one of the edges of a plate.
MATLAB's help for defining nonconstant boundary conditions indicates that functions must be specified like this:
applyBoundaryCondition(model,'edge',1,'r',@myrfun);
applyBoundaryCondition(model,'face',2,'g',@mygfun,'q',@myqfun);
applyBoundaryCondition(model,'edge',[3,4],'u',@myufun,'EquationIndex',[2,3]);
It moreover says that each function must have the following syntax.
function bcMatrix = myfun(region,state)
and finally that "region" is a structure containing the fields region.x (x-coordinates of the points), region.y (y-coordinates of the points), etc. If there are Neumann conditions (which is my case), then the solvers pass additional data in the region structure: region.nx (x-component of the normal vector at the evaluation points), and so on. My questions are:
Where do I get the region structure from?
How can I specify that the boundary conditions apply only on one part of one of the edges?
Thanks!!

@Oliver,
1) You don't need to build the region structure yourself; the solver constructs it and passes it to your function. You only need to write functions that can use it. Since the boundary condition generally depends on the location, you will need region.x and region.y.
2) You can use region.x and region.y to make the boundary conditions dependent on the location. This is one way of applying them "partially", provided the type of boundary condition is the same along the whole edge. Otherwise, you will have to split the boundary explicitly, which probably has to happen while defining the geometry of the problem.
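For instance, a minimal sketch of a Neumann flux function that acts on only part of an edge (the split location x0, the traction value g, and the assumption of a two-equation plane stress system are made up for illustration; state is not used here):

function bcMatrix = mygfun(region, state)
    % The solver calls this with region.x/region.y (and region.nx/region.ny
    % for Neumann conditions) at the evaluation points.
    x0 = 0.5;                                % hypothetical split location on the edge
    g  = 1e3;                                % hypothetical traction value
    bcMatrix = zeros(2, numel(region.x));    % one row per equation, one column per point
    bcMatrix(1, region.x > x0) = g;          % load only the points with x > x0
end

The function is then attached to the whole edge, e.g. applyBoundaryCondition(model,'edge',1,'g',@mygfun); the zero entries effectively switch the load off on the rest of the edge.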

Related

Division by zero depending on parameter

I am using the FixedRotation component and get a division by zero error. This happens in a translated expression of the form
var = numerator/fixedRotation.R_rel_inv.T[1,3]
because T[1,3] is 0 for the chosen parameters:
n={0,1,0}
angle=180 deg.
It seems that OpenModelica keeps the symbolic variable and tries to stay generic, but in this case that leads to a division by zero because it chooses to put T[1,3] in the denominator.
What modifications are needed to tell the compiler that the evaluated value of T[1,3] should be treated at compile time as if it were hard-coded? R_rel is not defined with Evaluate=true internally in fixedRotation...
Should I use a custom version of this block? (When I copy-paste the source code into a new model and set the parameters R_rel and R_rel_inv to Evaluate=true, the simulation works without division by zero.)
BUT is there a modifier to tell from the outside that a parameter shall be Evaluate=true, without the need to make a new model?
Any other way to prevent division by zero?
Try propagating the parameter at a higher level and setting annotation(Evaluate=true) on it.
For example:
model A
  parameter Real a=1;
end A;

model B
  parameter Real aPropagated = 2 annotation(Evaluate=true);
  A Ainstance(a=aPropagated);
end B;
I don't understand how the Evaluate annotation should help here. The denominator is obviously zero, and that is what actually needs to be handled.
To deal with division by zero there are various possibilities, e.g. setting a particular value for that case or adding a small offset to the denominator (you can find examples in the Modelica Standard Library). You can also consider the physical meaning of the equation and handle the case accordingly.
Since the denominator depends on a parameter, you can also add an assert() to warn the user about an invalid parameter value.
Btw. R_rel_inv is protected and should therefore not be used; use R_rel instead. Also, when dealing with rotation matrices, using the functions from Modelica.Mechanics.MultiBody.Frames is the preferable way.
And: whether to use a custom version or your own implementation depends on your preferences. The custom version is maintained by the community, your own version is in your hands.

Coupled variables in hyperparameter optimization in MATLAB

I would like to find optimal hyperparameters for a specific function; I am using the bayesopt routine in MATLAB.
I can set the variables to optimize like the following:
a = optimizableVariable('a',[0,1],'Type','integer');
But I have coupled variables, i.e., variables whose values depend on the existence of other variables, e.g., a={0,1}, and b={0,1} only if a=1.
Meaning that b only has an influence on the function if a==1.
I thought about creating a single variable that encompasses all the possibilities, i.e., c=1 if a=0, c=2 if a=1 and b=0, c=3 if a=1 and b=1. The problem is that I am also interested in optimizing continuous variables, for which this approach does not hold anymore.
I tried something along the lines of
b = a * optimizableVariable('b',[0,1],'Type','integer');
But MATLAB threw an error.
Undefined operator '*' for input arguments of type 'optimizableVariable'.
After three months almost to the day, buried deep down in the MATLAB documentation, the answer was to use constrained variables:
https://www.mathworks.com/help/stats/constraints-in-bayesian-optimization.html#bvaw2ar
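For reference, a minimal sketch of the conditional-constraint approach for the a/b coupling above (objFcn is a hypothetical objective that receives a one-row table and must tolerate b being NaN; marking b as NaN when it does not apply follows the pattern described on the linked page):

a = optimizableVariable('a',[0,1],'Type','integer');
b = optimizableVariable('b',[0,1],'Type','integer');

% When a==0, b has no influence, so mark it as "not applicable" (NaN).
results = bayesopt(@objFcn, [a,b], 'ConditionalVariableFcn', @condFcn);

function X = condFcn(X)
    X.b(X.a == 0) = NaN;    % b only matters when a==1
end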

function parameters in matlab wander off after curve fitting

First a little background: I'm a psychology student, so my background in coding isn't on par with you guys :-)
My problem is as follows, and the most important observation is that curve fitting with two different programs gives completely different results for my parameters, although my graphs stay the same. The main program we have used to fit my longitudinal data is KaleidaGraph, and this should be seen as kind of the 'golden standard'; the program I'm trying to move to is MATLAB.
I was trying to be smart and wrote some code (a lot, at least for me), and the goal of that code was the following:
1. Taking an individual longitudinal datafile
2. curve fitting this data on a non-parametric model using lsqcurvefit
3. obtaining figures and the points where f' and f'' are zero
This all worked well (woohoo :-)), but when I started comparing the function parameters the two programs generate, there is a huge difference. KaleidaGraph stays close to its original starting values, whereas MATLAB wanders off and the parameters sometimes get larger by a factor of 1000. The graphs, however, stay more or less the same in both cases and both fit the data well. Still, it would be lovely to know how to make the MATLAB curve fitting more 'conservative', i.e. located closer to its original starting values.
validFitPersons = true(nbValidPersons,1);
for i = 1:nbValidPersons
    personalData = data{validPersons(i),3};
    personalData = personalData(personalData(:,1) >= minAge,:);
    % Fit a specific model for all valid persons
    try
        opts = optimoptions(@lsqcurvefit, 'Algorithm', 'levenberg-marquardt');
        [personalParams,personalRes,personalResidual] = lsqcurvefit(heightModel, ...
            initialValues, personalData(:,1), personalData(:,2), [], [], opts);
    catch
        x = 1;   % placeholder: fit failures are silently skipped
    end
end
Above is the part of the code I've written to fit the data files to a specific model.
Below is an example of a non-parametric model I use, with its function parameters.
elseif strcmpi(model,'jpa2')
% y = a.*(1-1/(1+(b_1(t+e))^c_1+(b_2(t+e))^c_2+(b_3(t+e))^c_3))
heightModel = @(params,ages) abs(params(1).*(1-1./(1+(params(2).* (ages+params(8) )).^params(5) +(params(3).* (ages+params(8) )).^params(6) +(params(4) .*(ages+params(8) )).^params(7) )));
modelStrings = {'a','b1','b2','b3','c1','c2','c3','e'};
% Define initial values
if strcmpi('male',gender)
initialValues = [176.76 0.339 0.1199 0.0764 0.42287 2.818 18.52 0.4363];
else
initialValues = [161.92 0.4173 0.1354 0.090 0.540 2.87 14.281 0.3701];
end
I've tried to mimic the curve-fitting process in KaleidaGraph as closely as possible. There I found that they use the Levenberg-Marquardt algorithm, which I've selected here as well. However, the results still vary and I don't have any more clues about how to change this.
Some extra adjustments:
The idea for this code was the following:
I'm trying to compare different fitting models (they are designed for this purpose). So what I do is: I have 5 models with different parameters and different starting values (the second part of my code), and next I have the general curve-fitting file. Since there are different models, it would be interesting if I could put restrictions on how far my starting values can wander off.
Anyone any idea how this could be done?
Anybody willing to help a psychology student?
Cheers
This is a common issue when dealing with non-linear models.
If I were you, I would try to check whether you can remove some parameters from the model in order to simplify it.
If you really want to keep your solution not too far from the initial point, you can use upper bounds and lower bounds for each variable:
x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub)
defines a set of lower and upper bounds on the design variables in x so that the solution is always in the range lb ≤ x ≤ ub.
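As an illustration, a bounded version of the call from the question might look like this (the +/-20% band around the starting values is an arbitrary choice; note that bound constraints are handled by the 'trust-region-reflective' algorithm, not by 'levenberg-marquardt'):

lb = 0.8 * initialValues;    % lower bounds: 20% below the starting values
ub = 1.2 * initialValues;    % upper bounds: 20% above the starting values
opts = optimoptions(@lsqcurvefit, 'Algorithm', 'trust-region-reflective');
personalParams = lsqcurvefit(heightModel, initialValues, ...
    personalData(:,1), personalData(:,2), lb, ub, opts);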
Cheers
You state:
I'm trying to compare different fitting models (they are designed for
this purpose). So what I do is I have 5 models with different
parameters and different starting values ( the second part of my code)
and next I have the general curve fitting file.
You will presumably compare the statistics from fits with different models, to see whether reductions in the fitting error are unlikely to be due to chance. You may want to rely on that comparison to pick the model that not only fits your data suitably but is also the simplest (this is often referred to as the principle of parsimony).
The problem really lies with the model you have shown, which results in correlated parameters and therefore overfitting, as mentioned by @David. Again, this should be resolved when you compare different models and find that some do just as well (statistically speaking) even though they involve fewer parameters.
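One way to put a number on that comparison (shown here with the Akaike information criterion as an example; personalRes is the residual sum of squares returned as the second output of lsqcurvefit in the code above):

n   = numel(personalData(:,2));      % number of data points
k   = numel(initialValues);          % number of fitted parameters
aic = n*log(personalRes/n) + 2*k;    % least-squares form of the AIC
% Fit each candidate model, compute its AIC, and prefer the lowest value:
% it rewards goodness of fit while penalising extra parameters.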
edit
To drive the point home regarding the problem with the choice of model, here are (1) the results of a trial fit using simulated data and (2) the correlation matrix of the parameters in graphical form:
Note that absolute values of the correlation close to 1 indicate strongly correlated parameters, which is highly undesirable. Note also that the trend in the data is practically linear over a long portion of the dataset, which implies that 2 parameters might suffice over that stretch, so using 8 parameters to describe it seems like overkill.

Suppress kinks in a plot matlab

I have a CSV file which contains data like below [the 1st row is the header]:
Element,State,Time
Water,Solid,1
Water,Solid,2
Water,Solid,3
Water,Solid,4
Water,Solid,5
Water,Solid,2
Water,Solid,3
Water,Solid,4
Water,Solid,5
Water,Solid,6
Water,Solid,7
Water,Solid,8
Water,Solid,7
Water,Solid,6
Water,Solid,5
Water,Solid,4
Water,Solid,3
A similar pattern is repeated for State, with "Solid" replaced by Liquid and Gas.
Moreover, the Element "Water" can be replaced by some other element too.
The Time values are given as integers in seconds (to simplify) but can be any real number.
Additionally, there might be some comment lines starting with # in between in the file.
Problem statement: I want to eliminate the first dip in the Time values and smooth it out using some quadratic, cubic or polynomial interpolation [please notice the first change from 5->2 and back up to 8; I want to replace these numbers with intermediate values giving a gradual/smooth increase from 5 to 8].
And I wish this to be done for all the combinations of Elements and States.
Is this possible through some sort of coding in MATLAB etc.?
Any Pointers will be helpful !!
Thanks in advance :)
You can use the interp1 function for 1D-interpolation. The syntax is
yi = interp1(x,y,xi,method)
where x are your original coordinates, y are your original values, xi are the coordinates at which you want the values to be interpolated at and yi are the interpolated values. method can be 'spline' (cubic spline interpolation), 'pchip' (piece-wise Hermite), 'cubic' (cubic polynomial) and others (see the documentation for details).
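A minimal sketch for one Element/State combination (the vector name time and the dip indices 6:11 are assumptions for illustration, not derived from the actual file):

idx  = (1:numel(time))';                    % sample index used as the x-coordinate
dip  = 6:11;                                % hypothetical samples forming the dip
keep = setdiff(idx, dip);                   % samples kept as interpolation anchors
timeSmooth      = time;
timeSmooth(dip) = interp1(idx(keep), time(keep), idx(dip), 'pchip');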
You have a lot of options here; it really depends on the nature of your data. But I would start off with a simple moving average (MA) filter (which replaces each data point with the average of the neighbouring data points) and see where that takes me. It's easy to implement, and fine-tuning the MA span a couple of times on some sample data is usually enough.
http://www.mathworks.se/help/curvefit/smoothing-data.html
I would not try to fit a polynomial to the entire data set unless I really needed to compress it (but to do so you can use the polyfit function).
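A minimal moving-average sketch, assuming the Time values for one combination are in a column vector time (movmean requires R2016a or later; the 5-sample window is an arbitrary starting point for the fine-tuning mentioned above):

timeSmooth = movmean(time, 5);                  % centred 5-sample moving average
% On older releases the same idea can be written with a convolution:
% timeSmooth = conv(time, ones(5,1)/5, 'same');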

Choosing MATLAB Vector element based on Simulink model argument

I'm trying to parameterize one of my Simulink models so that I have a gain in the model whose value is equal to an element of a MATLAB workspace vector indexed by a model argument. That is, I want to define a model argument WheelIndex and have a gain inside the model with the value AxelLoads(WheelIndex).
When I do it exactly as described above, I get a "vector indices must be real and positive integers" error. When I change the model argument to AxelLoad (to be used directly in the gain block) and assign its value to be AxelLoads(1) (for the first wheel), I get:
Error in 'Overview/Wheel1'. Parameter '18000.0, 15000.0, 17000.0,
21000.0' setting: "18000.0, 15000.0, 17000.0, 21000.0" cannot be evaluated.
I've also tried importing the vector as a Constant block into the model and using a Selector block, parameterized by the WheelIndex argument, to direct the right element to a multiplication block (thereby making an ugly gain block), but then Simulink complains that I'm trying to use a model argument to define an "un-tunable value".
I just want to somehow define the parameters in the MATLAB workspace to be used in each model instance, so that I can, say, calculate the total weight by adding the loads on each wheel. Simulink seems to block every workaround I've tried.
Thanks
Could you use a lookup table to obtain AxelLoads vs. WheelIndex?
The easiest way is if I just came over? :P
Perhaps this explanation of tunable parameters helps a little?