Design optimization of material cost of water tank - matlab

One way to improve engineering designs is by formulating the equations describing the design as a minimization or maximization problem. This approach is called design optimization. Examples of quantities to be minimized are energy consumption and construction materials. Quantities to be maximized include useful life and capacity, such as the vehicle weight that can be supported by a bridge. In this project, we consider the problem of minimizing the material cost of building a water tank. The water tank consists of a cylindrical part of radius r and height h, and a hemispherical top. The tank is to be constructed to hold 500 cubic meters when filled. The surface area of the cylindrical part is 2*pi*r*h, and its volume is pi*r^2*h. The surface area of the hemispherical top is 2*pi*r^2, and its volume is 2*pi*r^3/3. The cost to construct the cylindrical part of the tank is $300 per square meter of surface area; the hemispherical part costs $400 per square meter. Use the fminbnd function to compute the radius that results in the least cost. Compute the corresponding height h.
I got the right answer, but my solution is very chaotic. I created a bunch of functions. I wonder if I can combine them into one function?... let's name it ONEFUN
function R = findR(x)
h = (1500-2.*pi*x.^3)./(3.*pi.*x.^2);
R = 2.*pi.*x.*(h) + 2.*pi.*x.^2+pi.*x.^2;
function H = findH(x)
H = (1500-2.*pi*x.^3)./(3.*pi.*x.^2);
function [Cc, Chs, Tc] = Costs(r,h) % Cc - Cost of Cylinder, Chs - Cost of Hemisphere,
%Tc - Total Cost
Cc = ((2.*pi.*r.*h) + (pi.*r.^2)).*300;
Chs = (2.*pi.*r.^2).*400;
Tc = Cc + Chs;
I thought of using a switch on the response, but I have no idea how to do it.
function Answers
response = input('Type "find r", "find h", "costHS", "costC", "total": ','s');
response = lower(response);
switch response
case 'find r'
Radius = fminsearch(@ONEFUN, [1]);
case 'find h'
Height = findH(r)
case 'costHS'
case 'costC'
case 'total'
otherwise
disp('You have not entered a proper choice.')
end
I would appreciate any help.

Doing it in one function is a bad idea. Lots of simple functions that do one thing each are good.
Most of the chaos, from my point of view, comes from terse names, magic numbers, relying on operator precedence, and duplication.
h = (1500- (2.*pi*x.^3)./(3.*pi.*x.^2)); for instance, I think ...
Why aren't you reusing the function that already computes this? Same code twice.
Where in Cthulhu's name do the numbers 1500, 300 and 400 come from?
Never been keen on single-character names myself, but that might be my lack of familiarity with expressing a problem mathematically.
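For instance, a sketch of the kind of cleanup I mean (the constant names are my own; the numbers and formulas come from the problem statement and your findR/findH):
function h = findHeight(r)
% Height of the cylindrical part for a given radius, from the volume
% constraint pi*r^2*h + 2*pi*r^3/3 = 500.
requiredVolume = 500;                              % m^3
h = (3*requiredVolume - 2*pi*r.^3) ./ (3*pi*r.^2);
end
function cost = totalCost(r)
% Total material cost for a given radius, reusing findHeight instead of
% repeating its formula.
cylinderRate   = 300;                              % $/m^2
hemisphereRate = 400;                              % $/m^2
h = findHeight(r);
cost = cylinderRate*(2*pi*r.*h) + hemisphereRate*(2*pi*r.^2);
end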

This is a typical problem of minimizing a function with constraints. That is, you want to minimize the Cost(R,H), while keeping the Volume(R,H) fixed, and you have a simple (two-variable) equation for each of these.
For this you could use the matlab function fmincon.
The above is the most direct computational approach, but there are other ways to solve it, with various degrees of incorporating the constraint into the solution analytically. You could, for example, do a full analytic solution, or solve the Volume equation for H and substitute it into the Cost equation (i.e., Cost(R,H) -> Cost(R)) and then just minimize over R, etc. The approach you used is within this partially analytic middle ground, but it's a bit messier for it.
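For example, a sketch of both routes (the names are mine and untested):
% Direct: fmincon on x = [r; h], minimizing cost subject to the 500 m^3 volume.
cost   = @(x) 300*(2*pi*x(1)*x(2)) + 400*(2*pi*x(1)^2);
volcon = @(x) deal([], pi*x(1)^2*x(2) + 2*pi*x(1)^3/3 - 500);   % ceq = 0
xopt   = fmincon(cost, [5; 5], [], [], [], [], [], [], volcon);
% Partially analytic: solve the Volume equation for H, substitute, and
% minimize over R alone with fminbnd (which is what the assignment asks for).
hOfR    = @(r) (1500 - 2*pi*r.^3) ./ (3*pi*r.^2);
costOfR = @(r) 300*(2*pi*r.*hOfR(r)) + 400*(2*pi*r.^2);
rBest   = fminbnd(costOfR, 0.1, 20);
hBest   = hOfR(rBest);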

Related

Fitting data with Voigt profile in lmfit in python - huge errors

I am trying to fit some RIXS data with Voigt profiles (lmfit in Python), and I have defined the Voigt profile in the following way:
def gfunction_norm(x, pos, gwid):
    gauss = (1/(gwid*(np.sqrt(2*np.pi))))*(np.exp((-1.0/2)*((((x-pos)/gwid))**2)))
    return (gauss-gauss.min())/(gauss.max()-gauss.min())

def lfunction_norm(x, pos, lwid):
    lorentz = (0.15915*lwid)/((x-pos)**2+0.25*lwid**2)
    return (lorentz-lorentz.min())/(lorentz.max()-lorentz.min())
def voigt(x, pos, gwid, lwid, int):
    step = 0.005
    x2 = np.arange(pos-7, pos+7+step, step)
    voigt3 = np.convolve(gfunction_norm(x2, pos, gwid), lfunction_norm(x2, pos, lwid), mode='same')
    norm = (voigt3-voigt3.min())/(voigt3.max()-voigt3.min())
    y = np.interp(energy, x2, norm)  # note: interpolates onto the global 'energy' grid, not x
    return y * int
I have used this definition instead of the popular Voigt profile definition in Python:
def voigt(x, alpha, cen, gamma):
    sigma = alpha/np.sqrt(2*np.log(2))
    return np.real(wofz((x-cen+1j*gamma)/sigma/np.sqrt(2)))/(sigma*2.51)
because it gives me more clarity on the intensity of the peaks etc.
Now, I have a couple of spectra with 9-10 peaks and I am trying to fit all of them with Voigt profiles (precisely in the way I defined it).
Now, I have a couple of questions:
Do you think my Voigt definition is OK? What (dis)advantages do I have by using the convolution instead of the approximative second definition?
As a result of my fit, sometimes I get crazy large standard deviations. For example, these are best-fit parameters for one of the peaks:
int8: 0.00986265 +/- 0.00113104 (11.47%) (init = 0.05)
pos8: -2.57960013 +/- 0.00790640 (0.31%) (init = -2.6)
gwid8: 0.06613237 +/- 0.02558441 (38.69%) (init = 0.1)
lwid8: 1.0909e-04 +/- 1.48706395 (1363160.91%) (init = 0.001)
(intensity, position, gaussian and lorentzian width respectively).
Does this output mean that this peak should be purely gaussian?
I have noticed that large errors usually happen when the best-fit parameter is very small. Does this have something to do with the Levenberg-Marquardt algorithm that is used by default in lmfit? I should note that I sometimes have the same problem even when I use the other definition of the Voigt profile (and not just for Lorentzian widths).
Here is a part of the code (it is part of a bigger program and sits in a for loop, meaning I use the same initial values for multiple similar spectra):
model = Model(final)
result = model.fit(spectra[:,nb_spectra], params, x=energy)
print(result.fit_report())
"final" is the sum of many voigt profiles that I previously defined.
Thank you!
This seems to be a duplicate or follow-up of Lmfit fit produces huge uncertainties - please use one SO question per topic.
Do you think my Voigt definition is OK? What (dis)advantages do I have by using the convolution instead of the approximative second definition?
What makes you say that the second definition is approximate? In some sense, all floating-point calculations are approximate, but the Faddeeva function from scipy.special.wofz is the analytic solution for the Voigt profile. Doing the convolution yourself is likely to be a bit slower and is also an approximation (at the machine-precision level).
Since you are using Lmfit, I would recommend using its VoigtModel which will make your life easier: it uses scipy.special.wofz and parameter names that make it easy to switch to other profiles (say, GaussianModel).
You did not give a very complete example of code (for reference, a minimal, working version of actual code is more or less expected on SO and highly recommended), but that might look something like
from lmfit.models import VoigtModel
model = VoigtModel(prefix='p1_') + VoigtModel(prefix='p2_') + ...
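with starting values supplied through the prefixed parameter names (amplitude, center, sigma and gamma are VoigtModel's own parameters; the numbers below are just placeholders):
params = model.make_params(p1_amplitude=0.05, p1_center=-2.6, p1_sigma=0.1,
                           p2_amplitude=0.05, p2_center=-1.8, p2_sigma=0.1)
result = model.fit(spectra[:, nb_spectra], params, x=energy)
print(result.fit_report())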
As a result of my fit, sometimes I get crazy large standard
deviations. For example, these are best-fit parameters for one of the
peaks:
int8: 0.00986265 +/- 0.00113104 (11.47%) (init = 0.05)
pos8: -2.57960013 +/- 0.00790640 (0.31%) (init = -2.6)
gwid8: 0.06613237 +/- 0.02558441 (38.69%) (init = 0.1)
lwid8: 1.0909e-04 +/- 1.48706395 (1363160.91%) (init = 0.001)
(intensity, position, gaussian and lorentzian width respectively).
Does this output mean that this peak should be purely gaussian?
First, that may not be a "crazy large" standard deviation - it sort of depends on the data and the rest of the fit. Perhaps the value for int8 is really, really small and heavily overlapped with other peaks -- it might be highly correlated with other variables. But it may very well mean that the peak is more Gaussian-like.
Since you are analyzing X-ray scattering data, the use of a Voigt function is probably partially justified with the idea (assertion, assumption, expectation?) that the material response would give a Gaussian profile, while the instrumentation (including X-ray source) would give a Lorentzian broadening. That suggests that the Lorentzian width might be the same for the various peaks, or perhaps parameterized as a simple function of the incident and scattering wavelengths or q values. That is, you might be able to (and may be better off) constrain the values of the Lorentzian width (your lwid, I think, or gamma in lmfit.models.VoigtModel) to all be the same.
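In lmfit that kind of constraint is one line per peak. A sketch (note that VoigtModel ties gamma to sigma by default, so the first gamma has to be freed explicitly before the others can follow it):
params['p1_gamma'].set(value=0.01, vary=True, expr='')   # free the first Lorentzian width
params['p2_gamma'].set(expr='p1_gamma')                  # every other peak shares it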

Simulating spatial PDEs in Modelica - Accessing variable values at specific times

This question is somewhat related to a previous question of mine, where I didn't quite get the right solution. Link: Earlier SO-thread
I am solving PDEs which are time variant with one spatial dimension (e.g. the heat equation - see link below). I'm using the numerical method of lines, i.e. discretizing the spatial derivatives yielding a system of ODEs which are readily solved in Modelica (using the Dymola tool). My problems arise when I simulate the system, or when I plot the results, to be precise. The equations themselves appear to be solved correctly, but I want to express the spatial changes in all the discretized state variables at specific points in time rather than the individual time-varying behavior of each discrete state.
The strategy leading up to my problems is illustrated in this Youtube tutorial, which by the way is not made by me. As you can see at the very end of the tutorial, the time-varying behavior of the temperature is plotted for all the discrete points in the rod, individually. What I would like is a plot showing the temperature through the rod at a specific time, that is the temperature as a function of the spatial coordinate. My strategy to achieve this, which I'm struggling with, is: Given a state vector of N entries:
Real[N] T "Temperature";
..I would use the plotArray Dymola function as shown below.
plotArray( {i for i in 1:N}, {T[i] for i in 1:N} )
Intuitively, this would yield a plot showing the temperature as a function of the spatial coordinate, or the index of the discrete unit along the line, to be precise. Although this command yields a result, all T-values appear to be 0 in the plot, which is definitely not the case. My question is: How can I successfully obtain and plot the temperatures at all the discrete points at a given time? Thanks in advance for your help.
The code for the problem is as indicated below.
model conduction
  parameter Real rho = 1;
  parameter Real Cp = 1;
  parameter Real L = 1;
  parameter Real k = 1;
  parameter Real Tlo = 0;
  parameter Real Thi = 100;
  parameter Real Tinit = 30;
  parameter Integer N = 10 "Number of discrete segments";
  Real T[N-1] "Temperatures";
  Real deltaX = L/N;
initial equation
  for i in 1:N-1 loop
    T[i] = Tinit;
  end for;
equation
  rho*Cp*der(T[1]) = k*(T[2] - 2*T[1] + Thi)/deltaX^2;
  rho*Cp*der(T[N-1]) = k*(Tlo - 2*T[N-1] + T[N-2])/deltaX^2;
  for i in 2:N-2 loop
    rho*Cp*der(T[i]) = k*(T[i+1] - 2*T[i] + T[i-1])/deltaX^2;
  end for;
  annotation (uses(Modelica(version="3.2")));
end conduction;
Additional edit: The simulations show clearly that, for example, T[3], the temperature of discrete segment no. 3, starts out at 30 and ends up at 70 degrees. When I type T[3] in my command window, however, I get T[3] = 0.0 in return. Why is that? This is at the heart of the problem, because the plotArray function would work if I managed to extract the actual variable values at specific times and not just 0.0.
Suggested solution: This is a rather tedious solution to achieve what I want, and I hope someone knows a better solution. When I run the simulation in Dymola, the software generates a .mat-file containing the values of the variables throughout the time of the simulation. I am able to load this file into MATLAB and manually extract the variables of my choice for plotting. For the problem above, I wrote the following command:
plot( [1:9]' , data_2(2:2:18 , 10)' )
This command plots the temperatures (which are stored together with their derivatives in the data_2 array in the .mat-file) against the respective number of the discrete segment/element. I was really hoping to do this inside Dymola, that is, to avoid using MATLAB for this. For this specific problem the number of variables was low on account of the simplicity of the problem, but I can easily imagine a .mat-file which is significantly harder to navigate through manually like I just did.
Although you do not mention it explicitly I assume that you enter your plotArray command in Dymola's command window. That won't work directly, since the variables you see there do not include your simulation results: If I simulate your model, and then enter T[:] in Dymola's command window, then the printed result is
T[:]
= {0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0}
I'm not a Dymola expert, and the only solution I've found (to actively store and load the desired simulation results) is quite cumbersome:
simulateModel("conduction", resultFile="conduction.mat")
n = readTrajectorySize("conduction.mat")
X = readTrajectory("conduction.mat", {"Time"}, n)
Y = readTrajectory("conduction.mat", {"T[1]", "T[2]", "T[3]"}, n)
plotArrays(X[1, :], transpose(Y))
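To get the spatial profile the question actually asks for (temperature versus segment index at one point in time) rather than time traces, one could then slice a single time index out of Y; a sketch, again from the command window, using the last stored point n:
Yend = Y[:, n]
plotArray({i for i in 1:size(Yend, 1)}, Yend)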

fit function of Matlab is really slow

Why is the fit function from Matlab so slow? I'm trying to fit a gauss4 so I can get the means of the gaussians.
here's my plot,
I want to get the means from the blue data and red data.
I'm fitting a gaussian there but this function is really slow.
Is there an alternative?
fa = fit(fn', facm', 'gauss4');
acm = [fa.b1 fa.b2 fa.b3 fa.b4];
a_cm = sort(acm, 'ascend');
I would apply some of the options available with fit. These include smoothing by setting SmoothingParam (your data is quite noisy; applying a time-domain filter beforehand may also help) and setting initial parameter estimates with StartPoint. Your fits may also not be converging because you set your tolerances (TolFun, TolX) too low, although from inspection of your fits that does not appear to be the case; in fact the opposite is likely, and you probably want to increase MaxIter and/or MaxFunEvals.
To figure out how to get going you can also try the Spectr-O-Matic toolbox. It requires Matlab 7.12. It includes a script called GaussFit.m to fit gauss4 to data, but it also uses the fit routine and provides examples on how to set and get parameters.
Note that smoothing will of course broaden your peaks, but you can subtract the contribution after the fact. The effect on the mean should not be deleterious, on the contrary, since you are presumably removing noise this should be more accurate.
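Putting those options together might look something like this (a sketch: fn and facm are the vectors from the question, and startGuess stands for a hypothetical 12-element vector of rough [a b c] guesses for the four peaks):
opts = fitoptions('gauss4');    % default NonlinearLeastSquares options for the gauss4 library model
opts.StartPoint  = startGuess;  % good starting points help convergence more than anything else
opts.MaxIter     = 2000;
opts.MaxFunEvals = 4000;
fa  = fit(fn', facm', 'gauss4', opts);
acm = sort([fa.b1 fa.b2 fa.b3 fa.b4], 'ascend');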
In general, functions will be faster if you apply them to a shorter series. Hence, if speedup is really important, you could downsample.
For example, if you have a vector that you want to downsample by a factor of 2 (you may need to make sure its length is divisible by 2 first):
n = 2;
x = sin(0.01:0.01:pi);
x_downsampled = (x(1:n:end) + x(2:n:end))/2;  % average each pair of adjacent samples
You will now see that x_downsampled is much smaller (and should thus be easier to process), but will still have the same shape. In your case I think this is sufficient.
To see what you got try:
plot(x)
Now you can simply process x_downsampled and map your solution, for example
f = find(x_downsampled == max(x_downsampled));
location_of_maximum = f * n;
Needless to say this should be done in combination with the most efficient options that the fit function has to offer.

function parameters in matlab wander off after curve fitting

first a little background. I'm a psychology student so my background in coding isn't on par with you guys :-)
My problem is as follows, and the most important observation is that curve fitting with two different programs gives completely different results for my parameters, although my graphs stay the same. The main program we have used to fit my longitudinal data is KaleidaGraph, and this should be seen as kind of the 'gold standard'; the program I'm trying to reproduce this in is Matlab.
I was trying to be smart and wrote some code (a lot at least for me) and the goal of that code was the following:
1. Taking an individual longitudinal datafile
2. curve fitting this data on a non-parametric model using lsqcurvefit
3. obtaining figures and the points where f' and f'' are zero
This all worked well (woohoo :-)), but when I started comparing the function parameters the two programs generate, there is a huge difference. KaleidaGraph stays close to its original starting values. Matlab wanders off, and parameters sometimes get larger by a factor of 1000. The graphs stay more or less the same in both situations, however, and both fit the data well. Still, it would be lovely if I knew how to make the Matlab curve fitting more 'conservative' and keep the parameters located nearer their original starting values.
validFitPersons = true(nbValidPersons,1);
for i = 1:nbValidPersons
    personalData = data{validPersons(i),3};
    personalData = personalData(personalData(:,1) >= minAge,:);
    % Fit a specific model for all valid persons
    try
        opts = optimoptions(@lsqcurvefit, 'Algorithm', 'levenberg-marquardt');
        [personalParams,personalRes,personalResidual] = lsqcurvefit(heightModel,initialValues,personalData(:,1),personalData(:,2),[],[],opts);
    catch
        x = 1;
    end
Above is the part of the code I've written to fit the data files to a specific model.
Below is an example of a non-parametric model I use, with its function parameters.
elseif strcmpi(model,'jpa2')
    % y = a.*(1-1/(1+(b_1(t+e))^c_1+(b_2(t+e))^c_2+(b_3(t+e))^c_3))
    heightModel = @(params,ages) abs(params(1).*(1-1./(1+(params(2).*(ages+params(8))).^params(5) + (params(3).*(ages+params(8))).^params(6) + (params(4).*(ages+params(8))).^params(7))));
    modelStrings = {'a','b1','b2','b3','c1','c2','c3','e'};
    % Define initial values
    if strcmpi('male',gender)
        initialValues = [176.76 0.339 0.1199 0.0764 0.42287 2.818 18.52 0.4363];
    else
        initialValues = [161.92 0.4173 0.1354 0.090 0.540 2.87 14.281 0.3701];
    end
I've tried to mimic the curve-fitting process in KaleidaGraph as closely as possible. There I found they use the Levenberg-Marquardt algorithm, which I've selected here as well. However, results still vary and I don't have any more clues about how I can change this.
Some extra adjustments:
The idea for this code was the following:
I'm trying to compare different fitting models (they are designed for this purpose). So what I do is: I have 5 models with different parameters and different starting values (the second part of my code), and then I have the general curve-fitting file. Since there are different models, it would be interesting if I could put restrictions on how far the fitted parameters can wander off from their starting values.
Anyone any idea how this could be done?
Anybody willing to help a psychology student?
Cheers
This is a common issue when dealing with non-linear models.
If I were you, I would try to check whether you can remove some parameters from the model in order to simplify it.
If you really want to keep your solution not too far from the initial point, you can use upper bounds and lower bounds for each variable:
x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub)
defines a set of lower and upper bounds on the design variables in x so that the solution is always in the range lb ≤ x ≤ ub.
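For example (a sketch reusing the names from your code; the factor of 2 is arbitrary and simply keeps each parameter near its start value):
lb = initialValues / 2;
ub = initialValues * 2;
% Note: the 'levenberg-marquardt' algorithm does not accept bounds (at least in
% older releases), so let lsqcurvefit fall back to its default algorithm here.
personalParams = lsqcurvefit(heightModel, initialValues, ...
    personalData(:,1), personalData(:,2), lb, ub);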
Cheers
You state:
I'm trying to compare different fitting models (they are designed for
this purpose). So what I do is I have 5 models with different
parameters and different starting values ( the second part of my code)
and next I have the general curve fitting file.
You will presumably compare the statistics from fits with different models, to see whether reductions in the fitting error are unlikely to be due to chance. You may want to rely on that comparison to pick the model that not only fits your data suitably but is also simplest (which is often referred to as the principle of parsimony).
The problem really lies with the model you have shown, which results in correlated parameters and therefore overfitting, as mentioned by @David. Again, this should be resolved when you compare different models and find that some do just as well (statistically speaking) even though they involve fewer parameters.
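If it helps, here is a sketch of one such comparison for two nested models, using an F-test on their residual sums of squares (rss1/rss2 are the resnorm outputs of lsqcurvefit for the simpler and the richer model, p1/p2 their parameter counts, n the number of data points; fcdf is in the Statistics Toolbox):
F = ((rss1 - rss2)/(p2 - p1)) / (rss2/(n - p2));
pValue = 1 - fcdf(F, p2 - p1, n - p2);   % small pValue: the extra parameters are doing real work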
edit
To drive the point home regarding the problem with the choice of model, here are (1) results of a trial fit using simulated data (2) the correlation matrix of the parameters in graphical form:
Note that absolute values of the correlation close to 1 indicate strongly correlated parameters, which is highly undesirable. Note also that the trend in the data is practically linear over a long portion of the dataset, which implies that 2 parameters might suffice over that stretch, so using 8 parameters to describe it seems like overkill.

Modelica - Modeling a slider element in OpenModelica

Rheological models are usually built from three (or four) basic elements, which are:
The spring (existing in Modelica.Mechanics.Translational.Components, for example). Its equation is f = c * (s_rel - s_rel0);
The damper (dashpot) (also existing in Modelica.Mechanics.Translational.Components). Its equation is f = d * v_rel; for a linear damper, and could easily be modified to model a non-linear damper: f = d * v_rel^(1/n);
The slider, not existing (as far as I know) in this library... Its equation is abs(f) <= flim. Unfortunately, I don't really understand how I could write the corresponding Modelica model...
I think this model should extend Modelica.Mechanics.Translational.Interfaces.PartialCompliant, but the problem is that f (the force measured between flange_b and flange_a) should be modified only when it's greater than flim...
If the slider extends PartialCompliant, it means that it already follows the equations flange_b.f = f; and flange_a.f = -f;
Adding the equation f = if abs(f)>flim then sign(f)*flim else f; gives me the error "An independent subset of the model has imbalanced number of equations and variables", which I couldn't really explain, even though I understand that if abs(f)<=flim, the equation f = f is useless...
Actually, the slider element doesn't generate a new force (the way the spring does, depending on its strain, or the damper does, depending on its strain rate). The force is an input for the slider element, which is sometimes modified (when this force becomes greater than the limit allowed by the element). That's why I don't really understand whether I should define this force as an input or an output...
If you have any suggestion, I would greatly appreciate it ! Thanks
After the first two comments, I decided to add a picture that, I hope, will help you to understand the behaviour I'm trying to model.
On the left, you can see the four elements used to develop rheological models :
a : the spring
b : the linear damper (dashpot)
c : the non-linear damper
d : the slider
On the right, you can see the behaviour I'm trying to reproduce: a and b are two associations with springs, and c and d are respectively their expected stress/strain curves. I'm trying to model the same behaviour, except that I'm thinking in terms of force and not stress. As I said in the comment to Marco's answer, curve a reminds me of the behaviour of a diode:
if the force applied to the component is less than the sliding limit, there is no relative displacement between the two flanges
if the force becomes greater than the sliding limit, the force transmitted by the system equals the limit and there is relative displacement between flanges
I can't be sure, but I suspect what you are really trying to model here is Coulomb friction (i.e. a constant force that always opposes the direction of motion). If so, there is already a component in the Modelica Standard Library, called MassWithStopAndFriction, that models that (and several other flavors of friction). The wrinkle is that it is bundled with inertia.
If you don't want the inertia effect, it might be possible to set the inertia to zero. I suspect that could cause a singularity. One way you might be able to avoid the singularity is to "evaluate" the parameter (at least that is what it is called in Dymola when you set the Evaluate flag to true on the command line). No promises whether that will work, since it is model- and tool-dependent whether such a simplification can be properly handled.
If Coulomb friction is what you want and you really don't want inertia and the approach above doesn't work, let me know and I think I can create a simple model that will work (so long as you don't have inertia).
A few considerations:
- The force is neither an input nor an output; it is just a relation that you add to the component in order to define how the force will be propagated between the two translational flanges of the component. When you deal with acausal connectors, I think it is better to think about the degrees of freedom of your component instead of inputs and outputs. In this case you have two connectors, and independently of which of the two flanges the information about the force arrives at, the equation you implement will define how that information is propagated to the other flange.
- I tested this:
model slider
  extends Modelica.Mechanics.Translational.Interfaces.PartialCompliantWithRelativeStates;
  parameter Real flim = 1;
equation
  f = if abs(f) > flim then sign(f)*flim else f;
end slider;
on Dymola and it works. It is correct Modelica code, so it should also work in OpenModelica; I can't think of a reason why it would be seen as an unbalanced mathematical model.
I hope this helps,
Marco