For a numerical simulation in MATLAB I have parameters defined in an .m file.
% Parameters as simple definitions
amb.T = 273.15 + 25;         % ambient temperature [K]
amb.P = 101325;              % ambient pressure [Pa]
combustor.T = 273.15 + 800;  % [K]
combustor.P = 100000;        % [Pa]
combustor.lambda = 1.1;
fuel.x.CH4 = 0.5;            % [0..1]
fuel.n = 1;
air.x.O2 = 0.21;
% more complex definitions consisting of other parameters
air.P = combustor.P;
air.T = amb.T;
air.n = fuel.x.CH4 * 2 * fuel.n * combustor.lambda / air.x.O2;
Consider this set the 'default' definitions. For running a single simulation, these definitions work fine.
It gets more complicated when I want to change one of these parameters programmatically for a parameter study (examining the effect of changing parameters on the results), that is, performing multiple simulations in a for loop. In the script performing the study I want to redefine several parameters beforehand, i.e. overwrite their default definitions. Is there a way to do this without touching the default definitions in the code (commenting them out or literally overwriting them)? It should be possible to change any parameter in the study script and pick up the default definitions from the listing above for all the others (or the other way round).
Let me illustrate the problem with an example: if I want to vary combustor.lambda (say, running from 0.9 to 1.3), the field air.n has to be evaluated again for the change to take effect in the actual simulation. So I could evaluate the listing again, but that way I would lose the study-defined combustor.lambda to the default one.
I am thinking about these solutions, but I cannot work out how to implement them:
Use references/handles so that the struct fields hold only the definitions, not the actual values. This allows changing default definitions before 'parsing' the whole struct to get the actual values.
Evaluate the default definition set by a function that takes (non-default) definitions defined beforehand into account, i.e. skips those lines of the default definition set during evaluation (see the sketch below).
Any OOP approach. Of course, it is not limited to struct data types, but on the other hand, maybe there are useful functions for structs?
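To make the second idea concrete, something along these lines (a rough sketch; the helper names get_params and setdef are made up):

function p = get_params(p)
% Evaluate the default definitions, but keep any value the caller has
% already set. Dependent definitions are evaluated last, so they pick up
% overridden values automatically.
if nargin < 1, p = struct(); end
% independent defaults
p = setdef(p, 'amb.T',            273.15 + 25);   % ambient temperature [K]
p = setdef(p, 'amb.P',            101325);        % ambient pressure [Pa]
p = setdef(p, 'combustor.T',      273.15 + 800);  % [K]
p = setdef(p, 'combustor.P',      100000);        % [Pa]
p = setdef(p, 'combustor.lambda', 1.1);
p = setdef(p, 'fuel.x.CH4',       0.5);           % [0..1]
p = setdef(p, 'fuel.n',           1);
p = setdef(p, 'air.x.O2',         0.21);
% dependent defaults (evaluated after all overrides are in place)
p = setdef(p, 'air.P', p.combustor.P);
p = setdef(p, 'air.T', p.amb.T);
p = setdef(p, 'air.n', p.fuel.x.CH4 * 2 * p.fuel.n * p.combustor.lambda / p.air.x.O2);
end

function s = setdef(s, path, value)
% Set a (possibly nested) field only if it does not exist yet.
parts = strsplit(path, '.');
try
    getfield(s, parts{:});              % already defined: keep the override
catch
    s = setfield(s, parts{:}, value);
end
end

The parameter study would then only touch the varying parameter:

for lambda = 0.9:0.1:1.3
    ov = struct();
    ov.combustor.lambda = lambda;
    par = get_params(ov);   % air.n is re-evaluated with the study's lambda
    % run_simulation(par);  % hypothetical simulation call
end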
Edit:
The purpose of the default set is to leave the programmer as free as possible in choosing the varying parameters, with any of the other parameters keeping their default definition, which can be independent (= values) as well as dependent (= equations like air.n).
% one default parameter set
S = struct('T', 25, 'P', 101000, 'lambda', .5, 'fuel', .5);
GetNByLambda = @(fuel, lambda) fuel * 2 * lambda;
T = struct('P', S.P, 'n', GetNByLambda(S.fuel, S.lambda));
% add more sets
S(end+1) = struct('T', 200, 'P', 10000, 'lambda', .8, 'fuel', .7);
T(end+1) = struct('P', S(end).P, 'n', GetNByLambda(S(end).fuel, S(end).lambda));
% iterate over parameter sets
for ii = 1:length(S)
    disp(S(ii))
    disp(T(ii))
end
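To keep the dependent set T in sync as S grows, it could also be rebuilt in one loop over S (same names as above):

% rebuild T so the dependent fields always match their parameter set
T = struct('P', {}, 'n', {});   % empty struct array with the right fields
for k = 1:numel(S)
    T(k) = struct('P', S(k).P, 'n', GetNByLambda(S(k).fuel, S(k).lambda));
end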
Related
I am working with Dymola and try to use the functions provided by the Modelica Standard Library in the command window, but it seems that I can't use them, and I couldn't declare a variable of a specific type either. I am wondering if there is some kind of limit to the commands I can use in the command window of Dymola. Where can I find all the allowable commands?
I tried to use some functions from Modelica.Media; it seems the input variables are out of range, but I tried many times and with different unit systems. I find that I can't declare a variable of pressure type in the command window, but Modelica.Media.Water.IF97_Utilities.h_pT() requires pressure and temperature inputs; is this the reason I can't use this function in the command window?
Modelica.Media.Water.IF97_Utilities.h_pT(1e6,800,1)
Failed to expand Modelica.Media.Water.IF97_Utilities.h_props_pT(
1000000.0,
800,
Modelica.Media.Common.IF97BaseTwoPhase(
phase = 1,
region = 1,
p = 1000000.0,
T = 800.0,
h = 9.577648835649013E+20,
R = 461.526,
cp = 1.8074392528071426E+20,
cv = -3.7247229288028774E+18,
rho = 5.195917767496603E-13,
s = 1.2052984524009106E+18,
pt = 645518.9415389205,
pd = 6.693617079374418E+18,
vt = 357209983199.2206,
vp = -553368.7088215105,
x = 0.0,
dpT = 645518.9415389205
)).
Failed to expand Modelica.Media.Water.IF97_Utilities.h_pT(1000000.0, 800, 1).
Assuming the inputs are valid, there seems to be an issue specifically related to evaluating some media functions interactively in Dymola (since they shouldn't be evaluated in models). It will be corrected in Dymola 2022x.
A temporary work-around is to first set the flag Advanced.SemiLinear = false; and then:
Modelica.Media.Water.IF97_Utilities.h_pT(1e6,800,1)
= 9.577648835649013E+20
(I'm not sure how valid the formulation is in that region.)
But please remember to set Advanced.SemiLinear = true; before translating and simulating any models - in particular models using media-functions.
The problem is that you are giving the function an invalid input. It seems Dymola does not show you the error message for this, based on the screenshot and logs you provided. I tried it in OpenModelica and got:
Modelica.Media.Water.IF97_Utilities.h_pT(100e5, 500e3)
[Modelica 4.0.0/Media/Water/IF97_Utilities.mo:2245:9-2246:77] Error: assert triggered: IF97 medium function g5: input temperature (= 500000 K) is higher than limit of 2273.15K in region 5
By using a value within the limits, it returns a value:
Modelica.Media.Water.IF97_Utilities.h_pT(100e5, 1e3)
I am trying to train a GAN on a machine with 3 GPUs using distributed data parallel.
Before wrapping my model in DDP everything works fine, but when I wrap it, it gives me the following RuntimeError:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [128]] is at version 5; expected version 4 instead.
I cloned every tensor involved in the gradient computation to get rid of the in-place operation (if there is one), but I could not find it.
The part of the code with the problem is as follows:
Tensor = torch.cuda.FloatTensor

# ----------
#  Training
# ----------
def train_gan(rank, world_size, opt):
    print(f"Running basic DDP example on rank {rank}.")
    setup(rank, world_size)
    if rank == 0:
        get_dataloader(rank, opt)
    dist.barrier()
    print(f"Rank {rank}/{world_size} training process passed data download barrier.\n")
    dataloader = get_dataloader(rank, opt)

    # Loss function
    adversarial_loss = torch.nn.BCELoss()

    # Initialize generator and discriminator
    generator = Generator()
    discriminator = Discriminator()

    # Initialize weights
    generator.apply(weights_init_normal)
    discriminator.apply(weights_init_normal)

    generator.to(rank)
    discriminator.to(rank)

    generator_d = DDP(generator, device_ids=[rank])
    discriminator_d = DDP(discriminator, device_ids=[rank])

    # Optimizers
    # Since we are computing the average of several batches at once (an effective batch size of
    # world_size * batch_size) we scale the learning rate to match.
    optimizer_G = torch.optim.Adam(generator_d.parameters(), lr=opt.lr * opt.world_size, betas=(opt.b1, opt.b2))
    optimizer_D = torch.optim.Adam(discriminator_d.parameters(), lr=opt.lr * opt.world_size, betas=(opt.b1, opt.b2))

    losses = []
    for epoch in range(opt.n_epochs):
        for i, (imgs, _) in enumerate(dataloader):

            # Adversarial ground truths
            valid = Variable(Tensor(imgs.shape[0], 1).fill_(1.0), requires_grad=False).to(rank)
            fake = Variable(Tensor(imgs.shape[0], 1).fill_(0.0), requires_grad=False).to(rank)

            # Configure input
            real_imgs = Variable(imgs.type(Tensor)).to(rank)

            # -----------------
            #  Train Generator
            # -----------------
            optimizer_G.zero_grad()

            # Sample noise as generator input
            z = Variable(Tensor(np.random.normal(0, 1, (imgs.shape[0], opt.latent_dim)))).to(rank)

            # Generate a batch of images
            gen_imgs = generator_d(z)

            # Loss measures generator's ability to fool the discriminator
            g_loss = adversarial_loss(discriminator_d(gen_imgs), valid)

            g_loss.backward()
            optimizer_G.step()

            # ---------------------
            #  Train Discriminator
            # ---------------------
            optimizer_D.zero_grad()

            # Measure discriminator's ability to classify real from generated samples
            real_loss = adversarial_loss(discriminator_d(real_imgs), valid)
            fake_loss = adversarial_loss(discriminator_d(gen_imgs.detach()), fake)
            d_loss = ((real_loss + fake_loss) / 2).to(rank)

            d_loss.backward()
            optimizer_D.step()
I encountered a similar error when trying to train a GAN with DistributedDataParallel.
I noticed the problem was coming from BatchNorm layers in my discriminator.
Indeed, DistributedDataParallel synchronizes the batch-norm buffers (the running statistics) at each forward pass (see the doc), thereby modifying them in place, which causes problems if you have multiple forward passes in a row.
Converting my BatchNorm layers to SyncBatchNorm did the trick for me:
discriminator = torch.nn.SyncBatchNorm.convert_sync_batchnorm(discriminator)
discriminator = DDP(discriminator, device_ids=[rank])
You probably want to do it anyway when using DistributedDataParallel.
Alternatively, if you don't want to use SyncBatchNorm, you can set the broadcast_buffers parameter to False, but I don't think you really want to do that, as it means your batch norm stats will not be synchronized among processes.
discriminator = DDP(discriminator, device_ids=[rank], broadcast_buffers=False)
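Putting it together with the code from the question, the conversion goes right before the DDP wrapping (a sketch; the names follow the question's code):

discriminator = Discriminator()
discriminator.apply(weights_init_normal)
discriminator.to(rank)
# convert all BatchNorm layers to SyncBatchNorm before wrapping in DDP
discriminator = torch.nn.SyncBatchNorm.convert_sync_batchnorm(discriminator)
discriminator_d = DDP(discriminator, device_ids=[rank])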
I am new to Julia and trying to use the Julia package DifferentialEquations to simultaneously solve several conditions of the same set of coupled ODEs. My system is a model of an experiment, and in one of the conditions I increase the amount of one of the dependent variables midway through the process.
I would like to be able to adjust the condition of this single trajectory; however, so far I am only able to adjust all the trajectories at once. Is it possible to access a single one using callbacks? If not, is there a better way to do this?
Here is a simplified example using the Lorenz equations for what I want to be doing:
# differential equations setup
function lorentz!(du, u, p, t)
    a, r, b = p
    du[1] = a * (u[2] - u[1])
    du[2] = u[1] * (r - u[3]) - u[2]
    du[3] = u[1] * u[2] - b * u[3]
end

# function to cycle through initial conditions
function prob_func(prob, i, repeat)
    remake(prob; u0 = u0_arr[i])
end

# inputs
t_span = [(0.0, 100.0), (0.0, 100.0)]
u01 = [0.0; 1.0; 0.0]
u02 = [0.0; 1.0; 0.0]
u0_arr = [u01, u02]
p = [10.0, 28.0, 8/3]

# initialising the EnsembleProblem
prob = ODEProblem(lorentz!, u0_arr[1], t_span[1], p)
CombinedProblem = EnsembleProblem(prob,
    prob_func = prob_func,
    safetycopy = true  # whether a safety deepcopy is called on prob before prob_func
)

# introducing the callback
function condition(u, t, integrator)
    return 50 - t
end
function affect!(integrator)
    integrator.u[1] = integrator.u[1] + 50
end
callback = DifferentialEquations.ContinuousCallback(condition, affect!)

# solving
sim = solve(CombinedProblem, Rosenbrock23(), EnsembleSerial(), trajectories = 2, callback = callback)

# plotting for ease of understanding the example
plot(sim[1].t, sim[1][1, :])
plot!(sim[2].t, sim[2][1, :])
I want to produce something like this:
[image: Example_desired_outcome]
But this code produces:
[image: Example_current_outcome]
Thank you for your help!
You can make that callback dependent on a parameter and make the parameter different between problems. For example:
function f(du, u, p, t)
    if p == 0
        du[1] = 2u[1]
    else
        du[1] = -2u[1]
    end
    du[2] = -u[2]
end

condition(u, t, integrator) = u[2] - 0.5
affect!(integrator) = (integrator.p = 1)
For more information, check out the FAQ on this topic: https://diffeq.sciml.ai/stable/basics/faq/#Switching-ODE-functions-in-the-middle-of-integration
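Applied to the ensemble example from the question, this could look roughly as follows (an untested sketch: a fourth parameter entry acts as a per-trajectory flag that affect! checks, so only the flagged trajectory gets the bump at t = 50):

using DifferentialEquations

function lorenz!(du, u, p, t)
    a, r, b = p
    du[1] = a * (u[2] - u[1])
    du[2] = u[1] * (r - u[3]) - u[2]
    du[3] = u[1] * u[2] - b * u[3]
end

u0 = [0.0, 1.0, 0.0]
ps = [[10.0, 28.0, 8/3, 0.0],   # flag 0: callback does nothing
      [10.0, 28.0, 8/3, 1.0]]   # flag 1: callback bumps u[1]

# give each trajectory its own parameter vector
prob_func(prob, i, repeat) = remake(prob; p = ps[i])

condition(u, t, integrator) = t - 50   # root at t = 50
affect!(integrator) = integrator.p[4] == 1.0 && (integrator.u[1] += 50)
cb = ContinuousCallback(condition, affect!)

prob = ODEProblem(lorenz!, u0, (0.0, 100.0), ps[1])
eprob = EnsembleProblem(prob; prob_func = prob_func)
sim = solve(eprob, Rosenbrock23(), EnsembleSerial(); trajectories = 2, callback = cb)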
When I simulate the model below, I get additional variables labelled $STATESET1, which are obviously auto-generated.
What is the purpose of these variables from the perspective of the user? Generally I am only interested in the solution, not in the specific strategies a specific solver used to achieve it, right? So isn't this more like something that should only be output if one turns on some kind of model debugging, rather than something the average OpenModelica user can take advantage of? What if there is more than one "state set" (say $STATESET1 and $STATESET2): how am I supposed to know how these variables relate to my model, given their generic names? More specifically, what is $STATESET1.x[:]? Nothing in the original or flattened model gives a hint about this...
model StateSetTest
  import SI = Modelica.SIunits;
  Real[3] q(start = zeros(3), each fixed = true);
  Real q4(start = 1);
  Real[3] w(start = zeros(3), each fixed = true);
  SI.Torque[3] TResult;
equation
  q * q + q4 * q4 = 1;
  w = 2.0 * (q4 * der(q) - der(q4) * q - cross(der(q), q));
  der(w) = TResult;
  TResult = zeros(3);
end StateSetTest;
They are used for dynamic state selection, i.e. changing the selected states during the simulation. And yes, they are not really needed by the user. I guess we could filter them out from OMEdit. I'll open a ticket about this.
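For context: the constraint q * q + q4 * q4 = 1 means only three of the four quaternion components can be states at any one time, so the tool picks (and may switch) them at run time; $STATESET1.x presumably holds the candidate states of that set. If a fixed selection is known to be valid, the stateSelect attribute can pin it, which should avoid the dynamic state set (an untested sketch; this particular choice only works while q4 stays away from zero):

  Real[3] q(start = zeros(3), each fixed = true,
            each stateSelect = StateSelect.always);
  Real q4(start = 1, stateSelect = StateSelect.never);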
In the following code I am trying to generate a NumericVector of values drawn from a normal distribution, where rnorm() is called each time with a different mean and variance.
Here is the code:
// [[Rcpp::export]]
NumericVector generate_ai(NumericVector log_var) {
  int log_var_length = log_var.size();
  NumericVector temp(log_var_length);

  for(int i = 0; i < log_var_length; i++) {
    temp[i] = rnorm(1, -0.5 * log_var[i], sqrt(log_var[i]));
  }

  return(temp);
}
The line that is giving me trouble is this one:
temp[i] = rnorm(1, -0.5 * log_var[i], sqrt(log_var[i]));
It is causing the error:
assigning to 'typename storage_type<14>::type' (aka 'double') from
incompatible type 'NumericVector' (aka 'Vector<14>')
Since I'm returning one number from rnorm, is there a way to convert this NumericVector return type to a double?
Rcpp provides two ways to access RNG sampling schemes. The first option is a single scalar draw and the second enables n draws using some sweet, sweet Rcpp sugar. Under your current setup, you are opting for the latter.
Option 1. Use just the scalar sampling scheme instead of sugar by accessing the RNG function through R::, e.g.
temp[i] = R::rnorm(-0.5 * log_var[i], sqrt(log_var[i]));
Option 2. Use the subset operator on the NumericVector to obtain the only element.
// C++ indices start at 0 instead of 1
temp[i] = Rcpp::rnorm(1, -0.5 * log_var[i], sqrt(log_var[i]))[0];
The former option will be faster and better. Why, you might ask?
Well, Option 2 creates a new NumericVector, fills it with a call to Option 1, and then requires a subset operation to retrieve the value before assigning it to the desired scalar.
In any case, RNG code can be a bit confusing. Just make sure to always prefix the function call with the correct namespace (e.g. R:: or Rcpp::) so that you, and perhaps future programmers, avoid any ambiguity as to which kind of sampling scheme you've opted for.
(This is one of the downsides of using namespace Rcpp;.)
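For completeness, here is the question's function with Option 1 applied; it should compile as-is and return one draw per element of log_var:

#include <Rcpp.h>
#include <cmath>
using namespace Rcpp;

// [[Rcpp::export]]
NumericVector generate_ai(NumericVector log_var) {
  int log_var_length = log_var.size();
  NumericVector temp(log_var_length);

  for (int i = 0; i < log_var_length; i++) {
    // R::rnorm(mean, sd) returns a single double, so the assignment is valid
    temp[i] = R::rnorm(-0.5 * log_var[i], std::sqrt(log_var[i]));
  }

  return temp;
}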