Is there any possibility to add a source term in Ansys Mechanical?

I am trying to do a transient thermal simulation in Ansys Mechanical. To make the simulation more realistic I want to "add" a source term. Specifically, I want to add a reduction extent to my equation, which is temperature dependent.
I tried to do this with APDL commands, but I am not very familiar with this programming language or with the routines in Ansys.
Here is the "pseudo-code" I thought about so far:
! ############################################
! Getting values and material data
! ############################################
*GET, NumberOfMeshElements, VOLU, 0, COUNT, , Body 1, ! Number of Elements in Body 1
*GET, TimeStepSize, ACTIVE, 0, SOLU, DTIME
*SET, CeriaAmount, 1000 ! in mol
*SET, ReductionEnthalpy, 500 ! in kJ/mol O
*SET, OxygenPartialPressure, 100 ! Setting oxygen partial pressure
*SET, Temperature, TEMP ! Setting Temperature as a variable
*SET, ReductionExtent, 0 ! previous reduction extent
! ############################################
! Calculation of Source Term
! ############################################
! Begin of Loop for each time step
*DO, t, 0, 10, TimeStepSize ! solve together with transient thermal analysis (convection, radiation, conduction)
ReductionExtent = 0.35 * EXP(Temperature) ! goal: store the previous (t-1) value for the later calculation of the rate of change
! Begin of Loop for each element
*DO, i, 1, NumberOfMeshElements, 1 ! needed for each mesh element because each mesh element has a different temperature
*GET, MeshElementVolume, VOLU, i, VOLU, , Body 1,
ReductionExtentRateOfChange = (ReductionExtent(t-1) - ReductionExtent(i)) / TimeStepSize
S_reaction = -CeriaAmount / MeshElementVolume * ReductionEnthalpy * ReductionExtentRateOfChange ! usually a source term
*ENDDO ! end of element loop
*ENDDO ! end of time-step loop

Apparently there is a way to generate heat depending on a nodal temperature:
https://forum.ansys.com/discussion/1695/heat-generation
If you need this method as APDL, you can prototype your solution in Workbench and then look up the APDL commands in the generated ds.dat.
APDL for the internal heat generation
You can either fit the Arrhenius equation to the polynomial Property = C0 + C1*T + C2*T^2 + C3*T^3 + C4*T^4 and supply that as your material model with
MP, QRATE, MATNUMBER, C0, C1, C2, C3, C4, or reference a table name (C0 = %QRATETABLE%) created by
*DIM,QRATETABLE,TABLE,5,,,TEMP ! Define QRATE with TEMP as the primary variable
QRATETABLE(1,0)=0.,50.,100.,150.,200. ! Assign temperature values
QRATETABLE(1,1)=1,2,3,4,5 ! Assign heat generation values !!! I am using dummy values !!!!
MP,QRATE, MATNUMBER,%QRATETABLE% ! Input QRATE on the MP command
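If you want to apply the computed term per element rather than through a material QRATE, one possible direction is to read each element's nodal temperature after a solve and apply the result as an element body load with BFE,HGEN. The following is an untested sketch; S_reaction, NumberOfMeshElements, and the temperature-dependent formula are the placeholders from the question, not verified against a real model:
! Hedged sketch: apply the reaction term as element heat generation.
! ELNEXT/NELEM are standard APDL get functions.
e = 0
*DO, i, 1, NumberOfMeshElements
e = ELNEXT(e) ! number of the next selected element
n = NELEM(e, 1) ! first node of that element
*GET, Tnode, NODE, n, TEMP ! nodal temperature from the last solve
! ... compute S_reaction from Tnode as in the pseudo-code above ...
BFE, e, HGEN, , S_reaction ! heat generation rate for element e
*ENDDO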

Is it possible to use callbacks to access a single trajectory in Julia's DifferentialEquations Ensemble Problems?

I am new to Julia and trying to use the Julia package DifferentialEquations to simultaneously solve several conditions of the same set of coupled ODEs. My system is a model of an experiment, and in one of the conditions I increase the amount of one of the dependent variables midway through the process.
I would like to be able to adjust the condition of this single trajectory; however, so far I am only able to adjust all the trajectories at once. Is it possible to access a single one using callbacks? If not, is there a better way to do this?
Here is a simplified example using the Lorenz equations for what I want to be doing:
# Differential Equations setup
function lorentz!(du, u, p, t)
    a, r, b = p
    du[1] = a * (u[2] - u[1])
    du[2] = u[1] * (r - u[3]) - u[2]
    du[3] = u[1] * u[2] - b * u[3]
end
# Function to cycle through initial conditions
function prob_func(prob, i, repeat)
    remake(prob; u0 = u0_arr[i])
end
# Inputs
t_span = [(0.0, 100.0), (0.0, 100.0)];
u01 = [0.0; 1.0; 0.0];
u02 = [0.0; 1.0; 0.0];
u0_arr = [u01, u02];
p = [10., 28., 8/3];
# Initialising the EnsembleProblem
prob = ODEProblem(lorentz!, u0_arr[1], t_span[1], p);
CombinedProblem = EnsembleProblem(prob,
    prob_func = prob_func, # repeat is a count of how many times the trajectory has been repeated
    safetycopy = true # whether a safe deepcopy is called on prob before prob_func (best left true for a user-given prob_func)
);
# Introducing the callback (the third argument is the integrator)
function condition(u, t, integrator)
    return 50 - t
end
function affect!(integrator)
    integrator.u[1] = integrator.u[1] + 50
end
callback = DifferentialEquations.ContinuousCallback(condition, affect!)
# Solving
sim = solve(CombinedProblem, Rosenbrock23(), EnsembleSerial(), trajectories = 2, callback = callback);
# Plotting for ease of understanding the example
plot(sim[1].t, sim[1][1, :])
plot!(sim[2].t, sim[2][1, :])
I want to produce something like this:
Example_desired_outcome
But this code produces:
Example_current_outcome
Thank you for your help!
You can make that callback dependent on a parameter and make the parameter different between problems. For example:
function f(du, u, p, t)
    if p == 0
        du[1] = 2u[1]
    else
        du[1] = -2u[1]
    end
    du[2] = -u[2]
end
condition(u, t, integrator) = u[2] - 0.5
affect!(integrator) = integrator.p = 1
For more information, check out the FAQ on this topic: https://diffeq.sciml.ai/stable/basics/faq/#Switching-ODE-functions-in-the-middle-of-integration
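Applied to the ensemble example above, one way to act on a single trajectory is to carry a per-trajectory flag in the parameters and have the callback check it. The following is an untested sketch; the fourth parameter, here called jump, is an assumed addition, not part of the original code:
using DifferentialEquations

function lorentz!(du, u, p, t)
    a, r, b, jump = p
    du[1] = a * (u[2] - u[1])
    du[2] = u[1] * (r - u[3]) - u[2]
    du[3] = u[1] * u[2] - b * u[3]
end

u0 = [0.0, 1.0, 0.0]
tspan = (0.0, 100.0)
params = [[10.0, 28.0, 8/3, 1.0],   # trajectory 1: apply the jump
          [10.0, 28.0, 8/3, 0.0]]   # trajectory 2: leave untouched

prob = ODEProblem(lorentz!, u0, tspan, params[1])
prob_func(prob, i, repeat) = remake(prob; p = params[i])
ensemble = EnsembleProblem(prob; prob_func = prob_func)

condition(u, t, integrator) = t - 50.0   # root at t = 50
function affect!(integrator)
    if integrator.p[4] == 1.0            # only jump flagged trajectories
        integrator.u[1] += 50
    end
end
cb = ContinuousCallback(condition, affect!)

sim = solve(ensemble, Rosenbrock23(), EnsembleSerial();
            trajectories = 2, callback = cb)
Here only trajectory 1 receives the jump at t = 50, which should match the desired outcome plot.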

Translating glmer (binomial) into jags to include a correlated random effect (time)

Context:
I have a 12-item risk assessment where individuals are given a rating from 0-4 (4 being the highest risk). The risk assessment can be done multiple times for each individual (max = 19, but most have fewer than 5 measurements).
The baseline level of risk varies by individual, so I am looking for a random-intercepts model, but I also need to reflect the dynamic nature of the risk, i.e. adding 'time' as a random coefficient.
The outcome is binary:
further offending (FO.bin), which occurs at the measurement level. This means I am essentially looking at what dynamic changes have occurred within one or more of the 12 items, and how they have contributed to the individual committing a further offence in the period between measurements.
Ultimately, what I am looking to do is predict whether an individual will offend in the future, based on the assessment histories of others who share the same characteristics, contextual factors, and factors which may change over time.
Goal:
I wish to extend my 'basic' model by adding time-varying (level 1) and time-invariant (level 2) predictors:
Time-varying predictors include dummy variables around the criminal justice process, such as non-compliance, going to court, and spending time in custody. These are reflected as an 'event' which occurred in the period between assessments.
Time-invariant predictors include dummy variables such as being female or being non-White, and continuous predictors such as age at the time of the first offence.
I've managed to set this up OK using lme4 and have some potentially interesting results from adding the level 1 and level 2 predictors, including where there are interactions and cross-level interactions. However, the complexity of the enhanced models is throwing up all kinds of warning messages, including ones about failing to converge. I therefore feel it would be appropriate to switch to a Bayesian framework using rjags so that I can be more confident about my findings.
The Problem:
Basically, it is a problem of translation. This is my 'basic' model based on time and the 12 items in the risk assessment, using lme4:
Basic_Model1 <- glmer(BinaryResponse ~ item1 + item2 + item3 + ... + item12 + time + (1+time|individual), data=data, family=binomial)
This is my attempt to translate this into a BUGS model:
# the number of Risk Assessments = 552
N <- nrow(data)
# number of Individuals (individual previously specified) = 88
J <- length(unique(Individual))
# the 12 items (previously specified)
Z <- cbind(item1, item2, item3, item4, ... item12)
# number of columns = number of predictors, will increase as model enhanced
K <- ncol(Z)
## Store all data needed for the model in a list
jags.data1 <- list(y = FO.bin, Individual = Individual, time = time, Z = Z, N = N, J = J, K = K)
model1 <- function() {
  for (i in 1:N) {
    y[i] ~ dbern(p[i])
    logit(p[i]) <- a[Individual[i]] + b*time[i]
  }
  for (j in 1:J) {
    a[j] ~ dnorm(a.hat[j], tau.a)
    a.hat[j] <- mu.a + inprod(g[], Z[j,])
  }
  b ~ dnorm(0, .0001)
  tau.a <- pow(sigma.a, -2)
  sigma.a ~ dunif(0, 100)
  mu.a ~ dnorm(0, .0001)
  for (k in 1:K) {
    g[k] ~ dnorm(0, .0001)
  }
}
write.model(model1, "Model_1.bug")
Looking at the output, my gut feeling is that I've not added the varying coefficient for time, and that what I have done so far is only the equivalent of
Basic_Model2 <- glmer(BinaryResponse ~ item1 + item2 + item3 + ... + item12 + time + (1|individual), data=data, family=binomial)
How do I tweak my BUGS code to reflect time as a varying coefficient, i.e. Basic_Model1?
Based on the examples I have managed to find, I know that I need to make an additional specification in the J loop so that I can monitor the U[j], and that I need to change the second part of the logit statement involving time, but it's got to the point where I can't see the wood for the trees!
I'm hoping that someone with a lot more expertise than me can point me in the right direction. Ultimately I am looking to expand the model by adding additional level 1 and level 2 predictors. Having looked at these using lme4, I anticipate having to specify interactions and cross-level interactions, so I am looking for an approach which is flexible enough to expand in this way. I'm very new to coding, so please be gentle with me!
For that kind of case you can use an autoregressive Gaussian (CAR) model for time. As your tag is winbugs (or openbugs), you can use the car.normal function as follows. This code needs to be adapted to your dataset!
Data
y should be a matrix with observations in rows and time in columns. If you do not have the same number of time points for each individual, just pad with NA values.
You also need to define the parameters of the temporal process, i.e. the neighbourhood structure with its weights. For an autoregressive process of order one, this should be something like:
jags.data1 <- list(
  # Sum of the numbers of neighbours (2*7 = 14 for 8 time points, order 1)
  sumNumNeigh.tm = 14,
  # Adjacency vector: the neighbours of each time point, concatenated
  adj.tm = c(2, 1, 3, 2, 4, 3, 5, 4, 6, 5, 7, 6, 8, 7),
  # Number of neighbours of each time point (order-1 neighbourhood)
  num.tm = c(1, 2, 2, 2, 2, 2, 2, 1),
  # Number of time points (used as T in the model loop)
  T = 8,
  # Matrix of data: individuals in rows, time in columns
  y = FO.bin,
  # Your other parameters
  Individual = Individual, Z = Z, N = N, J = J, K = K)
Model
model1 <- function() {
  for (i in 1:N) {
    for (t in 1:T) {
      y[i,t] ~ dbern(p[i,t])
      # logit(p[i]) <- a[Individual[i]] + b*time[i]
      logit(p[i,t]) <- a[Individual[i]] + b*U[t]
    }
  }
  # intrinsic CAR prior on temporal random effects
  U[1:T] ~ car.normal(adj.tm[], weights.tm[], num.tm[], prec.nu)
  for (k in 1:sumNumNeigh.tm) { weights.tm[k] <- 1 }
  # prior on precision of temporal random effects
  prec.nu ~ dgamma(0.5, 0.0005)
  # conditional variance of temporal random effects
  sigma2.nu <- 1/prec.nu
  for (j in 1:J) {
    a[j] ~ dnorm(a.hat[j], tau.a)
    a.hat[j] <- mu.a + inprod(g[], Z[j,])
  }
  b ~ dnorm(0, .0001)
  tau.a <- pow(sigma.a, -2)
  sigma.a ~ dunif(0, 100)
  mu.a ~ dnorm(0, .0001)
  for (k in 1:K) {
    g[k] ~ dnorm(0, .0001)
  }
}
For your information, with JAGS you would need to code the CAR model yourself using dmnorm.
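Alternatively, if you want to stay closer to the lme4 random-slope formulation (1 + time | individual) the question asks about, rather than a CAR prior, a minimal untested sketch is below (the name model1b is hypothetical). Note it treats the random intercepts and slopes as independent; the exact glmer equivalent would also model their correlation, e.g. with a bivariate dmnorm:
model1b <- function() {
  for (i in 1:N) {
    y[i] ~ dbern(p[i])
    logit(p[i]) <- a[Individual[i]] + b[Individual[i]]*time[i]
  }
  for (j in 1:J) {
    a[j] ~ dnorm(a.hat[j], tau.a)   # random intercepts
    b[j] ~ dnorm(mu.b, tau.b)       # random slopes for time
    a.hat[j] <- mu.a + inprod(g[], Z[j,])
  }
  mu.a ~ dnorm(0, .0001)
  mu.b ~ dnorm(0, .0001)
  tau.a <- pow(sigma.a, -2)
  tau.b <- pow(sigma.b, -2)
  sigma.a ~ dunif(0, 100)
  sigma.b ~ dunif(0, 100)
  for (k in 1:K) {
    g[k] ~ dnorm(0, .0001)
  }
}
write.model(model1b, "Model_1b.bug")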

Non-fatal iteration errors during initialization

The Modelica fluid library attempts to offer the useful feature of initializing from either temperature or enthalpy. However, several errors show up in the simulation log that are a bit mysterious.
The logged errors don't seem to impact the simulation, but they should not be appearing, because:
The values passed to temperature_phX should be valid
use_T_start = true, so the "else" branch causing the errors should not be run
Below is code that reproduces the error when you run "RunMe", along with an option that does not produce the error. A representative error is at the bottom.
Any insight to how to solve this issue would be greatly appreciated.
model InitialValuesSimplified
  outer Modelica.Fluid.System system "System wide properties";
  replaceable package Medium =
      Modelica.Media.Water.StandardWater "Medium in the component";
  parameter Medium.AbsolutePressure p_a_start=system.p_start
    "Pressure at port a";
  parameter Boolean use_T_start=true "Use T_start if true, otherwise h_start";
  // Creates error log
  parameter Medium.Temperature T_a_start=
    if use_T_start then
      system.T_start
    else
      Medium.temperature_phX(p_a_start,h_a_start,X_start)
    "Temperature at port a";
  // No error log
  // parameter Medium.Temperature T_a_start=
  //   if use_T_start then
  //     system.T_start
  //   else
  //     system.T_start
  //   "Temperature at port a";
  parameter Modelica.Media.Interfaces.Types.MassFraction X_start[Medium.nX]=
      Medium.X_default "Mass fractions m_i/m";
  parameter Medium.SpecificEnthalpy h_a_start=
    if use_T_start then
      Medium.specificEnthalpy_pTX(p_a_start,T_a_start,X_start)
    else
      1e5 "Specific enthalpy at port a";
end InitialValuesSimplified;
Code to run snippet:
model RunMe
  InitialValuesSimplified initialValuesSimplified;
  inner Modelica.Fluid.System system;
end RunMe;
Error code sample:
Log-file of program ./dymosim
(generated: Mon Sep 12 17:15:19 2016)
dymosim started
... "dsin.txt" loading (dymosim input file)
T >= 273.15
The following error was detected at time: 0
IF97 medium function g1: the temperature (= 86.3 K) is lower than 273.15 K!
The stack of functions is:
Modelica.Media.Water.IF97_Utilities.BaseIF97.Basic.g1
Modelica.Media.Water.IF97_Utilities.waterBaseProp_pT
Modelica.Media.Water.IF97_Utilities.h_props_pT(
initialValuesSimplified.p_a_start,
initialValuesSimplified.T_a_start,
Modelica.Media.Water.IF97_Utilities.waterBaseProp_pT(initialValuesSimplified.p_a_start, initialValuesSimplified.T_a_start, 0))
Non-linear solver will attempt to handle this problem.
The problem is that the combination:
parameter Real T_start=if use_T then system.T_start else foo(3,h_start);
parameter Real h_start=if use_T then bar(4,T_start) else 2;
isn't handled symbolically as two different cases (use_T and not use_T), since that could lead to a combinatorial explosion. Instead it is seen as a non-linear equation, and h_start is computed but doesn't influence the resulting T_start.
If you don't intend to change these parameters, you could make them final and, in the first equation, replace h_start by a suitable default.
Otherwise a solution is to give a better start-value for T_a_start:
parameter Real T_start(start=300)=if use_T then system.T_start else foo(3,h_start);
Note that the problem isn't a lack of start-value, but that the default start-value (500 K) is too far off; the solver over-compensates and goes to 86 K before converging on 293.15 K. (The non-linear solver will likely be improved to avoid over-compensating this much.)
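Applied to the original snippet, the suggested fix would look something like this (an untested sketch; 300 K is just a start value near the expected solution, as in the answer's example):
// Give the solver a start value near the expected solution
parameter Medium.Temperature T_a_start(start=300) =
  if use_T_start then
    system.T_start
  else
    Medium.temperature_phX(p_a_start, h_a_start, X_start)
  "Temperature at port a";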

Why does suppressing weights improve Tensorflow neural net performance?

I have a 2-layer non-convolutional network in Tensorflow, using tanh as the activation function. I understand that weights should be initialized from a truncated normal distribution divided by sqrt(nInputs), e.g.:
weightsLayer1 = tf.Variable(tf.div(tf.truncated_normal([nInputUnits, nUnitsHiddenLayer1]), math.sqrt(nInputUnits)))
Being a bit of a bumbling newbie in NNs and Tensorflow, I mistakenly implemented this as two lines, only to make it more readable:
weightsLayer1 = tf.Variable(tf.truncated_normal([nInputUnits, nUnitsHiddenLayer1]))
weightsLayer1 = tf.div(weightsLayer1, math.sqrt(nInputUnits))
I now know that this is wrong and that the 2nd line causes the weights to be recomputed at each learning step. However, to my surprise, the "incorrect" implementation consistently yields better performance, in both the train and test/evaluation datasets. I thought that the incorrect 2-line implementation should be a train wreck, since it is recomputing (suppressing) weights to values other than those chosen by the optimizer, which I would expect to wreak havoc in the optimization process, but it actually improves it. Does anyone have any explanation for this? I am using the Tensorflow Adam optimizer.
Update 2016.6.22 - updated the 2nd code block above.
You are right that weightsLayer1 = tf.div(weightsLayer1, math.sqrt(nInputUnits)) is executed at each step. But that does NOT mean that the values in the weight variable are scaled down by sqrt(nInputUnits) in each step. This line is not an in-place operation that affects the values stored in the variable. It computes a new tensor, holding the values in the variable divided by sqrt(nInputUnits), and that tensor, I assume, then goes into the rest of your computation graph. This does not interfere with the optimizer. You are still defining a valid computation graph, just with a somewhat arbitrary scaling of the weights. The optimizer can still compute the gradients with respect to this variable (it will back-propagate through your division operation) and create the corresponding update operations.
In terms of the model that you are defining, the two versions are totally equivalent. For any set of values of weightsLayer1 in the original model (where you don't do the division), you can simply scale them up by sqrt(nInputUnits) and you will get the identical results with your second model. The two represent exactly the same model class, if you will.
Why does one work better than the other? Your guess is as good as mine. If you have applied the same division to all your variables, you have effectively divided your learning rate by sqrt(nInputUnits). This smaller learning rate might have been beneficial to the problem at hand.
Edit: I think the fact that you give the same name to the variable and the newly created tensor causes confusion. When you do
A = tf.Variable(1.0)
A = tf.mul(A, 2.0)
# Do something with A
then the second line creates a new tensor (as discussed above) and you re-bind the name (and it is only a name) A to that new tensor. For the graph being defined, the naming is absolutely irrelevant. The following code defines the same graph:
A = tf.Variable(1.0)
B = tf.mul(A, 2.0)
# Do something with B
Maybe this becomes clear if you execute the following code:
A = tf.Variable(1.0)
print A
B = A
A = tf.mul(A, 2.0)
print A
print B
The output is
<tensorflow.python.ops.variables.Variable object at 0x7ff025c02bd0>
Tensor("Mul:0", shape=(), dtype=float32)
<tensorflow.python.ops.variables.Variable object at 0x7ff025c02bd0>
The first time you print A it tells you that A is a variable object. After executing A = tf.mul(A, 2.0) and printing A again, you can see that the name A is now bound to a tf.Tensor object. However, the variable still exists, as can be seen by looking at the object behind the name B.
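The same rebinding behaviour can be seen without TensorFlow at all. Below is a small plain-Python analogue; the Box class is just an illustrative stand-in for a variable object, not TensorFlow code:
class Box:
    """Stand-in for a variable object such as tf.Variable."""
    def __init__(self, value):
        self.value = value

A = Box(1.0)          # A names a Box object
B = A                 # B names the same object
A = A.value * 2.0     # A is re-bound to a new float (like the tf.mul tensor)
print(A)              # 2.0
print(B.value)        # 1.0 -- the original object is untouched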
This is what the single line of code does:
t = tf.truncated_normal( [ nInputUnits, nUnitsHiddenLayer1 ] )
Creates a tensor with shape [ nInputUnits, nUnitsHiddenLayer1 ], initialized from a truncated normal distribution with standard deviation 1.0 (1.0 is the default stddev value).
t1 = tf.div( t, math.sqrt( nInputUnits ) )
divides all values in t by math.sqrt( nInputUnits )
Your two lines of code do exactly the same thing: in both the one-line and the two-line version, all values are divided by math.sqrt( nInputUnits ).
As for your statement:
I now know that this is wrong and that the 2nd line causes the weights to be recomputed at each learning step.
EDIT: my mistake.
Indeed you are right: they are divided by math.sqrt( nInputUnits ) at every execution, but not reinitialized! The point of importance is where you put tf.Variable().
Here both lines are only run once, at initialization:
weightsLayer1 = tf.truncated_normal( [ nInputUnits, nUnitsHiddenLayer1 ] )
weightsLayer1 = tf.Variable( tf.div( weightsLayer1, math.sqrt( nInputUnits ) ) )
and here the second line is performed at every step:
weightsLayer1 = tf.Variable( tf.truncated_normal( [ nInputUnits, nUnitsHiddenLayer1 ] ) )
weightsLayer1 = tf.div( weightsLayer1, math.sqrt( nInputUnits ) )
Why does the second yield better results? It looks like some kind of normalization to me, but somebody more knowledgeable should verify that.
P.S. You can write it more readably like this:
weightsLayer1 = tf.Variable( tf.truncated_normal( [ nInputUnits, nUnitsHiddenLayer1 ], stddev = 1. / math.sqrt( nInputUnits ) ) )

partial Distance Based RDA - Centroids vanished from Plot

I am trying to fit a partial db-RDA with field.ID as a Condition, to correct for the repeated-measures character of the samples. However, including Condition(field.ID) leads to the disappearance of the centroids of the main factor of interest from the plot (left plot below).
The design: 12 fields were repeatedly sampled for species data in two consecutive years. Additionally, every year 3 samples were taken from reference fields. These three fields were changed in the second year, due to unavailability of the former fields.
Additionally, some environmental variables were sampled (nitrogen, soil moisture, temperature). Every field has an identifier (field.ID).
Using field.ID as Condition seems to erroneously remove the F1 factor. However, using the sampling campaign (SC) as Condition does not. Is the latter the right way to correct for repeated measurements in a partial db-RDA?
library(vegan)  # provides capscale() and ordiplot()
set.seed(1234)
df.exp <- data.frame(field.ID = factor(c(1:12, 13, 14, 15, 1:12, 16, 17, 18)),
                     SC = factor(rep(c(1, 2), each = 15)),
                     F1 = factor(rep(rep(c("A", "B", "C", "D", "E"), each = 3), 2)),
                     Nitrogen = rnorm(30, mean = 0.16, sd = 0.07),
                     Temp = rnorm(30, mean = 13.5, sd = 3.9),
                     Moist = rnorm(30, mean = 19.4, sd = 5.8))
df.rsp <- data.frame(Spec1 = rpois(30, 5),
                     Spec2 = rpois(30, 1),
                     Spec3 = rpois(30, 4.5),
                     Spec4 = rpois(30, 3),
                     Spec5 = rpois(30, 7),
                     Spec6 = rpois(30, 7),
                     Spec7 = rpois(30, 5))
data <- cbind(df.exp, df.rsp)
dbRDA <- capscale(df.rsp ~ F1 + Nitrogen + Temp + Moist + Condition(SC), df.exp); ordiplot(dbRDA)
dbRDA <- capscale(df.rsp ~ F1 + Nitrogen + Temp + Moist + Condition(field.ID), df.exp); ordiplot(dbRDA)
You partial out the variation due to ID, and then you try to explain a variable aliased to this ID, but it has already been partialled out. The key line in the printed output is this:
Some constraints were aliased because they were collinear (redundant)
And indeed, when you ask for details, you get
> alias(dbRDA, names=TRUE)
[1] "F1B" "F1C" "F1D" "F1E"
The F1 dummy variables (F1B to F1E) were constant within ID, which was already partialled out, so nothing was left to explain.
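A quick way to see this aliasing in the example data (an added check, not from the original answer) is to cross-tabulate field.ID against F1: each field carries exactly one F1 level, so once field.ID is partialled out, no within-field F1 variation remains.
# Each field.ID occurs with exactly one F1 level, so F1 carries no
# information beyond field.ID:
with(df.exp, table(field.ID, F1))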