For a prior on a bounded measure, I am trying to stretch a beta distribution over [-1,1], "[a]s described by Barnard, McCulloch & Meng (2000)" (according to this tutorial).
Specifically, I am trying to implement this suggestion:
rho_half_with ~ dbeta(1, 1)
# shifting and stretching rho_half_with from [0,1] to [-1,1]
rho ~ 2 * rho_half_with - 1
However, I always get
syntax error on line (...) near "2"
None of the JAGS or BUGS manual entries I found deal with manipulating distributions (as sources of stochastic relations). Is it actually possible to apply basic arithmetic operations to a BUGS/JAGS stochastic relation (i.e. to what follows the ~ operator), and if so, how?
The problem with the code you have posted is that you use a ~ in a non-stochastic relation, where JAGS would want you to use <- instead. The following should work:
rho_half_with ~ dbeta(1, 1)
# shifting and stretching rho_half_with from [0,1] to [-1,1]
rho <- 2 * rho_half_with - 1
Regarding the error message you mention in the comments: you get it because you try to initialize a variable that is not stochastic (rho). Remove that initialization, or initialize rho_half_with instead, to solve that problem.
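As a quick sanity check of the shift-and-stretch itself (a sketch outside of JAGS, in Python/NumPy, with an arbitrary sample size), drawing from Beta(1, 1) and applying 2*x - 1 spreads the draws uniformly over [-1, 1]:
import numpy as np

rng = np.random.default_rng(0)
rho_half_with = rng.beta(1, 1, size=100_000)  # Beta(1, 1) is uniform on [0, 1]
rho = 2 * rho_half_with - 1                   # shift and stretch to [-1, 1]
print(rho.min(), rho.max(), rho.mean())       # approximately -1, 1 and 0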
I have a constrained nonlinear optimization problem, "A". Inside the computation is an om.Group which I'll call "B" that requires a nonlinear solve. Whether "B" finds a solution or crashes seems to depend on its initial conditions. So far I've found that some of the initial conditions given to "B" are inconsistent with the constraints on "A", and that this seems to be contributing to its propensity for crashing. The constraints on "A" can be computed before "B".
If the objective of "A" could be computed before "B" then I would put "A" in its own group and have it pass its known-good solution to "B". However, the objective of "A" can only be computed as a result of the converged solution of "B". Is there a way to tell OpenMDAO or the optimizer (right now I'm using ScipyOptimizerDriver and the SLSQP method) that when it chooses a new point in design-variable space, it should check that the constraints of "A" hold before proceeding to "B"?
A slightly simpler example (without the complication of an initial guess) might be:
There are two design variables 0 < x1 < 1, 0 < x2 < 1.
There is a constraint that x2 >= x1.
Minimize f(sqrt(x2 - x1), x1) where f crashes if given imaginary inputs. How can I make sure that the driver explores the design space without giving f a bad input?
I have two proposed solutions; which one is best is highly problem-dependent. You can either raise an AnalysisError or use numerical clipping.
import numpy as np
import openmdao.api as om


class SafeComponent(om.ExplicitComponent):

    def setup(self):
        self.add_input('x1')
        self.add_input('x2')
        self.add_output('y')

    def compute(self, inputs, outputs):
        x1 = inputs['x1']
        x2 = inputs['x2']
        diff = x2 - x1

        ######################################################
        # option 1: raise an error, which causes the
        # optimizer line search to backtrack
        ######################################################
        # if diff < 0:
        #     raise om.AnalysisError('invalid inputs: x1 > x2')

        ######################################################
        # option 2: use numerical clipping
        ######################################################
        if diff < 0:
            diff = 0.

        outputs['y'] = np.sqrt(diff)


# build the model
prob = om.Problem()
prob.model.add_subsystem('sc', SafeComponent(), promotes=['*'])
prob.setup()

prob['x1'] = 10
prob['x2'] = 20

prob.run_model()
print(prob['y'])
Option 1: raise an AnalysisError
Some optimizers are set up to handle this well. Others are not.
As of V3.7.0, the OpenMDAO wrappers for SLSQP from scipy and pyoptsparse, and the SNOPT/IPOPT wrappers from pyoptsparse all handle AnalysisErrors gracefully.
When the error is raised, execution stops and the optimizer registers a failed case. It backtracks along the line search a bit to try to get out of the situation. It will usually try a few steps backwards, but at some point it will give up. So the success of this approach depends a bit on why you ended up in the bad part of the space and how hard the gradients are pushing you back into it.
This solution works very well with fully analytic derivatives. The reason is that (most) gradient-based optimizers will only ever ask for function evaluations along a line-search operation. That means that, as long as a clean point is found, you're always able to compute derivatives at that point as well.
If you're using finite differences, you could end a line search right near the error condition without violating it (e.g. x1 = 0.9999999, x2 = 1). Then, during the FD step to compute derivatives, you might end up tripping the error condition and raising the error. The optimizer is not going to be able to recover from this condition: errors during FD steps will effectively kill the whole optimization.
So, for this reason, I never recommend the AnalysisError approach if you're using FD.
Option 2: Numerical Clipping
If your optimizer wrapper does not have the ability to handle an AnalysisError, you can try some numerical clipping instead. You can add a filter to your calculations to keep the values numerically safe. However, you obviously need to use this very carefully. You should at least add an additional constraint that forces the optimizer to keep away from the error condition when converged (e.g. x2 >= x1).
One important note: if you provide analytic derivatives, include the clipping in them!
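For the component above, that could look roughly like this (a sketch that assumes the hard clip from option 2 and a self.declare_partials('y', ['x1', 'x2']) call in setup):
def compute_partials(self, inputs, partials):
    diff = inputs['x2'] - inputs['x1']
    if diff <= 0:
        # clipped region: the output is held constant, so its derivatives are zero
        d = 0.0
    else:
        d = 0.5 / np.sqrt(diff)  # d(sqrt(diff))/d(diff)
    partials['y', 'x1'] = -d
    partials['y', 'x2'] = d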
Sometimes the optimizer just wants to pass through this bad region on its way to the answer. In that case, the simple clipping I show here is probably fine. Other times it wants to ride the constraint (be sure you add that constraint!!!), and then you probably want a more smoothly varying type of clipping. In other words, don't use a simple if-condition. Round the corner a bit, and maybe make the value asymptotically approach 0 from a very small value. This way you have a C1-continuous function and the derivatives won't go to exactly 0 for these inputs.
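One way to do that kind of smooth clipping (just a sketch; the softplus-like form and the eps value are arbitrary choices, not an OpenMDAO feature) is to replace max(diff, 0) with a smooth approximation:
import numpy as np

def smooth_clip(diff, eps=1e-6):
    # smooth approximation of max(diff, 0): tends to diff for diff >> eps and
    # approaches 0 from a small positive value for diff << -eps, so sqrt() stays
    # real and the derivative never drops exactly to zero
    return 0.5 * (diff + np.sqrt(diff**2 + eps**2))

# inside compute(), instead of the hard if-condition:
# outputs['y'] = np.sqrt(smooth_clip(inputs['x2'] - inputs['x1']))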
I am working on a simple model which includes a derivative dy/dx, but in Modelica I can't write this equation directly. I could use the combination of x = time and der(y), but I think this is a compromise forced by a limitation of the Modelica language.
My question is:
Is there a better way to express a derivative like dy/dx in Modelica?
Here is the code:
model HowToExpressDerivative "dy/dx=5, how to describe this equation in Modelica?"
Real x,y;
equation
x = time;
der(y) = 5;
end HowToExpressDerivative;
I also tried to use der(y)/der(x) to express dy/dx, but there is an error when x equals time^2.
model HowToExpressDerivative "dy/dx=5, how to describe this equation in Modelica?"
Real x,y;
equation
x=time^2;
der(y)/der(x)=5;
end HowToExpressDerivative;
Error: The following error was detected at time: 0
Model error - division by zero: (1.0) / (der(x)) = (1) / (0)
Error: Integrator failed to start model.
... "HowToExpressDerivative.mat" creating (simulation result file)
ERROR: The simulation of HowToExpressDerivative FAILED
Given enthalpy h and crank angle phi, you could replace dh/dphi = ... by:
der(h)/der(phi)=...
However, even if correct, that formula will break down when the engine is standing still (der(phi) = 0), so it is not ideal.
An alternative would be to rewrite the formula. Looking more closely, it seems to be:
dh/dphi = (∂a/∂T)*dT/dphi + ...
which suggests that it could be rewritten as:
der(h) = (∂a/∂T)*der(T) + ...
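Applied to the example in the question, the same rewrite gives:
dy/dx = (dy/dt)/(dx/dt) = 5
der(y) = 5*der(x)
which stays well defined even where der(x) = 0 (for x = time^2 that happens at time = 0).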
I'm trying to sample from the truncated normal distribution using truncnorm from the scipy.stats package in Python. However, I keep getting the following error:
x = _norm_ilogcdf(np.log(q) + _norm_logcdf(b))
z = z - (_norm_logcdf(z) - y) / _norm_logcdfprime(z)
assert np.abs(z) > TRUNCNORM_TAIL_X/2
I'm not completely sure what it means, but I'm guessing it has something to do with the mean being outside the bounds. But then what is the difference in comparison to:
Domain error in arguments
For clarification, I am not sampling from a standard normal. I altered the bounds by use of the following equation:
a, b = (myclip_a - my_mean) / my_std, (myclip_b - my_mean) / my_std
and I pass these bounds to truncnorm.rvs(a, b, my_mean, my_std). Any clarification is much appreciated!
I've encountered the same problem as well. The thing is that the lower bound a is above (or close to) the 99% quantile of the normal distribution, so the truncation causes scipy to crash. The only solution I came up with is to check, before truncating, whether myclip_a is above the 99% quantile and, if so, to skip the update. I hope that someone will find a better solution than mine!
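A minimal sketch of that check (the variable names follow the question, the 0.99 cutoff follows this answer, and the example values are made up):
from scipy.stats import norm, truncnorm

my_mean, my_std = 0.0, 1.0     # example values
myclip_a, myclip_b = 2.5, 4.0  # example truncation interval

# standardize the clip points, as in the question
a, b = (myclip_a - my_mean) / my_std, (myclip_b - my_mean) / my_std

if a < norm.ppf(0.99):
    samples = truncnorm.rvs(a, b, loc=my_mean, scale=my_std, size=1000)
else:
    # the lower bound is too far in the upper tail; skip the update instead
    samples = None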
I’m running RStudio Version 1.1.419 with R 3.4.3 on Windows 10. I am trying to fit an (f)arima model with the fractional differencing parameter constrained during optimization to (-0.5, 0.5), i.e. allowing for antipersistence (d < 0), short memory (d = 0) and long memory (d > 0). I have tried multiple functions to accomplish that. I am aware that the default drange of fracdiff is (0, 0.5). Therefore this ...
> result <- fracdiff(MeanPrice, nar = 2, nma = 1, drange = c(-0.5,0.5))
sadly returns this..
Warning: C fracdf() optimization failure
Warning message: unable to compute correlation matrix; maybe change 'h'
Is there a way to fit fracdiff or other models (maybe arfima::arfima()?) with that drange? Your help is very much appreciated.
If you look at the package documentation, it states that the h argument for fracdiff "is used to compute a finite difference approximation to the Hessian, and hence only influences the cov, cor, and std.error computations." However, as they are referring to the Hessian, I would assume that this affects the results of the MLE. There are other functions in that package that may be helpful: fdGPH for estimating the order of fractional differencing based on the Geweke and Porter-Hudak method, and similarly fdSperio.
Take a look at the forecast package. If you estimate the order of fractional differencing using the above mentioned functions, you might be able to use the same method described in the details of the arfima function.
So, I'm trying to establish some new stability criteria for my simulations, and this involves a lot of convoluted inequalities. I've worked through the math a few times by hand, and it's very laborious, so I wanted to figure out a way to automate the process (as I'm trying to find the best integration scheme from a stability perspective). Is there any way to solve inequalities symbolically in Matlab? Here's what I'm trying to solve. In the following expression, x refers to the gradient of a force function with respect to x, and t is the time step. In general, x < 0 and t > 0:
-(t*x + (2*t^3*x + t^2*x^2 - 2*t*x + 4*t + 1)^(1/2) + 1)/(x*t^2 - 2) < 1
Based on what I've looked at online, this seems to be possible in MuPAD, but using the following code does not give me any valid results:
solve(-(t*x + (2*t^3*x + t^2*x^2 - 2*t*x + 4*t + 1)^(1/2) + 1)/(x*t^2 - 2) < 1, t)
Any idea what I can do to make this work and automate the process?
First, since Wolfram Alpha gives you an answer (that I presume you've checked for correctness), I assume you want to use Matlab to solve other similar problems. However, this is a very non-trivial inequality due to the roots of the polynomials. I haven't been able to get Matlab/MuPAD to do anything with it. As I stated, regular Matlab can solve inequalities and systems of equalities in many cases, e.g., in R2013b
syms x real;
solve(x^3-1>1,x)
Even Mathematica 9 has trouble (the Reduce function can be used instead of its Solve, but the output is not easy to use).
You can, however, solve for the real roots where x < 0 and t > 0 via
syms x t real;
assume(x<0);
assume(t>0);
f = -(t*x+sqrt(2*t^3*x+t^2*x^2-2*t*x+4*t+1)+1)/(x*t^2-2);
s = solve(f==1,t)
which returns:
s =
-(x + (x*(x - 2))^(1/2))/x
This simplifies to sqrt((x-2)/x)-1. Thus t > sqrt((x-2)/x)-1, one of the bounds provided by Wolfram Alpha. (The other more complicated bound is always less than zero and actually is the condition that ensures that t is real.)
But, do you even need to be solving this problem symbolically? Do you need explicit expressions for the various intervals in terms of all x? If not, this type of problem is probably better suited to numeric approaches – either via root solving (e.g., fzero) or minimization (e.g., fmincon).
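If a numeric route is acceptable, here is a rough sketch of the root-solving idea (in Python/SciPy, with brentq standing in for MATLAB's fzero; the example value of x and the bracket are arbitrary):
import numpy as np
from scipy.optimize import brentq

def f(t, x):
    return -(t*x + np.sqrt(2*t**3*x + t**2*x**2 - 2*t*x + 4*t + 1) + 1) / (x*t**2 - 2)

x = -1.0  # any fixed x < 0
# boundary of the inequality: the t > 0 at which f(t, x) crosses 1
t_star = brentq(lambda t: f(t, x) - 1.0, 1e-6, 1.0)  # bracket chosen for this x
print(t_star, np.sqrt((x - 2)/x) - 1)  # both ~0.7320508, matching the symbolic bound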