Transferring AsymptoticOutputTracker algorithms to MATLAB/Simulink

https://mathematica.stackexchange.com/questions/241776/transfering-algorithms-asymptoticoutputtracking-to-matlab-simulink
The following system is built and modeled in Simulink.
I am interested in the transient process in the variable $x$.
This system is described by the following differential equations:
\begin{cases} \dfrac{dx}{dt} = -H(t) \\ \dfrac{dH}{dt} + h\,H(t) = \dfrac{df}{dt} \end{cases}
where $f=e^{-(x(t)-x_e)^2}$, $x_e=0$, and $h=1$.
If we numerically solve this system in Mathematica, then we get:
Clear["Derivative"]
ClearAll["Global`*"]
pars = {xs = -1, xe = 0, h = 1}
f = Exp[-(x[t] - xe)^2]
sys = NDSolve[{x'[t] == -H[t], H'[t] + h H[t] == D[f, t], x[0] == xs,
    H[0] == Exp[-(xs - xe)^2]}, {x, H}, {t, 0, 10}]
Plot[Evaluate[x[t] /. sys], {t, 0, 10}, PlotRange -> Full,
    PlotPoints -> 100]
We can see that the results from Simulink and Mathematica are the same.
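For a third cross-check outside both tools, here is a minimal Python/scipy sketch of the same open-loop system; the only step beyond the equations above is expanding $df/dt$ with the chain rule:

# Cross-check of the open-loop system above in scipy.
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

xs, xe, h = -1.0, 0.0, 1.0

def rhs(t, y):
    x, H = y
    # df/dt by the chain rule: f = exp(-(x - xe)^2), with dx/dt = -H
    dfdt = -2.0 * (x - xe) * np.exp(-(x - xe) ** 2) * (-H)
    return [-H, dfdt - h * H]

sol = solve_ivp(rhs, (0.0, 10.0), [xs, np.exp(-(xs - xe) ** 2)], dense_output=True)
t = np.linspace(0.0, 10.0, 400)
plt.plot(t, sol.sol(t)[0], label="x(t)")
plt.legend(); plt.show()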
Next, I want to do the following. Using the AsymptoticOutputTracker command, I want to control the variable $x$ according to a specified reference law, i.e. the system of equations takes the form:
\begin{cases} \dfrac{dx}{dt} = -H(t) + u(t) \\ \dfrac{dH}{dt} + h\,H(t) = \dfrac{df}{dt} \end{cases}
As a reference signal I use $r_1=e^{-t}+1$ with decay rate $p_1=-5$. As the controlled output I use the variable $x$; with $f$ I just watch what happens in the end. Here is my code:
Clear["Derivative"]
ClearAll["Global`*"]
f = Exp[-(x[t] - xe)^2]
asys = AffineStateSpaceModel[{x'[t] == -H[t] + u[t],
    H'[t] + h H[t] == D[f, t]}, {{x[t], xs}, {H[t], Exp[-(xs - xe)^2]}},
    {u[t]}, {x[t], f}, t] // Simplify
pars1 = {Subscript[r, 1] -> Exp[-t] + 1, Subscript[p, 1] -> -5}
fb = AsymptoticOutputTracker[asys, {Subscript[r, 1]}, {Subscript[p, 1]}] // Simplify
pars = {xs = -1, xe = 0, h = 1}
csys = SystemsModelStateFeedbackConnect[asys, fb] /. pars1 // Simplify // Chop
plots = {OutputResponse[{csys}, {0, 0}, {t, 0, 10}]}
Plot[{plots[[1, 1]], Exp[-t] + 1}, {t, 0, 10}, PlotRange -> All,
    PlotPoints -> 200]
Plot[{plots[[1, 2]]}, {t, 0, 10}, PlotRange -> All, PlotPoints -> 200]
In general, everything works, but now the question:
How can I transfer the feedback signal generated by the AsymptoticOutputTracker command to Simulink, and, in general, how can I simulate all of this in Simulink?
I would be grateful for the help and advice of respected experts.
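One way to sidestep exporting the controller object is to write the feedback law out explicitly and rebuild it from blocks. For this system the output $x$ has relative degree one, so a standard asymptotic tracking law is $u = H + \dot r_1 + p_1 (x - r_1)$; I have not verified that this is literally what AsymptoticOutputTracker returns here, so treat the following scipy sketch of the closed loop as an assumption-laden starting point:

# Closed-loop sketch. ASSUMPTION: the tracking law for this
# relative-degree-one output is u = H + r'(t) + p1*(x - r(t)); this is the
# standard asymptotic output-tracking construction, not a verified export
# of the AsymptoticOutputTracker result.
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

xs, xe, h, p1 = -1.0, 0.0, 1.0, -5.0
r  = lambda t: np.exp(-t) + 1.0   # reference r1
dr = lambda t: -np.exp(-t)        # dr1/dt

def rhs(t, y):
    x, H = y
    u = H + dr(t) + p1 * (x - r(t))                          # assumed feedback law
    dxdt = -H + u
    dfdt = -2.0 * (x - xe) * np.exp(-(x - xe) ** 2) * dxdt   # chain rule on f
    return [dxdt, dfdt - h * H]

sol = solve_ivp(rhs, (0.0, 10.0), [xs, np.exp(-(xs - xe) ** 2)], dense_output=True)
t = np.linspace(0.0, 10.0, 400)
plt.plot(t, sol.sol(t)[0], label="x(t)")
plt.plot(t, r(t), "--", label="r1(t)")
plt.legend(); plt.show()

In Simulink, the same law can be wired from Sum and Gain blocks (or a single MATLAB Function block) fed by $x$, $H$, the reference signal, and its derivative.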

Related

A question about or-tools OnlyEnforceIf and AddBoolAnd

I have a Bool array, and I want the ones to appear only in certain places, like this:
[1,1,1,1,0,0,0,0,0,0] or [0,0,0,0,0,0,1,1,1,1]. That is, I want four ones to appear either at the head of the array or at the tail, in just those two places. How can I do this?
from ortools.sat.python import cp_model

model = cp_model.CpModel()
solver = cp_model.CpSolver()
shifts = {}
ones = {}
sequence = []
for i in range(10):
    shifts[i] = model.NewIntVar(0, 10, "shifts(%i)" % i)
    ones[i] = model.NewBoolVar('%i' % i)
for i in range(10):
    model.Add(shifts[i] == 8).OnlyEnforceIf(ones[i])
    model.Add(shifts[i] == 0).OnlyEnforceIf(ones[i].Not())
# I want the four 8s in the array to appear only at the head or the tail
# of the array, not in other positions.
model.AddBoolAnd([ones[0], ones[1], ones[2], ones[3]])  # appear at head
# model.AddBoolAnd([ones[6], ones[7], ones[8], ones[9]])  # appear at tail -- error!
model.Add(sum(ones[i] for i in range(10)) == 4)
status = solver.Solve(model)
print("status:", status)
res = []
for i in range(10):
    res.append(solver.Value(shifts[i]))
print(res)
Try AddAllowedAssignments:
model = cp_model.CpModel()
ones = [model.NewBoolVar("") for _ in range(10)]
model.AddAllowedAssignments(
    ones, [[1, 1, 1, 1, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]]
)
Edit: @Laurent's suggestion
model = cp_model.CpModel()
ones = [model.NewBoolVar("") for _ in range(10)]
head = model.NewBoolVar("head")
# HEAD
model.AddImplication(head, ones[0])
model.AddImplication(head, ones[1])
model.AddImplication(head, ones[2])
model.AddImplication(head, ones[3])
model.AddImplication(head, ones[4].Not())
model.AddImplication(head, ones[5].Not())
# ...
# TAIL
model.AddImplication(head.Not(), ones[0].Not())
model.AddImplication(head.Not(), ones[1].Not())
# ...
model.AddImplication(head.Not(), ones[6])
model.AddImplication(head.Not(), ones[7])
model.AddImplication(head.Not(), ones[8])
model.AddImplication(head.Not(), ones[9])
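For completeness, here is a compact runnable sketch of the same head/tail construction with the elided implications written as loops (variable names are illustrative):

# Same model as above, with the head/tail implications generated in a loop.
from ortools.sat.python import cp_model

model = cp_model.CpModel()
ones = [model.NewBoolVar("one_%i" % i) for i in range(10)]
head = model.NewBoolVar("head")

for i in range(10):
    in_head = i < 4   # positions 0..3 form the head block
    in_tail = i >= 6  # positions 6..9 form the tail block
    model.AddImplication(head, ones[i] if in_head else ones[i].Not())
    model.AddImplication(head.Not(), ones[i] if in_tail else ones[i].Not())

solver = cp_model.CpSolver()
status = solver.Solve(model)
print([solver.Value(v) for v in ones])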

Double pendulum animation

I'm trying to obtain an animated double pendulum. Though I can obtain the animation for either (one) mass, I can't obtain it for both.
restart;
with(DEtools, odeadvisor);
with(plots);
with(plottools);
Sys := [2*(diff(T1(t), t, t)) + cos(T1(t)-T2(t))*(diff(T2(t), t, t))
        + sin(T1(t)-T2(t))*(diff(T2(t), t))^2 + 19.6*sin(T1(t)) = 0,
        diff(T2(t), t, t) + cos(T1(t)-T2(t))*(diff(T1(t), t, t))
        - sin(T1(t)-T2(t))*(diff(T1(t), t)) + 9.8*sin(T2(t)) = 0,
        T1(0) = 1, (D(T1))(0) = 0, T2(0) = 1, (D(T2))(0) = 1];
sol := dsolve(Sys, type = numeric, range = 0 .. 20, output = listprocedure);
odeplot(sol, [T1(t), T2(t)], 0 .. 20, refine = 1);
TT1, TT2 := op(subs(sol, [T1(t), T2(t)]));
f := proc (t) options operator, arrow; pointplot([cos(TT1(t)), sin(TT1(t))], color = blue, symbol = solidcircle, symbolsize = 25) end proc;
p := proc (t) options operator, arrow; pointplot([cos(TT2(t)), sin(TT2(t))], color = red, symbol = solidcircle, symbolsize = 25) end proc;
Any help would be appreciated.
You have provided no explanation of how your equations are intended to model a physical system, which is not helpful.
So I have made some guesses about your intentions and your model. Please don't blame me if my guesses are not on the mark.
restart;
with(plots):
Sys := [2*(diff(T1(t), t, t))+cos(T1(t)-T2(t))*(diff(T2(t), t, t))
+sin(T1(t)-T2(t))*(diff(T2(t), t))^2+19.6*sin(T1(t)) = 0,
diff(T2(t), t, t)+cos(T1(t)-T2(t))*(diff(T1(t), t, t))
-sin(T1(t)-T2(t))*(diff(T1(t), t))+9.8*sin(T2(t)) = 0,
T1(0) = 1, (D(T1))(0) = 0, T2(0) = 1, (D(T2))(0) = 1]:
sol := dsolve(Sys, numeric, range = 0 .. 20, output = listprocedure):
TT1, TT2 := op(subs(sol, [T1(t), T2(t)])):
fp := t -> plots:-display(
pointplot([sin(TT1(t))+sin(TT2(t)), -cos(TT1(t))-cos(TT2(t))],
color = red, symbol = solidcircle, symbolsize = 25),
pointplot([sin(TT1(t)), -cos(TT1(t))],
color = blue, symbol = solidcircle, symbolsize = 25),
plottools:-line([0,0],[sin(TT1(t)), -cos(TT1(t))]),
plottools:-line([sin(TT1(t)), -cos(TT1(t))],
[sin(TT1(t))+sin(TT2(t)), -cos(TT1(t))-cos(TT2(t))]),
scaling=constrained
):
animate(fp, [t], t=0..10, frames=200);
I don't know whether this kind of stacked view is what you're after, as a representation of the position of "both" masses. It's not really clear what you mean by that.
But perhaps the key thing is that, if the two-element lists you are using within your pointplot calls represent (displacement) vectors, then you can get the stacked/cumulative effect on the second mass by adding those two vectors elementwise. That's how the red point gets its position in my animation. Hopefully this will allow you to get the cumulative effect with both masses, in your own choice of representation.
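In case it helps to see the geometry outside Maple, here is the same cumulative-vector idea sketched in Python (the equations follow the question's Sys as written, including its first-power diff(T1(t), t) term):

# The stacked-vector idea: the second bob's position is the elementwise
# sum of the two displacement vectors.
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

def rhs(t, y):
    t1, w1, t2, w2 = y
    d = t1 - t2
    # Solve the 2x2 linear system for the two angular accelerations.
    A = np.array([[2.0, np.cos(d)], [np.cos(d), 1.0]])
    b = np.array([-(np.sin(d) * w2**2 + 19.6 * np.sin(t1)),
                  -(-np.sin(d) * w1 + 9.8 * np.sin(t2))])
    a1, a2 = np.linalg.solve(A, b)
    return [w1, a1, w2, a2]

sol = solve_ivp(rhs, (0, 10), [1.0, 0.0, 1.0, 1.0], dense_output=True)
ts = np.linspace(0, 10, 200)
t1, _, t2, _ = sol.sol(ts)
x1, y1 = np.sin(t1), -np.cos(t1)
x2, y2 = x1 + np.sin(t2), y1 - np.cos(t2)   # cumulative (stacked) vectors

fig, ax = plt.subplots()
ax.set_xlim(-2.2, 2.2); ax.set_ylim(-2.2, 2.2); ax.set_aspect("equal")
rods, = ax.plot([], [], "o-", lw=2)

def frame(i):
    rods.set_data([0, x1[i], x2[i]], [0, y1[i], y2[i]])
    return rods,

anim = FuncAnimation(fig, frame, frames=len(ts), interval=50, blit=True)
plt.show()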

What is the purpose of the add_loss function in Keras?

I recently stumbled across variational autoencoders and tried to make them work on MNIST using Keras, following a tutorial I found on GitHub.
My question concerns the following lines of code:
# Build model
vae = Model(x, x_decoded_mean)
# Calculate custom loss
xent_loss = original_dim * metrics.binary_crossentropy(x, x_decoded_mean)
kl_loss = - 0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
vae_loss = K.mean(xent_loss + kl_loss)
# Compile
vae.add_loss(vae_loss)
vae.compile(optimizer='rmsprop')
Why is add_loss used instead of specifying the loss as a compile option? Something like vae.compile(optimizer='rmsprop', loss=vae_loss) does not seem to work, and throws the following error:
ValueError: The model cannot be compiled because it has no loss to optimize.
What is the difference between this function and a custom loss function, that I can add as an argument for Model.fit()?
Thanks in advance!
P.S.: I know there are several issues concerning this on github, but most of them were open and uncommented. If this has been resolved already, please share the link!
Edit 1
I removed the line which adds the loss to the model and used the loss argument of the compile function. It looks like this now:
# Build model
vae = Model(x, x_decoded_mean)
# Calculate custom loss
xent_loss = original_dim * metrics.binary_crossentropy(x, x_decoded_mean)
kl_loss = - 0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
vae_loss = K.mean(xent_loss + kl_loss)
# Compile
vae.compile(optimizer='rmsprop', loss=vae_loss)
This throws a TypeError:
TypeError: Using a 'tf.Tensor' as a Python 'bool' is not allowed. Use 'if t is not None:' instead of 'if t:' to test if a tensor is defined, and use TensorFlow ops such as tf.cond to execute subgraphs conditioned on the value of a tensor.
Edit 2
Thanks to @MarioZ's efforts, I was able to figure out a workaround for this.
# Build model
vae = Model(x, x_decoded_mean)
# Calculate custom loss in separate function
def vae_loss(x, x_decoded_mean):
    xent_loss = original_dim * metrics.binary_crossentropy(x, x_decoded_mean)
    kl_loss = - 0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    return K.mean(xent_loss + kl_loss)

# Compile
vae.compile(optimizer='rmsprop', loss=vae_loss)
...
vae.fit(x_train,
        x_train,  # <-- did not need this previously
        shuffle=True,
        epochs=epochs,
        batch_size=batch_size,
        validation_data=(x_test, x_test))  # <-- worked with (x_test, None) before
For some strange reason, I had to explicitly specify y and y_test while fitting the model. Originally, I didn't need to do this. The produced samples seem reasonable to me.
Although I could resolve this, I still don't know what the differences and disadvantages of these two methods are (other than needing a different syntax). Can someone give me more insight?
I'll try to answer the original question of why model.add_loss() is being used instead of specifying a custom loss function to model.compile(loss=...).
All loss functions in Keras take exactly two parameters, y_true and y_pred. Have a look at the definitions of the various standard loss functions available in Keras; they all have these two parameters. They are the 'targets' (the Y variable in many textbooks) and the actual output of the model. Most standard loss functions can be written as an expression of these two tensors, but some more complex losses cannot. Your VAE example is such a case, because the loss function also depends on additional tensors, namely z_log_var and z_mean, which are not available to a standard loss function. Using model.add_loss() has no such restriction and allows you to write much more complex losses that depend on many other tensors, but it has the inconvenience of being more dependent on the model, whereas the standard loss functions work with just any model.
(Note: The code proposed in other answers here is somewhat cheating, in as much as it just uses global variables to sneak in the additional required dependencies. This makes the loss function not a true function in the mathematical sense. I consider this much less clean code, and I expect it to be more error-prone.)
JIH's answer is right of course but maybe it is useful to add:
model.add_loss() has no restrictions, but it also removes the comfort of, for example, using targets in model.fit().
If you have a loss that depends on additional parameters of the model, on other models, or on external variables, you can still use a Keras-style encapsulated loss function by writing a wrapper function to which you pass all the additional parameters:
def loss_carrier(extra_param1, extra_param2):
    def loss(y_true, y_pred):
        # x = complicated math involving extra_param1, extra_param2, y_true, y_pred
        # remember to use tensor ops, e.g. keras.backend.sum, keras.backend.square, keras.backend.mean
        # also remember that if extra_param1, extra_param2 are variable tensors instead of simple floats,
        # you need to have them defined as inputs=(main, extra_param1, extra_param2) in your keras.Model
        # instantiation, and have them defined as keras.Input or tf.placeholder with the right shape.
        return x
    return loss

model.compile(optimizer='adam', loss=loss_carrier(extra_param1, extra_param2))
The trick is the last line, where you return a function: Keras expects loss functions to take just the two parameters y_true and y_pred, so compile must receive the result of calling loss_carrier, not loss_carrier itself.
It possibly looks more complicated than the model.add_loss version, but the loss stays modular.
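For example (the 0.3 and 0.7 here are placeholder coefficients, not values from the question):

# compile receives the inner two-argument loss, with the extra
# parameters already baked into the closure.
model.compile(optimizer='adam', loss=loss_carrier(0.3, 0.7))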
I was also wondering about the same question, and about some related things such as how to add a loss function within the intermediate layers. Here I'm sharing some of what I have observed; I hope it helps others. It's true that standard Keras loss functions only take two arguments, y_true and y_pred. But in practice there can be cases where we need some external parameter or coefficient while computing with these two values. This can be needed at the last layer, as usual, or somewhere in the middle of the model's layers.
model.add_loss()
The accepted answer correctly describes the model.add_loss() function, which can depend on the layer inputs (tensors). According to the official doc, when writing the call method of a custom layer or a subclassed model, we may want to compute scalar quantities that we want to minimize during training (e.g. regularization losses). We can use the add_loss() layer method to keep track of such loss terms. For instance, activity regularization losses depend on the inputs passed when calling a layer. Here's an example of a layer that adds a sparsity regularization loss based on the L2 norm of the inputs:
import tensorflow as tf
from tensorflow.keras.layers import Layer

class MyActivityRegularizer(Layer):
    """Layer that creates an activity sparsity regularization loss."""

    def __init__(self, rate=1e-2):
        super(MyActivityRegularizer, self).__init__()
        self.rate = rate

    def call(self, inputs):
        # We use `add_loss` to create a regularization loss
        # that depends on the inputs.
        self.add_loss(self.rate * tf.reduce_sum(tf.square(inputs)))
        return inputs
Loss values added via add_loss can be retrieved in the .losses list property of any Layer or Model (they are recursively retrieved from every underlying layer):
from tensorflow.keras import layers

class SparseMLP(Layer):
    """Stack of Linear layers with a sparsity regularization loss."""

    def __init__(self, output_dim):
        super(SparseMLP, self).__init__()
        self.dense_1 = layers.Dense(32, activation=tf.nn.relu)
        self.regularization = MyActivityRegularizer(1e-2)
        self.dense_2 = layers.Dense(output_dim)

    def call(self, inputs):
        x = self.dense_1(inputs)
        x = self.regularization(x)
        return self.dense_2(x)

mlp = SparseMLP(1)
y = mlp(tf.ones((10, 10)))
print(mlp.losses)  # List containing one float32 scalar
Also note, when using model.fit(), such loss terms are handled automatically. When writing a custom training loop, we should retrieve these terms by hand from model.losses, like this:
loss_fn = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()

# Iterate over the batches of a dataset.
for x, y in dataset:
    with tf.GradientTape() as tape:
        # Forward pass.
        logits = model(x)
        # Loss value for this batch.
        loss_value = loss_fn(y, logits)
        # Add extra loss terms to the loss value.
        loss_value += sum(model.losses)  # <------------- HERE ---------
    # Update the weights of the model to minimize the loss value.
    gradients = tape.gradient(loss_value, model.trainable_weights)
    optimizer.apply_gradients(zip(gradients, model.trainable_weights))
Custom losses
With model.add_loss() (AFAIK), we can add a loss somewhere in the middle of the network, and there we are no longer bound to only two parameters, i.e. y_true and y_pred. But what if we also want to pass an external parameter or coefficient to the last-layer loss function of the network? Nric's answer is correct. But it can also be implemented by subclassing the tf.keras.losses.Loss class and implementing the following two methods:
__init__(self): accept parameters to pass during the call of your loss function
call(self, y_true, y_pred): use the targets (y_true) and the model predictions (y_pred) to compute the model's loss
Here is an example of a custom MSE implemented by subclassing the tf.keras.losses.Loss class. Here, too, we are no longer bound to only two parameters, i.e. y_true and y_pred.
class CustomMSE(keras.losses.Loss):
    def __init__(self, regularization_factor=0.1, name="custom_mse"):
        super().__init__(name=name)
        self.regularization_factor = regularization_factor

    def call(self, y_true, y_pred):
        mse = tf.math.reduce_mean(tf.square(y_true - y_pred))
        reg = tf.math.reduce_mean(tf.square(0.5 - y_pred))
        return mse + reg * self.regularization_factor

model.compile(optimizer=..., loss=CustomMSE())
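A quick usage sketch (model, x_train, and y_train are placeholders for whatever model and data you have):

# The extra coefficient enters through the constructor;
# call() itself still only ever sees y_true and y_pred.
model.compile(optimizer='adam', loss=CustomMSE(regularization_factor=0.05))
model.fit(x_train, y_train, epochs=10)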
Try this:
import pandas as pd
import numpy as np
import pickle
import matplotlib.pyplot as plt
from scipy import stats
import tensorflow as tf
import seaborn as sns
from pylab import rcParams
from sklearn.model_selection import train_test_split
from keras.models import Model, load_model, Sequential
from keras.layers import Input, Lambda, Dense, Dropout, Layer, Bidirectional, Embedding, Lambda, LSTM, RepeatVector, TimeDistributed, BatchNormalization, Activation, Merge
from keras.callbacks import ModelCheckpoint, TensorBoard
from keras import regularizers
from keras import backend as K
from keras import metrics
from scipy.stats import norm
from keras.utils import to_categorical
from keras import initializers
bias = bias_initializer='zeros'
from keras import objectives
np.random.seed(22)
data1 = np.array([0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0], dtype='int32')
data2 = np.array([1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0,
1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0], dtype='int32')
data3 = np.array([0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0,
1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0], dtype='int32')
#train = np.zeros(shape=(992,54))
#test = np.zeros(shape=(921,54))
train = np.zeros(shape=(300,54))
test = np.zeros(shape=(300,54))
for n, i in enumerate(train):
    if n <= 100:
        train[n] = data1
    elif n > 100 and n <= 200:
        train[n] = data2
    elif n > 200:
        train[n] = data3
for n, i in enumerate(test):
    if n <= 100:
        test[n] = data1
    elif n > 100 and n <= 200:
        test[n] = data2
    elif n > 200:
        test[n] = data3
batch_size = 5
original_dim = train.shape[1]
intermediate_dim45 = 45
intermediate_dim35 = 35
intermediate_dim25 = 25
intermediate_dim15 = 15
intermediate_dim10 = 10
intermediate_dim5 = 5
latent_dim = 3
epochs = 50
epsilon_std = 1.0
def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latent_dim), mean=0.,
                              stddev=epsilon_std)
    return z_mean + K.exp(z_log_var / 2) * epsilon
x = Input(shape=(original_dim,), name = 'first_input_mario')
h1 = Dense(intermediate_dim45, activation='relu', name='h1')(x)
hD = Dropout(0.5)(h1)
h2 = Dense(intermediate_dim25, activation='relu', name='h2')(hD)
h3 = Dense(intermediate_dim10, activation='relu', name='h3')(h2)
h = Dense(intermediate_dim5, activation='relu', name='h')(h3) #bilo je relu
h = Dropout(0.1)(h)
z_mean = Dense(latent_dim, activation='relu')(h)
z_log_var = Dense(latent_dim, activation='relu')(h)
z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])
decoder_h = Dense(latent_dim, activation='relu')
decoder_h1 = Dense(intermediate_dim5, activation='relu')
decoder_h2 = Dense(intermediate_dim10, activation='relu')
decoder_h3 = Dense(intermediate_dim25, activation='relu')
decoder_h4 = Dense(intermediate_dim45, activation='relu')
decoder_mean = Dense(original_dim, activation='sigmoid')
h_decoded = decoder_h(z)
h_decoded1 = decoder_h1(h_decoded)
h_decoded2 = decoder_h2(h_decoded1)
h_decoded3 = decoder_h3(h_decoded2)
h_decoded4 = decoder_h4(h_decoded3)
x_decoded_mean = decoder_mean(h_decoded4)
vae = Model(x, x_decoded_mean)
def vae_loss(x, x_decoded_mean):
    xent_loss = objectives.binary_crossentropy(x, x_decoded_mean)
    kl_loss = -0.5 * K.mean(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var))
    return xent_loss + kl_loss

vae.compile(optimizer='rmsprop', loss=vae_loss)
vae.fit(train, train, batch_size=batch_size, epochs=epochs, shuffle=True,
        validation_data=(test, test))
vae = Model(x, x_decoded_mean)
encoder = Model(x, z_mean)
decoder_input = Input(shape=(latent_dim,))
_h_decoded = decoder_h (decoder_input)
_h_decoded1 = decoder_h1 (_h_decoded)
_h_decoded2 = decoder_h2 (_h_decoded1)
_h_decoded3 = decoder_h3 (_h_decoded2)
_h_decoded4 = decoder_h4 (_h_decoded3)
_x_decoded_mean = decoder_mean(_h_decoded4)
generator = Model(decoder_input, _x_decoded_mean)
generator.summary()
You need to change the compile line to
vae.compile(optimizer='rmsprop', loss=vae_loss)

scipy TransferFunction vs StateSpace

I have an LTI system which I am modeling using scipy.signal, but I get different results when using TransferFunction or StateSpace.
Besides the magnitudes of both the Bode plot and the step response being different, the StateSpace representation adds two more zeros to the LTI system that I know are not there.
I know for a fact that the two descriptions should be equivalent (at least I think I do), because I redid the math several times.
Could someone please help me explain what is happening?
The code for the transfer function:
from params import *
numeratorthetaact = [Kt*Jl/(L*Jl*Ja), Kt*(betal+betac)/(L*Jl*Ja), Kt*kc/(L*Jl*Ja)]
denominatorthetaact = [1.0,
(L*betac*(Ja+Jl))/(Jl*Ja) + R/L,
(Ja+Jl)*(L*kc+R*betac)/(Jl*Ja*L) + (Kt*Kb)/(Ja*L),
(R*kc*(Ja+Jl))/(L*Jl*Ja) + (betac*Kt*Kb)/(L*Jl*Ja),
(kc*Kt*Kb)/(L*Jl*Ja),
0.0]
tfact = TransferFunction(numeratorthetaact, denominatorthetaact)
sysact = lti(tfact.num, tfact.den)
print "Zeros: ", sysact.zeros
print "Poles: ", sysact.poles
t, swact = step(sysact, T = time)
freqrad = numpy.multiply(frequency, 2.0*numpy.pi)
wrad, magact, phaseact = sysact.bode(w=freqrad)
whz = numpy.multiply(wrad, 1.0/(numpy.pi*2.0))
p.subplot(3,1,1)
p.plot(t, swact, label="Step Galvo")
p.legend()
p.subplot(3,1,2)
p.semilogx(whz, magact, label="Freq Galvo")
p.legend()
p.grid()
p.subplot(3,1,3)
p.semilogx(whz, phaseact, label="Phase Galvo")
p.legend()
p.grid()
p.show()
The code for StateSpace
from params import *
matrixA = numpy.array([[-(betaa+betac)/Ja, -kc/Ja, betac/Ja, kc/Ja, Kb/Ja],
[1.0, 0, 0, 0, 0],
[betac/Jl, kc/Jl, -(betal+betac)/Jl, -kc/Jl, 0],
[0, 0, 1.0, 0, 0],
[-Kb/L, 0, 0, 0, -R/L]])
matrixB = numpy.array([[0],
[0],
[0],
[0],
[1.0/L]])
matrixC = numpy.array([[0, 1.0, 0, 0, 0]])
matrixD = 0.0
ssact = StateSpace(matrixA, matrixB, matrixC, matrixD)
sysact = lti(ssact.A, ssact.B, ssact.C, ssact.D)
print "Zeros: ", sysact.zeros
print "Poles: ", sysact.poles
t, swact = step(sysact, T = time)
freqrad = numpy.multiply(frequency, 2.0*numpy.pi)
wrad, magact, phaseact = sysact.bode(w=freqrad)
whz = numpy.multiply(wrad, 1.0/(numpy.pi*2.0))
p.subplot(3,1,1)
p.plot(t, swact, label="Step Galvo")
p.legend()
p.subplot(3,1,2)
p.semilogx(whz, magact, label="Freq Galvo")
p.legend()
p.grid()
p.subplot(3,1,3)
p.semilogx(whz, phaseact, label="Phase Galvo")
p.legend()
p.grid()
p.show()
The resulting plots:
StateSpace vs TransferFunction
It is also worth mentioning that I get a BadCoefficients: Badly conditioned filter coefficients (numerator): the results may be meaningless warning when running the StateSpace code.
Thank you
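One way to narrow down a mismatch like this is to let scipy itself convert the state-space model into a transfer function and compare coefficients against the hand-derived ones; a sketch (reusing matrixA, matrixB, matrixC, matrixD and the numerator/denominator lists defined above):

# If the two descriptions really are equivalent, ss2tf applied to the
# state-space matrices should reproduce the hand-derived coefficients.
# Near-zero leading numerator terms (what BadCoefficients complains about)
# may show up as spurious extra zeros.
import numpy as np
from scipy import signal

num_ss, den_ss = signal.ss2tf(matrixA, matrixB, matrixC, matrixD)
print("numerator from state space:  ", np.squeeze(num_ss))
print("numerator derived by hand:   ", numeratorthetaact)
print("denominator from state space:", den_ss)
print("denominator derived by hand: ", denominatorthetaact)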

How to code a matrix in WinBUGS?

I am trying to code the 2x2 matrix sigma with its 4 elements, but I am not sure how to code this in WinBUGS. My goal is to get the posterior p's, their means and variances, and to create an ellipse region covered by the two posterior p's. Here's my code:
model{
    # likelihood
    for(j in 1 : Nf){
        p1[j, 1:2] ~ dmnorm(gamma[1:2], T[1:2, 1:2])
        for (i in 1:2){
            logit(p[j,i]) <- p1[j,i]
            Y[j,i] ~ dbin(p[j,i], n)
        }
        X_mu[j,1] <- p[j,1] - mean(p[,1])
        X_mu[j,2] <- p[j,2] - mean(p[,2])
        v1 <- sd(p[,1])*sd(p[,1])
        v2 <- sd(p[,2])*sd(p[,2])
        v12 <- (inprod(X_mu[j,1], X_mu[j,2]))/(sd(p[,1])*sd(p[,2]))
        sigma[1,1] <- v1
        sigma[1,2] <- v12
        sigma[2,1] <- v12
        sigma[2,2] <- v2
        sigmaInv[1:2, 1:2] <- inverse(sigma[,])
        T1[j,1] <- inprod(sigmaInv[1,], X_mu[j,1])
        T1[j,2] <- inprod(sigmaInv[2,], X_mu[j,2])
        ell[j,1] <- inprod(X_mu[j,1], T1[j,1])
        ell[j,2] <- inprod(X_mu[j,2], T1[j,2])
    }
    # priors
    gamma[1:2] ~ dmnorm(mn[1:2], prec[1:2, 1:2])
    expit[1] <- exp(gamma[1])/(1+exp(gamma[1]))
    expit[2] <- exp(gamma[2])/(1+exp(gamma[2]))
    T[1:2, 1:2] ~ dwish(R[1:2, 1:2], 2)
    sigma2[1:2, 1:2] <- inverse(T[,])
    rho <- sigma2[1,2]/sqrt(sigma2[1,1]*sigma2[2,2])
}
# Data
list(Nf =20, mn=c(-0.69, -1.06), n=60,
prec = structure(.Data = c(.001, 0,
0, .001),.Dim = c(2, 2)),
R = structure(.Data = c(.001, 0,
0, .001),.Dim = c(2, 2)),
Y= structure(.Data=c(32,13,
32,12,
10,4,
28,11,
10,5,
25,10,
4,1,
16,5,
28,10,
21,7,
19,9,
18,12,
31,12,
13,3,
10,4,
18,7,
3,2,
27,5,
8,1,
8,4),.Dim = c(20, 2))
You have to specify each element in turn. You can use the inverse function (rather than solve) to invert the matrix.
model{
    sigma[1,1] <- v1
    sigma[1,2] <- v12
    sigma[2,1] <- v21
    sigma[2,2] <- v2
    sigmaInv[1:2, 1:2] <- inverse(sigma[,])
}