Error calling / running Model via OMShell & OMPython - OpenModelica

I am using a Dimensions model to hold parameters of the system that I use in many different models, pulling them in with extends instead of declaring them again in each model. The following is a simple example; in reality I have many more of them.
A simple model with the structure I have:
package Main
  model Dimensions
    final parameter Modelica.SIunits.Length x = 10;
    final parameter Modelica.SIunits.Length y = 5;
  end Dimensions;

  package Test_env
    extends Main.Dimensions;

    model Test_model
      Real z;
    equation
      z = x + y;
    end Test_model;
  end Test_env;
end Main;
If I run this example in OMEdit it works without any problem. However, if I run it in OMShell or OMPython / OMCSessionZMQ it doesn't work.
Question: am I perhaps using the extends clause incorrectly? If so, what would be an alternative way of declaring parameters once and reusing them in other models?
This is what I get in OMShell:
>> loadFile("D:/1.Modelica/Simulations/Main.mo")
true
>> getClassNames()
{Main}
>> getClassNames(Main)
{Dimensions,Test_env}
>> getClassNames(Main.Test_env)
{Test_model}
>> simulate(Main.Test_env.Test_model, startTime=0, stopTime=1, numberOfIntervals=500, tolerance=1e-4, method="dassl", outputFormat="mat"); getErrorString()
record SimulationResult
resultFile = "",
simulationOptions = "startTime = 0.0, stopTime = 1.0, numberOfIntervals = 500, tolerance = 0.0001, method = 'dassl', fileNamePrefix = 'Main.Test_env.Test_model', options = '', outputFormat = 'mat', variableFilter = '.*', cflags = '', simflags = ''",
messages = "Failed to build model: Main.Test_env.Test_model",
timeFrontend = 0.0110966,
timeBackend = 0.0,
timeSimCode = 0.0,
timeTemplates = 0.0,
timeCompile = 0.0,
timeSimulation = 0.0,
timeTotal = 0.0111225
end SimulationResult;
"[D:/1.Modelica/Simulations/Main.mo:3:5-3:45:writable] Error: Class Modelica.SIunits.Length not found in scope Main.Dimensions.
[D:/1.Modelica/Simulations/Main.mo:1:1-18:9:writable] Error: Class Test_env.Test_model not found in scope Main.
Error: Class Main.Test_env.Test_model not found in scope .
Error: Error occurred while flattening model Main.Test_env.Test_model
"
And this is from OMPython / OMCSessionZMQ:
omc.sendExpression('simulate(Main.Test_env.Test_model, stopTime=1.0)')
---------------------------------------------------------------------------
{'resultFile': '',
'simulationOptions': "startTime = 0.0, stopTime = 1.0, numberOfIntervals = 500, tolerance = 1e-006, method = 'dassl', fileNamePrefix = 'Main.Test_env.Test_model', options = '', outputFormat = 'mat', variableFilter = '.*', cflags = '', simflags = ''",
'messages': 'Failed to build model: Main.Test_env.Test_model',
'timeFrontend': 0.0018766,
'timeBackend': 0.0,
'timeSimCode': 0.0,
'timeTemplates': 0.0,
'timeCompile': 0.0,
'timeSimulation': 0.0,
'timeTotal': 0.0018919}
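With OMPython the returned dictionary only carries the short "Failed to build model" message; the detailed errors that OMShell prints can be retrieved in the same way, by asking the compiler for its error string. A minimal sketch, assuming the omc session from above:

result = omc.sendExpression('simulate(Main.Test_env.Test_model, stopTime=1.0)')
print(result['messages'])                      # 'Failed to build model: Main.Test_env.Test_model'
print(omc.sendExpression('getErrorString()'))  # the detailed frontend errors, as shown by OMShell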

To summarise the answers given via comments:
Put the extends clause inside the model that is supposed to inherit the parameters, not inside the enclosing package:
package Test_env
  model Test_model
    Real z;
    extends Main.Dimensions;
  equation
    z = x + y;
  end Test_model;
end Test_env;
If all your models need the same fixed parameters, it is still good practice to add the extends clause to every model, so everybody knows where the variables come from.
Also compare with Modelica.Constants to see how the Modelica Standard Library defines constants. I used this pattern to create the complete example:
package Main
  model Dimensions
    final constant Modelica.SIunits.Length x = 10;
    final constant Modelica.SIunits.Length y = 5;
  end Dimensions;

  package Test_env
    import Dim = Main.Dimensions;

    model Test_model
      Real z;
    equation
      z = Dim.x + Dim.y;
    end Test_model;
  end Test_env;
end Main;
And if you use something from a different package (here Modelica.SIunits.Length), that package has to be loaded. That is what your error message says:
Error: Class Modelica.SIunits.Length not found in scope Main.Dimensions.
In OMEdit the Modelica Standard Library is loaded automatically on startup, but in OMShell (and OMPython) you have to load it yourself with loadModel(Modelica) before loading your file with loadFile(...):
>> loadModel(Modelica)
true
>> loadFile("Path/To/Main.mo")
true
>> simulate(Main.Test_env.Test_model, startTime=0, stopTime=1, numberOfIntervals=500, tolerance=1e-4, method="dassl", outputFormat="mat")
record SimulationResult
resultFile = "C:/Users/USERNAME/AppData/Local/Temp/OpenModelica/Main.Test_env.Test_model_res.mat",
simulationOptions = "startTime = 0.0, stopTime = 1.0, numberOfIntervals = 500, tolerance = 0.0001, method = 'dassl', fileNamePrefix = 'Main.Test_env.Test_model', options = '', outputFormat = 'mat', variableFilter = '.*', cflags = '', simflags = ''",
messages = "LOG_SUCCESS | info | The initialization finished successfully without homotopy method.
LOG_SUCCESS | info | The simulation finished successfully.
",
timeFrontend = 0.3193980510936645,
timeBackend = 0.00467019998960375,
timeSimCode = 0.001078686094233897,
timeTemplates = 0.02625684206983937,
timeCompile = 9.15578961474681,
timeSimulation = 0.2440117147112652,
timeTotal = 9.751522705140404
end SimulationResult;
>>
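The same fix applies to OMPython / OMCSessionZMQ; a minimal sketch (the file path is the one from the question):

from OMPython import OMCSessionZMQ

omc = OMCSessionZMQ()
# load the Modelica Standard Library first, then the user package
print(omc.sendExpression('loadModel(Modelica)'))                            # True
print(omc.sendExpression('loadFile("D:/1.Modelica/Simulations/Main.mo")'))  # True
result = omc.sendExpression('simulate(Main.Test_env.Test_model, stopTime=1.0)')
print(result['resultFile'])                    # now points to Main.Test_env.Test_model_res.mat
print(omc.sendExpression('getErrorString()'))  # should be empty on success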

Related

ValueError: Target size (torch.Size([128])) must be the same as input size (torch.Size([112]))

I have a training function, inside which there are two vectors:
d_labels_a = torch.zeros(128)
d_labels_b = torch.ones(128)
Then I have these features:
# Compute output
features_a = nets[0](input_a)
features_b = nets[1](input_b)
features_c = nets[2](inputs)
And then a domain classifier (nets[4]) makes predictions:
d_pred_a = torch.squeeze(nets[4](features_a))
d_pred_b = torch.squeeze(nets[4](features_b))
d_pred_a = d_pred_a.float()
d_pred_b = d_pred_b.float()
print(d_pred_a.shape)
The error is raised in the loss function:
pred_a = torch.squeeze(nets[3](features_a))
pred_b = torch.squeeze(nets[3](features_b))
pred_c = torch.squeeze(nets[3](features_c))
loss = criterion(pred_a, labels_a) + criterion(pred_b, labels_b) + criterion(pred_c, labels) + d_criterion(d_pred_a, d_labels_a) + d_criterion(d_pred_b, d_labels_b)
The problem is that d_pred_a/b has a different size from d_labels_a/b, but only after a certain point. Indeed, when I print the shape of d_pred_a/b it is torch.Size([128]), but then it changes to torch.Size([112]) on its own.
It comes from here:
# Compute output
features_a = nets[0](input_a)
features_b = nets[1](input_b)
features_c = nets[2](inputs)
because if I print the shape of features_a it is torch.Size([128, 2048]) at first, but then it changes to torch.Size([112, 2048]).
nets[0] is a VGG, like this:
class VGG16(nn.Module):
    def __init__(self, input_size, batch_norm=False):
        super(VGG16, self).__init__()
        self.in_channels, self.in_width, self.in_height = input_size
        self.block_1 = VGGBlock(self.in_channels, 64, batch_norm=batch_norm)
        self.block_2 = VGGBlock(64, 128, batch_norm=batch_norm)
        self.block_3 = VGGBlock(128, 256, batch_norm=batch_norm)
        self.block_4 = VGGBlock(256, 512, batch_norm=batch_norm)

    @property
    def input_size(self):
        return self.in_channels, self.in_width, self.in_height

    def forward(self, x):
        x = self.block_1(x)
        x = self.block_2(x)
        x = self.block_3(x)
        x = self.block_4(x)
        # x = self.avgpool(x)
        x = torch.flatten(x, 1)
        return x
I solved it. The problem was the last (incomplete) batch: I used drop_last=True in the DataLoader and it worked.
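For reference, a minimal sketch of the fix (dataset size and tensor shapes are illustrative, not from the post): with 1000 samples and a batch size of 128, the final batch would contain 1000 % 128 = 104 samples, which is what breaks fixed-size label tensors like torch.zeros(128); drop_last=True simply discards that last batch.

import torch
from torch.utils.data import DataLoader, TensorDataset

# hypothetical dataset: 1000 samples of shape (3, 32, 32)
dataset = TensorDataset(torch.randn(1000, 3, 32, 32), torch.zeros(1000))
loader = DataLoader(dataset, batch_size=128, shuffle=True, drop_last=True)

for inputs, targets in loader:
    # every remaining batch now has exactly 128 samples
    assert inputs.shape[0] == 128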

Strange problem with the Fluid library in OpenModelica

I wrote a simple model to experiment with the PrescribedPump machine from the Fluid package of the standard library. I'm using OpenModelica 1.13.2.
I would like to pump some water from one tank to another, using a PrescribedPump driven by a constant value of 10000.
Here is the code:
model PompaPilotata
  package Medium = Modelica.Media.Water.ConstantPropertyLiquidWater;
  inner Modelica.Fluid.System system;
  Modelica.Fluid.Vessels.OpenTank bacinella1(redeclare package Medium = Medium,
    T_ambient = system.T_ambient, T_start = system.T_ambient, crossArea = 4, energyDynamics = Modelica.Fluid.Types.Dynamics.FixedInitial,
    height = 10, level_start = 2, massDynamics = Modelica.Fluid.Types.Dynamics.FixedInitial, nPorts = 1, p_ambient = system.p_ambient,
    use_HeatTransfer = false, use_T_start = true, use_portsData = false);
  Modelica.Fluid.Vessels.OpenTank bacinella2(redeclare package Medium = Medium,
    T_ambient = system.T_ambient, T_start = system.T_ambient, crossArea = 4, energyDynamics = Modelica.Fluid.Types.Dynamics.FixedInitial,
    height = 10, level_start = 2, massDynamics = Modelica.Fluid.Types.Dynamics.FixedInitial, nPorts = 1, p_ambient = system.p_ambient,
    use_HeatTransfer = false, use_T_start = true, use_portsData = false);
  Modelica.Fluid.Machines.PrescribedPump Pompa(redeclare package Medium = Medium,
    medium(h(stateSelect = StateSelect.default), p(stateSelect = StateSelect.default)), N_nominal = 100, V = 0.1, allowFlowReversal = false,
    checkValve = true, energyDynamics = Modelica.Fluid.Types.Dynamics.DynamicFreeInitial, m_flow_start = 0.0000001,
    massDynamics = Modelica.Fluid.Types.Dynamics.DynamicFreeInitial, nParallel = 1, use_HeatTransfer = false, use_N_in = true);
  Modelica.Blocks.Sources.Constant Costante(k = 10000);
  Modelica.Fluid.Pipes.StaticPipe tubo1(redeclare package Medium = Medium, allowFlowReversal = true,
    diameter = 0.1, height_ab = 0, isCircular = true, length = 5, nParallel = 1);
  Modelica.Fluid.Pipes.StaticPipe tubo2(redeclare package Medium = Medium, allowFlowReversal = true,
    diameter = 0.1, height_ab = 0, isCircular = true, length = 5, nParallel = 1);
equation
  connect(tubo2.port_b, bacinella2.ports[1]);
  connect(Pompa.port_b, tubo2.port_a);
  connect(tubo1.port_b, Pompa.port_a);
  connect(bacinella1.ports[1], tubo1.port_a);
  connect(Costante.y, Pompa.N_in);
end PompaPilotata;
I get this error message from the compiler:
In file included from C:/OpenModelica1.13.264bit/include/omc/c/util/modelica_string.h:38:0,
from C:/OpenModelica1.13.264bit/include/omc/c/openmodelica_func.h:52,
from PompaPilotata_model.h:6,
from PompaPilotata_06inz.c:2:
PompaPilotata_06inz.c: In function 'PompaPilotata_eqFunction_237':
C:/OpenModelica1.13.264bit/include/omc/c/meta/meta_modelica_data.h:231:21: error: incompatible type for argument 2 of 'omc_Modelica_Fluid_Machines_PrescribedPump$Pompa_flowCharacteristic'
#define mmc_mk_real mmc_mk_rcon
^
C:/OpenModelica1.13.264bit/include/omc/c/meta/meta_modelica_data.h:225:45: note: in definition of macro 'mmc_unbox_real'
#define mmc_unbox_real(X) mmc_prim_get_real(X)
^
PompaPilotata_06inz.c:3005:139: note: in expansion of macro 'mmc_mk_real'
data->simulationInfo->realParameter[7] = mmc_unbox_real(omc_Modelica_Fluid_Machines_PrescribedPump$Pompa_flowCharacteristic(threadData, mmc_mk_real(data->simulationInfo->realParameter[5])));
^
In file included from PompaPilotata_model.h:23:0,
from PompaPilotata_06inz.c:2:
PompaPilotata_functions.h:223:15: note: expected 'modelica_real {aka double}' but argument is of type 'void *'
modelica_real omc_Modelica_Fluid_Machines_PrescribedPump$Pompa_flowCharacteristic(threadData_t threadData, modelica_real _V_flow);
^
: recipe for target 'PompaPilotata_06inz.o' failed
\tools\msys\mingw64\bin\mingw32-make: *** [PompaPilotata_06inz.o] Error 1
\tools\msys\mingw64\bin\mingw32-make: *** Waiting for unfinished jobs....
Compilation process failed. Exited with code 2.
Can someone explain to me what this means and how to fix it?
Thanks
The model does not work in Dymola either, but Dymola gives the following hint:
Function Pompa.flowCharacteristic_Unique7 is neither external nor has an algorithm. It should have been redeclared.
Therefore redeclaring the function for the flowCharacteristic should help. Copying this part from Modelica.Fluid.Examples.PumpingSystem and reducing the values for V_flow_nominal by a factor of 1000 (which is a wild guess) gives:
Modelica.Fluid.Machines.PrescribedPump Pompa(redeclare package Medium = Medium,
  redeclare function flowCharacteristic = Modelica.Fluid.Machines.BaseClasses.PumpCharacteristics.quadraticFlow(V_flow_nominal = {0.001, 0.0025, 0.005}, head_nominal = {100, 60, 0}),
  medium(h(stateSelect = StateSelect.default), p(stateSelect = StateSelect.default)), N_nominal = 100, V = 0.1, allowFlowReversal = false,
  checkValve = true, energyDynamics = Modelica.Fluid.Types.Dynamics.DynamicFreeInitial, m_flow_start = 0.0000001,
  massDynamics = Modelica.Fluid.Types.Dynamics.DynamicFreeInitial, nParallel = 1, use_HeatTransfer = false, use_N_in = true);
The second line is the one that was actually added.

How can I measure Precision and Recall on Logistic Regression with PySpark?

I am using a Logistic Regression model on PySpark through Databricks, but I am not able to get my precision and recall. Everything works fine and I am able to get my ROC, but there is no attribute or library for precision and recall.
lrModel = LogisticRegression()
predictions = bestModel.transform(testData)
# Instantiate metrics object
results = predictions.select(['probability', 'label'])
results_collect = results.collect()
results_list = [(float(i[0][0]), 1.0-float(i[1])) for i in results_collect]
scoreAndLabels = sc.parallelize(results_list)
metrics = MulticlassMetrics(scoreAndLabels)
# Overall statistics
precision = metrics.precision()
recall = metrics.recall()
f1Score = metrics.fMeasure()
print("Summary Stats")
print("Precision = %s" % precision)
print("Recall = %s" % recall)
print("F1 Score = %s" % f1Score)
>>>Summary Stats
>>>Precision = 0.0
>>>Recall = 0.0
>>>F1 Score = 0.0
I was able to create my own function to do so. It returns everything and more. I am using MulticlassMetrics() from the mllib package. Since it is multiclass, it calculates metrics for each label, so you have to specify which label you want to retrieve.
### Model Evaluator User Defined Function
from pyspark.sql import functions as F
from pyspark.sql.types import DoubleType
from pyspark.mllib.evaluation import MulticlassMetrics

def udfModelEvaluator(dfPredictions, labelColumn='label'):
    # cast prediction and label to DoubleType, as expected by MulticlassMetrics
    colSelect = dfPredictions.select(
        [F.col('prediction').cast(DoubleType())
        ,F.col(labelColumn).cast(DoubleType()).alias('label')])
    metrics = MulticlassMetrics(colSelect.rdd)
    mAccuracy = metrics.accuracy
    mPrecision = metrics.precision(1)  # precision for label 1
    mRecall = metrics.recall(1)        # recall for label 1
    mF1 = metrics.fMeasure(1.0, 1.0)   # F1 for label 1 with beta = 1.0
    mMatrix = metrics.confusionMatrix().toArray().astype(int)
    mTP = metrics.confusionMatrix().toArray()[1][1]
    mTN = metrics.confusionMatrix().toArray()[0][0]
    mFP = metrics.confusionMatrix().toArray()[0][1]
    mFN = metrics.confusionMatrix().toArray()[1][0]
    mResults = [mAccuracy, mPrecision, mRecall, mF1, mMatrix, mTP, mTN, mFP, mFN,
                "Return [[0]=Accuracy, [1]=Precision, [2]=Recall, [3]=F1, [4]=ConfusionMatrix, [5]=TP, [6]=TN, [7]=FP, [8]=FN]"]
    return mResults
To call the function:
metricsList = udfModelEvaluator(predictionsData, "label")
metricsList

Keras shape error when checking input

I am trying to train a simple MLP model that maps input questions (using a 300D word embedding) and image features extracted using a pretrained VGG16 model to a feature vector of fixed length. However, I can't figure out how to fix the error mentioned below. Here is the code I'm trying to run at the moment:
parser = argparse.ArgumentParser()
parser.add_argument('-num_hidden_units', type=int, default=1024)
parser.add_argument('-num_hidden_layers', type=int, default=3)
parser.add_argument('-dropout', type=float, default=0.5)
parser.add_argument('-activation', type=str, default='tanh')
parser.add_argument('-language_only', type=bool, default= False)
parser.add_argument('-num_epochs', type=int, default=10) #default=100
parser.add_argument('-model_save_interval', type=int, default=10)
parser.add_argument('-batch_size', type=int, default=128)
args = parser.parse_args()
questions_train = open('data/qa/preprocess/questions_train2014.txt', 'r').read().splitlines()
answers_train = open('data/qa/preprocess/answers_train2014_modal.txt', 'r').read().splitlines()
images_train = open('data/qa/preprocess/images_train2014.txt', 'r').read().splitlines()
vgg_model_path = 'data/coco/vgg_feats.mat'
maxAnswers = 1000
questions_train, answers_train, images_train = selectFrequentAnswers(questions_train,answers_train,images_train, maxAnswers)
#encode the remaining answers
labelencoder = preprocessing.LabelEncoder()
labelencoder.fit(answers_train)
nb_classes = len(list(labelencoder.classes_))
joblib.dump(labelencoder,'models/labelencoder.pkl')
features_struct = scipy.io.loadmat(vgg_model_path)
VGGfeatures = features_struct['feats']
print ('loaded vgg features')
image_ids = open('data/coco/coco_vgg_IDMap.txt').read().splitlines()
id_map = {}
for ids in image_ids:
    id_split = ids.split()
    id_map[id_split[0]] = int(id_split[1])
nlp = English()
print ('loaded word2vec features...')
img_dim = 4096
word_vec_dim = 300
model = Sequential()
if args.language_only:
    model.add(Dense(args.num_hidden_units, input_dim=word_vec_dim, init='uniform'))
else:
    model.add(Dense(args.num_hidden_units, input_dim=img_dim+word_vec_dim, init='uniform'))
model.add(Activation(args.activation))
if args.dropout>0:
    model.add(Dropout(args.dropout))
for i in range(args.num_hidden_layers-1):
    model.add(Dense(args.num_hidden_units, init='uniform'))
    model.add(Activation(args.activation))
    if args.dropout>0:
        model.add(Dropout(args.dropout))
model.add(Dense(nb_classes, init='uniform'))
model.add(Activation('softmax'))
json_string = model.to_json()
if args.language_only:
    model_file_name = 'models/mlp_language_only_num_hidden_units_' + str(args.num_hidden_units) + '_num_hidden_layers_' + str(args.num_hidden_layers)
else:
    model_file_name = 'models/mlp_num_hidden_units_' + str(args.num_hidden_units) + '_num_hidden_layers_' + str(args.num_hidden_layers)
open(model_file_name + '.json', 'w').write(json_string)
print ('Compiling model...')
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
print ('Compilation done...')
print ('Training started...')
for k in range(args.num_epochs):
    # shuffle the data points before going through them
    index_shuf = list(range(len(questions_train)))
    shuffle(index_shuf)
    questions_train = [questions_train[i] for i in index_shuf]
    answers_train = [answers_train[i] for i in index_shuf]
    images_train = [images_train[i] for i in index_shuf]
    progbar = generic_utils.Progbar(len(questions_train))
    for qu_batch, an_batch, im_batch in zip(grouper(questions_train, args.batch_size, fillvalue=questions_train[-1]),
                                            grouper(answers_train, args.batch_size, fillvalue=answers_train[-1]),
                                            grouper(images_train, args.batch_size, fillvalue=images_train[-1])):
        X_q_batch = get_questions_matrix_sum(qu_batch, nlp)
        if args.language_only:
            X_batch = X_q_batch
        else:
            X_i_batch = get_images_matrix(im_batch, id_map, VGGfeatures)
            X_batch = np.hstack((X_q_batch, X_i_batch))
        Y_batch = get_answers_matrix(an_batch, labelencoder)
        loss = model.train_on_batch(X_batch, Y_batch)
        progbar.add(args.batch_size, values=[("train loss", loss)])
        # print type(loss)
    if k % args.model_save_interval == 0:
        model.save_weights(model_file_name + '_epoch_{:02d}.hdf5'.format(k))
model.save_weights(model_file_name + '_epoch_{:02d}.hdf5'.format(k))
And here is the error I get:
Keras: Error when checking input: expected dense_9_input to have shape
(4396,) but got array with shape (4096,)
I think that the mismatch lies between what you specify in the else branch for the first layer of your model and what you pass during training. For your first layer you specify:
model = Sequential()
if args.language_only:
    model.add(Dense(args.num_hidden_units, input_dim=word_vec_dim, init='uniform'))
else:
    model.add(Dense(args.num_hidden_units, input_dim=img_dim+word_vec_dim, init='uniform'))
You clearly pass input_dim = img_dim + word_vec_dim = 4096 + 300 = 4396. During training you pass:
X_q_batch = get_questions_matrix_sum(qu_batch, nlp)
if args.language_only:
    X_batch = X_q_batch
else:
    X_i_batch = get_images_matrix(im_batch, id_map, VGGfeatures)
    X_batch = np.hstack((X_q_batch, X_i_batch))
So, in the else branch, X_batch should be the horizontal stack of X_q_batch and X_i_batch, yet its number of columns apparently ends up being 4096 (only the image features) instead of the expected 4396.
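A quick check of the expected shapes, using the dimensions from the post (the arrays themselves are dummies):

import numpy as np

X_q = np.zeros((128, 300))    # question features: word_vec_dim = 300
X_i = np.zeros((128, 4096))   # image features: img_dim = 4096
X = np.hstack((X_q, X_i))
print(X.shape)                # (128, 4396), matching input_dim = img_dim + word_vec_dim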
By the way, for debugging purposes, it would be easier to give your layers a name, e.g.
x = Dense(64, activation='relu', name="dense_one")
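Expanding that into a small runnable sketch with the modern tf.keras API (layer names and sizes are illustrative, not from the post):

from tensorflow import keras
from tensorflow.keras.layers import Dense

# toy model with named layers; the names show up in model.summary()
# and in "error when checking input" messages, which makes mismatches easier to trace
model = keras.Sequential([
    Dense(64, activation='relu', input_dim=4396, name="fusion_dense"),
    Dense(10, activation='softmax', name="classifier"),
])
model.summary()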
I hope this helps.

scipy transferfunction vs state space

I have an LTI system that I am modeling using scipy.signal, but I get different results when using TransferFunction and StateSpace.
Besides the magnitudes of both the Bode plot and the step response being different, the StateSpace representation adds two more zeros to the LTI system that I know are not there.
I know for a fact that the two descriptions should be equivalent (at least I think I do), because I redid the math several times.
Could someone please help me explain what is happening?
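One way to sanity-check whether a TransferFunction and a StateSpace describe the same system is to convert between them with scipy.signal and compare poles and zeros; a minimal sketch with a toy second-order system (not the params.py values, which are not shown here):

from scipy import signal

# toy example: H(s) = (s + 3) / (s^2 + 2s + 1)
tf = signal.TransferFunction([1.0, 3.0], [1.0, 2.0, 1.0])
ss = tf.to_ss()                  # an equivalent state-space realization

print(tf.zeros, tf.poles)        # [-3.]  [-1. -1.]
print(ss.zeros, ss.poles)        # should match, up to numerical noise

tf_back = ss.to_tf()             # round trip back to a transfer function
print(tf_back.num, tf_back.den)  # the original coefficients (possibly with padded leading zeros)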
The code for the transfer function:
import numpy
import matplotlib.pyplot as p   # assuming p is matplotlib.pyplot
from scipy.signal import TransferFunction, lti, step
from params import *            # physical constants (Kt, Kb, R, L, Ja, Jl, kc, betaa, betal, betac, time, frequency)
numeratorthetaact = [Kt*Jl/(L*Jl*Ja), Kt*(betal+betac)/(L*Jl*Ja), Kt*kc/(L*Jl*Ja)]
denominatorthetaact = [1.0,
(L*betac*(Ja+Jl))/(Jl*Ja) + R/L,
(Ja+Jl)*(L*kc+R*betac)/(Jl*Ja*L) + (Kt*Kb)/(Ja*L),
(R*kc*(Ja+Jl))/(L*Jl*Ja) + (betac*Kt*Kb)/(L*Jl*Ja),
(kc*Kt*Kb)/(L*Jl*Ja),
0.0]
tfact = TransferFunction(numeratorthetaact, denominatorthetaact)
sysact = lti(tfact.num, tfact.den)
print "Zeros: ", sysact.zeros
print "Poles: ", sysact.poles
t, swact = step(sysact, T = time)
freqrad = numpy.multiply(frequency, 2.0*numpy.pi)
wrad, magact, phaseact = sysact.bode(w=freqrad)
whz = numpy.multiply(wrad, 1.0/(numpy.pi*2.0))
p.subplot(3,1,1)
p.plot(t, swact, label="Step Galvo")
p.legend()
p.subplot(3,1,2)
p.semilogx(whz, magact, label="Freq Galvo")
p.legend()
p.grid()
p.subplot(3,1,3)
p.semilogx(whz, phaseact, label="Phase Galvo")
p.legend()
p.grid()
p.show()
The code for the state-space representation:
import numpy
import matplotlib.pyplot as p   # assuming p is matplotlib.pyplot
from scipy.signal import StateSpace, lti, step
from params import *            # physical constants, as above
matrixA = numpy.array([[-(betaa+betac)/Ja, -kc/Ja, betac/Ja, kc/Ja, Kb/Ja],
[1.0, 0, 0, 0, 0],
[betac/Jl, kc/Jl, -(betal+betac)/Jl, -kc/Jl, 0],
[0, 0, 1.0, 0, 0],
[-Kb/L, 0, 0, 0, -R/L]])
matrixB = numpy.array([[0],
[0],
[0],
[0],
[1.0/L]])
matrixC = numpy.array([[0, 1.0, 0, 0, 0]])
matrixD = 0.0
ssact = StateSpace(matrixA, matrixB, matrixC, matrixD)
sysact = lti(ssact.A, ssact.B, ssact.C, ssact.D)
print "Zeros: ", sysact.zeros
print "Poles: ", sysact.poles
t, swact = step(sysact, T = time)
freqrad = numpy.multiply(frequency, 2.0*numpy.pi)
wrad, magact, phaseact = sysact.bode(w=freqrad)
whz = numpy.multiply(wrad, 1.0/(numpy.pi*2.0))
p.subplot(3,1,1)
p.plot(t, swact, label="Step Galvo")
p.legend()
p.subplot(3,1,2)
p.semilogx(whz, magact, label="Freq Galvo")
p.legend()
p.grid()
p.subplot(3,1,3)
p.semilogx(whz, phaseact, label="Phase Galvo")
p.legend()
p.grid()
p.show()
The resulting plots:
StateSpace vs TransferFunction
It is also worth mentioning that I get a BadCoefficients: Badly conditioned filter coefficients (numerator): the results may be meaningless warning when running the StateSpace code.
Thank you