Declaring Heat Capacity Cp in Dymola - modelica

I am having problems retrieving the specific heat capacity of my working fluid, which in this case is hydrogen: I cannot obtain it from either the pressure or the temperature. If someone could help me, thanks in advance.
Here is my code
import Modelica.SIunits;
package Hyd
  extends ExternalMedia.Media.CoolPropMedium(
    mediumName="hydrogen",
    substanceNames={"hydrogen"},
    inputChoice=ExternalMedia.Common.InputChoice.pT);
end Hyd;
SIunits.SpecificHeatCapacity cp_in; // [J/(kg.K)]
Hyd.AbsolutePressure Pb_0;
Hyd.Temperature Tin;
Hyd.SaturationProperties sat9, sat10;
equation
  sat9 = Hyd.setSat_T(Tin);
  sat10 = Hyd.setSat_p(Pb_0);
  cp_in = Hyd.specificHeatCapacityCp(sat9); // [J/(kg.K)]
  cp_in = Hyd.specificHeatCapacityCp(sat10); // [J/(kg.K)]
The function is declared as:
function specificHeatCapacityCp_Unique8
  input ExternalMedia.Media.BaseClasses.ExternalTwoPhaseMedium.ThermodynamicState state;
  output Modelica.Media.Interfaces.Types.SpecificHeatCapacity cp := 1000.0 "Specific heat capacity at constant pressure";
end specificHeatCapacityCp_Unique8;

I'm not sure exactly what you are trying to achieve, but you are passing a SaturationProperties record to a function expecting a ThermodynamicState, which cannot work (and is reported as such by OpenModelica).
Here is a working version computing cp at the saturation pressure at 300 K:
model test_SO_68546587
  import Modelica.SIunits;
  package Hyd
    extends ExternalMedia.Media.CoolPropMedium(
      mediumName="hydrogen",
      substanceNames={"hydrogen"},
      inputChoice=ExternalMedia.Common.InputChoice.pT);
  end Hyd;
  SIunits.SpecificHeatCapacity cp_in; // [J/(kg.K)]
  Hyd.AbsolutePressure Pb_0;
  Hyd.Temperature Tin;
  Hyd.ThermodynamicState state;
equation
  state = Hyd.setState_pT(p=Pb_0, T=Tin);
  Tin = 300;
  Pb_0 = Hyd.saturationPressure(Tin);
  cp_in = Hyd.specificHeatCapacityCp(state); // 14345.2 J/(kg.K) at 300 K, 12.951 bar
end test_SO_68546587;

Related

Simulink model 'to workspace' output

I am trying to control motor torque, using a workspace variable in Simulink, and I want to output a similar variable to the workspace.
I have size(T_u) = [3, 91], whereas the output I get from the simulation has size [91, 90], and I cannot understand why this is so.
The code I am using:
load('Motor_Param.mat')
t = 1:0.1:10;
T_o = [0.05*(10-t);0.04*(10-t);0.03*(10-t)];
T_d = zeros(size(T_o));
T_e = (T_d - T_o);
C_PD = pid(100,0,10,100);
T_u = zeros(size(T_e));
for k = 1:size(T_e,1)
    T_u(k,:) = lsim(C_PD, T_e(k,:), t);
    % T_u(1,:) = -45.0450 -44.5445 -44.0439 ... -0.4986 0.0019 (91 values)
    a = sim('Motor_Control','SimulationMode','normal');
    out = a.get('T_l')
end
Link to .mat and .slx files is: https://drive.google.com/open?id=1kGeA4Cmt8mEeM3ku_C4NtXclVlHsssuw
If you set the Save format in the To Workspace block to Timeseries, the output will have the dimensions of the signal times the number of timesteps.
In your case I activated Display -> Signals & Ports -> Signal dimensions to inspect the signal dimensions in your model.
The signal that you output to the workspace has size 90. Now if I print size(out.Data) I get
ans = 138 90
where 90 is the signal dimension and 138 is the number of timesteps in your Simulink model.
You could now take the last row of the data (which has length 90) and add it to your array.
I edited your code; it now produces an output of size [21, 3]. The 21 comes from (t_final*1/sample_time + 1).
In your code, time t should start from 0.
The Motor_Control.slx model has a 0.1 s sample time: if you run the model for 9 seconds, the output has 91 samples per signal, which is why your output is sized [91, 90]. The model I downloaded from your drive link is set up for a 2 second simulation.
T_u is used as an input of the Simulink model; it is not constant, so T_u must be a timeseries.
The edited code is below:
load('Motor_Param.mat')
t = 0:0.1:10;
T_o = [0.05*(10-t);0.04*(10-t);0.03*(10-t)];
T_d = zeros(size(T_o));
T_e = (T_d - T_o);
C_PD = pid(100,0,10,100);
T_u = timeseries(zeros(size(T_e)),t);
for k = 1:size(T_e,1)
    T_u.Data(k,:) = lsim(C_PD, T_e(k,:), t);
    a = sim('Motor_Control','SimulationMode','normal');
    out = a.get('T_l')
end
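The shape bookkeeping of the lsim loop can be sketched in plain Python with scipy as an illustrative cross-check (not the Simulink model itself). pid(100,0,10,100) with Ki = 0 is the filtered PD controller C(s) = Kp + Kd*s/(Tf*s + 1), i.e. the transfer function ((Kp*Tf + Kd)*s + Kp)/(Tf*s + 1); with t = 0:0.1:10 each signal has 101 samples:

```python
import numpy as np
from scipy import signal

# Filtered PD controller equivalent to MATLAB's pid(100, 0, 10, 100)
Kp, Kd, Tf = 100.0, 10.0, 100.0
C = signal.lti([Kp * Tf + Kd, Kp], [Tf, 1.0])

t = np.arange(0.0, 10.0 + 1e-9, 0.1)  # 0:0.1:10 -> 101 samples
T_o = np.vstack([0.05 * (10 - t), 0.04 * (10 - t), 0.03 * (10 - t)])
T_e = -T_o                            # T_d is all zeros, so T_e = T_d - T_o

# One lsim call per row, exactly like the MATLAB loop; lsim returns (t, y, x)
T_u = np.vstack([signal.lsim(C, T_e[k], t)[1] for k in range(T_e.shape[0])])
print(T_u.shape)  # (3, 101): signals x timesteps
```

This makes the layout explicit: rows are the three torque signals, columns are timesteps, which is the orientation the timeseries object above preserves.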

How to reproduce a linear regression done via pseudo inverse in pytorch

I am trying to reproduce a simple linear regression x = A†b using pytorch, but I get completely different numbers.
So first I use plain numpy and do
A_pinv = np.linalg.pinv(A)
betas = A_pinv.dot(b)
print(((b - A.dot(betas))**2).mean())
print(betas)
which results in:
364.12875
[0.43196774 0.14436531 0.42414093]
Now I try to get similar enough numbers using pytorch:
# re-implement via pytorch model using built-ins
import torch as to  # the code below refers to torch as "to"
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import TensorDataset, DataLoader

# We'll create a TensorDataset, which allows access to rows from inputs and targets as tuples.
# We'll also create a DataLoader, to split the data into batches while training.
# It also provides other utilities like shuffling and sampling.
inputs = to.from_numpy(A)
targets = to.from_numpy(b)
train_ds = TensorDataset(inputs, targets)
batch_size = 5
train_dl = DataLoader(train_ds, batch_size, shuffle=True)

# define model, loss and optimizer
# (source_variables = 3 and predict_variables = 1 for the data below)
new_model = nn.Linear(source_variables, predict_variables, bias=False)
loss_fn = F.mse_loss
opt = to.optim.SGD(new_model.parameters(), lr=1e-10)
def fit(num_epochs, new_model, loss_fn, opt):
    for epoch in tnrange(num_epochs, desc="epoch"):  # tnrange is from tqdm
        for xb, yb in train_dl:
            # Generate predictions
            pred = new_model(xb)
            loss = loss_fn(pred, yb)
            # Perform gradient descent
            loss.backward()
            opt.step()
            opt.zero_grad()
        if epoch % 1000 == 0:
            print((new_model.weight, loss))
    print('Training loss: ', loss_fn(new_model(inputs), targets))

# fit the model
fit(10000, new_model, loss_fn, opt)
It prints as the last result:
(tensor([[0.0231, 0.5185, 0.4589]], requires_grad=True), tensor(271.8525, grad_fn=<MseLossBackward>))
Training loss: tensor(378.2871, grad_fn=<MseLossBackward>)
As you can see these numbers are completely different so I must have made a mistake somewhere ...
Here are the numbers for A and b to reproduce the result:
A = np.array([[2822.48, 2808.48, 2810.92],
              [2832.94, 2822.48, 2808.48],
              [2832.57, 2832.94, 2822.48],
              [2824.23, 2832.57, 2832.94],
              [2854.88, 2824.23, 2832.57],
              [2800.71, 2854.88, 2824.23],
              [2798.36, 2800.71, 2854.88],
              [2818.46, 2798.36, 2800.71],
              [2805.37, 2818.46, 2798.36],
              [2815.44, 2805.37, 2818.46]], dtype=np.float32)
b = np.array([2832.94, 2832.57, 2824.23, 2854.88, 2800.71, 2798.36, 2818.46, 2805.37, 2815.44, 2834.4], dtype=np.float32)
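The numpy numbers can be verified to be the least-squares optimum directly, since the pseudo-inverse residual is orthogonal to the column space of A, so any perturbation of betas can only increase the MSE (a self-contained check on the data above):

```python
import numpy as np

A = np.array([[2822.48, 2808.48, 2810.92],
              [2832.94, 2822.48, 2808.48],
              [2832.57, 2832.94, 2822.48],
              [2824.23, 2832.57, 2832.94],
              [2854.88, 2824.23, 2832.57],
              [2800.71, 2854.88, 2824.23],
              [2798.36, 2800.71, 2854.88],
              [2818.46, 2798.36, 2800.71],
              [2805.37, 2818.46, 2798.36],
              [2815.44, 2805.37, 2818.46]], dtype=np.float32)
b = np.array([2832.94, 2832.57, 2824.23, 2854.88, 2800.71,
              2798.36, 2818.46, 2805.37, 2815.44, 2834.4], dtype=np.float32)

betas = np.linalg.pinv(A).dot(b)
mse = ((b - A.dot(betas)) ** 2).mean()
print(mse, betas)  # the question reports 364.12875 and [0.432, 0.144, 0.424]

# Optimality check: nudging any coordinate of betas increases the MSE.
for j in range(3):
    d = np.zeros(3)
    d[j] = 1e-2
    assert ((b - A.dot(betas + d)) ** 2).mean() > mse
```

A plausible reason the SGD run cannot reproduce this: with lr=1e-10 and raw, nearly collinear features in the thousands, the weights barely move from their random initialization, so the loss you see is mostly the initialization, not the least-squares optimum.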

Using pytorch cuda for RNNs on google colaboratory

I have a piece of code (one we saw in a class) for a recurrent neural network that reads a given text and tries to produce its own text similar to the example. The code is written in Python and uses the pytorch library. I wanted to see whether I could increase its speed by using the GPU instead of the CPU, and I ran some tests on Google Colaboratory. The GPU version of the code runs fine but is about three times slower than the CPU version. I do not know the details of GPU architecture, so I cannot really understand why it is slower. I know that GPUs can do more arithmetic operations per cycle but have more limited memory, so I am curious whether I am having a memory issue. I also tried using CUDA with a generative adversarial network, and in that case it was almost ten times faster. Any tips on this would be welcome.
The code (CUDA version) is below. I am new at this, so sorry if some of the terminology is not correct.
The architecture is input -> encoder -> recurrent network -> decoder -> output.
import torch
import time
import numpy as np
from torch.autograd import Variable
import matplotlib.pyplot as plt
from google.colab import files

# uploading text on Google Colab
uploaded = files.upload()
for fn in uploaded.keys():
    print('User uploaded file "{name}" with length {length} bytes'.format(
        name=fn, length=len(uploaded[fn])))
# data preprocessing
with open('text.txt','r') as file:
    # with open closes the file after we are done with it
    rawtxt = file.read()
rawtxt = rawtxt.lower()
# a function that assigns a number to each unique character in the text
def create_map(rawtxt):
    letters = list(set(rawtxt))
    lettermap = dict(enumerate(letters))  # gives each letter in the list a number
    return lettermap

num_to_let = create_map(rawtxt)
# inverse of num_to_let
let_to_num = dict(zip(num_to_let.values(), num_to_let.keys()))
print(num_to_let)
# turns a text of characters into a text of numbers using the mapping
# given by the input mapdict
def maparray(txt, mapdict):
    txt = list(txt)
    for k, letter in enumerate(txt):
        txt[k] = mapdict[letter]
    txt = np.array(txt)
    return txt

X = maparray(rawtxt, let_to_num)  # the data text in numeric format
Y = np.roll(X, -1, axis=0)  # shifted data text in numeric format
X = torch.LongTensor(X)
Y = torch.LongTensor(Y)
# up to here we are done with data preprocessing

# return a random batch for training
# this reads a random piece inside the data text
# with the size chunk_size
def random_chunk(chunk_size):
    k = np.random.randint(0, len(X)-chunk_size)
    return X[k:k+chunk_size], Y[k:k+chunk_size]

nchars = len(num_to_let)
# define the recurrent neural network class
class rnn(torch.nn.Module):
    def __init__(self, input_size, hidden_size, output_size, n_layers=1):
        super().__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.n_layers = n_layers
        self.encoder = torch.nn.Embedding(input_size, hidden_size)
        self.rnn = torch.nn.RNN(hidden_size, hidden_size, n_layers, batch_first=True)
        self.decoder = torch.nn.Linear(hidden_size, output_size)

    def forward(self, x, hidden):
        x = self.encoder(x.view(1, -1))
        output, hidden = self.rnn(x.view(1, 1, -1), hidden)
        output = self.decoder(output.view(1, -1))
        return output, hidden

    def init_hidden(self):
        return Variable(torch.zeros(self.n_layers, 1, self.hidden_size)).cuda()
# hyper-params
lr = 0.009
no_epochs = 50
chunk_size = 150

myrnn = rnn(nchars, 150, nchars, 1)
myrnn.cuda()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(myrnn.parameters(), lr=lr)

t0 = time.time()
for epoch in range(no_epochs):
    totcost = 0
    generated = ''
    for _ in range(len(X)//chunk_size):
        h = myrnn.init_hidden()
        cost = 0
        x, y = random_chunk(chunk_size)
        x, y = Variable(x).cuda(), Variable(y).cuda()
        for i in range(chunk_size):
            out, h = myrnn.forward(x[i], h)
            _, outl = out.data.max(1)
            letter = num_to_let[outl[0]]
            generated += letter
            cost += criterion(out, y[i])
        optimizer.zero_grad()
        cost.backward()
        optimizer.step()
        totcost += cost
    totcost /= len(X)//chunk_size
    print('Epoch', epoch, 'Avg cost/chunk: ', totcost)
    print(generated[0:750], '\n\n\n')
t1 = time.time()
total = t1 - t0
print('total', total)
#we encode each word into a vector of fixed size
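One plausible cause of the slowdown: the training loop above calls the RNN once per character with batch size 1, so each of the thousands of steps is a separate tiny GPU kernel launch, and launch overhead dominates the actual arithmetic. nn.RNN can consume a whole chunk in a single call, which amortizes that overhead. A minimal CPU-side sketch with illustrative sizes (not the author's data):

```python
import torch

# Illustrative sizes; nchars would come from the text's alphabet in practice.
hidden_size, nchars, chunk_size = 150, 60, 150
enc = torch.nn.Embedding(nchars, hidden_size)
rnn = torch.nn.RNN(hidden_size, hidden_size, num_layers=1, batch_first=True)
dec = torch.nn.Linear(hidden_size, nchars)

x = torch.randint(0, nchars, (1, chunk_size))  # one whole chunk, batch first
h0 = torch.zeros(1, 1, hidden_size)
out, h = rnn(enc(x), h0)                       # one call covers all 150 steps
logits = dec(out)                              # per-step logits for the loss
print(tuple(logits.shape))                     # (1, 150, 60)
```

The loss can then be computed over all positions at once instead of accumulating it character by character inside a Python loop.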

Tensorflow: Cannot interpret feed_dict key as Tensor

I am trying to build a neural network model with one hidden layer (1024 nodes). The hidden layer is nothing but a relu unit. I am also processing the input data in batches of 128.
The inputs are images of size 28 * 28. In the following code I get the error in line
_, c = sess.run([optimizer, loss], feed_dict={x: batch_x, y: batch_y})
Error: TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("Placeholder_64:0", shape=(128, 784), dtype=float32) is not an element of this graph.
Here is the code I have written
#Initialize
batch_size = 128
layer1_input = 28 * 28
hidden_layer1 = 1024
num_labels = 10
num_steps = 3001
#Create neural network model
def create_model(inp, w, b):
    layer1 = tf.add(tf.matmul(inp, w['w1']), b['b1'])
    layer1 = tf.nn.relu(layer1)
    layer2 = tf.matmul(layer1, w['w2']) + b['b2']
    return layer2
#Initialize variables
x = tf.placeholder(tf.float32, shape=(batch_size, layer1_input))
y = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
w = {
'w1': tf.Variable(tf.random_normal([layer1_input, hidden_layer1])),
'w2': tf.Variable(tf.random_normal([hidden_layer1, num_labels]))
}
b = {
'b1': tf.Variable(tf.zeros([hidden_layer1])),
'b2': tf.Variable(tf.zeros([num_labels]))
}
init = tf.initialize_all_variables()
train_prediction = tf.nn.softmax(model)
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
model = create_model(x, w, b)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(model, y))
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
#Process
with tf.Session(graph=graph1) as sess:
    tf.initialize_all_variables().run()
    total_batch = int(train_dataset.shape[0] / batch_size)
    for epoch in range(num_steps):
        loss = 0
        for i in range(total_batch):
            batch_x, batch_y = train_dataset[epoch * batch_size:(epoch+1) * batch_size, :], train_labels[epoch * batch_size:(epoch+1) * batch_size, :]
            _, c = sess.run([optimizer, loss], feed_dict={x: batch_x, y: batch_y})
            loss = loss + c
        loss = loss / total_batch
        if epoch % 500 == 0:
            print("Epoch :", epoch, ". cost = {:.9f}".format(avg_cost))
            print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
    valid_prediction = tf.run(tf_valid_dataset, {x: tf_valid_dataset})
    print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
    test_prediction = tf.run(tf_test_dataset, {x: tf_test_dataset})
    print("TEST accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
This worked for me. I faced this problem on a production server, while on my PC it was running fine. After predicting my data I inserted K.clear_session() and then loaded the model again:
from keras import backend as K

# Before prediction
K.clear_session()
# After prediction
K.clear_session()
Variable x is not in the same graph as model, try to define all of these in the same graph scope. For example,
# define a graph
graph1 = tf.Graph()
with graph1.as_default():
    # placeholder
    x = tf.placeholder(...)
    y = tf.placeholder(...)
    # create model
    model = create(x, w, b)

with tf.Session(graph=graph1) as sess:
    # initialize all the variables
    sess.run(init)
    # then feed_dict
    # ......
If you use the Django development server, just run it with --nothreading, for example:
python manage.py runserver --nothreading
I had the same issue with Flask. Adding the --without-threads flag to flask run, or threaded=False to app.run(), fixed it.
In my case, I was calling the CNN multiple times in a loop, and I fixed my problem by doing the following:
# Declare this as global:
global graph
graph = tf.get_default_graph()

# Then just before you call in your model, use this
with graph.as_default():
    # call your models here
Note: in my case too, the app ran fine the first time and then gave the error above. Using the above fix solved the problem.
Hope that helps.
The error message TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("...", dtype=dtype) is not an element of this graph can also arise in case you run a session outside of the scope of its with statement. Consider:
with tf.Session() as sess:
    sess.run(logits, feed_dict=feed_dict)
sess.run(logits, feed_dict=feed_dict)
If logits and feed_dict are defined properly, the first sess.run command will execute normally, but the second will raise the mentioned error.
You can also experience this while working on notebooks hosted on online learning platforms like Coursera, in which case the following can help resolve the issue.
Put this in the topmost block of the notebook file:
from keras import backend as K
K.clear_session()
Similar to @javan-peymanfard and @hmadali-shafiee, I ran into this issue when loading the model in an API. I was using FastAPI with uvicorn. To fix the issue I just set the API function definitions to async, similar to this:
@app.post('/endpoint_name')
async def endpoint_function():
    # Do stuff here, including possibly (re)loading the model

Training siamese neural network on multiple GPUs in Torch: Share not supported for cunn's DataParallelTable

I'm trying to speed up my network implemented in torch7 but I get an error when I try to use nn.DataParallelTable.
This is what I'm trying to do:
m1, m2 = createModel(8,48), createModel(8,48)
-- 8 = number of GPUs, 48 = hidden units in the last layer
m2:share(m1,'weight', 'bias') ----THE ERROR IS HERE
prl = nn.ParallelTable()
prl:add(m1)
prl:add(m2)
prl:cuda()
mlp = nn.Sequential()
mlp:add(prl)
mlp:cuda()
crit = nn.CosineEmbeddingCriterion():cuda()
Where the functions are:
function createModel(nGPU, bot)
    local features = nn.Concat(2)
    local fb1 = nn.Sequential() -- branch 1
    fb1:add(nn.SpatialConvolution(1,48,3,3,1,1,1,1))
    fb1:add(nn.ReLU(true))
    fb1:add(nn.SpatialConvolution(48,128,3,3,1,1,1,1))
    fb1:add(nn.ReLU(true))
    fb1:add(nn.SpatialConvolution(128,192,3,3,1,1,1,1))
    fb1:add(nn.ReLU(true))
    fb1:add(nn.SpatialConvolution(192,192,3,3,1,1,1,1))
    fb1:add(nn.ReLU(true))
    fb1:add(nn.SpatialConvolution(192,128,3,3,1,1,1,1))
    fb1:add(nn.ReLU(true))
    fb1:add(nn.SpatialMaxPooling(2,2,2,2))
    view = 12
    local fb2 = fb1:clone() -- branch 2
    for k,v in ipairs(fb2:findModules('nn.SpatialConvolution')) do
        v:reset() -- reset branch 2's weights
    end
    features:add(fb1) features:add(fb2) features:cuda()
    -------------- the error is at this line -----------
    features = makeDataParallel(features, nGPU)
    local classifier = nn.Sequential()
    classifier:add(nn.View(256*view*view))
    classifier:add(nn.Dropout(0.5))
    classifier:add(nn.Linear(256*view*view, 4096))
    classifier:add(nn.Dropout(0.5))
    classifier:add(nn.Linear(4096, 4096))
    classifier:add(nn.Tanh())
    classifier:add(nn.Linear(4096, bot))
    classifier:add(nn.Tanh())
    classifier:cuda()
    local model = nn.Sequential():add(features):add(classifier)
    return model
end
and the other one is:
function makeDataParallel(model, nGPU)
    if nGPU > 1 then
        print('converting module to nn.DataParallelTable')
        assert(nGPU <= cutorch.getDeviceCount(), 'number of GPUs less than nGPU specified')
        local model_single = model
        model = nn.DataParallelTable(1)
        for i = 1, nGPU do
            cutorch.setDevice(i)
            model:add(model_single:clone():cuda(), i)
        end
    end
    cutorch.setDevice(1)
    return model
end
The error I get is:
[C]: in function 'error'
...a/torch/install/share/lua/5.1/cunn/DataParallelTable.lua:337: in function 'share'
/home/andrea/torch/install/share/lua/5.1/nn/Container.lua:97: in function 'share'
main.lua:123: in main chunk
[C]: at 0x00406670
Do you possibly know where the error is? Sorry, I'm kind of new at this and I cannot figure it out; perhaps I am getting the network structure wrong. Thanks in advance.