Modeling a compressor in a heat pump cycle (PQ flash, CoolProp, ExternalMedia, Dymola) - modelica

I have a problem using ExternalMedia (CoolProp) in Dymola (Modelica) while modeling the compressor in a heat pump system.
The error messages are below:
dymosim started
... "Project_WD.Test.Test_Compressor" simulating
... "dsin.txt" loading (dymosim input file)
Error: The following error was detected at time: 0
Pressure to PQ_flash [-509444 Pa] must be in range [0.0228908 Pa, 3.629e+06 Pa]
The stack of functions is:
Project_WD.Media.R600a_CP.setSat_p_Unique15
Project_WD.Media.R600a_CP.setSat_p_Unique15(compressor.p_su)
Project_WD.Media.R600a_CP.saturationTemperature_Unique13(compressor.p_su)
Non-linear solver will silently attempt to handle this problem.
Non-linear solver gave up after attempt: 305
----------------------------------------------------------------------------------------------------------------------
I think this is a CoolProp error; I did not use -509444 Pa anywhere in my script, and I wonder how to fix it.
I tried to fix the initial value of p_su in the script, but it did not work. I also tried to set proper input values in the Source and Sink, but nothing changed.
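For reference, a minimal sketch of what I mean by fixing the initial value (the min bound is only an idea of mine, not something taken from ThermoCycle):

// Sketch: give the iteration variables a feasible start value and a lower
// bound, so the PQ flash is never called with a negative pressure while the
// non-linear solver iterates. The bound 1e4 Pa is an assumption, not a
// validated choice.
Medium.AbsolutePressure p_su(start=p_su_start, min=1e4);
Medium.AbsolutePressure p_ex(start=p_ex_start, min=1e4);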
The main code is below:
model Test_Compressor
  ThermoCycle.Components.FluidFlow.Reservoirs.SinkP sinkPFluid(
    redeclare package Medium = Project_WD.Media.R600a_CP, p0=2000000)
    annotation (Placement(transformation(extent={{42,48},{62,68}})));
  ThermoCycle.Components.FluidFlow.Reservoirs.SourceMdot sourceWF(
    h_0=503925,
    UseT=true,
    Mdot_0=0.05,
    redeclare package Medium = Project_WD.Media.R600a_CP,
    T_0=323.15)
    annotation (Placement(transformation(extent={{-94,-60},{-74,-40}})));
  Component.compressor_BCA030NAMV compressor(
    redeclare package Medium = Project_WD.Media.R600a_CP,
    redeclare function CPmodel = Project_WD.Component_Parameter.BCA030NAMV)
    annotation (Placement(transformation(extent={{-38,-14},{4,26}})));
equation
  connect(sourceWF.flangeB, compressor.InFlow) annotation (Line(
      points={{-75,-50},{-17,-50},{-17,-12}},
      color={0,0,255},
      smooth=Smooth.None));
  connect(compressor.OutFlow, sinkPFluid.flangeB) annotation (Line(
      points={{-17,24},{-18,24},{-18,58},{43.6,58}},
      color={0,0,255},
      smooth=Smooth.None));
  annotation (
    Diagram(coordinateSystem(extent={{-120,-100},{80,100}}, preserveAspectRatio=false), graphics),
    Icon(coordinateSystem(extent={{-120,-100},{80,100}})),
    experiment(StopTime=1000),
    __Dymola_experimentSetupOutput);
end Test_Compressor;
The sub code is below:
model compressor_BCA030NAMV
  /****************************************** FLUID ******************************************/
  replaceable package Medium = ThermoCycle.Media.R600a_CP
    constrainedby Modelica.Media.Interfaces.PartialMedium "Medium model"
    annotation (choicesAllMatching=true);
  /* Ports */
  ThermoCycle.Interfaces.Fluid.FlangeA InFlow(redeclare package Medium = Medium)
    annotation (Placement(transformation(extent={{-10,-100},{10,-80}}),
        iconTransformation(extent={{-10,-100},{10,-80}})));
  ThermoCycle.Interfaces.Fluid.FlangeB OutFlow(redeclare package Medium = Medium)
    annotation (Placement(transformation(extent={{-10,80},{10,100}}),
        iconTransformation(extent={{-10,80},{10,100}})));
  replaceable function CPmodel = Project_WD.Component_Parameter.BCA030NAMV
    constrainedby Project_WD.calculate.compressor_para
    "Compressor model - LG compressor map data model!"
    annotation (choicesAllMatching=true);
  /****************************************** PARAMETERS ******************************************/
  parameter Modelica.Units.SI.Frequency N_rot=46 "Compressor frequency";
  parameter Modelica.Units.SI.MassFlowRate M_dot_start=1.38e-05
    "Nominal mass flow rate" annotation (Dialog(tab="Initialization"));
  parameter Modelica.Units.SI.Pressure p_su_start=100000
    "Inlet pressure start value" annotation (Dialog(tab="Initialization"));
  parameter Modelica.Units.SI.Pressure p_ex_start=1500000
    "Outlet pressure start value" annotation (Dialog(tab="Initialization"));
  parameter Modelica.Units.SI.Temperature T_su_start=253.15
    "Inlet temperature start value" annotation (Dialog(tab="Initialization"));
  parameter Medium.SpecificEnthalpy h_su_start=Medium.specificEnthalpy_pT(p_su_start, T_su_start)
    "Inlet enthalpy start value" annotation (Dialog(tab="Initialization"));
  parameter Medium.SpecificEnthalpy h_ex_start=Medium.specificEnthalpy_pT(p_ex_start, Medium.saturationTemperature(p_ex_start) + 50)
    "Outlet enthalpy start value" annotation (Dialog(tab="Initialization"));
  /****************************************** VARIABLES ******************************************/
  Medium.ThermodynamicState vaporIn "Thermodynamic state of the fluid at the inlet";
  Medium.ThermodynamicState vaporOut "Thermodynamic state of the fluid at the outlet - isentropic";
  Real epsilon_s "Isentropic efficiency";
  Real epsilon_v "Volumetric efficiency";
  Modelica.Units.SI.Volume Vs "Swept volume";
  Real rpm;
  Modelica.Units.SI.Power W_dot;
  Modelica.Units.SI.VolumeFlowRate V_dot_su;
  Modelica.Units.SI.MassFlowRate M_dot(start=M_dot_start);
  Medium.Density rho_su(start=Medium.density_pT(p_su_start, T_su_start));
  Medium.SpecificEntropy s_su;
  Medium.SpecificEnthalpy h_su(start=h_su_start);
  Medium.SpecificEnthalpy h_ex(start=h_ex_start);
  Medium.AbsolutePressure p_su(start=p_su_start);
  Medium.AbsolutePressure p_ex(start=p_ex_start);
  Medium.SpecificEnthalpy h_ex_s;
  Medium.Temperature T_cd(start=Medium.saturationTemperature(p_ex_start));
  Medium.Temperature T_ev(start=Medium.saturationTemperature(p_su_start));
  Real results[3];
equation
  /* Fluid properties */
  vaporIn = Medium.setState_ph(p_su, h_su);
  rho_su = Medium.density(vaporIn);
  s_su = Medium.specificEntropy(vaporIn);
  vaporOut = Medium.setState_ps(p_ex, s_su);
  h_ex_s = Medium.specificEnthalpy(vaporOut);
  T_cd = Medium.saturationTemperature(p_ex);
  T_ev = Medium.saturationTemperature(p_su);
  results = CPmodel(T_ev, T_cd);
  M_dot = results[1];
  W_dot = results[2];
  Vs = results[3];
  /* Equations */
  rpm = N_rot*60;
  V_dot_su = epsilon_v*Vs*N_rot;
  V_dot_su = M_dot/rho_su;
  h_ex = h_su + (h_ex_s - h_su)/epsilon_s;
  W_dot = M_dot*(h_ex - h_su) "Consumed power";
  /* Boundary conditions */
  /* Enthalpies */
  h_su = if noEvent(InFlow.m_flow <= 0) then h_ex else inStream(InFlow.h_outflow);
  h_su = InFlow.h_outflow;
  OutFlow.h_outflow = if noEvent(OutFlow.m_flow <= 0) then h_ex else inStream(OutFlow.h_outflow);
  /* Mass flows */
  M_dot = InFlow.m_flow;
  OutFlow.m_flow = -M_dot;
  /* Pressures */
  InFlow.p = p_su;
  OutFlow.p = p_ex;
  annotation (Icon(graphics={Bitmap(extent={{-80,-80},{80,80}}, fileName="modelica://LG_compressor/comp.png")}),
    conversion(noneFromVersion=""));
end compressor_BCA030NAMV;
function compressor_para
  "Base model for the computation of the compressor performance. Must be redeclared with the coefficients and the swept volume"
  input Modelica.Units.SI.Temperature T_ev "Evaporating temperature"; /* unit: K */
  input Modelica.Units.SI.Temperature T_cd "Condensing temperature"; /* unit: K */
  output Real result[3]
    "Vector with the output flow rate, compressor power and swept volume";
protected
  parameter Real coef[11,4]=
    [0,0,0,0;
     0,0,0,0;
     0,0,0,0;
     0,0,0,0;
     0,0,0,0;
     0,0,0,0;
     0,0,0,0;
     0,0,0,0;
     0,0,0,0;
     0,0,0,0;
     0,0,0,0]
    "Coefficient matrix. Column 1: capacity. Column 2: power. Column 3: current. Column 4: flow rate";
  parameter Modelica.Units.SI.Volume Vs=0 "Compressor swept volume";
  Modelica.Units.SI.MassFlowRate Mdot "Mass flow rate"; /* unit: kg/s */
  Modelica.Units.SI.Power Wdot "Compressor power"; /* unit: W */
algorithm
  Wdot := Project_WD.calculate.Comp_mapdata_cal(
    (T_ev - 273.15),
    (T_cd - 273.15),
    coef[:, 2]); // Temperature input is in Celsius
  Mdot := Project_WD.calculate.Comp_mapdata_cal(
    (T_ev - 273.15),
    (T_cd - 273.15),
    coef[:, 4]); // Temperature input is in Celsius. Output is in kg/hr
  result := {Mdot, Wdot, Vs};
end compressor_para;
function BCA030NAMV
  "BCA030NAMV BLDC/Compact/A/Displacement 3.0 cc/rev = 3e-06 m3/rev / R600a / Cu wire / 220-240 V, 50/60 Hz / inverter compressor by LG elec"
  extends Project_WD.calculate.compressor_para(
    Vs=3.0e-6,
    coef=[0, 87.0312493, 0, 2.73823813;
          0, -23.5090917, 0, -0.314445833;
          0, -0.576825, 0, -0.0085375;
          0, -0.001413333, 0, -2.66667E-05;
          0, -7.740443, 0, -0.116178854;
          0, 0.234250512, 0, 0.003251526;
          0, -0.001864852, 0, -2.81759E-05;
          0, 0.80516, 0, 0.01332;
          0, -0.006655, 0, -0.0001235;
          0, 0.01724, 0, 0.00028;
          0, -0.000143, 0, -0.0000025]);
end BCA030NAMV;
The fluid property package I used is below:
package R600a_CP "CoolProp model of Isobutane"
  extends ExternalMedia.Media.CoolPropMedium(
    mediumName="Isobutane",
    substanceNames={"Isobutane"},
    ThermoStates=Modelica.Media.Interfaces.Choices.IndependentVariables.ph,
    SpecificEnthalpy(start=2e5));
end R600a_CP;

Related

Declaring Heat Capacity Cp in Dymola

I am having some problems calling the specific heat capacity of my working fluid, which in this case is hydrogen. I can't call it using the pressure or the temperature. If someone could help me please, thanks in advance.
Here is my code:
import Modelica.SIunits;

package Hyd
  extends ExternalMedia.Media.CoolPropMedium(
    mediumName="hydrogen",
    substanceNames={"hydrogen"},
    inputChoice=ExternalMedia.Common.InputChoice.pT);
end Hyd;

SIunits.SpecificHeatCapacity cp_in; // [J/(kg*K)]
Hyd.AbsolutePressure Pb_0;
Hyd.Temperature Tin;
Hyd.SaturationProperties sat9, sat10;
equation
  sat9 = Hyd.setSat_T(Tin);
  sat10 = Hyd.setSat_p(Pb_0);
  cp_in = Hyd.specificHeatCapacityCp(sat9);  // [J/(kg*K)]
  cp_in = Hyd.specificHeatCapacityCp(sat10); // [J/(kg*K)]
The function is declared as:
function specificHeatCapacityCp_Unique8
  input ExternalMedia.Media.BaseClasses.ExternalTwoPhaseMedium.ThermodynamicState state;
  output Modelica.Media.Interfaces.Types.SpecificHeatCapacity cp := 1000.0 "Specific heat capacity at constant pressure";
end specificHeatCapacityCp_Unique8;
I'm not sure what you are trying to achieve, exactly, but you are passing a SaturationProperties object to a function expecting a ThermodynamicState, which cannot work (and is reported as such when using OpenModelica).
Here is a working version computing cp at the saturation pressure at 300 K:
model test_SO_68546587
  import Modelica.SIunits;
  package Hyd
    extends ExternalMedia.Media.CoolPropMedium(
      mediumName="hydrogen",
      substanceNames={"hydrogen"},
      inputChoice=ExternalMedia.Common.InputChoice.pT);
  end Hyd;
  SIunits.SpecificHeatCapacity cp_in; // [J/(kg*K)]
  Hyd.AbsolutePressure Pb_0;
  Hyd.Temperature Tin;
  Hyd.ThermodynamicState state;
equation
  state = Hyd.setState_pT(p=Pb_0, T=Tin);
  Tin = 300;
  Pb_0 = Hyd.saturationPressure(Tin);
  cp_in = Hyd.specificHeatCapacityCp(state); // 14345.2 J/(kg*K) at 300 K, 12.951 bar
end test_SO_68546587;

Talos --> TypeError: __init__() got an unexpected keyword argument 'grid_downsample'

I am trying to run a hyperparameter optimization with Talos. As I have a lot of parameters to test, I want to use the 'grid_downsample' argument, which should select 30% of all possible hyperparameter combinations. However, when I run my code I get: TypeError: __init__() got an unexpected keyword argument 'grid_downsample'.
I tested the code below without the 'grid_downsample' option and with fewer hyperparameters.
# imports (implied by the code below)
import numpy as np
import pandas as pd
import talos as ta
from keras.models import Sequential
from keras.layers import Activation, Dense, Dropout

# load data
data = pd.read_csv('data.txt', sep="\t", encoding="latin1")

# split into input (X) and output (y) variables
Y = np.array(data['Y'])
data_bis = data.drop(['Y'], axis=1)
X = np.array(data_bis)

p = {'activation': ['relu'],
     'optimizer': ['Nadam'],
     'first_hidden_layer': [12],
     'second_hidden_layer': [12],
     'batch_size': [20],
     'epochs': [10, 20],
     'dropout_rate': [0.0, 0.2]}

def dnn_model(x_train, y_train, x_val, y_val, params):
    model = Sequential()
    # input layer
    model.add(Dense(params['first_hidden_layer'], input_shape=(1024,)))
    model.add(Dropout(params['dropout_rate']))
    model.add(Activation(params['activation']))
    # hidden layer 2
    model.add(Dense(params['second_hidden_layer']))
    model.add(Dropout(params['dropout_rate']))
    model.add(Activation(params['activation']))
    # output layer with one node
    model.add(Dense(1))
    model.add(Activation(params['activation']))
    # compile model
    model.compile(loss='binary_crossentropy', optimizer=params['optimizer'], metrics=['accuracy'])
    out = model.fit(x_train, y_train,
                    batch_size=params['batch_size'],
                    epochs=params['epochs'],
                    validation_data=[x_val, y_val],
                    verbose=0)
    return out, model

scan_object = ta.Scan(X, Y, model=dnn_model, params=p, experiment_name="test")
reporting = ta.Reporting(scan_object)
report = reporting.data
report.to_csv('./Random_search/dnn/report_talos.txt', sep='\t')
This code works well. If I change the scan_object at the end to scan_object = ta.Scan(X, Y, model=dnn_model, grid_downsample=0.3, params=p, experiment_name="test"), it gives me the error TypeError: __init__() got an unexpected keyword argument 'grid_downsample', while I was expecting the same result format as a normal grid search, just with fewer combinations. What am I missing? Did the name of the argument change? I'm using Talos 0.6.3 in a conda environment.
Thank you!
It might be too late for you now, but they've switched it to fraction_limit. It would give this for you:
scan_object = ta.Scan(X, Y, model=dnn_model, params=p, experiment_name="test", fraction_limit=0.1)
Sadly, the docs aren't well updated.
Check out their examples on GitHub:
https://github.com/autonomio/talos/blob/master/examples/Hyperparameter%20Optimization%20with%20Keras%20for%20the%20Iris%20Prediction.ipynb
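If you are not sure which argument name your installed version accepts, a quick sketch for checking it (assuming only that talos is importable):

import inspect
import talos as ta

# Print the keyword arguments accepted by Scan in the installed version;
# the sampling argument was renamed between 0.x releases.
print(inspect.signature(ta.Scan.__init__))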

Tensorflow: Cannot interpret feed_dict key as Tensor

I am trying to build a neural network model with one hidden layer (1024 nodes). The hidden layer is nothing but a relu unit. I am also processing the input data in batches of 128.
The inputs are images of size 28 * 28. In the following code I get the error at the line
_, c = sess.run([optimizer, loss], feed_dict={x: batch_x, y: batch_y})
Error: TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("Placeholder_64:0", shape=(128, 784), dtype=float32) is not an element of this graph.
Here is the code I have written
# initialize
batch_size = 128
layer1_input = 28 * 28
hidden_layer1 = 1024
num_labels = 10
num_steps = 3001

# create neural network model
def create_model(inp, w, b):
    layer1 = tf.add(tf.matmul(inp, w['w1']), b['b1'])
    layer1 = tf.nn.relu(layer1)
    layer2 = tf.matmul(layer1, w['w2']) + b['b2']
    return layer2

# initialize variables
x = tf.placeholder(tf.float32, shape=(batch_size, layer1_input))
y = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
w = {
    'w1': tf.Variable(tf.random_normal([layer1_input, hidden_layer1])),
    'w2': tf.Variable(tf.random_normal([hidden_layer1, num_labels]))
}
b = {
    'b1': tf.Variable(tf.zeros([hidden_layer1])),
    'b2': tf.Variable(tf.zeros([num_labels]))
}
init = tf.initialize_all_variables()
train_prediction = tf.nn.softmax(model)
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
model = create_model(x, w, b)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(model, y))
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

# process
with tf.Session(graph=graph1) as sess:
    tf.initialize_all_variables().run()
    total_batch = int(train_dataset.shape[0] / batch_size)
    for epoch in range(num_steps):
        loss = 0
        for i in range(total_batch):
            batch_x, batch_y = train_dataset[epoch * batch_size:(epoch+1) * batch_size, :], train_labels[epoch * batch_size:(epoch+1) * batch_size, :]
            _, c = sess.run([optimizer, loss], feed_dict={x: batch_x, y: batch_y})
            loss = loss + c
        loss = loss / total_batch
        if epoch % 500 == 0:
            print("Epoch :", epoch, ". cost = {:.9f}".format(avg_cost))
            print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
            valid_prediction = tf.run(tf_valid_dataset, {x: tf_valid_dataset})
            print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
    test_prediction = tf.run(tf_test_dataset, {x: tf_test_dataset})
    print("TEST accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
This worked for me. I faced this problem on a production server, while on my PC it was running fine. I inserted the following before and after predicting my data, and then loaded the model again:

from keras import backend as K

# before prediction
K.clear_session()

# after prediction
K.clear_session()
Variable x is not in the same graph as model; try to define all of these in the same graph scope. For example:

# define a graph
graph1 = tf.Graph()
with graph1.as_default():
    # placeholders
    x = tf.placeholder(...)
    y = tf.placeholder(...)
    # create the model
    model = create_model(x, w, b)

with tf.Session(graph=graph1) as sess:
    # initialize all the variables
    sess.run(init)
    # then feed_dict
    # ......
If you use a Django server, just run runserver with --nothreading, for example:
python manage.py runserver --nothreading
I had the same issue with Flask. Adding the --without-threads flag to flask run, or threaded=False to app.run(), fixed it.
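A minimal sketch of the app.run() variant (the route and handler body are placeholders):

from flask import Flask

app = Flask(__name__)

@app.route('/predict')
def predict():
    # run the model prediction here
    return 'ok'

if __name__ == '__main__':
    # threaded=False keeps every request in the thread that created the graph
    app.run(threaded=False)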
In my case, I was calling the CNN multiple times in a loop; I fixed my problem by doing the following:

# declare this as global:
global graph
graph = tf.get_default_graph()

# then, just before you call your model, use this:
with graph.as_default():
    # call your models here

Note: in my case too, the app ran fine the first time and then gave the error above. Using the above fix solved the problem.
Hope that helps.
The error message TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("...", dtype=dtype) is not an element of this graph can also arise if you run a session outside the scope of its with statement. Consider:

with tf.Session() as sess:
    sess.run(logits, feed_dict=feed_dict)

sess.run(logits, feed_dict=feed_dict)

If logits and feed_dict are defined properly, the first sess.run call will execute normally, but the second will raise the mentioned error.
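A fixed version simply keeps both calls inside the with block:

with tf.Session() as sess:
    sess.run(logits, feed_dict=feed_dict)
    sess.run(logits, feed_dict=feed_dict)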
You can also experience this while working on notebooks hosted on online learning platforms like Coursera. Implementing the following code in the topmost block of the notebook file could help get over the issue:

from keras import backend as K
K.clear_session()
Similar to @javan-peymanfard and @hmadali-shafiee, I ran into this issue when loading the model in an API. I was using FastAPI with uvicorn. To fix the issue I just set the API function definitions to async, similar to this:

@app.post('/endpoint_name')
async def endpoint_function():
    # do stuff here, including possibly (re)loading the model
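A self-contained sketch of the same idea (the endpoint name and handler body are placeholders):

from fastapi import FastAPI

app = FastAPI()

@app.post('/endpoint_name')
async def endpoint_function():
    # (re)load the model and run the prediction here; async def keeps the
    # handler on the event-loop thread, so the graph is created and used
    # in the same thread
    return {'status': 'ok'}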

Training siamese neural network on multiple GPUs in Torch: Share not supported for cunn's DataParallelTable

I'm trying to speed up my network implemented in torch7 but I get an error when I try to use nn.DataParallelTable.
This is what I'm trying to do:
m1, m2 = createModel(8, 48), createModel(8, 48)
-- 8 = number of GPUs, 48 = hidden units in the last layer
m2:share(m1, 'weight', 'bias') ---- THE ERROR IS HERE
prl = nn.ParallelTable()
prl:add(m1)
prl:add(m2)
prl:cuda()
mlp = nn.Sequential()
mlp:add(prl)
mlp:cuda()
crit = nn.CosineEmbeddingCriterion():cuda()
Where the functions are:
function createModel(nGPU, bot)
    local features = nn.Concat(2)
    local fb1 = nn.Sequential() -- branch 1
    fb1:add(nn.SpatialConvolution(1,48,3,3,1,1,1,1))
    fb1:add(nn.ReLU(true))
    fb1:add(nn.SpatialConvolution(48,128,3,3,1,1,1,1))
    fb1:add(nn.ReLU(true))
    fb1:add(nn.SpatialConvolution(128,192,3,3,1,1,1,1))
    fb1:add(nn.ReLU(true))
    fb1:add(nn.SpatialConvolution(192,192,3,3,1,1,1,1))
    fb1:add(nn.ReLU(true))
    fb1:add(nn.SpatialConvolution(192,128,3,3,1,1,1,1))
    fb1:add(nn.ReLU(true))
    fb1:add(nn.SpatialMaxPooling(2,2,2,2))
    view = 12
    local fb2 = fb1:clone() -- branch 2
    for k,v in ipairs(fb2:findModules('nn.SpatialConvolution')) do
        v:reset() -- reset branch 2's weights
    end
    features:add(fb1) features:add(fb2) features:cuda()
    -------------- the error is at this line --------------
    features = makeDataParallel(features, nGPU)
    local classifier = nn.Sequential()
    classifier:add(nn.View(256*view*view))
    classifier:add(nn.Dropout(0.5))
    classifier:add(nn.Linear(256*view*view, 4096))
    classifier:add(nn.Dropout(0.5))
    classifier:add(nn.Linear(4096, 4096))
    classifier:add(nn.Tanh())
    classifier:add(nn.Linear(4096, bot))
    classifier:add(nn.Tanh())
    classifier:cuda()
    local model = nn.Sequential():add(features):add(classifier)
    return model
end
and the other one is:
function makeDataParallel(model, nGPU)
    if nGPU > 1 then
        print('converting module to nn.DataParallelTable')
        assert(nGPU <= cutorch.getDeviceCount(), 'number of GPUs less than nGPU specified')
        local model_single = model
        model = nn.DataParallelTable(1)
        for i = 1, nGPU do
            cutorch.setDevice(i)
            model:add(model_single:clone():cuda(), i)
        end
    end
    cutorch.setDevice(1)
    return model
end
The error I get is:
[C]: in function 'error'
...a/torch/install/share/lua/5.1/cunn/DataParallelTable.lua:337: in function 'share'
/home/andrea/torch/install/share/lua/5.1/nn/Container.lua:97: in function 'share'
main.lua:123: in main chunk
[C]: at 0x00406670
Do you possibly know where the error is? Sorry, but I'm kind of new at this and I cannot find a way to figure it out. Maybe I'm getting the net structure wrong. Thanks in advance.

Level set implementation

I have a question about a level-set implementation.
In the article "Distance Regularized Level Set Evolution and Its Application to Image Segmentation" by Chunming Li and Chenyang Xu, you can find a diffusion equation: equation (14), page 4 of the PDF (sorry, but I'm not allowed to post images).
For dp(s) = 1 - 1/s the implementation is [MATLAB]:
distRegTerm = 4*del2(phi)-curvature;
where:
[phi_x,phi_y]=gradient(phi);
s=sqrt(phi_x.^2 + phi_y.^2);
smallNumber=1e-10;
Nx=phi_x./(s+smallNumber);
Ny=phi_y./(s+smallNumber);
curvature=div(Nx,Ny);
and it is OK, because for that dp the equation is (15)
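A sketch of that check in LaTeX (my reading of the code; note that MATLAB's del2 approximates one quarter of the Laplacian, so 4*del2(phi) is the Laplacian):

\[
\mathrm{div}\!\left(d_p(|\nabla\phi|)\,\nabla\phi\right)
= \mathrm{div}\!\left(\nabla\phi - \frac{\nabla\phi}{|\nabla\phi|}\right)
= \Delta\phi - \mathrm{div}\!\left(\frac{\nabla\phi}{|\nabla\phi|}\right),
\]

which is exactly 4*del2(phi) - curvature in the code above.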
I don't understand why for equation (10), where p(s) is given by (16), the code is:
distRegTerm = distReg_p2(phi);
where:
function f = distReg_p2(phi)
    [phi_x, phi_y] = gradient(phi);
    s = sqrt(phi_x.^2 + phi_y.^2);
    a = (s>=0) & (s<=1);
    b = (s>1);
    ps = a.*sin(2*pi*s)/(2*pi) + b.*(s-1);
    dps = ((ps~=0).*ps + (ps==0)) ./ ((s~=0).*s + (s==0));
    f = div(dps.*phi_x - phi_x, dps.*phi_y - phi_y) + 4*del2(phi);
I don't understand the last line of this function.
Thanks