Simulation of closed volume gives an error during simulation - Modelica

I modelled a compressor that supplies air at a constant mass flow rate of 0.1 kg/s, and I used a closed volume (from the Modelica Fluid library) to store the air.
Translation completes without errors, but when I run the simulation it fails with an error.


Error in All sample times for this block must be discrete. No continuous or constant sample times are allowed

I'm designing a DSS system. The problem is that when I execute it, this error occurs:
Error in 'DSS_System_withANFIS/Synchronization Unit/Acquisition/Integrate and Dump1': All sample times for this block must be discrete. No continuous or constant sample times are allowed.
Here is my overall system design:
and this is the acquisition subsystem of my synchronization unit:
The error occurs at the Integrate and Dump unit.
Any help?
I found the solution.
I changed the solver to a discrete solver under Configuration Parameters in the Simulink UI.

lsqcurvefit fails depending on platform

I am trying to process an extremely large dataset which requires that I do several million non-linear curve fits. I have acquired a dedicated piece of code that is designed to be used for the data I have collected, which at its heart uses the MATLAB function lsqcurvefit. All works well when I run it on my laptop, except that the fitting is too slow to be useful to me right now, which is not too surprising considering that the model function is quite complicated. To put this in perspective, my laptop can only process about 8000 fits per hour, and I have on the order of tens of millions of fits to do.
Fortunately, I have access to a computing cluster at my institution, which should let me process this data in a more reasonable time frame. The issue is that, despite MATLAB being cross-platform, there seems to be some significant difference between what the code does on my Windows laptop and what it does on the cluster. Running exactly the same code, on exactly the same data, with the same version of MATLAB, the code on the Unix cluster fails with the following error message:
Error using eig
Input to EIG must not contain NaN or Inf.
Error in trust (line 29)
[V,D] = eig(H);
Error in trdog (line 109)
[st,qpval,po,fcnt,lambda] = trust(rhs,MM,delta);
Error in snls (line 311)
[sx,snod,qp,posdef,pcgit,Z] = trdog(x,g,A,D,delta,dv,...
Error in lsqncommon (line 156)
snls(funfcn,xC,lb,ub,flags.verbosity,options,defaultopt,initVals.F,initVals.J,caller,
...
Error in lsqcurvefit (line 254)
lsqncommon(funfcn,xCurrent,lb,ub,options,defaultopt,caller,...
I can confirm that there are no infinities or NaNs in my data, which is what this error message might initially seem to suggest. I can only conclude that the different platform leads to slightly different floating-point behaviour during execution, which probably produces a division by zero, and hence a NaN or Inf inside the solver's trust-region subproblem, somewhere along the way. My question is: how can I make this code run on the cluster?
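As an aside on the failure mode: even when the input data is entirely finite, a least-squares solver can propose trial parameter values for which the model function itself overflows, and the resulting Inf/NaN then surfaces deep inside the solver (here, in eig on the trust-region Hessian). A minimal Python/SciPy analogue, not the original MATLAB code, with a hypothetical exponential model, shows the two defensive measures that usually help: check the data is finite, and clip inside the model so no trial point can overflow.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    # Clip the exponent so extreme trial parameters cannot overflow to Inf,
    # which would otherwise feed NaN/Inf into the solver's internals.
    return a * np.exp(np.clip(-b * x, -700.0, 700.0))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 100)
y = model(x, 2.0, 1.3) + 0.01 * rng.standard_normal(x.size)

# Sanity-check that the data itself is finite before fitting.
assert np.all(np.isfinite(x)) and np.all(np.isfinite(y))

popt, _ = curve_fit(model, x, y, p0=[1.0, 1.0])
print(popt)  # close to [2.0, 1.3]
```

The same guard can be written in the MATLAB model function with min/max on the exponent; it does not explain the platform difference, but it often makes the fit robust to it.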
For reference, my laptop is running Windows 7 Professional 64-bit, with an Intel i5 5200U 2.20GHz x4, and the cluster runs Scientific Linux 6.7 x86_64 with various Intel Xeon processors, with both running MATLAB R2015b.

Extremely low accuracy on own data in caffe

I'm trying to train a network on my own data. The whole dataset consists of 256x256 JPEG images, with 236 classes. The training and validation sets have ~247K and ~61K images, respectively. I made LMDBs from them using the $CAFFE_ROOT/build/tools/convert_imageset utility.
As a starting point, I'm using CaffeNet's topology for my model. During training I come across the odd message "Data layer prefetch queue empty", which I have never seen before.
Moreover, the network initially has an abnormally low accuracy (~0.00378378), and over the next 1000 iterations it reaches at most ~0.01 and does not increase further (it just fluctuates).
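For context, the starting accuracy quoted above is essentially what random guessing gives over 236 classes, which suggests the network has learned nothing at that point (a quick check):

```python
# With 236 classes, a randomly guessing classifier is correct about
# 1/236 of the time -- close to the ~0.00378 starting accuracy above.
n_classes = 236
chance = 1.0 / n_classes
print(round(chance, 5))  # 0.00424
```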
What am I doing wrong, and how can I improve the accuracy?
Runtime log:
http://paste.ubuntu.com/15568421/
Model:
http://paste.ubuntu.com/15568426/
Solver:
http://paste.ubuntu.com/15568430/
P.S. I'm using the latest version of Caffe, Ubuntu Server 14.04 LTS, and a g2.2xlarge instance on AWS.

Hardware co-simulation using a Digilent Atlys FPGA is slow

I'm using Digilent's Atlys FPGA board for image processing, but I'm facing one problem. When I do software co-simulation using a black box, I get the output very quickly, within a minute; but when I generate the hardware co-simulation model and run hardware co-simulation, the output takes a very long time, 20 to 30 minutes. Why is this, and how can I overcome this long runtime?

Simulink Error: The overall system has unobservable mode in z=1

I am using Simulink to model a process with time delay, compensators, and an MPC controller.
The problem is that when I hit the Simulate button, I get this error:
MPC: Utility: Kalman1, Message = Problems encountered when designing the overall state observer(Kalman Filter).
The overall system has unobservable modes in z=1
The overall system Transfer Function is:
     0.2713 z^4 - 1.755 z^3 + 4.05 z^2 - 3.919 z + 1.353
--------------------------------------------------------------
z^7 - 6.21 z^6 + 11.57 z^5 - 3.192 z^4 - 8.476 z^3 + 5.311 z^2
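Evaluating the numerator and denominator of the transfer function above at z = 1 (a quick NumPy sketch, using the coefficients exactly as quoted) shows that both are numerically zero there, i.e. there is a pole-zero cancellation at z = 1, which would be precisely the mode the Kalman filter design reports as unobservable:

```python
import numpy as np

# Transfer-function coefficients as quoted above, in descending powers of z.
num = [0.2713, -1.755, 4.05, -3.919, 1.353]
den = [1.0, -6.21, 11.57, -3.192, -8.476, 5.311, 0.0, 0.0]

# Both polynomials vanish (to within rounding) at z = 1: a pole-zero
# cancellation there, i.e. a mode at z = 1 the observer cannot see.
print(np.polyval(num, 1.0))  # ~0.0003
print(np.polyval(den, 1.0))  # ~0.003
```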
Can anyone help with this error?
Thanks