How to read a time-series data file in Modelica

I am required to read time-series data (CSV, for example) in Modelica, specifically using the OpenModelica compiler (omc). I did some internet searching and found the NcDataReader2 library. It works in Dymola but not with the OpenModelica compiler. My test code is like this:
der(x) = t;
t = NcDataReader2.ncEasyGet1D("datafile.nc", "temperature", time);
der(y) = q;
q = NcDataReader2.ncEasyGet1D("datafile.nc", "flow", time);
When I try to run it on open modelica, I get the following error:
Translation 09:21:41 0:0-0:0 Error building simulator. Build log:
gcc -falign-functions -msse2 -mfpmath=sse -I"C:/OpenModelica1.9.0//include/omc" -I. -DOPENMODELICA_XML_FROM_FILE_AT_RUNTIME -c -o TimeSeries.NcTest.o TimeSeries.NcTest.c
TimeSeries.NcTest.c:19:28: error: ncDataReaderEA.h: No such file or directory
mingw32-make: *** [TimeSeries.NcTest.o] Error 1
I think the reason it works in Dymola is that Dymola ships with a C compiler and presumably compiles the library's C sources on the fly. Unfortunately, I have to use OpenModelica.
Can anyone tell me whether this error can be fixed for the OpenModelica compiler, OR whether there is any alternative way to read time-series data files in Modelica (with the OpenModelica compiler)?
Thanks in advance

I am late by two years, but here is a solution.
Use the block
Modelica/Blocks/Sources/CombiTimeTable
Your txt file should follow this format (the header line gives the table name and its actual row and column counts):
#1
double tableName(3,3)
0  0.1  32
1  0.2  35
2  0.3  38
where the first column must be time in ascending order and the other columns are the respective data.
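A minimal model reading such a file could look like this (a sketch; the file and table names are placeholders, and the columns parameter selects which data columns to output):

```modelica
model TableTest
  Modelica.Blocks.Sources.CombiTimeTable table(
    tableOnFile = true,
    tableName = "tableName",    // must match the name in the txt file
    fileName = "datafile.txt",  // placeholder path
    columns = {2, 3});          // output both data columns
  Real temperature, flow;
equation
  temperature = table.y[1];     // first data column
  flow = table.y[2];            // second data column
end TableTest;
```

The block interpolates between the time points by default, so you can use its outputs directly in equations.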

Related

pycuda.driver.pagelocked_empty() returns empty list

I have retrained a model with TensorFlow v2 and I want it to run on a Jetson Nano GPU. For that I had to save the model from .h5 to .pb, then to .onnx, and then to .trt (for which I also had to do the ONNX conversion with opset 12).
Now that I can finally run the model, I am reusing old code that used to work with the old .trt model, but it fails when allocating page-locked memory:
import pycuda.driver as cuda
....
for binding in engine:
    size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    host_mem = cuda.pagelocked_empty(size, dtype)
    device_mem = cuda.mem_alloc(host_mem.nbytes)
This results in an error:
pycuda_driver.LogicError: cuMemAlloc failed: invalid argument
After further debugging, it turns out cuda.pagelocked_empty(size, dtype) returns [] at the output binding separable_conv2d_29, with size=0 and dtype=numpy.float32. With the working code, the size is > 0 for both input and output bindings.
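That zero size usually means the ONNX-converted engine left the output with a dynamic (or zero) dimension, so the computed volume collapses to 0, the host buffer is empty, and cuMemAlloc then gets a 0-byte request. A small guard (a plain-Python sketch, not part of the TensorRT API) makes the failure explicit before allocation:

```python
def binding_size(shape, max_batch_size=1):
    """Multiply out a binding shape, rejecting dynamic (-1) or zero dims.

    With such dims the computed volume becomes 0 or negative, which is
    exactly what makes pagelocked_empty() return [] and cuMemAlloc fail.
    """
    size = max_batch_size
    for dim in shape:
        if dim <= 0:
            raise ValueError(
                "non-positive dimension %d in shape %s; fix the binding "
                "shape (e.g. via an explicit input shape or optimization "
                "profile) before allocating" % (dim, tuple(shape)))
        size *= dim
    return size
```

If the guard fires for separable_conv2d_29, the shape itself is wrong in the engine, and the fix belongs in the export/conversion step rather than in the allocation loop.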

R Markdown: Ready to publish regression table in WORD using miceadds::glm.cluster

I am trying to do a logistic regression with robust clustered error using miceadds::glm.cluster:
model1 <- miceadds::glm.cluster(
  data = df_clean3,
  formula = recall ~ log(Population) + NoComplaintsReported + NoCrashesFiresReported +
    NoInjuriesReported + NoFatalityIncidentsReported + NoOtherFailuresReported +
    YearOpen + label,
  cluster = "label",
  family = "binomial")
I want to report a ready-to-publish regression table in Microsoft Word. I have tried the methods below, but none of them produces the "professionally prepared" table I am looking for.
Can someone help me with this?
1- tab_df (from the sjPlot library): it stops with this error message:
Error in UseMethod("family") :
no applicable method for 'family' applied to an object of class "NULL"
2- Stargazer: the output table does not look neat.
3- summ (from jtools library): the output table does not look neat.
4- apa.reg.table (from apaTables library): it stops with this error message:
Error in 2:last_model_number_predictors : argument of length 0
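One workaround worth trying (a sketch, untested against your data: it assumes glm.cluster keeps the underlying glm fit in $glm_res and that summary() returns the cluster-robust coefficient matrix) is to export the plain glm fit with texreg and swap in the robust standard errors via its override arguments:

```r
library(texreg)

cs <- summary(model1)  # cluster-robust coefficient table from miceadds

# htmlreg writes an HTML file that Word opens as a formatted table;
# override.se / override.pvalues replace the naive glm values
htmlreg(model1$glm_res,
        override.se = cs[, "Std. Error"],
        override.pvalues = cs[, "Pr(>|z|)"],
        file = "regression_table.doc")
```

This sidesteps the family() dispatch error, since texreg sees an ordinary glm object rather than the glm.cluster wrapper.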

Matlab script edit - model recognition

I am trying to alter the properties of a MATLAB Function block in Simulink. The commands recommended here are below.
open_system('my_model');
S = sfroot
B = S.find('Name','myBlockName','-isa','Stateflow.EMChart');
'myBlockName' is a MATLAB Function block in the model my_model. B comes back as a '0×1 empty handle', and S.Name gives 'DefaultBlockDiagram'. It seems the opened model is not being recognized.
The correct command is slroot, not sfroot.
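With that fix the snippet becomes (model and block names are the placeholders from the question):

```matlab
open_system('my_model');
S = slroot;   % slroot (per the fix above) instead of sfroot
B = S.find('Name', 'myBlockName', '-isa', 'Stateflow.EMChart');
```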

How to write a script for Wolfram SystemModeler to run several simulations?

I want to run around 100 simulations of my model, changing two parameters, f and TLoad, and track the changes in the phase currents (currentSensor.i[1], etc.).
Now I'm stuck with the documentation on the Wolfram website, because there is no definite explanation of how to use scripting with SystemModeler. I found, for example, this link on the Wolfram site with some code, but no explanation of which command line I should use it in.
I downloaded the WolframScript program and tried to open my model with wolframscript -file SMPM_VoltageSource_Inverter.mo, but it says ToExpression::sntx: Invalid syntax in or before ... even though my model simulates fine, without any errors, in the Simulation Center.
Can someone explain to me:
Is it possible to write scripts?
If Yes:
How can I simulate my model?
How can I do a parameter sweep of f and TLoad? Is it as described in the link?
Is it possible to export the data of currentSensor.i[1] as a CSV file? If so, how?
Thank you for any help!
I don't know about Wolfram, sorry, but for OpenModelica the following works:
// to load the model from a file use
// loadFile("fileName.mo");
loadString("
model M
  parameter Real a = 1;
  Real x;
equation
  x = a * sin(time);
end M;
"); getErrorString();

buildModel(M); getErrorString();

for a in {1,2,3,4} loop
  str_a := String(a); getErrorString();
  system("./M -override a=" + str_a); getErrorString();
  // on Windows use
  // system("M.exe -override a=" + str_a); getErrorString();
  system("mv M_res.mat " + "M_" + str_a + ".mat");
end for;
Put this in a file named, for example, model.mos and call it from the terminal or command line (depending on your OS) with omc model.mos, if you have OpenModelica installed. This generates one result file per parameter value.
EDIT: I realized the original script only saved the last value of x, and you probably want the full output, so I changed the .mos file. Each result is now saved in a different file; if you want CSV instead of .mat, you just have to change the output format in the generated XML.
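Alternatively, the simulation executable's -override flag can also change simulation settings, so (if your OpenModelica version supports it; check the simulation-flags documentation for the exact names) you can ask for CSV output directly instead of editing the XML. A sketch of the modified loop:

```
for a in {1,2,3,4} loop
  str_a := String(a); getErrorString();
  // outputFormat=csv asks the runtime to write the result file as CSV;
  // -r sets the result file name
  system("./M -override a=" + str_a + ",outputFormat=csv -r=M_" + str_a + ".csv"); getErrorString();
end for;
```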

Grib2 to PostGIS raster -- anyone get this to work?

I have an application for which I need to import U.S. National Weather Service surface analyses, which are distributed as grib2 files. I want to pull those into PostGIS 2.0 rasters, do some calculations and modeling, and display the data and model results in GeoServer.
Since grib2 is a GDAL-supported format, the supplied raster2pgsql utility should be able to slurp a grib2 right into PostGIS-compatible SQL, and once it's there, GeoServer ought to be able to handle it. However, I'm running into problems which have no obvious solutions -- not obvious to me, at any rate! Raster2pgsql runs, apparently without errors, producing SQL, and running the SQL creates what looks very much like a raster. But GeoServer can't display it -- the bounds, in particular, come out looking weird (0,0 -1,-1) and "preview layer" just throws a NullPointerException.
Has anyone been down this road already? I've got issues as basic as not knowing what the SRID should be for the data (4326, perhaps?). I don't expect anyone to debug my problems for me but if someone has already got this toolchain working, or part of it, I can plug known-good things in and see what I can discover.
TIA,
rw
Updated: Per Mike, here is the coordinate-system information from one of the files; I elided the other 749 bands in the gdalinfo output. Note that the filename is different -- running gdalinfo on my original file showed something was wrong with it: gdalinfo couldn't read it. New (35 MB!) file here.
Gdalinfo output:
Driver: GRIB/GRIdded Binary (.grb)
Files: ruc2.t00z.bgrb13anl.grib2
Size is 451, 337
Coordinate System is:
PROJCS["unnamed",
    GEOGCS["Coordinate System imported from GRIB file",
        DATUM["unknown",
            SPHEROID["Sphere",6371229,0]],
        PRIMEM["Greenwich",0],
        UNIT["degree",0.0174532925199433]],
    PROJECTION["Lambert_Conformal_Conic_2SP"],
    PARAMETER["standard_parallel_1",25],
    PARAMETER["standard_parallel_2",25],
    PARAMETER["latitude_of_origin",0],
    PARAMETER["central_meridian",265],
    PARAMETER["false_easting",0],
    PARAMETER["false_northing",0]]
Origin = (-3332155.288903323933482,6830293.833488883450627)
Pixel Size = (13545.000000000000000,-13545.000000000000000)
Corner Coordinates:
Upper Left (-3332155.289, 6830293.833) (139d51'22.04"W, 54d10'20.71"N)
Lower Left (-3332155.289, 2265628.833) (126d 6'34.06"W, 16d 9'49.48"N)
Upper Right ( 2776639.711, 6830293.833) ( 57d12'21.76"W, 55d27'10.73"N)
Lower Right ( 2776639.711, 2265628.833) ( 68d56'16.73"W, 17d11'55.33"N)
Center ( -277757.789, 4547961.333) ( 98d 8'30.73"W, 39d54'5.40"N)
Band 1 Block=451x1 Type=Float64, ColorInterp=Undefined
Description = 1[-] HYBL="Hybrid level"
Metadata:
GRIB_UNIT=[Pa]
GRIB_COMMENT=Pressure [Pa]
GRIB_ELEMENT=PRES
[Etc., Etc., for all 750 bands]
I hope this helps, at least for those coming to this thread later.
Bear in mind that while GeoServer is capable of loading raster data from PostGIS, the default PostGIS store ONLY supports vector data; that's why you get those odd bounds (-1 -1 0 0).
You'll have to add the ImageMosaicJDBC plugin to your GeoServer installation; follow the steps here:
http://docs.geoserver.org/latest/en/user/tutorials/imagemosaic-jdbc/imagemosaic-jdbc_tutorial.html
Got an excellent answer to my problem here. Putting it in as a separate answer.
He recommended using gdalwarp to pull the GRIB2 file into a known SRID, thus:
gdalwarp -t_srs EPSG:4326 original_file.grib2 4326_file.grib2
Then, raster2pgsql works just fine, e.g.
raster2pgsql -M -a 4326_file.grib2 some_sql.sql
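Putting the whole chain together, a sketch (database, schema, and table names are placeholders; -s declares the SRID, -I builds a spatial index, -C applies the standard raster constraints that help clients read the metadata, and -c creates the table where the command above used -a to append):

```shell
# warp the GRIB2 into a known SRS, then load it into PostGIS
gdalwarp -t_srs EPSG:4326 original_file.grib2 4326_file.grib2
raster2pgsql -s 4326 -c -C -I -M 4326_file.grib2 weather.surface_analysis > import.sql
psql -d mydb -f import.sql
```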