I'm using Dymola 2019 and have to use more than 50 instances of CombiTimeTable in my model to load a CSV file larger than 200 MB (yearly weather data with a resolution of 60 s).
Increasing the model further resulted in the following error message in Dymola:
Error: The following error was detected at time: 0
Memory allocation error
FixInitials:Init: Integrator failed to start model.
A dirty fix is possible if I split the big CSV file into smaller chunks, but that is obviously not the best solution to my problem.
How can I increase the memory available to Dymola, and what is the best practice for loading big CSV files? Is another format more efficient?
Setting Advanced.CompileWith64 = 2 inside Dymola should generate a 64-bit dymosim executable that avoids this issue.
Specifically, the message "Memory allocation error" only occurs when malloc runs out of dynamic memory.
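A minimal sketch of how that flag could be set in a Dymola script before translation; the model name and stop time are placeholders, not from the question:

    // Dymola command window or .mos script
    Advanced.CompileWith64 = 2;                  // request a 64-bit dymosim
    translateModel("MyPackage.MyModel");         // placeholder model name
    simulateModel("MyPackage.MyModel", stopTime=31536000);  // one year in seconds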
Related
We are experiencing a very rare crash on our single MongoDB instance. The system was fine for years, running in a VMware VM with 16 GB of memory. Now, within a short time, we have twice hit the following:
WT_CURSOR.search: read checksum error for 4096B block at offset 20169220096: block header checksum of 538976288 does not match expected checksum of 2817788143.
I found the location where the error is generated in the WiredTiger code (block_read.c) and figured out that the reported checksum (538976288) is actually 0x20202020, i.e. four space characters.
That is too nice a number to be random. Load keeps growing on that system; we are not good at throwing old data away. We can 'solve' the error by dropping the affected collection, but only after the crash and with data loss.
Any pointers on where to look? My suspicion is an out-of-bounds write, but I have no proof so far :-(
I am trying to simulate a large Modelica model in Dymola. This model uses several records that define time series input data (data with 900 second intervals for 1 year), which it reads via the CombiTimeTable model.
If I limit the records to only contain the data for 2 weeks (also 900 second intervals), the model simulates fine.
With the yearly data, translation seems to run successfully, but the simulation fails. The dslog file contains the message "Not enough storage for initial variable data".
This happens on a Windows 10 system with 8 GB RAM as well as on a Windows 7 system with 32 GB RAM.
Is there any way to avoid this error and get the simulation to run? Thanks in advance!
The recommended way is to keep the time-series data not within the records (that is, in your model or library) but in external data files. The CombiTimeTable supports reading from both text files and MATLAB MAT files at simulation run-time. You will also benefit from shorter translation times.
You can still organize your external files relative to your library by means of Modelica URIs, since the CombiTimeTable (as well as the other table blocks) already calls the loadResource function. The recommended way is to place these files in a Resources directory of your Modelica package, as in the sketch below.
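A minimal sketch of such a file-based table; the library name, file name and column selection are placeholders, not from the question:

    // Modelica: CombiTimeTable reading a text file from the library's Resources directory
    Modelica.Blocks.Sources.CombiTimeTable weatherData(
      tableOnFile=true,
      tableName="weather",
      fileName=Modelica.Utilities.Files.loadResource(
          "modelica://MyLibrary/Resources/Data/weather.txt"),
      columns=2:4) "Yearly weather data, 900 s resolution";

The text file uses the MSL table format: a first line "#1", then a matrix declaration such as "double weather(35041,4)", followed by the data rows with time in the first column.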
I have built a TreeBagger model in MATLAB which I am trying to compile in R2016a using the Application Compiler. I did this successfully a few days ago, despite the fact that the file for the TreeBagger model was about 2 GB.
I retrained my model because I had made some small changes to my data, and now when I try to compile I get an error saying that I am out of disk space, even though I have around 250 GB of free disk space on the drive. More precisely, the message was "Out of disk space. Failed to create CTF file. Please free -246249051088 bytes and re-run Compiler."
I even retried on a drive with about 2.5 TB of free space and got the same issue. Any ideas? Thanks for any help.
I'm trying to import data from an MS Access database in MATLAB and I'm getting the following error:
Error using database/fetch (line 37)
[Microsoft][ODBC Microsoft Access Driver] The query cannot
be completed. Either the size of the query result is larger
than the maximum size of a database (2 GB), or there is not
enough temporary storage space on the disk to store the
query result.
I have 4 GB of RAM and 60 GB of free hard-drive space, so I don't think it's a space problem. The database is 1022 MB.
Are you by any chance asking for a huge amount of data?
A few nice outer joins perhaps, or multiple combinations of tables?
My guess is that if this is what causes the problem, you should just split the query up into a few pieces, as sketched below, and it will work.
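A minimal sketch of splitting one large query into several smaller fetches; the data source, table and column names are placeholders, not from the question:

    % MATLAB: fetch the result in chunks (here: per year) instead of one huge query
    conn = database('MyAccessDSN', '', '');          % existing ODBC data source
    chunks = {};
    for y = 2010:2015
        sql = sprintf('SELECT * FROM Measurements WHERE MeasYear = %d', y);
        chunks{end+1} = fetch(conn, sql);            %#ok<AGROW>
    end
    data = vertcat(chunks{:});                       % combine the partial results
    close(conn);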
I haven't had much luck with GD::Graph when trying to plot larger data arrays.
What I have is two arrays: one with 2 million float/integer values, the other of varying length but less than 2 million. I'm trying to plot them on the same line graph (I do create a 0..2000000 index array for the x axis). Everything worked when tested with 1 million values.
Larger array sizes throw:
Not a GD::Image object at
/usr/local/lib/perl5/site_perl/5.8.9/GD/Graph.pm
line 182
I'm not even sure where in my script it fails; there are no other errors.
I did not find anything in the official documentation about memory or data limits of GD::Graph.
Additional info that might help you help me:
my script attempts to save the graphs to a file (.gif)
I'm pretty sure this is not due to my web server's memory limit (that would show a message about a killed Perl process)
Thanks
Could you maybe post the code in question so we can inspect it and see what's up? At first guess, it does sound like a memory issue related to an inability to allocate that much storage: the allocation returns a null pointer in the underlying system, so Perl can't actually create the GD object. The raw data alone is about 16 MB per array (2,000,000 64-bit values, assuming you're on a 64-bit host), and Perl's per-scalar overhead multiplies that several times over. But it could just be something syntactical. In any case, checking what plot() returns, as in the sketch below, should at least surface a readable error message.
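A minimal sketch of that check, assuming GD::Graph::lines and GIF output as in the question; the generated data only stands in for the real arrays:

    #!/usr/bin/perl
    # Check what plot() returns before writing the GIF, so an allocation
    # failure surfaces as a readable error instead of "Not a GD::Image object".
    use strict;
    use warnings;
    use GD::Graph::lines;

    my @y = map { $_ % 100 } 0 .. 1_999_999;   # placeholder for the real values
    my @x = 0 .. $#y;                          # index array for the x axis

    my $graph = GD::Graph::lines->new(1200, 600);
    my $gd = $graph->plot([ \@x, \@y ])
        or die 'plot() failed: ' . $graph->error;

    open my $out, '>', 'graph.gif' or die "open graph.gif: $!";
    binmode $out;
    print {$out} $gd->gif;
    close $out;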