How to automatically delete Dymola's build files after simulation?

Every time I simulate in Dymola, a number of files that are useless to me are created in the working directory - e.g. dsfinal.txt, dsin.txt, dslog.txt, dsmodel.c, dymosim.exe. I find this annoying, as it clutters up my directory.
Is there a way to keep only the desired output files after a simulation, without having to delete the undesired ones manually?

Those are temporary but necessary files for Dymola. As far as I know, there is no option to delete them automatically. Of course you could script that (see the sketch below), but I don't see much point to it, and some of the files are used by Dymola functionality - e.g. dsfinal.txt is used when a simulation is continued.
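If you do want to script the cleanup, a minimal PowerShell sketch could look like this (the working directory and the file list are assumptions - adjust them to whatever you want to keep):

# cleanup.ps1 - delete Dymola build files from the working directory (hypothetical path)
$workDir = 'C:\Dymola\work'
$buildFiles = 'dsfinal.txt', 'dsin.txt', 'dslog.txt', 'dsmodel.c', 'dymosim.exe'
foreach ($f in $buildFiles) {
    # -ErrorAction SilentlyContinue skips files that do not exist
    Remove-Item -Path (Join-Path $workDir $f) -ErrorAction SilentlyContinue
}

Run it after a simulation, or wire it into whatever process starts your runs.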
Some notes: Those files are created in the working directory, which should contain temporary files only. The working directory can be set via the GUI under File -> Options -> Settings.
A rather common pitfall is that Dymola has both an Open and a Load function.
As the description states, Load does not influence the working directory, whereas Open sets it to the directory from which a file is opened. The latter also applies when opening files e.g. via a double-click in the Explorer. So it is usually better to go with Load.
My advice would be to separate the directories in which models/packages are stored from the working directory. This way the working directory's contents can be fully deleted at basically any time...

Related

Using PowerShell to only read the last month of folders, then copying in 7 days' worth of folders using robocopy

Apologies to start with: I'm new to PowerShell and robocopy.
I have a robocopy command that pulls in any files within its many subfolders that are within a maxage of 7. However, the main folder has a huge number of folders dating back years (and I only need the last 7 days each week it runs), so it is slow, reading every file in every folder before it even starts copying.
It looks like PowerShell commands may be a way to limit the search of files for my robocopy - would this be possible? Currently robocopy searches every file in every folder in my main folder; ideally I would want it to be smart enough to only search, say, a month's worth of folders and then copy over the last 7 days. This would speed up the run time hugely.
If possible, going even further: I only want the csv files in each of the folders in my main folder, but currently robocopy searches the other folders and their files as well, which takes time. All the csv files are in a folder called "run" in each parent folder (each parent folder is a unique number within the main folder).
My robocopy command:
robocopy \\server\mainfolder \\server\new_main_folder /S /maxage:7 /r:0 /w:0
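For reference, the kind of PowerShell pre-filtering asked about here might look like the following sketch (the 30-day window, the "run" subfolder layout and the destination naming are assumptions taken from the question):

# Only visit parent folders modified in the last ~month, then copy just the
# csv files from each folder's "run" subdirectory with robocopy.
$cutoff = (Get-Date).AddDays(-30)
Get-ChildItem '\\server\mainfolder' -Directory |
    Where-Object { $_.LastWriteTime -ge $cutoff } |
    ForEach-Object {
        $run = Join-Path $_.FullName 'run'
        if (Test-Path $run) {
            robocopy $run (Join-Path '\\server\new_main_folder' $_.Name) '*.csv' /maxage:7 /r:0 /w:0
        }
    }

Note that a folder's LastWriteTime only changes when entries directly inside it change, so treat the one-month window as a heuristic, not a guarantee.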
I was going to point you to either FastCopy or FreeFileSync; both handle long file-name paths and work well for me. But I found problems running FastCopy when trying to filter folders the way you described - I wasn't getting the results I expected - so that leaves FreeFileSync. There is a bit of a learning curve with FreeFileSync, but really the only problem/complaint I've had with it is that the XML-based batch script you can use to automate the program kept changing formats, and they weren't providing a way to read the old XML batch scripts with the new version of the software. Maybe that has changed; I haven't looked into it lately.
Maybe other people have had better experiences with RoboCopy, but I found it to take many times longer to do the same job as other copy programs. I don't think FreeFileSync is as fast as FastCopy, but I've never seen it behave as badly as what I experienced with RoboCopy.
The way FreeFileSync works is:
You define one or more source/destination pairs.
There is a global setting at the top that sets the defaults for all copy pairs.
There are individual settings per copy pair that, when set, override the global settings.
In the Filter tab of the settings you can set "Time span:" to "Last x days:" and set it to the 7 days that you want.
You can change the include pattern from * to something like \run\*.csv. I didn't try that exact pattern, but the patterns I did try worked as expected (unlike FastCopy).
The Synchronization tab is the tricky/fun one. You can write logs, do versioning, tell the system to shut down or restart when done, maintain a database for tracking moved files (the "Detect moved files" checkbox), and make all kinds of adjustments to how it behaves when files don't match.
When done, there are, I believe, at least two options for saving the configuration - though I've always just created the XML-based batch script and called it from another scripting language or an icon on the desktop.
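Once such a batch job has been saved, invoking it from a plain batch file is straightforward. A sketch, assuming a default install location and a job file you have created yourself (FreeFileSync batch jobs are saved as .ffs_batch files):

rem run the saved FreeFileSync job (paths are assumptions)
"C:\Program Files\FreeFileSync\FreeFileSync.exe" "C:\Backup\weekly_copy.ffs_batch"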

How to create a script that uses a path list as a reference for copying files in PowerShell or a .bat script

I'm looking for a way to automate archiving where, after I plug in my two external drives, I can copy all my resources. The problem is that I have different file structures on my laptop and on both external drives, so I need to select specific folders to be copied. That means I can't select one root folder and copy it straight over. I tried to find a way to declare more than one path in the cp command and in the copy command, without success. An example path:
/my_programming_stuff
/folder1
/folder2
/folder3
/folder4
I want to select only the first 3 folders and copy them to external drive 1 and external drive 2. The idea is to create a .bat file that will copy everything at once (in the best-case scenario it will be copied to both external drives simultaneously, which would be much faster). Another problem is that I need to bypass the NTFS long-path limitation (max. 260 characters).
Flags that I want to use:
- copy the files and directories and all of their attributes, including ownerships and permissions
- recursively copy directories and their contents
- when copying files from one directory to another, only copy files that either don't exist or are newer than the existing corresponding files in the destination directory
- data verification (so it's certain that the copy was verified)
- progress bar with an ETA
Until now I have been using Total Commander for this, but every day I need to pick only a few folders to be copied, which takes time and is inefficient.
I have experience with Bash and PowerShell, but I am not sure how to handle this topic.
Create a static batch file with robocopy commands (a sketch follows below). I think /copyall is the only switch you need to specify for all of this; the other defaults should satisfy the requirements.
https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/robocopy
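A sketch of such a batch file, mapping the requested behaviour onto robocopy switches (the drive letters and destination folders are assumptions; note that robocopy has no built-in post-copy data verification, and it handles paths longer than 260 characters by default):

rem archive.bat - copy three folders to both external drives (hypothetical paths)
rem /COPYALL  copy data, attributes, timestamps, security, owner and auditing info
rem /E        recurse into subdirectories, including empty ones
rem /XO       skip files that are older than the existing copy in the destination
rem /ETA      show the estimated time of arrival for each file being copied
robocopy "C:\my_programming_stuff\folder1" "E:\folder1" /COPYALL /E /XO /ETA
robocopy "C:\my_programming_stuff\folder2" "E:\folder2" /COPYALL /E /XO /ETA
robocopy "C:\my_programming_stuff\folder3" "E:\folder3" /COPYALL /E /XO /ETA
robocopy "C:\my_programming_stuff\folder1" "F:\folder1" /COPYALL /E /XO /ETA
robocopy "C:\my_programming_stuff\folder2" "F:\folder2" /COPYALL /E /XO /ETA
robocopy "C:\my_programming_stuff\folder3" "F:\folder3" /COPYALL /E /XO /ETA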
I think your time will be better spent learning how to use either FastCopy or FreeFileSync. I used FreeFileSync some years ago but got fed up with the constantly changing format of the XML file it uses for starting a backup, so I switched to FastCopy. But it looks like FreeFileSync may be getting their act together, and I aim to do some experiments over the summer to see if I want to switch back to it.
Both can handle the long-filename issues, both can be executed from a batch file, and both seem to be solid products, but FreeFileSync has more features - and is more bloated because of them. Speed-wise, though, I think FastCopy is probably one of the better products out there, and it is very streamlined in use and design.

ClearCase, a makefile use case

I have an issue with the clearmake command in IBM ClearCase.
I use the clearmake command to run my own makefile so I can build my program from the C source code.
I want to put a command in the makefile, like shell cleartool -some-command, to ignore all checkouts and all private files.
The disadvantage is that in the config spec I must include the rule element * CHECKEDOUT.
But in my use case I want to work on files and at the same time be able to compile/build with the old files, so I can work faster and don't have to change views or edit config specs.
So what I am wondering is whether I can ignore the checked-out files with a command, without losing them.
Could you give me a solution?
I want to work on files and at the same time be able to compile/build with the old files
It would be easier to use two different snapshot views loaded on the disk at two different places.
In one (where no checkout has ever been done), you can set all files writable (through Windows, not ClearCase): all the files become hijacked, but modifiable, which is fine for compilation/testing purposes.
In the other view, you keep your checked-out files and your work in progress (but you do not run your clearmake there).
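For illustration, the config spec of the build view could simply omit the CHECKEDOUT rule, so it always selects the latest checked-in versions (a sketch; the VOB path and load rule are assumptions):

element * /main/LATEST
load \myvob\src

The development view keeps the usual element * CHECKEDOUT rule on top, so the two views never interfere.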

How can I add a custom package to the startup path in Dymola/Modelica?

I have a custom package that I find myself reusing repeatedly in Dymola models, and I'd like to put this package in a common directory that is automatically loaded whenever I start Dymola. My current strategy is to load the custom package when a model I'm working on is loaded, and then use Save Total. This is not elegant, because the contents of the custom package end up saved in multiple locations across my hard drive, and if I change one of them, the changes are not reflected everywhere. I would like a more robust way of distributing this custom package to all of my models. Is there a way to tell Dymola to automatically load my custom package every time?
The trick is to add the following lines to settings.mos in C:/Users/USERNAME/AppData/Roaming/Dynasim:
Utilities.setenv("MODELICAPATH", "C:/Users/USERNAME/Documents/Dymola");
openModel("C:/Users/USERNAME/Documents/Dymola/UserDefined/package.mo");
(Forward slashes are used here because the backslash is an escape character in Modelica strings.)
The first line adds the directory to the path that Dymola uses to search for packages that have not been loaded before the first run of a model, and the second line loads the specified package. These two commands may be somewhat redundant, but I do both because I want to make sure my custom packages are on the path, in addition to loading the UserDefined package.
Two suggestions. First, you need to add your package to the MODELICAPATH. You'll have to consult the Dymola documentation to figure out exactly what you need to do, but normally this means setting an environment variable that gives a list of directories (separated by ;) to be searched for your package. That will put it on the path so Dymola can find it automatically, but it won't be loaded until it is needed.
If you want it to always appear in the package browser, you'll probably need to set up a .mos file (script) to load it. Dymola has that capability, but you'll have to read the manual to figure out what that script has to be called and where Dymola expects to find it.
I hope that helps.
In the installation folder of Dymola 2018 -> insert -> dymola.mos
I've added the lines:
Utilities.setenv("MODELICAPATH", "C:\Users\XXXX\Documents\Dymola");
openModel("C:\Users\XXXX\Documents\Dymola\DCOL\package.mo");
openModel("C:\Users\XXXX\Documents\Dymola\Annex60 1.0.0\package.mo");
Now I don't understand the Utilities line, as the DCOL package loads fine without it, and the 'Utilities' package that gets added to the package menu is useless.
But it does not open the Annex60 package.
I've tried a lot of different combinations and can't get multiple packages to load. I doubt that "cd" and "Advanced.ParallelizeCode", which also appear in that file, work.
The accepted answer does not work since Dymola 2017 FD01, as the file settings.mos is not used anymore. User settings are stored in the setup.dymx file instead, located in
C:\Users\USERNAME\AppData\Roaming\DassaultSystemes\Dymola
In contrast to the settings.mos file, you cannot include custom lines of Modelica script in setup.dymx.
The answer using dymola.mos still works, but you need admin privileges to modify this file.
Here is a simple solution which works with all Dymola versions:
You can pass a .mos script as the first parameter to dymola.exe.
This can e.g. be done like this:
Create a .mos script somewhere with commands like openModel(), etc.
Create a desktop shortcut to Dymola.exe
Open the properties of the shortcut and add the path to the .mos script in the Target text field. It will then look something like this:
"C:\Program Files\Dymola 2018 FD01\bin64\Dymola.exe" "C:\<some-path>\startup.mos"
Start Dymola with the desktop shortcut. The script will be executed, and any errors or messages are displayed in the Commands window.
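A minimal startup.mos could look like this (the package paths are assumptions; any Dymola script command can go in here):

// startup.mos - load the packages that should always be available
openModel("C:/Users/USERNAME/Documents/Dymola/MyLibrary/package.mo");
openModel("C:/Users/USERNAME/Documents/Dymola/AnotherLibrary/package.mo");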
Another suggestion, where you don't need to hardcode your package into an environment variable of your operating system (and which is maybe safer for inexperienced programmers):
Go to the folder where Dymola is installed (e.g. C:\Program Files\Dymola 2020).
Search for the Dymola.mos file in the 'insert' folder.
Open the script (e.g., in notepad++)
Add the link(s) to your Dymola-library-package.mo file(s) here with the openModel statement
e.g., openModel("C:/IDEAS/package.mo");
Save the script. Now, every time you open Dymola, your libraries will be loaded automatically.

Does MATLAB have a matlabrc file?

Today I stumbled upon this thread:
http://www.mathworks.com/matlabcentral/newsreader/view_thread/112560
The question is basically how to make MATLAB read your startup.m file regardless of where you start your MATLAB session.
One of the solutions offered was:
One solution would be to ask the system administrator to add a few lines
to "matlabrc.m" that adds some pre-determined folder in the user's home
directory to the MATLAB path (say, ~/.matlabstart). Then each user can
have their own "startup.m" file inside this folder.
What I ended up doing in my system (OS X) was to add a startup.m file in:
/Applications/MATLAB_R2011a.app/toolbox/local/
In this startup.m file I added:
if exist([getenv('HOME') '/.matlabrc/startup.m'], 'file')
    % run the user's personal startup script if it exists
    run([getenv('HOME') '/.matlabrc/startup.m']);
end
That way users have the option of creating the hidden folder ~/.matlabrc and putting a startup.m file inside it. In that startup file they can tell MATLAB what to execute whenever it starts, regardless of the directory it was started from. An example of what I added to my own personal startup.m file is
addpath(genpath('/Users/jmlopez/matlabcode/'))
Now I can add as many folders as I like inside that directory, and all of them will be added to the path automatically every time I start MATLAB.
The question is: did MATLAB already provide a special file like the one I created, or did I just go through all this trouble to accomplish what I wanted? If it's the latter, why doesn't MATLAB provide this? It is such a pain in the ass to add directories to the MATLAB path when you do not have admin permissions, and I do not want to carry my startup.m file to every directory I go to. Can someone shed some light on this, please?
You can save the pathdef file (which stores all the paths you add) to a custom directory. The problem, however, is that when MATLAB starts, it doesn't automatically know which custom directory you used in the previous session.
That's where the MATLABPATH environment variable comes in, because it allows you to set the MATLAB starting path yourself. On Linux this is simply done by setting the environment variable before starting MATLAB (from a terminal / in your .bashrc / ...):
export MATLABPATH=$HOME/.matlab
This way you can let every user have their own pathdef file, which solves the problem of having to add the paths manually at startup.
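From within MATLAB, writing the pathdef file to that custom directory can be done with savepath (the target location below is an assumption matching the export above):

% save the current path to a per-user pathdef file
savepath(fullfile(getenv('HOME'), '.matlab', 'pathdef.m'))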
EDIT
I tested whether adding startup.m to that MATLABPATH directory works, i.e. does MATLAB run that startup file? ... and it does. I think it doesn't work for you because there is another startup.m file in some other (higher-priority) directory (probably matlabroot), so that one takes precedence. My only startup file is in MATLABPATH, so there is only one choice.
EDIT2
Nope, I added a startup.m to the matlabroot directory, and my own startup file in .matlab still gets run. Are you sure you set MATLABPATH correctly before you started MATLAB?