Make a package read-only in Modelica

Is there a way to make a user-created class/package read-only in Modelica, preferably through annotations? "Make Modelica class read-only in Dymola" describes a Dymola option, but I am using OpenModelica. I need to verify a package across two of its versions, and since both versions are editable I keep unknowingly making modifications to the older one. Thanks in advance.
I tried to search the OpenModelica documentation to see if any OM-specific annotations are available, but I couldn't spot any. I am pretty sure I simply missed them, probably by using a bad keyword.

In OMEdit there are two ways to open a library: either use "File->System Libraries" or "File->Open Modelica/Library File(s)".
The system libraries view only shows packages installed in $HOME/.openmodelica/libraries (on Linux; the path differs on Windows), which are installed there by the package manager or placed there manually. Libraries loaded through "File->System Libraries" are always read-only.
If you instead load the same library by pointing to its package.mo file, it is opened writable.
You can also mark a class read-only at the filesystem level by removing write permission from its file (and if you use a hierarchical file structure instead of the whole library in one file, you can restrict editing to only certain parts this way).
When loading an encrypted library, it is possible to prevent certain operations on the package using annotations, but editing is always restricted.
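As a rough sketch of that last point, the Modelica specification defines a Protection annotation for access control on protected/encrypted classes. The package name and the access level below are purely illustrative, and how much is actually restricted depends on the tool and only applies to encrypted libraries:

package MyLockedLibrary
  // Hypothetical example: a tool may show this package's icon and documentation,
  // but not its diagram or Modelica source text.
  annotation(Protection(access = Access.documentation));
end MyLockedLibrary;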

Related

Modelica libraries use different MSL versions

I want to use two Modelica libraries together in Dymola, so for convenience I wrote a little script, loadLibraries.mos, that just opens the two libraries.
But they use different versions of the MSL (3.2.1 versus 3.2.2), defined by the uses annotation in the top level package.mo:
annotation(uses(Modelica(version="3.2.1")));
The library developed by us uses 3.2.2; the library that uses MSL 3.2.1 is developed by someone else.
Now whenever I run the mos script (or when I open the two libraries manually), Dymola wants to run an update script. As far as I can see, nothing gets changed by the update script, so I would like to
either not run it at all, e.g. by defining a range of accepted versions like annotation(uses(Modelica(version>="3.2.1")));
or always run it, without asking first, e.g. by setting some flag like AlwaysSilentlyAcceptMSLUpgrade.
Under Edit, Options, Version there is a checkbox "Force upgrade of models to MSL version", but I am unsure how to set it from my mos script (for all users).
My pragmatic solution would be to ask yourself whether your own library really needs anything from 3.2.2 that is not already present in 3.2.1; if not, change your library to require only 3.2.1. Or, the other way around (provided you can change the package.mo of the other library), change the uses annotation there to 3.2.2.
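For example, the top-level package.mo of that library would then just carry the bumped version in its uses annotation:

annotation(uses(Modelica(version="3.2.2")));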
Don't change your own library, but make the library using Modelica 3.2.1 read-only (e.g. by making the files read-only).
That should skip the prompt (at least from Dymola 2016) - and as far as I understand you don't edit that library yourself anyway.
That works for libraries that don't need any update between the versions, which obviously holds for MSL 3.2.1 -> 3.2.2 since there is no conversion - but it would also work if there were a conversion that didn't affect this particular library.
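Building on that, a loadLibraries.mos along these lines should then open both libraries without triggering the prompt, assuming the MSL 3.2.1 library's files have been made read-only on disk beforehand (openModel is Dymola's scripting command; the paths are placeholders):

// ThirdPartyLib uses MSL 3.2.1; its files are read-only on disk
openModel("C:/Libraries/ThirdPartyLib/package.mo");
// Our own library, which uses MSL 3.2.2
openModel("C:/Libraries/OurLib/package.mo");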

Importing source files and folders into IAR Workbench

I have a couple of source files in a certain folder structure in my file system, and I want to use this structure for a project in the IAR Workbench. Coming from Eclipse, this could be so easy! But in the IAR Workbench the folders become "Groups", which are only a kind of virtual folder; the Workbench doesn't care about folders on disk.
Is there some easy and fast way to import them?
Up to now I have had to add each group manually and then add the files to the groups, and that's really annoying!
Is there maybe a tool to generate a proper project file (*.ewp) out of a file/folder structure?
This would help me a lot!
You should have a look at the IAR Project/Add Project Connection command.
Although IAR doesn't seem to have any public documentation of the XML syntax (or at least I couldn't find any), you can find Infineon DAVE (Config.xml) and Freescale PE (ProjectInfo.xml) files if you search around. These can be used as examples to figure out how to write your own XML files for one of these interfaces, which let you specify where all your C, header, assembly and library files are, wherever they may be in your file system. They also allow you to define preprocessor includes for the compiler/assembler, and DAVE allows you to define a path variable, which is also very useful.
See: https://mcuoneclipse.com/2013/11/01/iar-arm-v6-7-comes-with-improved-processor-expert-support/
I have modified a DAVE Config.xml file and found it EXTREMELY useful for managing and migrating even just a handful of project files. For example, to upgrade to a new release where all files get a new directory root, you just change a single line in the XML file (defining the new root), and all source files, compiler includes etc. are updated to the new level. No more manually editing the preprocessor includes or replacing all the files in the project, and no more fiddling around with ../../ file-system navigation: you specify directly (or indirectly via a path variable) where the files are, rather than relative to wherever your project happens to be. VERY NICE.
IAR should consider opening this up (documenting it) for general users, as it is very useful for project management and migration. While at it, they should also consider generalising the XML syntax a little and allowing definition of IAR group heading names, specification of the linker file name, and definitely allowing multiple XML files to be included (connected), so that subprojects can easily be added or removed without affecting the other subproject definition files, and a few basic things like that.
If they were to do a bang-up job on this, they might consider allowing most/all aspects of IAR project configuration that a subproject might require to be defined in these XML files; then entire (sub)projects could just be plopped down anywhere and be up and running extremely quickly (OK, just let me dream a bit :)
For anyone who happens upon this, you may want to check out https://github.com/IARSystems/project-migration-tools; they have a tool for pulling in file trees there.

Library relies on another library

I have RestClientLibrary and UserFunctionsLibrary
UserFunctionsLibrary needs RestClientLibrary in order to function.
When I compile these down to libRestClientLibrary.a and libUserFunctionsLibrary.a how will they be able to interact with each other?
In Xcode I have currently set the User Header Search Paths to find the .h files, and I have linked UserFunctionsLibrary against the RestClientLibrary binary. However, when the libraries are distributed, other users may have different setups, so I can't see that this will work.
Thanks for any insight you can give me.
Those .a files are just static library files; they need to be linked into the final product to actually be used. When the application (or framework) that uses them is linked, the linker resolves the references UserFunctionsLibrary makes to symbols defined in RestClientLibrary.
As for other users, they will have to configure their build so that both libraries are passed to the linker.

Do you put your development/runtime tools in the repository?

Putting development tools (compilers, IDEs, editors, ...) and runtime environments (JRE, .NET framework, interpreters, ...) under version control has a couple of nice advantages. First, you can compile/run your program just by checking out the repository; you don't need anything else installed. Second, the combination of tools, runtime and code is known to be version-compatible, because you tested it together at some point. However, it has its drawbacks. The main one is the large volume of big binary files that must be put under version control, which may make the VCS slower and the backup process harder. What's your idea?
Tools and dependencies actually used to compile and build the project, absolutely - it is very useful if you ever have to debug an issue or develop a fix for an older version and you've moved on to newer versions that aren't quite compatible with the old ones.
IDEs & editors, no - ideally your project should be buildable from a script, so these would not be necessary. The generated output should be the same regardless of what you used to edit the source.
I include a text (and thus easily diff-able) file in every project root called "How-to-get-this-project-running" that includes any and all things necessary, including the correct .net version and service packs.
Also, for proprietary IDEs (e.g. Visual Studio), there can be licensing issues, as this makes it difficult to manage who is using which pieces of software.
Edit:
We also used to store, in source control, batch files that automatically checked out the source code (and all dependencies). Developers just check out the "Setup" folder and run the batch scripts, instead of having to search the repository for the appropriate bits and pieces.
What I find very nice and common (in .NET projects I have experience with, anyway) is including any "non-default install" dependencies in a lib or dependencies folder under source control. The runtime is provided by the GAC and is kind of assumed.
First, you can easily compile/run your program just by checking out your repository.
Not true: it often isn't enough to just get/copy/check out a tool; instead, the tool must also be installed on the workstation.
Personally I've seen libraries and 3rd-party components in the source version control system, but not the tools.
I keep all dependencies in a folder under source control named "3rdParty". I agree that this is very convenient and you can just pull down the source and get going. This really shouldn't affect the performance of the source control.
The only real drawback is that the initial size to pull down can be fairly large. In my situation anyone who pulls down the code usually runs it as well, so it is OK. But if you expect many people to pull down the source just to read it, then this can be annoying.
I've seen this done in more than one place where I worked. In all cases, I've found it to be pretty convenient.

Do you version "derived" files?

Using online interfaces to a version control system is a nice way to have a published location for the most recent versions of code. For example, I have a LaTeX package here (which is released to CTAN whenever changes are verified to actually work):
http://github.com/wspr/pstool/tree/master
The package itself is derived from a single file (in this case, pstool.tex) which, when processed, produces the documentation, the readme, the installer file, and the actual files that make up the package as it is used by LaTeX.
In order to make it easy for users who want to download this stuff, I include all of the derived files mentioned above in the repository itself as well as the master file pstool.tex. This means that I'll have double the number of changes every time I commit because the package file pstool.sty is a generated subset of the master file.
Is this a perversion of version control?
@Jon Limjap raised a good point:
Is there another way for you to publish your generated files elsewhere for download, instead of relying on your version control to be your download server?
That's really the crux of the matter in this case. Yes, released versions of the package can be obtained from elsewhere. So it does really make more sense to only version the non-generated files.
On the other hand, @Madir's comment that:
the convenience, which is real and repeated, outweighs cost, which is borne behind the scenes
is also rather pertinent in that if a user finds a bug and I fix it immediately, they can then head over to the repository and grab the file that's necessary for them to continue working without having to run any "installation" steps.
And this, I think, is the more important use case for my particular set of projects.
We don't version files that can be automatically generated using scripts included in the repository itself. The reason is that after a checkout, these files can be rebuilt with a single click or command. In our projects we always try to make this as easy as possible, thus avoiding the need to version these files.
One scenario I can imagine where this could be useful is when 'tagging' specific releases of a product for use in a production environment (or any non-development environment) where the tools required for generating the output might not be available.
We also use targets in our build scripts that can create and upload archives with a released version of our products. These can be uploaded to a production server, or to an HTTP server for downloading by users of your products.
I am using Tortoise SVN for small-system ASP.NET development. Most code is interpreted ASPX, but there are around a dozen binary DLLs generated by a manual compile step. Whilst in theory it doesn't make a lot of sense to have these versioned in source control, it certainly makes it convenient to ensure they are correctly mirrored from the development environment onto the production system (one click). Also, in case of disaster, the rollback to the previous version is again one click in SVN.
So I bit the bullet and included them in the SVN archive - the convenience, which is real and repeated, outweighs cost, which is borne behind the scenes.
Not necessarily, although best practices for source control advise that you do not include generated files, for obvious reasons.
Is there another way for you to publish your generated files elsewhere for download, instead of relying on your version control to be your download server?
Normally, derived files should not be stored in version control. In your case, you could build a release procedure that creates a tarball including the derived files.
As you say, keeping the derived files in version control only increases the amount of noise you have to deal with.
In some cases we do, but it's more of a sysadmin type of use case, where the generated files (say, DNS zone files built from a script) have intrinsic interest in their own right, and the revision control is more a linear audit trail than branching-and-tagging source control.