Opening an AnyLogic model in an older version

I have created a model in AnyLogic 8.3. Now I want to open this model on a different computer that has an older version, AnyLogic 8.2.3. This, however, does not work: I am shown a message saying that the model was created in a newer AnyLogic version.
Is there a way to circumvent this issue?
I am not a system admin on the computer with the older AnyLogic, nor does our license cover updating to a newer version of AnyLogic (it expired in December 2018).

You can easily do that by opening the .alp file of your model with Notepad or a similar text editor. Then:
Get the build version of your installed AnyLogic (open AnyLogic, click "Help" and then "About"; the build version is shown in that dialog).
Replace AnyLogicVersion and AlpVersion with your required values, e.g. something like AnyLogicVersion="8.2.3.xxxxxxxx" and AlpVersion="8.2.3".
Save the file and open it with AnyLogic 8.2.3.
(Note that if you want to open a model in AnyLogic 7 that was developed in AnyLogic 8, you would also need to remove the entire <RunConfiguration> section. But this is not relevant in your case.)
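For illustration, both version attributes live in the root element of the .alp file; after the edit, the first lines might look roughly like this (the exact root element name and its other attributes can vary between versions, so treat this as a sketch):
<?xml version="1.0" encoding="UTF-8"?>
<AnyLogicWorkspace AnyLogicVersion="8.2.3.xxxxxxxx" AlpVersion="8.2.3">
...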

I think it's only possible to go back to an earlier AnyLogicVersion by hacking the .alp if the AlpVersion is the same, because it denotes the structure of the XML. I don't have an 8.4 file handy, but I have, for example, an 8.5.1 and an 8.2.4, and the AlpVersion is 8.4.9 for AnyLogicVersion 8.5.1, but 8.0.4 for AnyLogicVersion 8.2.4.
If the XML structure is different, the older version of AnyLogic will likely be unable to load a file saved by the newer one. Looking at the two examples of essentially the same model that I've detailed above, there are readily apparent structural differences in the ActiveObjectClass, for example. If there are not too many structural differences, you could try replicating them. I've succeeded in doing that manually at least once that I can recall.
There are a variety of online tools that allow you to compare the XML schemas of two XML documents, from which you will be able to judge whether a manual hack is feasible.

Related

iemmatrix [mtx_*] couldn't create in PureData

I am working on an old internal project, on Windows, with 32-bit Pure Data.
Some objects like [mtx_*~], [mtx_:], [mtx_.^], and [mtx_circular_harmonics] give a "couldn't create" error.
I have iemmatrix installed through "Find externals".
I tried older versions of Pd-extended and several versions of vanilla Pd; I can't create the mtx_ objects there either.
In pd/externals/iemmatrix I can find a file called "mtx_0x2a0x7e.dll", which I think decodes to "mtx_*~" (0x2a is "*" and 0x7e is "~").
There is not much information about it on the internet anymore.
The "official" version (not the one with the 'extended' suffix) is compiled as a multi-object library. So you have to load the library first, either with a command line flag '-lib iemmatrix' or with a [declare -lib iemmatrix] object in your patch (The latter is much preferred as it makes your patch more portable). When loaded, iemmatrix prints a greeter to the Pd console window:
iemmatrix 0.3.2
objects for manipulating 2d-matrices
(c) 2001-2015 iem
IOhannes m zmölnig
Thomas Musil
Franz Zotter
compiled Sep 6 2019 : 12:07:54
After that you can create objects like [mtx_*~].
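For example, starting Pd from a shell with the library preloaded would look like this (the patch name is a placeholder):
pd -lib iemmatrix mypatch.pd
Putting [declare -lib iemmatrix] inside the patch itself achieves the same thing and travels with the patch.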
The version 'v0.0-extended' was added to facilitate the migration away from the now-retired Pd-extended. Since it is compiled as a one-object-per-file library, and many of those objects have names that cannot easily be used in filenames, Pd-extended used a trick with an additional hexloader library that translates hex-encoded filenames into the actual names of the objects. To be able to load objects from the extended version, you would have to install and load 'hexloader' first.
Having said that, it is highly recommended to use the official version, which is actively maintained; the extended version is not, and is only there for historical reasons.

Library Startup Script in Dymola

Using Dymola, I'm looking for a way to automatically execute a script when loading a library. The intention is to define additional displayUnits with the defineUnitConversion() command that are specific to the library being loaded. Still, I think there are quite a few other cases where this could be helpful.
What I figured out in this regard:
I know that it is possible to add conversions to the file DymolaInstallDir/insert/displayUnits.mos, but this comes with the disadvantage that it has to be done again on every new computer or after an update of Dymola. I would like to avoid this.
Other than that, I only found the libraryinfo.mos file, which seems to be read during the start-up of Dymola. Therefore I assume it is not the right place to put the conversions, as it contains general information about the library and should only contain the respective functions.
Dymola 2022 has a new (tool-specific) feature that covers exactly this use-case. It is mentioned in the Dymola 2022 release notes in the section "Library startup script" on page 24.
It basically introduces a new annotation that allows you to specify the path to a .mos script, which is executed when the respective library is loaded. Here is the example from the release notes:
package ThisPack
  annotation(__Dymola_startup =
    "modelica://ThisPack/Resources/Scripts/Dymola/startup.mos");
end ThisPack;
The annotation can also be set via the UI...
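As a sketch of what such a startup.mos could contain for the displayUnit use case from the question (the units and conversion factors here are just examples):
// startup.mos - executed by Dymola when the library is loaded
defineUnitConversion("Pa", "bar", 1e-5);
defineUnitConversion("W", "kW", 1e-3);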

More than one V4L-DVB driver on the same host machine

I have a question related to V4L-DVB drivers. Following the Building/Compiling the Latest V4L-DVB Source Code link, there are 3 ways to compile. I am curious about the last approach (the More "Manually Intensive" Approach). It allows me to choose the components that I wish to build and install using "make menuconfig". Some of these components (e.g. CONFIG_MEDIA_ATTACH) are used in pre-processor directives that define a function in one shape if the symbol is defined and in another shape if it is not (e.g. dvb_attach, dvb_detach) in the resulting modules (e.g. dvb_core.ko) that are loaded by most of the DVB drivers. What happens if there are two drivers (*.ko modules) on the same host machine, one that needs dvb_core.ko with CONFIG_MEDIA_ATTACH defined and another that needs dvb_core.ko with CONFIG_MEDIA_ATTACH undefined? Is there a clean way to handle this?
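For reference, the pattern in question looks roughly like this in the dvb-core headers (a simplified sketch, not the verbatim kernel source):
/* dvb_attach changes shape depending on CONFIG_MEDIA_ATTACH */
#ifdef CONFIG_MEDIA_ATTACH
/* With the option set: resolve the frontend symbol at runtime and hold a
   module reference, so tuner/frontend modules can be loaded on demand. */
#define dvb_attach(FUNCTION, ARGS...) ({ \
        void *__r = NULL; \
        typeof(&FUNCTION) __a = symbol_request(FUNCTION); \
        if (__a) { \
                __r = (void *) __a(ARGS); \
                if (__r == NULL) \
                        symbol_put(FUNCTION); \
        } \
        __r; \
})
#else
/* Without the option: a plain direct call into the linked-in code. */
#define dvb_attach(FUNCTION, ARGS...) ({ FUNCTION(ARGS); })
#endif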
What is also not clear to me: since the V4L compilation environment seems very customizable (via the .config file), if I develop a driver using V4L-DVB structures, isn't there a big chance that it conflicts with other drivers, since each driver comes with its own custom settings? Is my understanding correct?
Thanks!
Dave

Embedded Coder not recognizing tokens in default code generation template

I recently obtained a license to use Embedded Coder with an existing Simulink model that we have developed. In attempting to generate C code for the first time from the model, I am working through several errors. At first, we had no code generation templates (.cgt) files defined in the model parameters. After some hunting, I found the default template that comes with MATLAB (matlabroot/toolbox/rtw/targets/ecoder/ert_code_template.cgt).
The latest is that I get errors on nearly every token in this default code generation template.
Since I'm just trying to get something to build, at first I commented out the offending lines (things like RTWFileVersion, etc.), but now I am noticing that it's giving me errors for things that are mandatory (e.g. Types). Types is one of several required items that must be in the .cgt file, so what is wrong that causes MATLAB not to recognize these tokens? I'm guessing something may be messed up with my installation, such as a path.
Other details:
Simulink R2013a, 32-bit
Target is a Freescale device
Thanks to Matthias W for getting me to check other configuration options. It turns out I had selected a .tlc file that was probably incompatible with Embedded Coder.
Under Code Generation, for "System target file" I have now selected ert.tlc, and I am able to build the parts of my model I'm interested in.
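For anyone who prefers to script this rather than use the configuration dialog, the same setting can be changed programmatically, along these lines ('myModel' is a placeholder for your model name):
% Point the model's active configuration set at the Embedded Coder target
set_param('myModel', 'SystemTargetFile', 'ert.tlc');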

ESS workflow for R project/package development

Can anyone share their experience of a workflow for R project development under ESS? I have tried several times to learn Emacs, but I haven't gotten the hang of it yet. I can understand ESS as an editor, but is there a project view in ESS? What are efficient ways to set up and view an R project directory, to code, and to test, and how does ESS have an edge in facilitating the whole process?
Do you use ESS as a good R editor only, or do you tend to emulate an R IDE environment within ESS?
Thanks for any advice.
It sounds like you're asking two separate questions.
One question concerns workflow and the other concerns using ESS.
As I use StatET and Eclipse, I'll just share my experience regarding the workflow aspect of your question.
As with Vincent I also follow something like the workflow set out by Josh Reich here (also see Hadley's useful comments):
Workflow for statistical analysis and report writing
Although it can vary between projects, I tend to have a couple of main R files:
import.R: this imports data files and does any necessary cleaning and manipulation
analyse.R: This generates the output that I need for any final report
main.R: This calls import.R and analyse.R
The aim is for import.R and analyse.R to represent the complete and final workflow for producing the final results of any analyses.
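As a minimal sketch, main.R then amounts to little more than:
# main.R: run the complete workflow from raw data to final results
source("import.R")    # import and clean the raw data
source("analyse.R")   # generate the output for the final report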
In terms of a directory structure for an analysis project, I'll often also have the following folders:
data: for storing any raw data files
meta: for storing meta data, such as variable labels, scoring systems for tests, recoding information, etc.
output: for storing any graphics, tables, or text generated by my analyses that I might want to incorporate into an external program
temp: When exploring the data and brainstorming analyses, I like to type code into files instead of using the console. I tend to label these temp1.R, temp2.R, temp3.R, and store them in a temp folder. That way I have a permanent record that's easily accessible. If the analyses become final, they get incorporated into one of the main R files (i.e., import.R or analyse.R).
functions: If I think that a function will be needed across a couple of projects, I place it, one function per file (or a set of related functions per file), in a folder called functions. This makes it relatively easy to reuse functions across projects when the formal requirements of package development are more than needed.
library: If I want to create some general functions that I think will be project-specific, I'll place them in this folder.
save: A folder to store any saved R objects
StatET and Eclipse make it easy to interact with such a file system.
Of course, given all the R gurus that use ESS and Emacs, I'm sure it also handles interactions with the file system well.
I'm not exactly sure what you expect as an answer on this one. I, for one, have stolen (and adapted) a system that was suggested here a little while ago (by Josh Reich):
Create a folder for every project, and split up your work in a bunch of different .R files:
Load.R for getting your raw data into R;
Prep.R for cleaning the data, recoding variables, etc.;
Func.R for coding any custom functions you will need for evaluation; and
Eval.R for running your final stuff.
If that doesn't fit your style, just change it.
Then, you can either have a master file to call each of the parts one after the other (good for reproducibility), or save at different stages and have the individual scripts load the appropriate data (good if some of the prep work is very computationally or time intensive).
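A sketch of both variants (the file and object names here are placeholders):
# Master file: run every part in order, good for reproducibility
source("Load.R"); source("Prep.R"); source("Func.R"); source("Eval.R")

# Checkpoint variant: save the expensive result at the end of Prep.R ...
saveRDS(dat, "data_prepped.rds")
# ... and load it back at the top of Eval.R, skipping the slow prep step
dat <- readRDS("data_prepped.rds")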
On a different note, the trick posted at the following link really helped me get into ESS. It turns Shift-Enter into a one-stop ESS shop: http://www.kieranhealy.org/blog/archives/2009/10/12/make-shift-enter-do-a-lot-in-ess/
Others have given you some good ideas about how to setup your directory/file structure for a project.
You also asked about "project views," in which case you might want to look into the Emacs Code Browser (ECB).
You can find some screenshots of it in action on its site, here:
http://ecb.sourceforge.net/screenshots/index.html