Where does Enterprise Architect store user-defined scripts?

I want to create a script for my EA project. To do so, you have to create a new "group", and within this group you can add your own scripts.
I have found the local scripts on my hard disk; they reside in EA-install-dir/Scripts.
But where can I find my additional scripts?

EA scripts are stored in one of three locations: in the installation directory, in the project itself and in MDG Technologies.
Scripts in the installation directory are available in any project you access from that machine. They show up in the EA script group Local Scripts.
Scripts can also be stored in the project itself. Each EA project is a database (an .EAP file simply contains a JET database), and scripts stored in the project are found in the table t_scripts, as are the script groups you define to organize them.
This is where scripts land when you create them, and while you can export a script from the editor to a file (Save As), AFAIK there is no way to import them in the corresponding manner. But you don't need to save the script to a file in order to use it, and EA doesn't use the file, only the entry in t_scripts.
Scripts from t_scripts are only available in the project where they are stored. If that project is accessed by several users (.EAP file on network drive or external database repository), they can all use the scripts regardless of the machine from which they access the project.
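As an aside, this means you can inspect the stored scripts from outside EA. Below is a minimal Python sketch using pyodbc and the Jet/Access ODBC driver; the table name is taken from this answer, and everything else (driver name, file path, use of SELECT *) is an assumption to adapt to your setup:

```python
# Sketch: list the scripts stored inside an .EAP file (a Jet database).
# Requires the 32-bit MS Access ODBC driver and a matching 32-bit Python
# on Windows; the project path is hypothetical.
import pyodbc

conn = pyodbc.connect(
    r"DRIVER={Microsoft Access Driver (*.mdb)};"
    r"DBQ=C:\models\MyProject.eap")
# Table name per this answer; some EA versions use t_script instead.
for row in conn.execute("SELECT * FROM t_scripts"):
    print(row)
conn.close()
```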
Finally, scripts can be included in an MDG Technology, which is EA's way of bundling adaptations that are primarily modelling-related (e.g. UML profiles and document templates, as opposed to Add-Ins, which contain arbitrary functionality). When deployed, an MDG Technology consists of an XML file in which the scripts (and all other bundled adaptations) can be found.
MDG-deployed scripts are available in any EA session where you have enabled that MDG Technology (Settings - MDG Technologies), and appear in a script group with the same name as the MDG Technology. (The script group EAScriptLib is in fact an MDG Technology.) If the MDG Technology is deployed on a network drive, you can use the scripts from any machine and in any project.

I stumbled upon this when searching for a way to easily export and import my scripts, but I found an easier way:
Project -> Data Management -> Export Reference Data...
Then check "Automation scripts" in the window that appears and click Export; you'll get an XML file containing your custom scripts.
To import them into another project: Project -> Data Management -> Import Reference Data...
The "Data Management" menu may be elsewhere depending on your EA version (12 here).

For EA 9.x it is Project -> Model Export/Import -> Import Reference Data.

For EA 13 and later it's Configure -> Model -> Transfer -> Export Reference Data, then choose Automation Scripts near the bottom of the list.

Related

EA cannot find the path specified when generating XML

I have a model (in EA 12.1) which I inherited from a predecessor. We use it to generate XML schemas. But when I right-click the «XSDschema» MyModel package and select Code Engineering... > Generate XSD Schema..., it does some processing and then fails to save the output with the error "The system cannot find the path specified".
Investigation has revealed that if you create the folders XSDs, XML and XSL under C:\Program Files (x86)\Sparx Systems\EA, the output files are written there. That is not ideal in an environment where you don't have permission to create or change these folders.
My question is (as I am fairly new to EA): why is it using that folder, and how can I get it to use another? Is it the installation or the model that specifies this?
I note that the shortcut that launches EA has the Sparx Systems\EA folder as its initial directory, but when I tried changing that, EA stopped starting up.
EA is trying to generate to that folder because you (or your predecessor) told it to; it is not a default setting.
For each «XSDschema» package you can set the file location: open the properties of that package and you will find it there.
You can also change it in the dialog for generating the XSD.

Export OSB resources without using the export wizard in JDeveloper

Using JDeveloper to create and manage Oracle Service Bus 12c resources, I am able to export the required resources into a .jar file using JDeveloper's Resources Export Wizard, selecting the ones I need one by one under each project's tree.
What I want is a way to export a .jar file based on a resource list given in a file of a commonly used format (JSON, CSV, etc.), as this can save time for a large number of resources. My first thought was to check whether JDeveloper provides such a way, or to attempt this programmatically, but my search has not turned up any how-to information.
Is there an alternative way of doing this?
If you have Oracle OSB 11.1.1.7.0 or higher you can automate the compilation process for OSB at the project level using configjar; here's a complete example of an implementation which includes compilation using configjar, plus automation of the task by retrieving the code from Git using Jenkins and a Python script.
You can also do it using ANT; here's a good Oracle document explaining that. (I've tried it, but found configjar easier to use; ANT is the only option for versions below 11.1.1.7.0.)
After setting up either compilation method, you can create a CSV file listing your projects, parse it with Python, and loop the compilation, as sketched below.
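A minimal sketch of that loop, assuming configjar is driven by a per-project settings file; the install path, the CSV layout and the -settingsfile flag are assumptions to check against the configjar documentation for your OSB version:

```python
# Hypothetical driver: build one OSB jar per row of a CSV file.
import csv
import subprocess

CONFIGJAR = "/u01/oracle/osb/tools/configjar/configjar.sh"  # assumed install path

with open("projects.csv", newline="") as f:
    # Expected columns (an assumption): project, settings_file
    for row in csv.DictReader(f):
        print(f"Building {row['project']} ...")
        subprocess.run([CONFIGJAR, "-settingsfile", row["settings_file"]],
                       check=True)
```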

Keeping SSIS packages under source control

I store all SSIS packages in a Subversion repository, along with their configuration files. A configuration file is almost always stored in the same folder as its package.
The problem is that SSIS always seems to store the path to the configuration file (the one saved in the package itself) as an absolute path.
When someone else checks out the folder with the package to a location different from the one on my development PC, the configuration file is not detected (because my absolute path is stored and it doesn't exist on the other developer's PC). So the other developer has to remove this configuration and add it again from wherever it now resides on his local hard drive. The changed package then gets saved, which causes a new version to be committed. When I get that version from SVN, it no longer matches the local path on my PC.
On a related note, another developer may want to change values in the configuration file as well. If I later get the latest version of everything from SVN, the package will no longer work on my PC.
How do you work around these inconveniences?
Another solution is to save your configuration in a database, with an environment variable as the first configuration to tell the package which database to look in; that's what we do. We have scripts in our source control to populate ssisconfig for each server, but the package uses the actual table data from the database named in the environment variable we are using.
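For illustration, the indirect lookup amounts to something like this Python sketch; the environment variable name is hypothetical, and the table layout mirrors the standard SSIS configuration table, which you should verify for your setup:

```python
# Illustration of indirect configuration: an environment variable names the
# configuration database; all other settings are read from a table there.
import os
import pyodbc

# Hypothetical variable holding a full ODBC connection string.
conn_str = os.environ["SSIS_CONFIG_DB"]

with pyodbc.connect(conn_str) as conn:
    rows = conn.execute(
        "SELECT PackagePath, ConfiguredValue FROM [SSIS Configurations] "
        "WHERE ConfigurationFilter = ?", "MyPackage")  # filter is a placeholder
    for package_path, value in rows:
        print(package_path, "=", value)
```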
Anyone who has heard my SQL Saturday presentations knows I don't much care for XML and this is one of the reasons. A trick to using XML configuration with varying locations is to use an environment variable (indirect configuration) to direct SSIS where it can look for that resource. The big, big downside to this approach is you'd generally need to create an environment variable for each set of configuration files or have a massive, honking .dtsconfig file which becomes painful for versioning.
The option I prefer, if XML configuration is a must, is that the "variableness" is removed. Developers and admins get together and everyone agrees "there will be a folder everywhere SSIS is done to hold configuration files and that location is X", and then it's just a matter of solving for X. At a previous job, we used D:\ssisdata\configs.
@HLGEM's approach of a table for configurations is hands down my favorite approach to SSIS configuration (until you get to 2012 and its project deployment model, where configuration is an entirely different animal).
I add a folder called "config" under my projects folder, add it to source control and maintain the config file in this folder. You can also add it to the SSIS project if you like.
I think it's a good solution because everybody can have this folder and download the config file.
When the package is deployed, it will read the config file from the location you specify in the deployment manifest, so this solution won't impact your development.

Synchronise files between Eclipse and FTP Site

I am currently coding with Eclipse PDT, and I need to synchronise the files on my workstation with the files on the FTP server.
I've installed RSE, but as far as I can see I can only download and edit files. What I want is that when I hit save, the file is saved locally and also updated on the FTP site.
Any ideas of how I can achieve this?
Create an Ant builder on your project. See this article about how to do that. The important things you should know after you read the article:
You can use the Ant FTP task to transfer the files (see the sketch after this list).
You can define properties given by the Eclipse platform to get the project root, the list of changed files, the change type (add, modify, delete) and so on. Use them wisely; you will need project_loc, resource_loc and so on. See the picture at the end for how to get the other available variables that can be passed to the script.
Tune your Ant script, since if it runs for each file update it can be slow. If it is slow anyway, you can create a builder plugin for Eclipse, which is not so complicated; I have created some before.
Be prepared that the Ant script can receive not just one changed file, but a list.
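The upload step the Ant FTP task performs boils down to something like this Python sketch, shown only to make the mechanics concrete; the host, credentials, project root and file list are placeholders (in the Ant builder they would come from the Eclipse-provided variables):

```python
# Minimal sketch of the upload an FTP-sync builder performs, assuming the
# remote tree mirrors the local project layout and directories already exist.
import ftplib
from pathlib import Path

LOCAL_ROOT = Path("/home/me/workspace/myproject")  # hypothetical project root
changed = ["src/index.php", "src/lib/db.php"]      # would come from Eclipse's variables

with ftplib.FTP("ftp.example.com") as ftp:         # placeholder host
    ftp.login("user", "password")                  # placeholder credentials
    for rel in changed:
        with open(LOCAL_ROOT / rel, "rb") as f:
            ftp.storbinary(f"STOR {rel}", f)
```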

How to use version control with JasperReports

We're about to start development of a number of reports using Jasper Server Reports version 3.7.0 CE.
Does anyone have any recommendations as to how best to manage version control with this development, given that the structure of the report units is managed in the database and through either iReport or the web front end?
In fact you can import/export to a directory structure using the js-import/js-export scripts, but then you can't edit these files directly with iReport.
Does anyone have any pointers?
This is problematic. I have established a Subversion repository to allow standard reports delivery to be versioned, but it is a real pain because Jasper does not make this even a little bit easy.
I created a Maven project with an assembly descriptor so that src/main/xml/resources/Reports, adhoc, Domains, etc. can be packaged up in a zip that is pushed to our Maven repository.
The biggest problem is that you can't develop adhoc reports and input controls merely by modifying XML files. The developer has to import what is in source control into a working JasperServer, modify the reports or add new ones (after making sure that his organization and datasources are configured), and once he's satisfied that the reports work, export the resources to a directory or zip file and manually modify all references in the exported files from datasource- and organization-specific resource locations back to "generic" before checking in his changes.
When importing into Jasper, the same process has to be done in reverse. The generic paths and organization values have to be converted to the developer's organization so they can be easily imported/updated, and he can prove out that the full "round trip" works properly before checking in.
To make the export/Subversion check-in easier, I created an Ant build file which lives in the Maven project's root dir. The build prompts (or will read a properties file) to determine the exported zip location and the organization id of the exported tree. It then opens the exported zip file from Jasper, explodes it, performs text replacements on the files, resets the "createdDate" and "updatedDate" elements to something standard (so that the developer does not end up checking in files that haven't actually changed, since Jasper does not preserve the date values), and then copies the files into the Subversion tree. A sketch of that normalization step follows.
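In Python, the explode-and-normalize step might look like this; the element names createdDate/updatedDate come from the description above, while the zip name, the fixed date and the organization-specific markers are placeholders:

```python
# Sketch: explode a Jasper export zip, reset volatile date elements, and
# genericize organization-specific references before the Subversion check-in.
import re
import zipfile
from pathlib import Path

EXPORT_ZIP = Path("jasper-export.zip")       # placeholder export location
WORK_DIR = Path("exploded")
STANDARD_DATE = "2010-01-01T00:00:00"        # arbitrary fixed value

with zipfile.ZipFile(EXPORT_ZIP) as z:
    z.extractall(WORK_DIR)

for xml_file in WORK_DIR.rglob("*.xml"):
    text = xml_file.read_text(encoding="utf-8")
    # Reset dates so unchanged files don't look modified to Subversion.
    for tag in ("createdDate", "updatedDate"):
        text = re.sub(rf"<{tag}>[^<]*</{tag}>",
                      f"<{tag}>{STANDARD_DATE}</{tag}>", text)
    # Replace organization-specific paths with generic markers (placeholders).
    text = text.replace("/organizations/acme", "/organizations/GENERIC")
    xml_file.write_text(text, encoding="utf-8")
```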
For the import process (from the subversion tree into jasper) we have a script that takes as input the organization id and then modifies the versioned xml files to the appropriate values so that the entire tree can be easily imported/updated into their organization.
The reason this level of complexity is required is to allow us to create the same standard reports in a multi-tenant environment, plus Jasper's notion of deploying reports is absolutely bizarre. I'm not sure it would be possible to make this process more difficult if you were intending to do so.
If I were in your position, I would establish this kind of process:
end of development session: export all reports to a directory structure in a project under version control
commit the project
before next development session: synchronize the project with svn repository
import directory structure to Jasper Server Reports
continue development
Not sure if someone has already posted the solution.
This is what I have done for existing reports:
Export the reports from JasperServer.
Rename the files from .data to .jrxml.
Modify the subreport references to add the extension (e.g. in A.jrxml the subreport name should be B.jrxml).
Add .jrxml to the dataFile, label and name entries in the report unit XML files.
If you are creating a new report on JasperServer, it's simple: give .jrxml in the name and label when adding the jrxml file. That's it.
Now you can work on the same files locally and import them back to JasperServer. A sketch of the rename-and-patch steps is below.
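A Python sketch of those rename-and-patch steps; the element names (dataFile, label, name) are taken from the steps above, but treat the whole thing as an assumption to verify against your own export before running it:

```python
# Sketch: rename exported .data payloads to .jrxml, then append the missing
# extension to references inside the exported XML descriptors.
import re
from pathlib import Path

EXPORT_DIR = Path("exported-reports")        # placeholder export location

# Step 1: rename the report payload files.
for data_file in EXPORT_DIR.rglob("*.data"):
    data_file.rename(data_file.with_suffix(".jrxml"))

# Step 2: append .jrxml to dataFile/label/name values that lack an extension.
for xml_file in EXPORT_DIR.rglob("*.xml"):
    text = xml_file.read_text(encoding="utf-8")
    text = re.sub(r"(<(dataFile|label|name)>[^<.]+)(</)",
                  r"\1.jrxml\3", text)
    xml_file.write_text(text, encoding="utf-8")
```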