Is it common to use MEF with a config file for specifying a plugin path?

I have heard that MEF reduces the need for creating config files, but if I have a few different plugin paths that vary depending on the client running the app, is it common (and a good idea) to have a config file that specifies the correct path? I want to avoid looping through all the DLLs.

Generally people have a well-known plugin directory under the location the application runs from, e.g. \Extensions. That said, there isn't any particular reason you cannot use a configuration file to point at directories or exact extension assemblies.
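For illustration, here is a minimal C# sketch of the config-file approach (the appSettings key name "PluginPath" and the sample directory are placeholders I've made up, not anything MEF itself requires):

// App.config
// <configuration>
//   <appSettings>
//     <add key="PluginPath" value="C:\Clients\Acme\Extensions" />
//   </appSettings>
// </configuration>

using System.ComponentModel.Composition;          // ComposeParts extension method
using System.ComponentModel.Composition.Hosting;  // DirectoryCatalog, CompositionContainer
using System.Configuration;                       // ConfigurationManager

public class PluginHost
{
    public void Compose()
    {
        // Read the per-client plugin directory from App.config instead of hard-coding it
        string pluginPath = ConfigurationManager.AppSettings["PluginPath"];

        // DirectoryCatalog only scans the directory you give it,
        // so you don't have to loop over every DLL yourself
        var catalog = new DirectoryCatalog(pluginPath);
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this);  // satisfies any [Import]s declared on this class
    }
}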

Deploying config files to PLC

Is it possible to include arbitrary files (in this case a .csv) from a TwinCAT project directly in the Boot directory of a PLC?
By using PATH_BOOTPATH in the file open/read FBs it is possible to load files from this directory in a convenient manner, regardless of whether a CE or Windows deployment is used. However, deployment of files to this location seems to be the sticking point.
I know that a copy of the project code is included within the CurrentConfig<Project>.tpzip file, but this file is not easily accessible from code, or updateable.
I've found the 'Additional Files' section within the system configuration, but it makes little sense.
Adding a file from inside the project as a 'Relative' path doesn't seem to do anything
Adding a file from inside the project as an external path includes the file (via symbolic links?) in the 'CurrentConfig.tszip' file, which has the same issues as the .tpzip
Adding an external file as an external path again includes the file inside of the .tszip.
I'm willing to accept that this might not be possible, but it just feels odd that the PATH_BOOTPRJ and PATH_BOOTPATH roots exist without a straightforward way to get files deployed to them.
Deployment
To quote Beckhoff:
Deployment is used to set up commands that are to be executed during the installation and startup of an application.
The event types essentially determine at which stage of the deployment process the command is performed; the command itself can be either copying a file or executing a script/program.
I haven't performed extensive testing, but between absolute/relative pathing and script execution this should solve nearly all issues with deployment configuration.

Changes in conf/server.xml do not seem to have any effect at runtime

Here's what I know:
When uploading files given by users, we should put them in a folder
outside the deployment folder. Let me call it D:\uploads.
We should (somehow) add that folder (D:\uploads) as a web app context.
Here's what I did:
I upload my files to the folder D:\uploads.
I tried adding the web app context as mentioned here, by adding the following row to TOMCAT_DIR/conf/server.xml:
<Context docBase="D:\uploads" path="/uploads"/>
But that doesn't have any effect. When consulting http://localhost:8080/uploads/file.png or http://localhost:8080/uploads I get an HTTP Status 404 error.
So what I want to know:
What did I do wrong? How can I add my upload folder to Tomcat?
Is there any better approach when it comes to uploading files? I'm wondering what I should change if I want to deploy my application to another server where there's no D:\uploads.
Change the docBase attribute. Use D:/uploads (with slash) instead of D:\uploads (with backslash).
When dealing with files in Java, you can safely use / (slash, not backslash) on all platforms.
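In your case the element would become:

<Context docBase="D:/uploads" path="/uploads"/>

Also note that server.xml is only read when Tomcat starts, so restart Tomcat after editing it for the change to take effect.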
Regarding the differences you mentioned in the comments between starting Tomcat from the IDE and from bin/startup.bat: it's very likely that when you start Tomcat from the IDE, it is not using the same context.xml your standalone Tomcat is using. Just review the Tomcat settings in the IDE.
How to store uploaded files is a common topic at Stack Overflow. Just look around and you'll be surprised at how popular this topic is.
If you aren't happy storing your files in D:/uploads, or other servers will need to access the files, you could consider storing them in some location on your network. Depending on your requirements, you can have one dedicated server to store your files, or just share the folder which contains the files on your current server. The right decision will always depend on your requirements.

Play Framework - How to maintain configuration files for different environments?

For my Play 2.2/Scala application (built with SBT), I would like to deploy different configuration files depending on the environment I'm deploying to (e.g. to couple a deployment with a particular database server). How does one create different variants of the application's configuration file (conf/application.conf) for different deployment targets? Hopefully variants can be generated from a base version?
What I'm used to from .NET is to have a base configuration file (Web.config), which undergoes a certain transformation depending on the profile one is deploying (e.g. Production). Does one use a similar technique in the Play/Scala world?
Alternative configuration files are covered in Play's documentation quite well in section Specifying alternative configuration file.
In short: in application.conf you place the default configuration of your app, and additionally you create extra files for your environment(s), e.g. live.conf, dev.conf, etc. In each of these files you first include application.conf (which reads the whole default configuration) and then overwrite only the parts that have to change, e.g. the DB credentials. For instance, dev.conf could be:
include "application.conf"
db.default.driver=org.h2.Driver
db.default.url="jdbc:h2:mem:alternative-database-for-dev-testing"
db.default.user=developer
db.default.password="developerpass"
So finally you start your application (after dist) as
./start -Dconfig.resource=dev.conf
or with the Play console
play -Dconfig.resource=dev.conf run
Several tips:
It's a good idea not to place your live DB credentials in the default application.conf file; put them in prod.conf instead, so that if some dev forgets to include his dev.conf he won't damage the production DB.
Also, these additional configs shouldn't be placed in any VCS (e.g. git) repository. Creating them directly on the target machine (and ignoring them in the repository) ensures that people who shouldn't know the live database credentials won't see them.
It's also possible to use a remote alternative config file, which can be useful e.g. when you're deploying several instances of the same app on several hosts in the cloud.
Each dev can have his own config file, e.g. dev_aknuds1.conf, dev_biesior.conf, etc., so you can ignore them all with a single pattern, dev_*.conf, in the repo.
Finally, you can just create a shell script (Unix) or bat file (Windows) that starts the app with the chosen config file, e.g. start_dev.sh, run_dev.sh, etc., so you won't need to type -Dconfig.resource=... each time; see the sketch below.
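For example, start_dev.sh could be as small as this (a sketch, assuming the script sits next to the start script produced by dist):

#!/bin/sh
# Launch the app with the dev configuration instead of the default application.conf
exec ./start -Dconfig.resource=dev.conf "$@"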

Using a variable like $(ProjectDir) in Integration Services Package (.dtsx)

Is this possible?
I have a package that needs to be copied to three different servers. Each server is used for a different testing environment. All three servers have the same directory layout. The layout is as follows:
\SERVER\ConfigFiles <- Here go the .dtsConfig files.
\SERVER\Packages <- Here go the .dtsx files.
I want to be able to use the same package, copied over the three different servers, without any modification. The only difference amongst the three servers would be the content inside the .dtsConfig file. The config files contain the Excel and log directories and the SQL Server connection for each environment.
For example. Let's say I have a package called Cars.dtsx. This package is EXACTLY the same amongst all three servers. The package file points to a .dtsConfig file that is in the ConfigFiles folder (which is found on all three servers). I want a way for the package to point to the ConfigFiles\Cars.dtsConfig file on each server, but I want to do it without having to provide the name of the server in the directory.
The way I tried it is using "$(ProjectDir)..\ConfigFiles\Cars.dtsConfig" which seems to work if I run the package through the .sln file rather than the .dtsx file.
I hope that wasn't too confusing. Let me know if you need any more info. Thanks.
Unless I'm missing some nuance, you don't need to do anything special.
Your package is going to have a hard-coded reference to D:\ConfigFiles\Cars.dtsConfig. It won't matter whether that package is being run from ServerA, ServerB or ServerZ (as long as you have the same file structure on those servers).
By virtue of your asking the question, are you experiencing something different?

Keeping SSIS packages under the source control

I store all SSIS packages in a Subversion repository, along with their configuration files. The configuration file is almost always stored in the same folder as the package.
The problem is that SSIS always seems to store the path to the configuration file (the one saved in the package itself) as an absolute path.
When someone else checks out the folder with the package to a location different from the one on my development PC, the configuration file is not detected (because my absolute path is stored and it doesn't exist on the other developer's PC). So the other developer has to remove this configuration and add it again from wherever it now is on his local hard drive. The changed package is then saved, which causes a new version to be committed. When I get that version from SVN, it will no longer match the local path on my PC.
On a related note: another developer may want to change values in the configuration file as well. If I later get the latest version of everything from SVN, the package will no longer work on my PC.
How do you work around these inconveniences?
Another solution is to save your configuration in a database, with an environment variable as the first configuration to tell it what database to look in; that's what we do. We have scripts in our source control to populate ssisconfig for each server, but the package uses the actual table data from the database that the environment variable points to.
Anyone who has heard my SQL Saturday presentations knows I don't much care for XML and this is one of the reasons. A trick to using XML configuration with varying locations is to use an environment variable (indirect configuration) to direct SSIS where it can look for that resource. The big, big downside to this approach is you'd generally need to create an environment variable for each set of configuration files or have a massive, honking .dtsconfig file which becomes painful for versioning.
The option I prefer if XML configuration is a must is that the "variableness" is removed. Developers and admins get together and everyone agrees "there will be a folder everywhere SSIS is done to hold configuration files and that location is X" and then it's just a matter of solving for X. At a previous job, we used D:\ssisdata\configs
@HLGEM's approach of a table for configurations is hands down my favorite approach to SSIS configuration (until you get to 2012 and its project deployment model, where configuration is an entirely different animal).
I add a folder called "config" under my project's folder, add it to source control, and maintain the config file in this folder. You can also add it to the SSIS project if you like.
I think it's a good solution because everybody can have this folder and download the config file.
When the package is deployed, it will read the config file from the location you specify in the deployment manifest, so this solution won't impact your development.