I am running automated tests of our application on different versions of an OS build (Windows 7, Windows 10, etc.). My testing suite requires that I copy files to the Slave computers when there are changes in the tests (these files are external to the application build). The test files are not in the Jenkins workspace, as they do not change frequently and therefore do not need to be copied to the Slave with each execution.
I am looking for a way to update the files on the Slaves, but not under the workspace directory, so from my understanding the Copy-To-Slave plugin will not work.
I am looking to have batch files, testing resource files, DB generation scripts and other files copied to the Slave computer by a Jenkins job. This job may monitor Git, but not everything being copied comes from Git.
In essence, I want to execute the following, but targeting the Slave computer:
xcopy C:\Testing\*.* C:\Resources\Testing /s /v /e
The reason for this is that our testing scripts look for certain files to execute (DB scripts for building the database for the current platform/DB engine). As these do not change very frequently, we only need to copy the files when they change and can leave them in place for subsequent test runs. There is a large number of files and GBs of data that does not need to be copied with each test run. The application is also executed multiple times against the same test files under different configurations (which should produce the same results), so the test files do not need to be re-copied for each of those executions.
I found a configuration option in the Copy-To-Slave plugin to add additional destination directories, relative to the file system root directory (C:\ in my case), which will solve my problem.
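For reference, if that plugin option were not available, one alternative sketch would be a batch build step that runs on the slave node itself and mirrors the test files from a network share (the share path is an assumption); robocopy only transfers files that have changed, so the unchanged GBs are skipped:
rem mirror the shared test files onto the slave; only changed files are transferred
robocopy \\buildserver\Testing C:\Resources\Testing /E /Z /R:2 /W:5
rem robocopy exit codes below 8 mean success; reset the errorlevel so Jenkins does not fail the step
if %ERRORLEVEL% LEQ 7 exit /b 0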
Is it possible to include arbitrary files (in this case a .csv) from a TwinCAT project directly in the Boot directory of a PLC?
By using PATH_BOOTPATH in the file open/read FBs, it is possible to load files from this directory in a convenient manner regardless of whether a CE or Windows deployment is used. However, deploying files to this location seems to be the sticking point.
I know that a copy of the project code is included within the CurrentConfig<Project>.tpzip file, but this file is not easily accessible from code, nor is it easily updated.
I've found the 'Additional Files' section within the system configuration, but it makes little sense.
Adding a file from inside the project as a 'Relative' path doesn't seem to do anything.
Adding a file from inside the project as an external path includes the file (via symbolic links?) in the 'CurrentConfig.tszip' file, which has the same issues as the .tpzip.
Adding an external file as an external path again includes the file inside the .tszip.
I'm willing to accept that this might not be possible, but it just feels odd that the PATH_BOOTPRJ and PATH_BOOTPATH roots are there and not accessing useful paths.
Deployment
To quote Beckhoff:
Deployment is used to set up commands that are to be executed during the installation and startup of an application.
The event types essentially determine at which stage of the deployment process the command is performed; the command itself can be either copying a file or executing a script/program.
I haven't performed extensive testing, but between absolute/relative pathing and script execution, this should solve nearly all deployment configuration issues.
I have a legacy Windows application (no source code) that does something with files in a given directory, say C:\Pickup. The directory path is hard-coded into the application and cannot be changed. If I run multiple instances of this application, the instances will compete for the same files in C:\Pickup, which is not good.
This application does not have a GUI. I launch it from Task Scheduler many times a day, and it runs for anywhere from 1 minute to around 20 minutes depending on the number of files it needs to process in C:\Pickup.
I am wondering if there is an easy-to-use virtualization technology that will allow me to launch instances of this application in some virtual space where each instance gets its own C:\Pickup folder.
EDIT 1: I am thinking of a solution like the one IE uses for plug-ins (ActiveX controls) that run inside of it. Somehow, when a plug-in accesses the file system, it gets its own view of the file system. Does anyone know how IE does this?
You can just spin up a series of VMs with something like VirtualBox. Create a share and mount it as D:\ on all of the VMs, then run a batch script to copy the files from your share to C:\Pickup.
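A minimal sketch of that copy step inside each VM, assuming the share is mounted as D:\ and the files to process live under D:\incoming (both paths are assumptions):
rem copy everything from the shared drop folder into the hard-coded pickup folder
xcopy D:\incoming\*.* C:\Pickup\ /S /Y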
I am having trouble packaging applications to get them to run on Azure Batch compute nodes. I am using a user subscription with the Virtual Machine configuration, so I can't use application packages. I have been uploading my executable files and DLLs as resource files. Currently, I have a task that requires a lot of DLLs, but it seems that I can't upload more than 10 resource files through the Azure portal.
What is the best way to package an application and all its required DLLs so it can run on a Batch compute node without using the built-in application packages? Is there a way other than going through all the DLLs and adding each one manually as a resource file?
How do I get around the limitation of 10 resource files per task?
Thanks!
Application package functionality for the Virtual Machine configuration should be available now (the documentation may be out of date). With that being said, here are answers to your questions:
Without using application packages, you can do one of the following: (1) Create an SFX archive (self-extracting archive) with your archiver of choice. Ensure that it can be extracted silently without a GUI pop-up (7-Zip can do this, for example) and run the SFX archive as part of your start task. (2) Zip up your files, add the zip file and unzip.exe as your two resource files, and run the unzip command as part of your start task.
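As a sketch of option (2), assuming the two resource files were uploaded as app.zip and unzip.exe (the names are assumptions), the start task command line could look like this; AZ_BATCH_NODE_SHARED_DIR points at the node's shared directory, so the extracted files stay available to subsequent tasks:
cmd /c "unzip.exe -o app.zip -d %AZ_BATCH_NODE_SHARED_DIR%\app"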
The service limit is not 10 (although that may be the limit in the portal). You can add resource files up to the service limit, which varies depending on the length of your URLs. For a large number of dependencies, please follow the recommendation from #1 or use application packages (if possible).
Using ElasticSearch, one can place scripts of various languages in ElasticSearch's /config/scripts directory, and they will be automatically loaded for use in Update requests and other types of operations. In my production environment, I was able to accomplish this and run a successful Update using the script.
So far, however, I've been unsuccessful in getting this feature to work when running a node in local mode for integration tests. I assumed that, since one can configure the ElasticSearch node with an elasticsearch.yml on the classpath, one should also be able to add a scripts directory and place the desired script there, causing it to be loaded into the local node. That doesn't seem to be the case: when I try to execute an Update that uses the script, it cannot be found.
Caused by: org.elasticsearch.ElasticsearchIllegalArgumentException: Unable to find on disk script scripts.my_script
at org.elasticsearch.script.ScriptService.compile(ScriptService.java:269)
at org.elasticsearch.script.ScriptService.executable(ScriptService.java:417)
at org.elasticsearch.action.update.UpdateHelper.prepare(UpdateHelper.java:194)
... 6 more
Does anyone know the proper way to do automatic script loading into a local ElasticSearch node for testing?
I am using the basic ElasticSearch client included in "org.elasticsearch:elasticsearch:1.5.2".
After perusing the source code, I discovered that the reason my script was not being picked up by Elasticsearch's directory watcher was that it watches user.dir, the default configuration directory. The scripts/ subdirectory would have had to be located there for the node to pick up my script and load it into the ScriptService so that it could be used during updates.
The configuration directory can be overridden in your elasticsearch.yml with the key path.conf. Setting that to somewhere in your project allows you to load scripts during testing and keep those scripts under version control as well. Make sure there is a scripts/ directory under that configuration directory; that is where your scripts will be loaded from.
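A minimal sketch, assuming the test configuration lives under src/test/resources/es-config and the script is a Groovy file named my_script.groovy (both names are assumptions). In the elasticsearch.yml on your test classpath:
path.conf: src/test/resources/es-config
and in the project tree:
src/test/resources/es-config/scripts/my_script.groovy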
I'm trying to think of a good solution for automating the deployment of my .NET website to the live server via FTP.
The problem with using a simple FTP deployment tool is that FTPing the files takes some time. If I FTP directly into the website application's folder, the website has to be taken down while I wait for all the files to be transferred. What I do instead is manually FTP to a separate folder, then, once the transfer is complete, manually copy and paste the files into the real website folder.
To automate this process I am faced with a number of challenges:
I don't want to FTP all the files - I only want to FTP those files that have been modified since the last deployment. So I need a program that can manage this.
The files should be FTPed to a separate directory, then copied into the correct destination once the transfer is complete.
Correct security permissions need to be retained on the directories. If a directory is copied over, I need to be sure that the permissions will be retained (this could probably be solved by rerunning a script that applies the correct permissions, sketched below).
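A minimal sketch of such a permissions script, assuming an IIS site located at C:\inetpub\wwwroot\mysite (the path is an assumption):
rem re-apply read/execute permissions for the IIS worker process group, recursively
icacls "C:\inetpub\wwwroot\mysite" /grant "IIS_IUSRS:(OI)(CI)RX" /T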
So basically I think that the tool I'm looking for would do an FTP sync via a temporary directory.
Are there any tools that can manage these requirements in a reliable way?
I would prefer to use rsync for this purpose. But since you seem to be running Windows here, some more effort is needed: Cygwin or something similar.
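A minimal sketch of what that could look like, run from Cygwin or a similar environment (the local publish folder, user, host and staging path are assumptions):
# sync only changed files into a staging directory on the server,
# then copy them into the live site folder in a second, faster step
rsync -avz --delete ./publish/ deploy@liveserver:/staging/mysite/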