Remove the regional files from a LightSwitch deployment

Is there some way to remove the multitude of region-specific files in the LightSwitch \bin directory, apart from manually?
We only deploy locally, so the various language/region-specific files seem a waste, increasing build and deployment times.

Related

Deploying config files to PLC

Is it possible to include arbitrary files (in this case a .csv) from a TwinCAT project directly in the Boot directory of a PLC?
By using PATH_BOOTPATH in the file open/read FBs, it is possible to load files from this directory in a convenient manner regardless of whether using a CE or Windows deployment. However, deployment of files to this location seems to be the sticking point.
I know that a copy of the project code is included within the CurrentConfig<Project>.tpzip file, but this file is not easily accessible from code, nor is it updateable.
I've found the 'Additional Files' section within the system configuration, but its behaviour makes little sense:
Adding a file from inside the project as a 'Relative' path doesn't seem to do anything.
Adding a file from inside the project as an external path includes the file (via symbolic links?) in the 'CurrentConfig.tszip' file, which has the same issues as the .tpzip.
Adding an external file as an external path again includes the file inside the .tszip.
I'm willing to accept that this might not be possible, but it just feels odd that the PATH_BOOTPRJ and PATH_BOOTPATH roots are there without pointing at anything useful.
Deployment
To quote Beckhoff:
Deployment is used to set up commands that are to be executed during the installation and startup of an application.
The event types essentially determine at what stage of the deployment process the command is performed; the command can be either copying a file or executing a script/program.
I haven't performed extensive testing, but between absolute/relative pathing and script execution, this should solve nearly all issues with deployment configuration.
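As a sketch, assuming a Windows target where the boot directory is C:\TwinCAT\3.1\Boot (the location PATH_BOOTPATH is generally understood to resolve to; the file names here are hypothetical), a deployment copy command for the original question's .csv could be as simple as:
copy /Y "C:\ProjectData\settings.csv" "C:\TwinCAT\3.1\Boot\settings.csv"
Once the file is there, the file open/read FBs with ePath := PATH_BOOTPATH should find it regardless of deployment type.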

Packaging Applications for Azure Batch

I am having trouble packaging applications to get them to run on Azure Batch compute nodes. I am using a user subscription with virtual machine configuration, so I can't use application packages. I have been uploading my executable files and DLLs as resource files. Currently, I have a task that requires a lot of DLLs, but it seems that I can't upload more than 10 resource files through the Azure portal.
What is the best way to package an application and all its required DLLs to have it run on a Batch compute node without using the built-in application packages? Is there a way other than going through all its DLLs and adding each one manually as a resource file?
How do I get around the limit of 10 resource files per task?
Thanks!
Application package functionality for Virtual Machine configuration should be available now (documentation may be out of date). With that being said, answers to your questions:
Without using application packages, you can do one of the following:
1) Create a self-extracting (SFX) archive with your archiver of choice. Ensure that it can extract silently, without a GUI pop-up (7-Zip can do this, for example), and run the SFX archive's command as part of your start task.
2) Zip up your files, add the zip file and unzip.exe as your two resource files, and run the unzip command as part of your start task.
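For option 2, the start task's command line might look like the following (a minimal sketch; app.zip and unzip.exe are hypothetical resource file names, and an Info-ZIP style unzip.exe is assumed):
cmd /c "unzip.exe -o app.zip -d %AZ_BATCH_NODE_SHARED_DIR%\app"
Extracting into AZ_BATCH_NODE_SHARED_DIR, the shared directory Batch provides on each node, keeps the files available to subsequent tasks.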
The service limit is not 10 (although that may be the limit in the portal). You can add resource files up to the service limit, which varies depending upon the length of your URLs. For a large number of dependencies, please follow the recommendation from #1 or use application packages (if possible).

Copy batch files to Jenkins Slaves with Different OS versions

I am running automated tests of our application on different versions of Windows (Windows 7, Windows 10, etc.). My testing suite requires that I copy files to the Slave computers when there are changes in the tests (external to the application build). The test files are not in the Jenkins workspace, as they do not change frequently and therefore do not need to be copied to the Slave with each execution.
I am looking to be able to update the files on the Slaves, but not under the workspace directory, so the Copy-To-Slave plugin will not work, from my understanding.
I am looking to have batch files, testing resource files, DB generation scripts and others copied to the Slave computer by a Jenkins job. This job may monitor Git, but not everything being copied is from Git.
In essence, I want to execute the following, but targeting the Slave computer:
xcopy C:\Testing\*.* C:\Resources\Testing /s/v/e
The reason for this is that our testing scripts look for certain files to execute (DB scripts for building the database for the current platform/DB engine). As these do not change too frequently, we only need to copy the files when they have changed, and leave them in place for subsequent test runs. There are a large number of files and GBs of data that do not need to be copied with each test run. There are also multiple executions of the application with the same testing files, where the application has different configurations but should produce the same results, so the test files do not need to be copied for each of these executions either.
I found a configuration option on the Copy-To-Slave plugin to add additional directories as destinations, relative to the file system root directory (C:\ in my case), which solves my problem.
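For reference, a rough equivalent that copies only changed files, which a Jenkins job could run on the Slave (a sketch using the paths from the question; /E copies subdirectories and /XO skips files that are not newer at the source):
robocopy C:\Testing C:\Resources\Testing /E /XO
Note that robocopy exit codes below 8 indicate success (1 means files were copied), so a batch step may need to mask them to avoid Jenkins marking the build as failed.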

TFS Build Website deployment package web.config transformation not working

So I am trying to use TFS Build for generating deployment packages for my 3 environments (ST, UAT, Prod).
This is what I followed to successfully generate the package locally:
http://social.msdn.microsoft.com/Forums/en-US/tfsbuild/thread/74bb16ab-5fe6-4c00-951b-666afd639864/
So my local machine will generate the package for the active configuration, and everything is good. Here is my build definition:
/p:DeployOnBuild=true;DeployTarget=Package
I include my solution file and the web deployment project in the Projects To Build.
It creates the respective folders for ST, UAT and PROD. In each of these there is a _PublishedWebsites folder, which contains two folders:
1) MyDeploymentProject - contains the transformed web.config.
2) MyDeploymentProject_Package - contains the Package folder contents along with the zip file and SetParameters files. Here, nothing is transformed. But if I check the TempBuildDir on the TFS server, it does contain the transformed config.
When I compared the local and server logs, I found that locally, after transformation, the files are updated and the package is created, whereas on TFS the AfterBuild target is called, the transformation is done, and it ends there.
This is my local log:
Target "WPPCopyWebApplicaitonPipelineCircularDependencyError" skipped, due to false condition; ($(WPPCopyWebApplicaitonPipelineCircularDependencyError)) was evaluated as (False).
Target "ProcessItemToExcludeFromDeployment" in file "C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.targets" from project "C:\TAX-IT\Main\Source\TDDB\TDDB_deploy2\TDDB_deploy2.wdproj" (target "PipelineCollectFilesPhase" depends on it):
Done building target "ProcessItemToExcludeFromDeployment" in project "TDDB_deploy2.wdproj".
Target "GetProjectWebProperties" in file "C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.targets" from project "C:\TAX-IT\Main\Source\TDDB\TDDB_deploy2\TDDB_deploy2.wdproj" (target "PipelineCollectFilesPhase" depends on it):
Using "GetProjectProperties" task from assembly "C:\Program Files\MSBuild\Microsoft\WebDeployment\v10.0\....\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.Tasks.dll".
Task "GetProjectProperties"
I am not sure what is wrong.
Also, I installed VS2010, Web Deploy 2.0 and 3.0, and the web deployment tools on my build servers.
Has anyone faced this and resolved it?
Please help.
Thanks
MadCoder,
From what I've gathered from your description, you have everything set up correctly; it seems you are just missing the "Configuration" parameter. When you run the build definition, it uses the configuration specified in your "Configurations to Build" argument. If you want multiple configurations built (as you are suggesting), you'll need multiple configurations defined. One question I have: when you look at the logs of the TFS build process, do you see multiple configurations built, or only one? If you only see one, then you don't have all of the configurations defined, and the config file won't be transformed for the missing ones. Based on your description, your build definition's Configurations to Build list should contain an entry per environment (for example, Any CPU|ST, Any CPU|UAT and Any CPU|PROD).
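If you are driving MSBuild directly instead, the same idea is expressed by adding the Configuration property per environment, e.g. (a sketch using one of your configuration names):
/p:DeployOnBuild=true;DeployTarget=Package /p:Configuration=UAT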
If you don't want to deploy to a web server, you can stop reading here.
If you choose to use a TFS Build Definition to deploy to a web server, you'll need to have a target web server somewhere and you'll need to install and configure the Web Deploy v2/v3 on that server as well.
When you are using TFS Build Definitions to deploy, the transformation happens upon deployment, not during packaging (prior to deployment). It may package up a transformed config somewhere, but it won't actually transform the config bundled with the website. The only way I've been able to get the deployment to actually work with a transformed config is when I had a website specified in the MSBUILD args. Here is an example of my MSBUILD args:
/p:DeployOnBuild=True /p:DeployTarget=MSDeployPublish /p:MSDeployPublishMethod=RemoteAgent /p:MsDeployServiceUrl=MyWebServer/MsDeployAgentService /p:DeployIisAppPath="MyWebsite as named in IIS" /p:UserName=MyDomain\MyWebDeployUser /p:Password=MyWebDeployPassword
If you don't want MSBuild to do the actual deployment (I prefer not to, because then your deployment process is tied to TFS), you can do the deployment after the build process and use the CTT (Config Transformation Tool) project, found on CodePlex. This tool performs the exact same transformations as MSBuild, but it also includes the ability to parameterize settings so you can define classes of environments (for example, 3 QA environments, 2 staging environments, etc.) and still use the respective transforms for that class of environment.
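An invocation sketch, from memory and with hypothetical file names, so check the project's documentation for the exact syntax:
ctt.exe source:web.config transform:web.UAT.config destination:web.transformed.config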

Packaging with NAnt, how to handle different environments

I'm using NAnt to build an ASP.NET MVC project.
The NAnt script then creates a zip package, containing a deploy script and all the necessary files.
The deploy script backs up the current running website, sets up the newer version of the website and updates the DB.
This works fine for a single environment.
However, we're asked more and more to set up a Staging/Acceptance environment next to the production. These environments, of course, differ in file structure, DB server, config settings etc.
How can I best handle this in the deploy scripts? I don't want to create separate variables for each environment, distinguishable by name only.
Providing defaults and providing the variables in separate files seems more logical.
Does anyone have practical experiences with this?
Store the things that you think are likely to change between environments in config files.
Visual Studio can do the heavy lifting here if you like; you can create settings and specify default values from the Settings tab of a Visual Studio project's properties.
This will create the config file for you and provide strongly-typed access through Properties.Settings.Default.
As for handling multiple environments through your build, I've seen some people recommend maintaining multiple copies of the config files (one for each environment, for example), and others recommend using NAnt to modify the config files during the build or deployment phase. You can use a property passed to NAnt on the command line to select which environment you are building (or deploying, depending on how you're doing it); see the sketch below.
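The command-line property variant might look like this (a sketch; the environment property and the deploy target are hypothetical names):
nant -buildfile:deploy.build -D:environment=staging deploy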
I don't recommend either of these approaches because:
They both require changes to your build to support new environments.
If you change a setting in a deployed environment and forget to update the build then the next deployment will reset the change (somewhat defeating the point of config settings).
If someone creates a new environment (let's say they want to explore issues arising from upgrading to a new version of SQL Server, for example) and doesn't fancy creating all new config files in the build system, they might decide to just reuse an existing environment's settings. Let's say they choose to deploy using the live settings and forget to change something afterwards. Your new 'test' environment could now be pointing at live kit.
I create a copy of each config file (called web.config.example, for example) and comment out the settings within them (unless they have meaningful defaults). I check these in and have them deployed instead of the real web.config; that is, web.config is NOT deployed automatically, and web.config.example is deployed as web.config.example. The admin of the new environment has to copy and rename the file to web.config and provide meaningful values. I also put all the calls to the settings behind my own wrapper class; if a mandatory setting is missing, I throw an exception.
The build and my environments no longer depend on each other - one build can be deployed to any environment.
If a setting is missing (a new environment or a new setting in an existing environment) then you get a nice clear exception raised to tell the admin what to do.
Existing settings are not altered after an upgrade because only the .example files were updated. It's an admin task to compare the current settings with the latest example and revise if necessary.
To configure the deployment, you could put all the environmental settings (install paths, etc.) into NAnt properties and move them into a separate file (settings.build, for example), then use the NAnt include task to pull that file in at the top of your deployment file (deploy.build, for example). You can then deploy a new version of deploy.build without overwriting your config changes, as they live in settings.build. If a new property is introduced into deploy.build, NAnt will fail with a clear message telling you that you haven't set that property.
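The resulting upgrade flow might look like this (a sketch with hypothetical paths, assuming deploy.build pulls in settings.build via the include task): a new deploy.build is shipped while the local settings.build stays untouched:
copy /Y \\buildserver\drop\deploy.build C:\deploy\deploy.build
nant -buildfile:C:\deploy\deploy.build deploy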