ASP.NET Core multiple environment publishing - deployment

Coming from the .NET MVC world, I am confused about how .NET Core handles multi-environment deployments (Dev, Test, Production).
The tech used here is Bamboo (build server) + Octopus Deploy (CD).
.NET Core appears to use appsettings files instead, with web.config only used for IIS hosting.
The guides I have read suggest adding an environment variable, ASPNETCORE_ENVIRONMENT, to define which environment the application is currently running in.
This is the command I used to build in Bamboo.
dotnet publish -c Test ${bamboo.build.working.directory}\HelloWorld.sln
Questions...
1. I have appsettings.json, appsettings.Test.json, and appsettings.Production.json.
It looks like the app knows which appsettings file to read from based on the ASPNETCORE_ENVIRONMENT value (see the sketch below).
How can I tell Octopus to use the correct file based on the environment I am deploying to?
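For reference, the environment-specific file selection comes from how configuration is composed at application startup. This is a minimal sketch of what the default ASP.NET Core 1.x-style template generates (the code below is illustrative of that template, not taken from this particular project):

using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;

public class Startup
{
    public Startup(IHostingEnvironment env)
    {
        // ASPNETCORE_ENVIRONMENT drives env.EnvironmentName ("Test", "Production", ...),
        // which selects the optional override file layered on top of appsettings.json.
        var builder = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
            .AddEnvironmentVariables();
        Configuration = builder.Build();
    }

    public IConfigurationRoot Configuration { get; }
}

Providers added later override earlier ones, so values in appsettings.Test.json win over appsettings.json when ASPNETCORE_ENVIRONMENT is set to Test.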

Having a variable and multiple config files packaged up makes things overly complicated, especially if you're using Octopus Deploy.
The only reason to have multiple appsettings.<Environment>.json files is that they are held in source control with their values already set - in other words, the environment-specific values come from the source code rather than from the release manager. Otherwise they would be identical files that Octopus still has to transform, which makes them redundant.
My advice would be to move to a single file and put the variables into Octopus Deploy. Remove the ASPNETCORE_ENVIRONMENT variable and you only need one file, appsettings.json, which can be transformed during deployment for whatever environment you are targeting.
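For example, the single appsettings.json could ship with placeholder or local-development values (the keys below are invented purely for illustration):

{
  "ConnectionStrings": {
    "Default": "Server=localhost;Database=HelloWorld;Trusted_Connection=True;"
  },
  "Smtp": {
    "Host": "localhost"
  }
}

Octopus's JSON configuration variables feature can then overwrite those values at deploy time by matching project variables named after the JSON path (e.g. ConnectionStrings:Default or Smtp:Host), so the per-environment values live in Octopus rather than in source control.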
ASP.NET Core Web Applications - Octopus Deploy Documentation
Hope this helps

Related

Should I bundle the source code, build script and deployment script together?

Should I bundle the source code, build script and deployment script together? In my previous company they were always bundled together, but there was a recurring problem: when the company added a new server, they needed to change the deployment script and create a new build version even though there was no change to the source code. I would like to know what your company's practice is for source control, build, and deployment.
Best practice for deployment is to have some standard system for that purpose. Usually that system will have a standard way to enumerate what hosts are available and what versions of software are on each host, so any scripts necessary for deployment become agnostic to the machines in use.
Similarly, in many environments, deployment uses a set of standard techniques. For example, it is common to use CI to run tests and then build one or more deployment artifacts, such as a tarball or container; all deploys using the same technique then use the same deployment method (e.g. unpack the tarball into a directory named after the repository), so a deployment script may not even be necessary. If you use a standard method and a script is necessary, then you should include it either in your artifact (which means it's included in the source code) or in the configuration for the deployment system (which should be maintained in a repository as well).
Whether you should include the source code depends on whether it's needed. If you're deploying a project in a language like Python or Ruby, then obviously it will be needed. However, if you're deploying a project in a compiled language like Go or Rust, then it probably is not, and your build artifacts will be smaller and easier to work with if you don't include it and just build a binary artifact during CI.

web.config changes via TFS 2015 Release Management

In the past I've used web.config transforms when manually deploying code to set environment-specific setting values and attributes. I am transitioning from environment-specific manual builds to a single TFS 2015 build deployed to multiple environments via Release Management. Environment-specific application settings values configured in the web.config are tokenized. This method essentially inserts tokens into setting values during the build process; when deployed, the tokens are replaced with matching Release definition configuration values.
This method is insufficient for setting attributes of elements other than application settings, however. Examples of these transforms include:
<httpCookies requireSSL="true" xdt:Transform="Insert" />
<compilation xdt:Transform="RemoveAttributes(debug)" />
<httpRuntime xdt:Transform="RemoveAttributes(executionTimeout,maxRequestLength,useFullyQualifiedRedirectUrl,minFreeThreads,minLocalRequestFreeThreads,appRequestQueueLimit,enableVersionHeader)"/>
<httpRuntime enableVersionHeader="false" maxRequestLength="12288" xdt:Transform="SetAttributes"/>
<customErrors mode="On" xdt:Transform="SetAttributes"/>
What is the best way to update these attributes during release?
Both Web Deploy's parameters.xml method and transforms can be used with Release Management. Transforms would be triggered from the build, and the replacement of tokens created by a publish would be triggered by Release Management.
To trigger transforms during the build, you can do this in one of two ways:
Add the following MSBuild parameters to force the transformation to happen during the build
/p:UseWPP_CopyWebApplication=true /p:PipelineDependsOnBuild=false
Create a publish profile using the MSDeploy Package option and then trigger the packaging in Build using the following MSBuild parameters:
/p:DeployOnBuild=true /p:PublishProfile=[nameOfProfile]
Either of the above methods will cause normal Web.config XDTs to run. If you need other XML files to be transformed, you'll need to first install SlowCheetah.
Token Replace and Parameters
Now that you have a build artifact with XDTs run, you can use token replacement and the WinRM tasks from Release Management. These will take the Web Deploy package from the build and execute the SetParameters command before deploying it. The trick is to run a token replace on the SetParameters.xml file first, swapping in the Release environment's variable values.
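As an illustration only (the parameter name and the __Token__ convention below are assumptions about a typical setup, not something the toolchain mandates), a tokenized SetParameters.xml might look like this before the replace step:

<?xml version="1.0" encoding="utf-8"?>
<parameters>
  <!-- The token-replace step swaps __DefaultConnection__ for the value of the
       matching Release environment variable before the package is deployed
       with msdeploy and -setParamFile. -->
  <setParameter name="DefaultConnection-Web.config Connection String"
                value="__DefaultConnection__" />
</parameters>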
User Sumo gave a proper answer, but I want to record some comments related to the what instead of the how.
IMHO there are different categories of settings to consider; let me give an example. The database connection string changes in each environment, while requiring SSL should be turned on for all testing and production environments.
From this perspective, there are settings applied as early as possible, traditionally at build time in the form of Debug/Release builds, and there are last-minute, environment-dependent settings, all the way up to runtime settings such as feature toggles.
So in my view you can use a single tool or multiple tools, but it is important that you properly categorize your settings accordingly.
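To make that distinction concrete, here is a minimal C# sketch (the configuration keys are made up for illustration) contrasting a build-time setting with environment-dependent and runtime settings read through Microsoft.Extensions.Configuration:

using System;
using Microsoft.Extensions.Configuration;

public static class SettingsExample
{
    public static void Show(IConfiguration configuration)
    {
        // Build-time setting: fixed when the Debug/Release build is produced.
#if DEBUG
        bool detailedErrors = true;
#else
        bool detailedErrors = false;
#endif

        // Environment-dependent setting: supplied per environment at deploy time.
        string connectionString = configuration.GetConnectionString("Default");

        // Runtime setting: a feature toggle that can be re-read whenever it is used.
        bool newCheckoutEnabled = configuration.GetValue<bool>("Features:NewCheckout");

        Console.WriteLine($"{detailedErrors} {connectionString} {newCheckoutEnabled}");
    }
}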

Visual Studio Online / Azure stopping and starting web applications using Powershell

I'm using Visual Studio Online's build tools to deploy web applications from a single solution. I've occasionally been running into file locking issues.
Error: Web Deploy cannot modify the file 'Microsoft.CodeAnalysis.CSharp.dll' on the destination because it is locked by an external process.
After some Googling, I believe the "fix" is to stop the web application on Azure before deployment and start it back up after. Sounds legit.
However, there does not seem to be a straightforward way to do this directly in VSO's build definitions. I've created an "Azure PowerShell" build task, but it wants a PS1 file from the repository. It doesn't seem to let me just run Azure PowerShell commands (e.g. Stop-AzureWebsite) from there. My team has created a work-around where we have a "run.ps1" that just executes the command you pass as a parameter, but none of us are satisfied with that.
What are we missing? There has got to be an easier way to do this without having a PS1 script checked into source control.
I solved this by installing the Azure App Services - Start and Stop extension from the Visual Studio Marketplace.
When installed, it will allow you to wrap the Deploy Website to Azure task in your Release definition with Azure AppServices Stop and Azure AppServices Start tasks, effectively eliminating the lock issues.
Check whether you are using "/" as the folder separator in the "Web Deploy Package" path instead of "\".
i.e. change
$(System.DefaultWorkingDirectory)/My Project/drop/MyFolder/MyFile.zip
to
$(System.DefaultWorkingDirectory)\My Project\drop\MyFolder\MyFile.zip
I noticed that was the only difference between the deployment that was giving the error and the others (the Restart step I added was not helping). Once I modified the path, I got it working.
Sounds crappy, but fixed my issue.
Did you use the Build Deployment Template that sets the correct MSBuild parameters for your package? You can see how here. I would create a build using that template and see if you have the same issues. If so, ping me on Twitter #DonovanBrown and I will see if I can figure out what is going on.
As a rule, it is good practice to have any scripts or commands required to deploy your software checked into source control as part of your build. They can then be easily run repeatedly with little configuration at the build level. This provides consistency and transparency.
Even better is to have deployment scripts output as part of the build and use a Release Management tool to control the actual deployment.
Regardless, having configuration as code is a mantra that all Dev and Ops teams should live by.

Get Build Version in automated build deployment using TFS

I am deploying a web application to Azure using TFS CI automated build deployment.
In our config we maintain a build version like 2014.05.19.1, which is in $(Date).$(rev) format.
All I want is to update the config each time the build is deployed. For that I am passing a value for the 'BuildVersion' parameter in the template to the PowerShell script which actually performs the publishing to Azure.
I tried using $(Date:yyyyMMdd)$(Rev:.r) but it is treated as a literal string.
I want to get the current build version, just like IBuildDetail.BuildNumber, within the template.
My question is how to get the build version?
If you are using Invoke Process, then instead of passing a value for the BuildVersion parameter you can directly use 'BuildDetail.BuildNumber' in the parameters for the process, like:
String.Format("-BuildNumber ""{0}""",BuildDetail.BuildNumber)
This would give the required build number.
If your PowerShell script is being executed from your TFS build, it should have access to the environment variables specific to the TFS context of the build. If that is the case, you actually don't need to pass the $(BuildVersion) parameter to the script, as it already is accessible to the PS script in the $env:TF_BUILD_BUILDNUMBER environment variable. Try testing something like $env:TF_BUILD_BUILDNUMBER | Out-File "D:\Dev\BuildNumber.txt" in your script. You should hopefully see the file containing your build number after running your build.
(I am assuming you are using a relatively new build process template...one that contains the "Post-Build script path" parameter, such as TfvcTemplate.12.xaml)
Hope this is helpful.
I would recommend that you use the right tool for the right job. The build system is really only for building (compile & test). We have been using it for other things for years because we did not have another integrated solution. However, Microsoft recently bought InRelease and rebranded it as Release Management for Visual Studio 2013. I have successfully integrated this with TFS 2012 as well.

Passing RAILS_ENV into Torquebox without using a Deployment Descriptor

I am wondering if there is a way to pass a value for RAILS_ENV directly into the Torquebox server without going through a deployment descriptor; similar to how I can pass properties into Java with the -D option.
I have been wrestling with various deployment issues with Torquebox over the past couple of weeks. I think a large part of the problem has to do with packaging the gems into the Knob file, which is the most practical way of managing them in a Windows environment. I have tried archive deployment and expanded deployment, with and without an external deployment descriptor.
With an external deployment descriptor, I found the packaged Gem dependencies were not properly deployed and I received errors about missing dependencies.
When expanded, I had to fudge around a lot with the dependencies and what got included in the Knob, but eventually I got it to deploy. However, certain files in the expanded Knob were marked as failed (possible duplicate dependencies?), though they did not affect the overall deployment. The problem was that when the server restarted, deployment would fail the second time, saying it could not redeploy one of the previously failed files.
The only approach I have found to work consistently is archive deployment without an external deployment descriptor. However, I still need a way to tell the application which environment it is running in. I have different Torquebox instances for each environment and they only run the one application, so it would be fairly reasonable to configure this at the server level.
Any assistance in this matter would be greatly appreciated. Thank you very much!
The solution I finally came to was to pass in RAILS_ENV as a Java property to the Torquebox server and then to set ENV['RAILS_ENV'] to this value in the Rails boot.rb initializer.
Step 1: Set Java Property
First, you will need to set a Rails environment Java property for your Torquebox server. To keep with standard Java conventions, I called this rails.env.
Dependent on your platform and configuration, this change will need to be made in one of the following scripts:
Using JBoss Windows Service Wrapper: service.bat
Standalone environment: standalone.conf.bat (Windows) or standalone.conf (Unix)
Domain environment: domain.conf.bat (Windows) or domain.conf (Unix)
Add the following line to the appropriate file above to set this Java property:
set JAVA_OPTS=%JAVA_OPTS% -Drails.env=staging
The -D option is used for setting Java system properties.
Step 2: Set ENV['RAILS_ENV'] based on Java Property
We want to set the RAILS_ENV as early as possible, since it is used by a lot of Rails initialization logic. Our first opportunity to inject application logic into the Rails Initialization Process is boot.rb.
See: http://guides.rubyonrails.org/initialization.html#config-boot-rb
The following line should be added to the top of boot.rb:
# boot.rb (top of the file)
ENV['RAILS_ENV'] = ENV_JAVA['rails.env'] if defined?(ENV_JAVA) && ENV_JAVA['rails.env']
This needs to be the first thing in the file, so Bundler can make intelligent decisions about the environment.
As you can see above, a seldom-mentioned feature of JRuby is that it conveniently exposes all Java system properties via the ENV_JAVA global map (mirroring the Ruby ENV map), so we can use it to access our Java system property.
We check that ENV_JAVA is defined (i.e. JRuby is being used), since we support multiple deployment environments.
I force the rails.env property to be used when present, as it appears that RAILS_ENV already has a default value at this point.