Sails.js 0.10.x: How to run the production Grunt tasks for a staging environment

Sails.js 0.10.x: I added a new file "staging.js" to the config/env folder.
Starting the server with >sails lift --staging shows that the staging file was used.
But it still uses the development Grunt tasks, e.g. no minification, dev blueprint settings, etc.
I was wondering if there is an easy way to run the prod Grunt tasks with a new environment like staging?

In the Sails node module of your application, you can navigate to the Grunt hook file (app_base/node_modules/sails/lib/hooks/grunt/index.js). If you look at the code there, under the initialize method, there's a condition that checks whether the environment is the production environment and, if so, runs the production Grunt task. Even though you could edit this condition to include your staging environment, this file isn't meant to be altered - updating the module in the future would erase any changes you make.
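Paraphrased (this is not the exact Sails source, just the shape of the check described above):
// app_base/node_modules/sails/lib/hooks/grunt/index.js (paraphrased sketch)
// Only the production environment triggers the 'prod' task list; any other
// environment name, including a custom 'staging' one, falls back to 'default'.
var taskName = sails.config.environment === 'production' ? 'prod' : 'default';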
The best thing to do would be to go to app_base/tasks/register/prod.js, move your staging-specific tasks into the prod task list, and just use the prod environment. Alternatively, you could copy the production tasks over to your staging environment tasks.
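For illustration only, the second option could be sketched like this; the file name and alias are hypothetical, and the stock hook will only pick a 'staging' task up if its environment check is adjusted as discussed above:
// tasks/register/staging.js - hypothetical sketch, not part of the generated app
module.exports = function (grunt) {
  // Reuse whatever pipeline tasks/register/prod.js registers (concat, uglify,
  // cssmin, prod linking, etc.) so a staging build behaves like a prod build.
  grunt.registerTask('staging', ['prod']);
};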

Related

Maintainability of TFS xaml build vs TFS vNext build vs Octopus Deploy

My question is about the maintainability of vNext/Octopus processes vs XAML-based processes. Or rather about the apparent impossibility of maintaining them sanely, which leads me to believe we are doing something terribly wrong.
Given:
Microsoft pushes to phase out its TFS XAML builds in favour of the vNext builds
Octopus Deploy is a popular deployment automation framework
We have many XAML based builds, but starting to port to vNext
The deployments are automated with Octopus Deploy
Concretely, we have three kinds of builds going on in QA:
Old XAML based compilation builds producing artifacts to be deployed
Ultimately just builds the code, zips it and places it in a well-known location
New vNext compilation builds producing artifacts to be deployed
Same as above
Deployment builds
XAML based build definition per deployment environment. This is the source of truth for the particular deployment, containing all the configuration URLs, connection strings, certificate thumbprints, etc. The build definition has over 100 build parameters. Each time a new environment is set up, we clone an existing XAML build definition and change the parameters.
This build unpacks the build artifact, generates all the web/app config files based on the configuration parameters, kicks off Octopus Deploy with a lot of parameters using octo.exe and waits for it to finish
Octopus Deploy process
Creates 3 packages from the build artifact previously unpacked by the XAML build to match three areas of deployment - web farm, background job engine cluster and the database
Delivers the relevant packages to the relevant tentacles.
The tentacles unpack and setup their respective packages
So, if we have 50 deployment environments, then we have 50 XAML deployment builds, each capturing the context of the respective environment. But the XAML deployment build delegates the deployment job to Octopus and here we are forced to have 50 Octopus projects - one per deployment.
Why is it so? We examined the option of having just one Octopus project, but what would be the release versions of such a project? In order for us to be able to navigate amongst the gazillion releases, the release version must include:
The build version of the deployed code, e.g. 55.0.18709.3
The name of the deployment environment, e.g. atwfm
Using the example above this gives us 55.0.18709.3-atwfm, but sometimes we want to deploy the same build artifact in the same deployment environment twice. But the only Octopus project would already have the release 55.0.18709.3-atwfm, so how to deploy 55.0.18709.3 in atwfm again, without deleting the already existing release?
We could not find a workaround, and so we have an Octopus project per deployment.
THIS IS ABSOLUTELY CRAZY because Octopus projects are a pain to update. Suppose we need to add a step - go do it in 50 projects. There is plenty of advice on the Internet to use automation to edit multiple projects. Not ideal at all.
vNext, BTW, has the same problem. If I am to port the existing XAML builds to vNext I will end up with 50 vNext deployment builds. If I decide to add a step, I need to do it in all 50 builds!!!
Note that XAML builds do not have this problem (they have many others, though), because their process is separate from the parameters. I can modify the workflow once and all the XAML builds are now updated with the new process change.
My question is - how do people work with vNext and Octopus, because our process drives me crazy. There must be a better way.
EDIT 1
I would like to clarify. We sometimes want to deploy the same build artifacts twice. We are not recompiling them and reusing the same version. No. We already have the build artifacts handy, with the build version baked inside the artifact. We may want to deploy it a second time into the same environment because, for example, some databases in that environment were misconfigured, this has now been fixed and we need to redeploy. This does not mean we can rerun the already existing Octopus release, because the fix may involve tweaking the deployment parameters of the respective XAML deployment build definition. Hence we may be forced to restart the XAML deployment build, which never compiles code.
EDIT 2
First of all, why do we drive the deployment from TFS XAML builds rather than from Octopus? Historic reasons. We did not have Octopus at first. The deployment was done by our ad hoc code. When we introduced Octopus we decided to keep the XAML deployment builds for two reasons:
To save the cost of migrating all the XAML deployment builds with all the gazillion deployment parameters to Octopus. Maybe it was a wrong decision, maybe we could have automated the migration.
Because TFS has a better facility to display test results. The deployment may end with deployment smoke tests and their results have to be published somewhere. We do not see how Octopus can help us publish the results; TFS can.
Why would one redeploy? For example, one of the deployment parameters is the certificate thumbprint. When the certificate is renewed, this parameter must be changed (we do have automation for updating XAML build parameters). But often we discover that it was already deployed with the wrong thumbprint. So, we fix the deployment and redeploy. Or, we discover some strange behavior of the deployed application and wish to redeploy with some extra tracing/debugging features.
There is a lot to unpack here, but I'll give it a go.
TL;DR It's the way you version the releases that's causing you all the pain. Change that and everything else will fall into place.
Let's start at the end and work backwards.
Octopus Deploy has a concept of Environments. This means that you can Deploy the same project to multiple environments and use Octopus's scoping mechanism to manage environment specific configuration.
So, using your example:
Creates 3 packages from the build artifact previously unpacked by the XAML build to match three areas of deployment - web farm, background job engine cluster and the database
I set up an Environment in Octopus for each of your 50 Environments. (I'll use 3 environments in the example to keep it simple, but the principles apply no matter how many environments you have)
In my Dev Environment I have a single server so I create an environment called "Dev" and add the tentacle for that specific server. Then I tag the tentacle with the deployment type "Web", "Job", "Database"
I then set up a test environment which has 3 servers so I create the Environment and add the 3 servers. I then tag each tentacle with the deployment type "Web", "Job", "Database"
Finally I set up the Production environment. This has 5 web servers, 1 job server and 1 database server. I add all 7 tentacles to the environment, and tag them appropriately.
Now I only need 1 project to deploy to all 3 environments. In my project I have 3 steps.
Step 1 Deploy Web Site
Step 2 Deploy Jobs
Step 3 Deploy database
I can tag each of these steps to say what kind of tentacle I want to deploy to. Now when I run the deployment, the link between the tags on the step and the tags on the tentacle means Octopus knows where to deploy the code.
Variables: Your variables can be scoped to an environment. So, for example, if your dev environment database connection string is dev.database.net/Instance and your test environment database connection string is test.Database.net/Instance, then you can scope these in the variables section of the project. If your DNS is consistent with your environment names, you could even use some of the built-in variables to make adding environments easier, e.g. #{Octopus.Environment.Name}.Database.net/Instance
Releases and version numbers: So here is where I think your problem lies. Adding the environment name to the release and trying to create multiple releases with the same version is basically causing you all of the pain.
Using the example above this gives us 55.0.18709.3-atwfm, but sometimes we want to deploy the same build artifact in the same deployment environment twice. But the only Octopus project would already have the release 55.0.18709.3-atwfm, so how to deploy 55.0.18709.3 in atwfm again, without deleting the already existing release?
There are a couple of things here. In Octopus you can easily deploy again from the UI, however it sounds like you're rebuilding the artifact and trying to create a new release with the same version number. Don't do this! Each new build should have a distinct and unique build number / release number.
The principle I follow is "build once deploy many"
When you create a release it requires a version number, and this release then flows through the environments. So I build my code and it gets a version number of 55.0.18709.3, then I deploy it to Dev. When the deployment has been verified, I then want to "Promote" the release to Test. I can do this from within Octopus, or I can get TFS to do it.
So I promote 55.0.18709.3 to test and then on to prod. If I need to know which release is in which environment, Octopus tells me this via the dashboard or API.
Finally, I can "Orchestrate" the flow of releases through my environments using Build vNext.
So my end-to-end process looks something like this:
Build vNext Build
Compile
Run Unit Tests
Package output
Publish package
Build vNext Release
Call Octopus to create the release passing in the version number
Optionally deploy the release to the first environment on your way to live
I now have everything I need in Octopus to deploy to ANY environment with a single project and my environment specific configuration.
I can either "Deploy" the release to a specific environment or "Promote" the release from one environment to another. This can be done easily from within the Octopus UI
Or I can create a "Promote" using the Octopus plugin in TFS and use that to orchestrate the promotion of code through the environments.
Octopus Terminology.
Create release - This pulls together the artifacts and release number in Octopus to create an immutable thing which will be deployed to one or more environments.
Deploy release - The act of pushing your code to a specific environment.
Promote release - Once the code has been deployed into a single environment, it can then be promoted into other environments
If you have a specific sequence of environments then you can use the "Lifecycles" feature of Octopus to enforce that workflow, but that's a topic for another day!
EDIT 1 Response
I don't think the edit changes my answer; you can re-deploy the same release as many times as you like. What you cannot do is create a new release with the same version number. You might want to decouple these steps. Could you add some more detail about what changes in the XAML build? You can change variables in a release: you can update them in Octopus and then redeploy.
EDIT 2 Response
That makes things clearer. I think you need to take the hit and migrate the parameters to Octopus. Its variable management is much better than XAML builds, and although Build vNext is comparable to Octopus, it makes more sense to have the config in Octopus. As XAML builds are on their way out, it makes sense to move this stuff now. Whilst it might be a lot of work, at the end you'll have a much smoother workflow and you can really take advantage of the power of Octopus.
On the test results point, I agree this is better suited to Build vNext, so at this point you will be using Build vNext as your orchestrator and Octopus Deploy as your release management tool.
The process would look something like this:
Build vNext
Compile code.
Run Unit tests
Run Octopack
Publish packages
Call Octopus and Create release
Call Octopus to Deploy to "Dev"
Run Smoke tests
Run Integration Tests
Call "Selenium" to run Run UI tests
Call Octopus to Promote release to "Test"
Run Smoke tests
Run Integration Tests
Call "Selenium" to run Run UI tests
Call Octopus to Promote release to "Production" (Perhaps with a manual innervation)
Run Smoke tests
Run Integration Tests
Call "Selenium" to run Run UI tests

Run Capybara tests against my staging server

I'm new to Capybara and really happy with its capabilities. I have a few feature tests that run on my build server in a test environment.
I'm thinking it would be good practice to make another set of tests that would run after a new version is published to my staging server (where QA does its testing).
I need these tests to run on a remote server (which does not look like an issue) and I need them to run against the staging environment.
How do I run one set of tests on the staging environment and another on test?
Can I make a task for staging?
There are multiple ways to do this. One would be to create a separate directory for tests against staging and then have RSpec run the tests in that directory when you want to test staging. Another would be to tag the features (or scenarios) for staging with staging: true metadata, something like:
feature 'these tests are done in staging', staging: true do
...
end
and then run rspec with -t staging (which you could set up in your Rakefile).

How do you trigger a gulp/grunt task on a remote server after deploy?

I've just switched to the Roots Sage starter theme for WordPress: roots.io/sage/docs/
and I'm currently reading up on deployment processes.
My process is usually:
- make changes
- build with grunt/gulp
- commit (including compiled scripts)
- deploy
Sage's .gitignore file removes the dist folder (compiled files) from the repo, i.e. no CSS/JS in the repo. Are you supposed to install node/npm and build the assets on the staging/production environment after deploy? If so, how do you trigger a gulp/grunt task on a remote server after deploy?
I'm using https://www.springloops.com/ for managing git and deploy.
Are you supposed to install node/npm and build the assets on staging/production environment after deploy?
You should avoid doing this. There is mixed opinion about committing compiled assets to a VCS as you stated you were doing previously, too.
Let's look at an example.
You finished all your testing locally. You haven't run an npm update in a few days and one of your dependencies has a loose version constraint specified; something like "~1.0.0".
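For example, the relevant entry in package.json might look like this (the package name is made up for illustration; a "~1.0.0" constraint lets npm install any 1.0.x patch release):
"dependencies": {
  "some-ui-widget": "~1.0.0"
}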
You deploy. On the server, npm install is run before gulp or grunt. gulp runs, the build of your assets completes successfully, and the new version of your app is now live.
Unknown to you, version 1.0.1 of that dependency was released yesterday. For whatever reason, 1.0.1 introduced a change that breaks functionality within your app. That breaking change is now live on your site in production.
Even if you could guarantee all dependencies pulled from npm install on the server will mirror what you had locally/in staging, the headache of maintaining yet another set of software on the server (Node.js, Ruby, etc.) just for compiling assets should be enough to keep you from doing compilation in production.
IMO, you should keep compiled assets out of your VCS, and rsync them to your server(s) as part of your deployment.
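As a rough sketch (the paths, host name and 'build' task name are assumptions, it uses gulp 3-style task dependencies, and it presumes rsync/ssh access to the server), a gulp task for that last step might look something like this:
// gulpfile.js (excerpt) - hypothetical 'deploy' task that pushes the compiled
// dist folder to the server after the existing 'build' task has run.
var gulp = require('gulp');
var exec = require('child_process').exec;

gulp.task('deploy', ['build'], function (done) {
  // --delete keeps the remote dist folder in sync with the local build output.
  exec('rsync -az --delete dist/ deploy@example.com:/var/www/mysite/dist/',
    function (err, stdout, stderr) {
      if (stderr) process.stderr.write(stderr);
      done(err);
    });
});
The point is that compilation happens on your machine or the CI box, and only the finished files travel to the server.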

CI and Deployment with TFS and Powershell

I am working on a CI process with automated deployment. TFS Build is building the solution, and it then uses an InvokeProcess task to kick off a PowerShell script. The PowerShell script deploys the database changes as a dacpac using SqlPackage, Reporting Services reports using the web service, fonts to the SSRS server, and the website itself to 1 or more web servers - the whole process uses a deployment configuration file to define drop paths, server IPs, installation folders etc. There will be one of these per environment.
I would like to be able to build the solution and deploy to an internal server to run automated tests as part of the automated build. Once tests are completed, and the build has been manually checked, I'd then like to be able to kick off another Build definition which only has the deployment portion of the standard build template, which will simply take a build number or build drop location, and deploy the same build to a different environment (i.e. staging, prod etc.)
The issue I have is that I'm currently managing most of my web/app configuration using config file transformation - i.e. I have build definitions for Debug, Test, Prod etc. and then Web.Debug.config, Web.Test.config etc. I only want to carry out one build, and then deploy that same build to different environments, however at the moment the build will only generate configuration files for one environment - i.e. whatever the build configuration is.
Would the best approach be to generate all config files (or actually pre-create complete config files for each environment), and then just choose the appropriate one for the specific deployment? Or should I store the env-specific config in my deployment configuration file and update the appropriate keys using PowerShell when deploying?
What would be the normal/recommended approach here?
I'd suggest creating new configurations for each target environment (e.g. by default you have Debug/Release, so create some more). Then use the built-in web.config transforms; for non-web projects use SlowCheetah.
This will spit out pre-configured build outputs for each configuration you specify you want built (in your Build Definition).

Force rules for build and deployment

Our web project is source-controlled with SVN. It contains an MSBuild file for local, test and production builds. We also use CruiseControl.NET to deploy production and test versions to servers manually (not after every commit).
The question is: how do we check that, when a production deployment is done using CC.NET, the web project was built using the production build (not test or another one)? How do we force specific steps to be executed when building and deploying to production (like compressing JS and CSS, compiling with debug="false", etc.)? Right now it is possible for any developer to make changes in the MSBuild file (so he/she can forget to compress JS in the production build, etc.).
I used CruiseControl with NAnt extensively, but not MSBuild. Do you have different MSBuild files for each build type (i.e. local/test/prod)? Could you have a single one that can be parameterized such that your CCNet integrators can explicitly pass the appropriate options for the target environment? That's how I have my continuous integration versus release candidate builds configured in CCNet: a single master NAnt build script and a different CC integrator for each target env with the necessary build parameters (be they targets or property values). I imagine you could do something quite similar with MSBuild vars/targets.