Deployment with Chef

I am using the Chef resource "deploy_revision" to deploy Python code to my nodes. I started this for a dev environment, but now there is a need to expand and I am not sure if this is a good choice. Below is the code.
data_bag = Chef::EncryptedDataBagItem.load(node.chef_environment, node.chef_environment)

deploy_revision "/opt/mount/#{node[:application_name]}" do
  repo node[:application_repo]
  user "deployer"
  keep_releases 10
  action :deploy
  migrate false
  symlink_before_migrate.clear
  create_dirs_before_symlink.clear
  purge_before_symlink.clear
  symlinks.clear
  symlinks({})
  notifies :restart, "service[abc]"
end
This pulls down new code whenever there is any, during the automatic chef-client run every 30 minutes on the nodes. This is cool, but not so cool on nodes that are not part of the development environment. I have 4 environments:
dev
test
stage
prod
If I create 4 remote branches in Git, is there a way to make this deploy a specific branch in a specific environment? Something like: the dev nodes deploy the dev remote branch, test deploys the test remote branch, and so on. That way I can put a gate on the auto-deploys that happen every 30 minutes. I checked the Chef docs; there is a "deploy_branch", but I'm not sure about it, since the docs just say it's the same as "deploy_revision".
https://docs.chef.io/resource_deploy.html#layout-modifiers
There is a branch attribute available, according to the Chef documentation. So is adding the attribute like this what I need?
deploy_revision "/opt/mount/#{node[:application_name]}" do
repo "#{node[:application_repo]}"
user "deployer"
branch "node.chef_environment"
keep_releases 10
action :deploy
migrate false
symlink_before_migrate.clear
create_dirs_before_symlink
purge_before_symlink.clear
symlinks.clear
symlinks {}
notifies :restart, "service[abc]"
end
Then I came across this (closed) bug report: https://tickets.opscode.com/browse/CHEF-5084. It seems branches are specified with the "revision" attribute. So can I use that attribute with the node environment as the parameter, like this?
revision "node.chef_environments"
If you think deployment using Chef is not a good idea, should I look at Capistrano instead?

We do this all the time. Just add:
deploy_revision '/path' do
  ...
  revision node.chef_environment
  ...
end
Or, in our case, we calculate the branch from the chef_environment:
deploy_revision '/path' do
  ...
  revision node.chef_environment.match(/develop|staging|production|test/)[0]
  ...
end
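One caveat with the match approach (a defensive sketch of my own, not from the original answer): if the environment name contains none of those words, match returns nil and [0] raises a NoMethodError, so a fallback may be worth adding:

deploy_revision '/path' do
  ...
  # map the environment to a branch; fall back to a default when the
  # environment name doesn't match (the 'develop' default is an assumption)
  env_match = node.chef_environment.match(/develop|staging|production|test/)
  revision(env_match ? env_match[0] : 'develop')
  ...
end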

This is how it worked for me:
deploy_revision "/opt/mount/#{node[:application_name]}" do
repo "#{node[:application_repo]}"
user "deployer"
revision node.chef_environment
keep_releases 10
action :deploy
migrate false
symlink_before_migrate.clear
create_dirs_before_symlink
purge_before_symlink.clear
symlinks.clear
symlinks {}
notifies :restart, "service[abc]"
end

Related

Azure Pipelines: how to filter artifacts per stage for "Manual only" triggered releases

Let's say I have these 3 Stages: Dev, QC, Prod.
My requirements are:
Artifacts only from specific branches (release/*) can be deployed to QC/Prod
Artifacts from all branches can be deployed to Dev
I can achieve what I want using artifact filters for "After stage" triggered releases, but I need this for "Manual only".
Is there a workaround that will let me control/filter which artifacts are available for deployment for specific stages/environments?
Basically, I need the Azure DevOps equivalent of Octopus Channels.
Update
I think I'm close to a solution.
In the "Pre-deployment conditions", I can add a new Deployment Gate which makes a Rest API call.
e.g URL suffix=/Release/releases/76
Now, I just need to correctly parse the ApiResponse because the below Success criteria doesn't work
eq(root['artifacts[0].definitionReference.branch.id'], 'refs/heads/master')
Evaluation of expression 'eq(root['artifacts[0].definitionReference.branch.id'], 'refs/heads/master')' failed.
As you said, you can do this using Deployment gates on your stages.
1. Create a new Generic service connection from Project Settings -> Pipelines -> Service Connections. For the service URL, use something like https://vsrm.dev.azure.com/{OrgName}/{ProjectName}/_apis
2. On your stage, open the Pre-Deployment Conditions.
3. Enable the Gates option.
4. Add a new Invoke REST API gate and set the Delay before evaluation to 0 minutes.
4.1 Set the connection type to Generic.
4.2 Select the service connection you created in step 1.
4.3 Set the method to GET.
4.4 Set the URL suffix to /Release/releases/$(Release.ReleaseId)
4.5 In the Advanced area, set the Completion Event to ApiResponse.
4.6 In the Advanced area, set the success criteria to the following (startsWith works too; see below):
eq(root['artifacts'][0]['definitionReference']['branch']['id'],'refs/heads/master')
Now, if you try to deploy an artifact that is not from the master branch, the deployment will fail.
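Since the requirement here was release/* rather than a single branch, the startsWith form mentioned in step 4.6 would be something like this (a sketch assuming the same response shape):
startsWith(root['artifacts'][0]['definitionReference']['branch']['id'], 'refs/heads/release')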
There is a workaround:
In the QC/Prod stages, add a custom condition so that the job is executed only when the artifact's source branch is release/*:
startsWith(variables['Release.Artifacts.{Artifacts-Alias}.SourceBranch'], 'refs/heads/release')
Now, when you manually run the QC/Prod stages and the artifacts did not come from a release branch, the job will not be executed.
This works:
and(contains(variables['build.sourceBranch'], 'refs/heads/release'), succeeded())

Chef - need a lightweight "queue" (for the life of the recipe)

Presently I have a Chef recipe in which I post messages to chat inside a loop:
artifacts.each do |artifact_name|
  # Deploy the artifact
  # ...
  # Post to chat
  chat_post "deployed artifact #{artifact_name}"
end
The result on my chat is like this:
chef [BOT]
deployed artifact A
chef [BOT]
deployed artifact B
chef [BOT]
deployed artifact C
I am wondering: is there an easy "queue" mechanism in Chef where I can queue up my deployment messages and post them all at once (when my recipe completes)? If so, how would the code look?
The easiest way to do this would be to use the delayed notifications system.
artifacts.each do |artifact_name|
  # Deploy the artifact
  # ...
  # Post to chat
  r = chat_post "deployed artifact #{artifact_name}" do
    action :nothing
  end
  ruby_block "notification for #{artifact_name}" do
    block { }
    notifies :someaction, r
  end
end
Or something like that; make sure you check what action to use for the notification (whatever the default action on the chat_post resource is). Also, this assumes chat_post is a resource and not some kind of helper method. If it's not a resource, you might need two ruby_blocks.
You can use node.run_state to store transient data that is accessible for the duration of the current Chef run.
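For example, here is a minimal sketch of the run_state approach, assuming (as above) that chat_post is a resource, and assuming it exposes a message property; the question only shows it taking the message as its name, so treat that property as hypothetical:

# collect messages instead of posting one per artifact
node.run_state['deploy_messages'] = []

artifacts.each do |artifact_name|
  # ... deploy the artifact here ...
  node.run_state['deploy_messages'] << "deployed artifact #{artifact_name}"
end

# a single post at the end of the recipe with everything that was queued;
# lazy defers reading run_state until this resource actually runs
chat_post 'deployment summary' do
  message lazy { node.run_state['deploy_messages'].join("\n") }
end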

Capistrano: organizing folders for different environments

I'm getting started with Capistrano and I want to understand better how to organize the folder structure on the server.
Let's suppose I have two branches:
master
develop
These are visible respectively at:
www.example.org
develop.example.org
Currently (without Capistrano), on the server I have:
/home/sites/example.org/www
/home/sites/example.org/develop
But, with Capistrano, I will have only /home/sites/example.org/current.
How can I manage the "production/development" situation with Capistrano?
Thanks
You can override the deployment folder in your environment config. For example, you have the default deployment location in config/deploy.rb using set :deploy_to, '/home/sites/example.org/www'. Then you set up config/deploy/develop.rb and config/deploy/production.rb (the stage names are arbitrary and don't need to map to the branch names):
server 'servername', user: 'username', roles: %w(app db web)
set :deploy_to, '/home/sites/example.org/develop'
In general, anything you set in deploy.rb can be overridden in config/deploy/[stage].rb.
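With that in place, choosing an environment is just a matter of naming the stage on the command line. A sketch of the develop stage file (the set :branch line is an extra so each site tracks its own branch; the values are placeholders):

# config/deploy/develop.rb
server 'servername', user: 'username', roles: %w(app db web)
set :deploy_to, '/home/sites/example.org/develop'
set :branch, 'develop'

$ cap develop deploy      # deploys to /home/sites/example.org/develop
$ cap production deploy   # deploys to /home/sites/example.org/www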

Prevent Mercurial push during a Jenkins build

I have a Jenkins job that runs some tests on a Mercurial repo and, if successful, tags the local repo with a 'stable' tag and then pushes this back to the main repo. The issue I'm having is that if someone pushes changesets while the build is running, I cannot push the 'stable' tag.
I was wondering if there was a way to set the remote repo to read-only while the build is running, then make it 'push-able' once the build finishes?
Thanks,
Vackar
Preventing the push is probably not what you want (and it's pretty much impossible). The promise of a DVCS like Mercurial or Git is that there's no locking; that's a step forward.
Have you considered having Jenkins just pull before it tags and pushes? You can still tag the proper revision. Something like this:
Jenkins checks out the code and notes the revision id it's building
Jenkins does the build, runs the tests, etc., and everything goes well
Jenkins does a hg pull to get the latest from the server
Jenkins does a hg tag -m "build number $BUILD_NUMBER" --rev X --force stable
Jenkins does a hg push
Then there's (almost) no time between that final pull, tag, and push, but the tag still goes on the revision that was actually built, because you saved that revision hash id from when you first pulled.
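Put together, the final build step would look something like this (a sketch; REV stands for the revision id saved in the first step):

hg pull
hg tag -m "build number $BUILD_NUMBER" --rev "$REV" --force stable
hg push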
I've just been looking for something similar. In our case, Jenkins performs a merge, runs an extensive suite of tests and, once they all pass, pushes the merged code back to the repository. It takes ~1 hour and fails if a developer pushes while the job is executing (it can't do the final push).
I couldn't find a ready-made solution, so I ended up writing a Mercurial hook which checks whether the job is building (using the REST API) before allowing the push.
You'll need access to your remote mercurial repository, but other than that, it's not too complex.
Add the following to your-remote-repo/.hg/hgrc:
[hooks]
pretxnchangegroup.DisablePushDuringJenkinsBuild= python:.hg/disable_push_if_building_hook.py:check_jenkins
[jenkins]
url=http://path-to-jenkins
jobs=jenkins-job-name[,comma-separated, for-multiple, jobs]
And make sure this Python script is in your-remote-repo/.hg/:
import json, urllib2
from mercurial import util

TEN_SECONDS = 10

def check_jenkins(ui, repo, node, **kwargs):
    jenkins_url = ui.config('jenkins', 'url', default=None, untrusted=False)
    jenkins_jobs = ui.config('jenkins', 'jobs', default=None, untrusted=False)
    if not jenkins_url:
        raise util.Abort('Jenkins hook has not been configured correctly. Cannot find Jenkins url in .hg/hgrc.')
    if not jenkins_jobs:
        raise util.Abort('Jenkins hook has not been configured correctly. Cannot find Jenkins jobs in .hg/hgrc.')
    jenkins_jobs = [x.strip() for x in jenkins_jobs.split(',')]
    for job in jenkins_jobs:
        job_url = jenkins_url + '/job/' + job + '/lastBuild/api/json'
        ui.write('Checking if job is running at URL: %s\n' % job_url)
        try:
            job_metadata = json.load(urllib2.urlopen(job_url, timeout=TEN_SECONDS))
            if 'building' in job_metadata and job_metadata['building']:
                raise util.Abort('Jenkins build "%s" is in progress. Pushing is disabled until it completes.' % job_metadata['fullDisplayName'])
        except urllib2.URLError, e:
            raise util.Abort('Error while trying to poll Jenkins: "%s"' % e)
    return False  # Everything is OK, push can be accepted

Capistrano in Open Source Projects and different environments

I'm considering using Capistrano to deploy my Rails app on my server. Currently I'm using a script which does all the work for me, but Capistrano looks pretty nice and I want to give it a try.
My first problem/question: how do I use Capistrano properly in open source projects? I don't want to publish my deploy.rb, for several reasons:
It contains sensitive information about my server. I don't want to publish that :)
It contains the config for MY server. For other people who deploy this open source project to their own servers, the configuration may differ, so it's pretty pointless to publish my configuration; it's useless to them.
Second problem/question: How do I manage different environments?
Background: On my server I provide two different environments for my application: the stable system, using the current stable release branch and located at www.domain.com, and an integration environment for the dev team at dev.domain.com, running the master branch.
How do I tell Capistrano to deploy the stable system or the dev system?
The way I handle sensitive information (passwords etc.) in Capistrano is the same way I handle it in general: I use an APP_CONFIG hash that comes from a YAML file that isn't checked into version control. This is a classic technique that's covered e.g. in RailsCast #226, or see this StackOverflow question.
There are a few things you have to do a little differently when using this approach with Capistrano:
Normally APP_CONFIG is loaded from your config/application.rb (so it happens early enough to be usable everywhere else), but Capistrano cap tasks won't load that file. You can just load it from config/deploy.rb too; here's the top of a contrived config/deploy.rb file using an HTTP repository that requires a username/password:
require 'bundler/capistrano'
require 'yaml'

APP_CONFIG = YAML.load_file("config/app_config.yml")

set :repo_user, APP_CONFIG['repo_user']
set :repo_password, APP_CONFIG['repo_password']
set :repository, "http://#{repo_user}:#{repo_password}@hostname/repositoryname.git/"
set :scm, :git
# ...
The config/app_config.yml file is not checked into version control (put that path in your .gitignore or similar); I normally check in a config/app_config.yml.sample that shows the parameters that need to be configured:
repo_user: 'usernamehere'
repo_password: 'passwordhere'
If you're using APP_CONFIG in your application, it probably needs to have different values on your different deploy hosts. So have your Capistrano setup make a symlink from the shared/ directory into each release after it's checked out. You want to do this early in the deploy process, because applying migrations might need a database password. So in your config/deploy.rb put this:
after 'deploy:update_code', 'deploy:symlink_app_config'

namespace :deploy do
  desc "Symlinks the app_config.yml"
  task :symlink_app_config, :roles => [:web, :app, :db] do
    run "ln -nfs #{deploy_to}/shared/config/app_config.yml #{release_path}/config/app_config.yml"
  end
end
Now, for the second part of your question (about deploying to multiple hosts), you should configure separate Capistrano "stages" for each host. You put everything that's common across all stages in your config/deploy.rb file, and then you put everything that's unique to each stage into config/deploy/[stagename].rb files. You'll have a section in config/deploy.rb that defines the stages:
# Capistrano settings
require 'bundler/capistrano'
require 'capistrano/ext/multistage'
set :stages, %w(preproduction production)
set :default_stage, 'preproduction'
(You can call the stages whatever you want; the Capistrano stage name is separate from the Rails environment name, so the stage doesn't have to be called "production".) Now when you use the cap command, insert the stage name between cap and the task name, e.g.:
$ cap preproduction deploy   # deploys to the 'preproduction' environment
$ cap production deploy      # deploys to the 'production' environment
$ cap deploy                 # deploys to whatever you defined as the default
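For completeness, one of the per-stage files might look like this (the hostname, path, and branch here are just placeholders, not part of the original answer):

# config/deploy/preproduction.rb
server 'preprod.example.com', :web, :app, :db
set :deploy_to, '/var/www/myapp'
set :branch, 'develop'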