Chef - need a lightweight "queue" (for the life of the recipe)

Presently I have a Chef recipe in which I post messages to chat inside a loop:
artifacts.each do |artifactItem|
  # Deploy the artifact
  # ...
  # Post to chat
  chat_post "deployed artifact #{artifact_name}"
end
The result on my chat is like this:
chef [BOT]
deployed artifact A
chef [BOT]
deployed artifact B
chef [BOT]
deployed artifact C
I am wondering - is there an easy "queue" mechanism in Chef where I can queue up my deployment messages and post them all at once (when my recipe completes)? If so, how would the code look?

The easiest way to do this would be to use the delayed notifications system.
artifacts.each do |artifactItem|
  # Deploy the artifact
  # ...
  # Post to chat
  r = chat_post "deployed artifact #{artifact_name}" do
    action :nothing
  end
  ruby_block "notification for #{artifact_name}" do
    block { }
    notifies :someaction, r, :delayed
  end
end
Or something like that. Make sure you check what action to use for the notification (whatever the default action on the chat_post resource is). Also, this assumes chat_post is a resource and not some kind of helper method; if it's not a resource, you might need two ruby_blocks, as in the sketch below.
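For instance, a rough sketch of the two-ruby_block variant could look like this, assuming chat_post is a helper method from your own cookbook that is callable from inside a ruby_block, and artifact_name comes from your deployment code:

artifacts.each do |artifactItem|
  # Deploy the artifact
  # ...
  # Wrap the helper call in a resource so it can receive a delayed notification
  notifier = ruby_block "chat post for #{artifact_name}" do
    block { chat_post "deployed artifact #{artifact_name}" }
    action :nothing
  end
  # Empty resource whose only job is to queue the delayed notification
  ruby_block "queue chat post for #{artifact_name}" do
    block { }
    notifies :run, notifier, :delayed
  end
end

The empty ruby_block runs during convergence and queues the delayed notification, so all the chat posts fire together at the end of the Chef run.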

You can use node.run_state to save transient data that is accessible for the current Chef run.
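A minimal sketch of that approach, again assuming chat_post and artifact_name come from your own cookbook code:

# Collect the messages during the loop instead of posting each one
node.run_state['chat_messages'] ||= []

artifacts.each do |artifactItem|
  # Deploy the artifact
  # ...
  node.run_state['chat_messages'] << "deployed artifact #{artifact_name}"
end

# Post everything as a single chat message once the loop has finished
chat_post node.run_state['chat_messages'].join("\n")

Note that this collects the messages while the recipe is compiled; if the data is only available at converge time, you would need to combine it with lazy properties or a delayed ruby_block as in the first answer.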

Related

Jenkinsfile - how to access other GitHub files?

I'm performing an API call in my Jenkinsfile that requires specifying a path to file 'A'. Assuming file A is located in the same repo, I am not sure how to refer to file A when running the Jenkinsfile.
I feel like this has been done before, but I can't find any resource. Any help is appreciated.
You don't say whether you are using a scripted or declarative Jenkinsfile, as the details differ; however, the principle is the same. Basically, to do anything with a file you will need to be within a node clause - essentially the controller opens a session on one of the agents and performs actions there. You need to check out your repo on that node.
The scripted Jenkinsfile would look something like this (assuming you are not bothered about which node you are running on):
node("") {
checkout scm // "scm" equates to the configuration that the job was run with
// the whole repo will be now available
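    // The file can now be referenced by its path relative to the workspace root.
    // For example (assuming file A lives at the repo root; adjust the path to
    // wherever it actually is in your repo):
    def contentsOfA = readFile 'A'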
}

Azure Pipelines: how to filter artifacts per stage for "Manual only" triggered Releases

Let's say I have these 3 Stages: Dev, QC, Prod.
My requirements are:
Artifacts only from specific branches (release/*) can be deployed to QC/Prod
Artifacts from all branches can be deployed to Dev
I can achieve what I want using Artifact filters for "After stage" triggered Releases but I need this for "Manual only".
Is there a workaround that will let me control/filter which artifacts are available for deployment for specific stages/environments?
Basically, I need the Azure DevOps equivalent of Octopus Channels.
Update
I think I'm close to a solution.
In the "Pre-deployment conditions", I can add a new Deployment Gate which makes a Rest API call.
e.g. URL suffix = /Release/releases/76
Now I just need to correctly parse the ApiResponse, because the Success criteria below doesn't work:
eq(root['artifacts[0].definitionReference.branch.id'], 'refs/heads/master')
Evaluation of expression 'eq(root['artifacts[0].definitionReference.branch.id'], 'refs/heads/master')' failed.
As you said, you can do this using Deployment gates on your stages.
1. Create a new Generic service connection from Project Settings -> Pipelines -> Service Connections. For the service URL, use something like https://vsrm.dev.azure.com/{OrgName}/{ProjectName}/_apis
2. On your stage, open the Pre-Deployment Conditions.
3. Enable the Gates option.
4. Add a new Invoke REST API gate and set the Delay before evaluation to 0 minutes.
4.1 Set the connection type to Generic.
4.2 Select the service connection you created in step 1.
4.3 Set the method to GET.
4.4 Set the URL suffix to /Release/releases/$(Release.ReleaseId)
4.5 In the Advanced area, set the Completion Event to ApiResponse.
4.6 In the Advanced area, set the Success criteria to the following (or use startsWith instead of eq):
eq(root['artifacts'][0]['definitionReference']['branch']['id'],'refs/heads/master')
Now, if you try to deploy an artifact not from the master branch, the deployment will fail.
There is a workaround:
In the QC/Prod stages, add a custom condition so that the job will be executed only when the artifact's source branch is release/*:
startsWith(variables['Release.Artifacts.{Artifacts-Alias}.SourceBranch'], 'refs/heads/release')
Now, when you manually run the QC/Prod stages and the artifacts did not come from a release branch, the job will not be executed.
This works:
and(contains(variables['build.sourceBranch'], 'refs/heads/release'), succeeded())

Concourse CI: Use Metadata (Build number, URL etc) in on_success/on_failure

How is it possible to use Metadata in on_success/on_failure? For example, to send emails via https://github.com/pivotal-cf/email-resource?
I haven't found a way, as I can't change the content of the files the email resource reads for its subject/body, because the metadata is not available to tasks.
And yep, this might be a duplicate of "Concourse CI and Build number".
But my question is still, IMHO, a valid use case for notifications.
The metadata you are referring to, I assume, is the environment variables provided to resources, not tasks.
This can be used with the slack resource to provide information about what build failed.
For example:
on_failure:
  put: slack-alert
  params:
    text: |
      The `science` pipeline has failed. Please resolve any issues and ensure the pipeline lock was released. Check it out at:
      $ATC_EXTERNAL_URL/teams/$BUILD_TEAM_NAME/pipelines/$BUILD_PIPELINE_NAME/jobs/$BUILD_JOB_NAME/builds/$BUILD_NAME
The email resource you're referencing has an open PR to support these environment variables. I'd discuss your need for that feature there.

Deployment with Chef

I am using the Chef resource "deploy_revision" to deploy Python code on my nodes. I started this initially for a dev environment, but now there is a need for this to expand, and I am not sure if this is a good choice. Below is the code.
data_bag = Chef::EncryptedDataBagItem.load("#{node.chef_environment}", "#{node.chef_environment}")
deploy_revision "/opt/mount/#{node[:application_name]}" do
  repo "#{node[:application_repo]}"
  user "deployer"
  keep_releases 10
  action :deploy
  migrate false
  symlink_before_migrate.clear
  create_dirs_before_symlink
  purge_before_symlink.clear
  symlinks.clear
  symlinks {}
  notifies :restart, "service[abc]"
end
This pulls down new code whenever there is any, during the automatic chef-client run every 30 minutes on the nodes. This is fine for development, but not so fine on nodes which are not part of the development environment. I have 4 environments:
dev
test
stage
prod
If I create 4 remote branches in Git, is there a way to make this deploy from a specific branch for specific environments? Something like: the dev nodes deploy the dev remote branch, test deploys the test remote branch, and so on. This way, I can put a gate on the auto deploys that happen every 30 minutes. I referred to the Chef docs; there is a "deploy_branch", but I am not sure - it just says it is the same as "deploy_revision".
https://docs.chef.io/resource_deploy.html#layout-modifiers
There is a branch attribute available as per the Chef documentation. So is adding the attribute like this what I need?
deploy_revision "/opt/mount/#{node[:application_name]}" do
repo "#{node[:application_repo]}"
user "deployer"
branch "node.chef_environment"
keep_releases 10
action :deploy
migrate false
symlink_before_migrate.clear
create_dirs_before_symlink
purge_before_symlink.clear
symlinks.clear
symlinks {}
notifies :restart, "service[abc]"
end
Then I came across this bug report (closed): https://tickets.opscode.com/browse/CHEF-5084. It seems to specify branches with the "revision" attribute. So, can I use this attribute with the node environment as the parameter? Like this:
revision "node.chef_environments"
If you think deployment using Chef is not a good idea, do you think I should look at Capistrano instead?
We do this all the time. Just add:
deploy_revision '/path' do
  ...
  revision node['chef_environment']
  ...
end
Or, in our case, we calculate the branch from the chef_environment:
deploy_revision '/path' do
  ...
  revision node['chef_environment'].match(/develop|staging|production|test/)[0]
  ...
end
This is how it worked for me.
deploy_revision "/opt/mount/#{node[:application_name]}" do
repo "#{node[:application_repo]}"
user "deployer"
revision node.chef_environment
keep_releases 10
action :deploy
migrate false
symlink_before_migrate.clear
create_dirs_before_symlink
purge_before_symlink.clear
symlinks.clear
symlinks {}
notifies :restart, "service[abc]"
end

Capistrano (v3) deploys the same code on all roles

If I understand correctly, the standard Git deploy implementation in Capistrano v3 deploys the same repository to all roles. I have a more complex app that has several types of servers, and each type has its own code base with its own repository. My database server, for example, does not need any code deployed.
How do I tackle such a problem in capistrano v3?
Should I write my own deployment tasks for each of the roles?
How do I tackle such a problem in capistrano v3?
All servers get the code, as in certain environments the code is needed to perform some actions. For example, in a typical setup the web server needs your static assets, the app server needs your code to serve the app, and the db server needs your code to run migrations.
If that's not true in your environment and you don't want the code on the servers in some roles, you could easily send a pull request to add the no_release feature back from Cap 2 into Cap 3.
You can, of course, take the .rake files out of the gem, load those in your Capfile (which is a perfectly valid way to use the tool), and modify them for your own needs.
The general approach is that if you don't need code on your DB server, for example, why is it listed in your deployment file?
I can confirm you can use no_release: true to prevent a server from deploying the repository code.
I needed to do this so I could specifically run a restart task for a different server.
Be sure to give your server a role so that you can target it. There is a handy function called release_roles() you can use to target servers that have your repository code.
Then you can separate any tasks (like my restart) to be independent from the deploy procedure.
For example:
server '10.10.10.10', port: 22, user: 'deploy', roles: %w{web app db assets}
server '10.10.10.20', port: 22, user: 'deploy', roles: %w{frontend}, no_release: true

namespace :nginx do
  desc 'Reloading PHP will clear OpCache. Remove Nginx Cache files to force regeneration.'
  task :reload do
    on roles(:frontend) do
      execute "sudo /usr/sbin/service php7.1-fpm reload"
      execute "sudo /usr/bin/find /var/run/nginx-cache -type f -delete"
    end
  end
end

after 'deploy:finished', 'nginx:reload'
after 'deploy:rollback', 'nginx:reload'

# Example of a task for release_roles() only
namespace :composer do
  desc 'Update composer'
  task :update do
    on release_roles(:all) do
      execute "cd #{release_path} && composer update"
    end
  end
end

before 'deploy:publishing', 'composer:update'
I can think of many scenarios where this would come in handy.
FYI, this link has more useful examples:
https://capistranorb.com/documentation/advanced-features/property-filtering/