Update environment variables from TeamCity build step - powershell

In my build configuration I have environment variables for major, minor and patch version numbers.
I am trying to write a build step that checks the name of the branch and if it is a release branch with a higher version than the current env vars, I want to update them.
I have tried setting the variables, but when I go to the 'Parameters' tab it still shows the old value.
I am writing a PowerShell script, and have tried:
Write-Host "##teamcity[setParameter name='major.version' value='2']"
Write-Host "##teamcity[setParameter name='env.major.version' value='2']"
$Env:major.version = 2

If you want to update the settings of the TeamCity build configuration itself, you need to use the REST API, e.g.:
curl -u username:password --request PUT --header "Content-Type: text/plain" --data "2" "https://teamcity.corp.com/app/rest/buildTypes/id:%system.teamcity.buildType.id%/parameters/major.version"
You will need to provide credentials of a user who has the "Edit Project" permission.
Note: ##teamcity[setParameter... changes the parameter only for the following steps of the same build.
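As a rough sketch of the whole flow, assuming a Command Line (bash) runner (the branch naming pattern, credentials, and parameter names here are assumptions based on the question), a build step could compare the release branch's major version against the current parameter and persist the bump via the REST API:

#!/bin/bash
# %teamcity.build.branch% and %major.version% are TeamCity parameter
# references; TeamCity substitutes them before the script runs.
BRANCH="%teamcity.build.branch%"   # e.g. "release/3.0.0"
CURRENT_MAJOR="%major.version%"
if [[ "$BRANCH" =~ ^release/([0-9]+)\. ]] && (( BASH_REMATCH[1] > CURRENT_MAJOR )); then
  # Persist the new major version in the build configuration.
  curl -u username:password \
       --request PUT \
       --header "Content-Type: text/plain" \
       --data "${BASH_REMATCH[1]}" \
       "https://teamcity.corp.com/app/rest/buildTypes/id:%system.teamcity.buildType.id%/parameters/major.version"
fi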

Is there a way to delete a github workflow

So I tried to put a docker-compose.yml file in the .github/workflows directory; of course, it tried to pick that up and run it... which didn't work. However, now it always shows up as a workflow. Is there any way to delete it?
Yes, you can delete the results of a run. See the documentation for details.
To delete a particular workflow on your Actions page, you need to delete all runs which belong to this workflow. Otherwise, it persists even if you have deleted the YAML file that had triggered it.
If you have just a couple of runs in a particular action, it's easier to delete them manually. But if you have a hundred runs, it might be worth running a simple script. For example, the following Python script uses the GitHub API:
Before you start, you need to install the PyGithub package (pip install PyGithub) and define three things:
your PAT (create a new GitHub personal access token);
your repo name;
your action name (even if you have already deleted it, just hover over the action on the Actions page to see its file name):
from github import Github
import requests

token = "ghp_1234567890abcdefghij1234567890123456"  # your PAT
repo = "octocat/my_repo"                            # your repo name
action = "my_action.yml"                            # your workflow file name

g = Github(token)
headers = {'Accept': 'application/vnd.github.v3',
           'Authorization': f'token {token}'}

# Iterate over every run of the workflow and delete it via the REST API.
for run in g.get_repo(repo).get_workflow(id_or_name=action).get_runs():
    response = requests.delete(url=run.url, headers=headers)
    if response.status_code == 204:
        print(f"Run {run.id} got deleted")
After all the runs are deleted, the workflow automatically disappears from the page.
Yes, you can delete all the workflow runs in the workflow you want to remove, and then this workflow will disappear.
https://docs.github.com/en/rest/reference/actions#delete-a-workflow-run
To delete runs programmatically, here is the example from the docs:
curl \
  -X DELETE \
  -H "Authorization: token <PERSONAL_ACCESS_TOKEN>" \
  -H "Accept: application/vnd.github.v3+json" \
  https://api.github.com/repos/octocat/hello-world/actions/runs/42
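If you have the GitHub CLI installed, a shell loop can do the same cleanup; this is only a sketch, using the same repo and workflow names assumed in the Python example above:

# List the ids of all runs of the workflow, then delete them one by one.
gh run list --repo octocat/my_repo --workflow my_action.yml \
    --limit 100 --json databaseId --jq '.[].databaseId' |
  while read -r run_id; do
    gh run delete --repo octocat/my_repo "$run_id"
  done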

Can/How to use project configuration keys as job parameters in Rundeck

I'm using the API to add some project configuration keys and would like to use them as job parameters. Is this possible? If so, how can I do it? I've looked in the official documentation but am not seeing much.
Indeed, that is achievable. From the documentation: you need to update the project's configuration with a "Project global execution variable" key and value; that variable then becomes available in all execution contexts as ${globals.X} and can be referenced in scripts and commands. You can send the project's configuration key as JSON, XML, or plain text via curl, or as a file directly via the RD CLI, e.g.:
If you use the rd CLI, you need to create a file, which can be a .properties, JSON, or YAML file. Here we create a JSON file named test.json that contains the following key and value:
{ "project.globals.test" : "testvalue" }
Then, you can update the project configuration with this rd command syntax:
rd projects configure update -f [/path/to/test.json] -p [project_name]
That will update your project's configuration. Then you can reference it as follows:
Via bash: $RD_GLOBALS_TEST
Via command: ${globals.test}
In a script content: #globals.test#
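For instance, a minimal inline-script step (assuming the test key created above) could read the value from its environment:

#!/bin/bash
# Rundeck exposes the ${globals.test} variable to script steps
# as the environment variable RD_GLOBALS_TEST.
echo "the global value is: $RD_GLOBALS_TEST"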
Alternatively, you could use the API directly with curl. For this example I’m using an API token to authenticate with Rundeck’s API and sending the same key and value, but as xml:
curl -H "X-Rundeck-Auth-Token: INSERT_TOKEN" -H "Content-Type: application/xml" -d '<property key="project.globals.test" value="valuetest"/>' -X PUT http://[RD_HOST]:[PORT]/api/23/project/[PROJECT_NAME]/config/[KEY]
Hope it helps.

How to deploy releases automatically to gitlab using ci

I'm currently trying to figure out how to deploy a GitLab project automatically using CI. I managed to run the build stage successfully, but I'm unsure how to retrieve those builds and push them to the releases.
As far as I know, it is possible to use rsync or webhooks (for example Git-Auto-Deploy) to get the build. However, I have failed to apply these options successfully.
For publishing releases I read https://gitlab.com/gitlab-org/gitlab-ce/blob/master/doc/api/tags.md#create-a-new-release, but I'm not sure I understand the required path schema correctly.
Is there any simple, complete example to try out this process?
A way is indeed to use webhooks:
There are tons of different possible solutions for that. I'd go with a shell script invoked by the hook.
How to intercept your webhook is up to the configuration of your server, if you have php-fpm installed you can use a PHP script.
When you create a webhook in your Gitlab project (Settings->Webhooks) you can specify for which kind of events you want the hook (in our case, a new build), and a secret token so you can verify the script has been called by Gitlab.
The PHP script can be something like that:
<?php
// Check the token sent by GitLab
$security_file = parse_ini_file("../token.ini");
$gitlab_token = $_SERVER["HTTP_X_GITLAB_TOKEN"];
if ($gitlab_token !== $security_file["token"]) {
    echo "error 403";
    exit(0);
}

// Get the webhook payload
$json = file_get_contents('php://input');
$data = json_decode($json, true);

// We only want successful builds of the deploy stage on master
if ($data["ref"] !== "master" ||
    $data["build_stage"] !== "deploy" ||
    $data["build_status"] !== "success") {
    exit(0);
}

// Execute the deploy script:
shell_exec("/usr/share/nginx/html/deploy.sh 2>&1");
I created a token.ini file outside the webroot, which is just one line:
token = supersecrettoken
In this way the endpoint can be called only by Gitlab itself. The script then checks some parameters of the build, and if everything is ok it runs the deploy script.
The deploy script is also very basic, but there are a couple of interesting things:
#!/bin/bash
# See 'Authentication' section here: http://docs.gitlab.com/ce/api/
SECRET_TOKEN=$PERSONAL_TOKEN
# The path where to put the static files
DEST="/usr/share/nginx/html/"
# The path to use as temporary working directory
TMP="/tmp/"
# Where to save the downloaded file
DOWNLOAD_FILE="site.zip";
cd $TMP;
wget --header="PRIVATE-TOKEN: $SECRET_TOKEN" "https://gitlab.com/api/v3/projects/774560/builds/artifacts/master/download?job=deploy_site" -O $DOWNLOAD_FILE;
ls;
unzip $DOWNLOAD_FILE;
# Whatever, do not do this in a real environment without any other check
rm -rf $DEST;
cp -r _site/ $DEST;
rm -rf _site/;
rm $DOWNLOAD_FILE;
First of all, the script has to be executable (chmod +x deploy.sh) and it has to belong to the webserver's user (usually www-data).
The script needs an access token (which you can create in your GitLab profile settings) to access the data. I inserted it as an environment variable:
sudo vi /etc/environment
in the file you have to add something like:
PERSONAL_TOKEN="supersecrettoken"
and then remember to reload the file:
source /etc/environment
You can check everything is alright by running sudo -u www-data bash -c 'echo $PERSONAL_TOKEN' and verifying the token is printed in the terminal.
Now, the other interesting part of the script is where the artifact is located. The latest build of a branch is reachable only through the API; they are working on exposing it in the web interface so you can always download the latest version from there.
The url of the API is
https://gitlab.example.com/api/v3/projects/projectid/builds/artifacts/branchname/download?job=jobname
While you can imagine what branchname and jobname are, the projectid is a bit trickier to find.
It is included in the body of the webhook as project_id, but if you do not want to intercept the hook, you can go to your project's settings, section Triggers, where there are examples of API calls: you can determine the project id from there.
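Alternatively, you can look it up through the projects API itself; this sketch uses the same v3 API as the rest of this answer (current GitLab versions expose the same endpoint under v4):

# Search your projects by name and read the "id" field of the response:
curl --header "PRIVATE-TOKEN: $SECRET_TOKEN" \
     "https://gitlab.com/api/v3/projects?search=myproject"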

Github-plugin for Jenkins get committer and author name

If I understand correctly, the git plugin exposes the committer and author names and emails in the environment variables GIT_AUTHOR_NAME, GIT_COMMITTER_NAME, GIT_AUTHOR_EMAIL and GIT_COMMITTER_EMAIL, based on the global configuration of git. Is there a way to get that info using the GitHub plugin? Does the GitHub plugin expose the payload info it gets from the github-webhook to environment variables or to something else?
In reality, these variables are only available when you override the author name and email in the advanced features of the SCM configuration:
"Additional Behaviours" -> "Custom user name/email address"
This is described on the source code:
https://github.com/jenkinsci/git-plugin/tree/master/src/main/java/hudson/plugins/git
Solution: in order to retrieve the author name and email, I suggest scripting this:
GIT_NAME=$(git --no-pager show -s --format='%an' $GIT_COMMIT)
GIT_EMAIL=$(git --no-pager show -s --format='%ae' $GIT_COMMIT)
where $GIT_COMMIT is the SHA-1 commit id.
You can use this workaround in your scripted pipeline file:
env.GIT_COMMITTER_EMAIL = sh(
    script: "git --no-pager show -s --format='%ae'",
    returnStdout: true
).trim()
You can try the command below; it worked for me:
git log -n 1 --pretty=format:'%ae'
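If you need more than one field, a single git call can combine several pretty-format placeholders (%an and %ae are standard git format codes; combining them like this is just an illustration):

# Print author name and email of the last commit in one call:
git log -n 1 --pretty=format:'%an <%ae>'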
You need to check who contributes these variables: the GitHub plugin only triggers the build, which then runs the Git SCM (that is, git-plugin). These variables are probably injected by git-plugin.

Passing parameters to Capistrano

I'm looking into the possibility of using Capistrano as a generic deploy solution. By "generic", I mean not-rails. I'm not happy with the quality of the documentation I'm finding, though, granted, I'm not looking at the ones that presume you are deploying rails. So I'll just try to hack up something based on a few examples, but there are a couple of problems I'm facing right from the start.
My problem is that cap deploy doesn't have enough information to do anything. Importantly, it is missing the tag for the version I want to deploy, and this has to be passed on the command line.
The other problem is how I specify my git repository. Our git server is accessed by SSH on the user's account, but I don't know how to change deploy.rb to use the user's id as part of the scm URL.
So, how do I accomplish these things?
Example
I want to deploy the result of the first sprint of the second release. That's tagged in the git repository as r2s1. Also, let's say user "johndoe" gets the task of deploying the system. To access the repository, he has to use the URL johndoe@gitsrv.domain:app. So the remote URL for the repository depends on the user id.
The command lines to get the desired files would be these:
git clone johndoe@gitsrv.domain:app
cd app
git checkout r2s1
Update: For Capistrano 3, see scieslak's answer below.
As jarrad has said, capistrano-ash is a good basic set of helper modules to deploy other project types, though it's not required: at the end of the day Capistrano is just a scripting tool, and most tasks are done with system commands, ending up almost shell-script-like.
To pass in parameters, you can set the -s flag when running cap to give you a key-value pair. First, create a task like this:
desc "Parameter Testing"
task :parameter do
puts "Parameter test #{branch} #{tag}"
end
Then start your task like so.
cap test:parameter -s branch=master -s tag=1.0.0
For the last part, I would recommend setting up passwordless access to your server using SSH keys. But if you want to take it from the currently logged-in user, you can do something like this:
desc "Parameter Testing"
task :parameter do
  user = `whoami`.strip  # the current local user
  puts "Parameter test #{user} #{branch} #{tag}"
end
UPDATE: Edited to work with the latest versions of Capistrano. The configuration array is no longer available.
Global parameters: see the comments below. Use set :branch, fetch(:branch, 'a-default-value') to use parameters globally. (And pass them with -S instead.)
Update: regarding passing parameters to Capistrano 3 tasks only.
I know this question is quite old, but it still pops up first on Google when searching for how to pass parameters to a Capistrano task. Unfortunately, the fantastic answer provided by Jamie Sutherland is no longer valid with Capistrano 3. Before you waste your time trying it out, expect the results to be like below:
cap test:parameter -s branch=master
outputs:
cap aborted!
OptionParser::AmbiguousOption: ambiguous option: -s
OptionParser::InvalidOption: invalid option: s
and
cap test:parameter -S branch=master
outputs:
invalid option: -S
The valid answers for Capistrano 3, provided by @senz and Brad Dwyer, can be found by clicking this gold link:
Capistrano 3 pulling command line arguments
For completeness, see the code below to find out about the two options you have.
1st option:
You can call tasks with arguments and read them as you would a regular hash:
desc "This task accepts optional parameters"
task :task_with_params, [:first_param, :second_param] do |task_name, args|
  run_locally do
    puts "Task name: #{task_name}"
    puts "First parameter: #{args[:first_param]}"
    puts "Second parameter: #{args[:second_param]}"
  end
end
Make sure there is no space between parameters when you call cap:
cap production task_with_params[one,two]
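One caveat, which is about your shell rather than Capistrano: zsh expands square brackets as glob patterns, so there you may need to quote or escape the task name:

cap production 'task_with_params[one,two]'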
2nd option:
When you call any task, you can assign environment variables and then read them from the code:
set :first_param, ENV['first_env'] || 'first default'
set :second_param, ENV['second_env'] || 'second default'

desc "This task accepts optional parameters"
task :task_with_env_params do
  run_locally do
    puts "First parameter: #{fetch(:first_param)}"
    puts "Second parameter: #{fetch(:second_param)}"
  end
end
To assign the environment variables, call cap like below:
cap production task_with_env_params first_env=one second_env=two
Hope that will save you some time.
I'd suggest using ENV variables.
Something like this (command):
$ GIT_REPO="johndoe@gitsrv.domain:app" GIT_BRANCH="r2s1" cap testing
Cap config:
# deploy.rb:
task :testing, :roles => :app do
  puts ENV['GIT_REPO']
  puts ENV['GIT_BRANCH']
end
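With those variables set, the task body could shell out to the same commands the question lists; this is a sketch using the example values above:

# Clone the repository passed via GIT_REPO and check out the requested tag:
git clone "$GIT_REPO" app && cd app && git checkout "$GIT_BRANCH"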
And take a look at https://github.com/capistrano/capistrano/wiki/2.x-Multistage-Extension; this approach may be useful for you as well.
As Jamie already showed, you can pass parameters to tasks with the -s flag. I want to show you how you can additionally use a default value.
If you want to work with default values, you have to use fetch instead of ||= or checking for nil:
namespace :logs do
  task :tail do
    file = fetch(:file, 'production') # sets 'production' as the default value
    puts "I would use #{file}.log now"
  end
end
You can run this task either with (using the default value production for file):
$ cap logs:tail
or with (using the value cron for file):
$ cap logs:tail -s file=cron
Check out capistrano-ash for a library that helps with non-rails deployment. I use it to deploy a PyroCMS app and it works great.
Here is a snippet from my Capfile for that project:
# deploy from git repo
set :repository, "git#git.mygitserver.com:mygitrepo.git"
# tells cap to use git
set :scm, :git
I'm not sure I understand the last two parts of the question. Provide some more detail and I'd be happy to help.
EDIT after example given:
set :repository, "#{scm_user}#gitsrv.domain:app"
Then each person with deploy privileges can add the following to their local ~/.caprc file:
set :scm_user, 'someuser'