How to deploy releases automatically to GitLab using CI

I'm currently trying to figure out how to deploy a GitLab project automatically using CI. I managed to run the build stage successfully, but I'm unsure how to retrieve those builds and push them to the releases.
As far as I know, it is possible to use rsync or webhooks (for example Git-Auto-Deploy) to get the build. However, I failed to apply these options successfully.
For publishing releases I read https://gitlab.com/gitlab-org/gitlab-ce/blob/master/doc/api/tags.md#create-a-new-release, but I'm not sure if I understand the required path schema correctly.
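If I read it correctly, creating a release for an existing tag would be a single POST, something like this (untested; the token, project id and tag name below are just placeholders):
curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" --data "description=Release notes go here" "https://gitlab.com/api/v3/projects/<projectid>/repository/tags/<tagname>/release"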
Is there a simple, complete example to try out this process?

One way is indeed to use webhooks:
There are tons of possible solutions for this. I'd go with a shell script that is invoked by the hook.
How you intercept the webhook is up to the configuration of your server; if you have php-fpm installed, you can use a PHP script.
When you create a webhook in your GitLab project (Settings -> Webhooks) you can specify which kind of events you want the hook for (in our case, a new build), and a secret token so you can verify the script was called by GitLab.
The PHP script can be something like this:
<?php
// Check the secret token sent by GitLab in the X-Gitlab-Token header
$security_file = parse_ini_file("../token.ini");
$gitlab_token = isset($_SERVER["HTTP_X_GITLAB_TOKEN"]) ? $_SERVER["HTTP_X_GITLAB_TOKEN"] : "";
if ($gitlab_token !== $security_file["token"]) {
    http_response_code(403);
    echo "error 403";
    exit(0);
}

// Get the webhook payload
$json = file_get_contents('php://input');
$data = json_decode($json, true);

// We only want successful builds of the deploy stage on master
if ($data["ref"] !== "master" ||
    $data["build_stage"] !== "deploy" ||
    $data["build_status"] !== "success") {
    exit(0);
}

// Execute the deploy script:
shell_exec("/usr/share/nginx/html/deploy.sh 2>&1");
I created a token.ini file outside the webroot, which is just one line:
token = supersecrettoken
This way the endpoint can only be called by GitLab itself. The script then checks some parameters of the build, and if everything is OK it runs the deploy script.
The deploy script is also very basic, but there are a couple of interesting things:
#!/bin/bash
# See the 'Authentication' section here: http://docs.gitlab.com/ce/api/
SECRET_TOKEN=$PERSONAL_TOKEN
# The path where to put the static files
DEST="/usr/share/nginx/html/"
# The path to use as temporary working directory
TMP="/tmp/"
# Where to save the downloaded file
DOWNLOAD_FILE="site.zip"

cd "$TMP"
wget --header="PRIVATE-TOKEN: $SECRET_TOKEN" "https://gitlab.com/api/v3/projects/774560/builds/artifacts/master/download?job=deploy_site" -O "$DOWNLOAD_FILE"
ls
unzip "$DOWNLOAD_FILE"
# Whatever, do not do this in a real environment without any other check
rm -rf "$DEST"
cp -r _site/ "$DEST"
rm -rf _site/
rm "$DOWNLOAD_FILE"
First of all, the script has to be executable (chmod +x deploy.sh) and it has to belong to the webserver's user (usually www-data).
The script needs an access token (which you can create in your GitLab profile settings) to access the data. I inserted it as an environment variable:
sudo vi /etc/environment
in the file you have to add something like:
PERSONAL_TOKEN="supersecrettoken"
and then remember to reload the file:
source /etc/environment
You can check everything is all right by running sudo -u www-data bash -c 'echo $PERSONAL_TOKEN' and verifying the token is printed in the terminal.
Now, the other interesting part of the script is where the artifact comes from. The latest build of a branch is reachable only through the API; they are working on exposing this in the web interface so you can always download the latest version from there.
The url of the API is
https://gitlab.example.com/api/v3/projects/projectid/builds/artifacts/branchname/download?job=jobname
While you can guess what branchname and jobname are, the projectid is a bit trickier to find.
It is included in the body of the webhook as the project id, but if you do not want to intercept the hook, you can go to the settings of your project, section Triggers, where there are example API calls: you can determine the project id from there.
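If you prefer to look it up via the API instead, a project search should also reveal it (a sketch; the token and project name are placeholders). The id field of the matching entry in the returned JSON is the project id:
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.com/api/v3/projects?search=<projectname>"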

Related

Pushing a signed image to ACR from Azure Release pipeline

I'm following this documentation to push signed images to ACR from Azure pipelines.
However, it only describes the changes needed for YAML tasks. I'm using a classic release pipeline, and I'm facing some issues.
I'm trying to push the image using an Azure CLI script. Before the script task, I use Secure files in the pipeline to download the private key file, and then run the CLI script below:
echo '---------Create Private Delegate Key for signing--------'
mkdir -p ./docker/trust/private
echo 'Created Trust Directory'
echo 'Copying $(privateKey.secureFilePath) to ./docker/trust/private'
cp $(privateKey.secureFilePath) ./docker/trust/private
I'm getting the error below when running:
echo $(SigningPassphrase) | docker push --disable-content-trust=false $(registry)/$REPOSITORY_NAME:$BUILD_TAG
Error:
no valid signing keys for delegation roles
I added the following lines to the script to load the private key:
chmod 600 ./docker/trust/private/$(KeyFileName)
echo '-----Loading Key-----'
docker trust key load ./docker/trust/private/$(KeyFileName)
But signing of the image still fails after loading the key. I also tried changing the key file name to the repository key.
Am I placing the file in the wrong location? It is currently being placed in /home/vsts/.docker/trust/private.
Where should the private key file be placed so that Docker can recognize it and sign the images?
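For reference, this is the full sequence currently being run, pieced together from the snippets above (all the $(...) values are Azure Pipelines variables that are expanded before the script executes; nothing new is introduced here):
echo '---------Create Private Delegate Key for signing--------'
mkdir -p ./docker/trust/private
cp $(privateKey.secureFilePath) ./docker/trust/private
chmod 600 ./docker/trust/private/$(KeyFileName)

echo '-----Loading Key-----'
# Load the delegation key before the push so docker trust can sign with it
docker trust key load ./docker/trust/private/$(KeyFileName)

# Push with content trust enabled; the signing passphrase is piped in
echo $(SigningPassphrase) | docker push --disable-content-trust=false $(registry)/$REPOSITORY_NAME:$BUILD_TAG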

Polymer 2.0 upload to GitHub-Pages

I have a problem uploading my Polymer component to gh-pages.
I tried this from the tutorial:
# git clone the Polymer tools repository somewhere outside of your
# element project
git clone git://github.com/Polymer/tools.git
# Create a temporary directory for publishing your element and cd into it
mkdir temp && cd temp
# Run the gp.sh script. This will allow you to push a demo-friendly
# version of your page and its dependencies to a GitHub pages branch
# of your repository (gh-pages). Below, we pass in a GitHub username
# and the repo name for our element
../tools/bin/gp.sh <username> <test-element>
# Finally, clean-up your temporary directory as you no longer require it
cd ..
rm -rf temp
But it's not working.
In the terminal I get these errors:
Is there something I'm missing?
Here is your problem:
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
For the script to run as intended, you need to add your public SSH key to your GitHub project: Settings -> Deploy keys -> Add deploy key.
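To check that the key is actually being picked up before re-running the script, you can test SSH access to GitHub directly:
# GitHub should reply with a greeting confirming successful authentication (no shell is opened)
ssh -T git@github.com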
Alternatively, you can manually execute the steps in gp.sh that involve pulling from and pushing to GitHub.
If you go that route, running the commands from the script one by one should work. The only multi-line command in the script is this one:
echo "{
\"directory\": \"components\"
}
" > .bowerrc
Good luck.

Prevent downtime using lftp mirror

I'm using lftp to deploy a website via Travis CI. There is a build process before the deployment; because of that, a build directory is produced and pushed to the root of the FTP server.
lftp $FTP_URL -e "glob -d mirror build . --reverse --delete-first --parallel=10 && exit"
It works quite well, but I dislike having downtime / temporary PHP parse errors caused by missing files on my website. What is the best way to work around that issue?
My first approach was an option to set a temporary directory, but the lftp man page says there are only options for temporary files. I still tried the option, but it didn't help.
My second approach was to use mirror build temp to upload into a temporary folder and then replace the root with it. The problem here is that I cannot exclude the temp folder while deleting the old files and folders with something like rm -rf *.
For small changes not involving adding/removing PHP files, setting xfer:use-temp-file should be sufficient. Also don't use --delete-first, as it causes lftp to delete obsolete files before uploading.
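Applied to the command from the question, that could look roughly like this (a sketch; the original flags are kept, minus --delete-first):
lftp $FTP_URL -e "set xfer:use-temp-file yes; glob -d mirror build . --reverse --parallel=10; exit"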
For larger changes I'd create a separate directory for each version of the site and redirect the web server to the new directory using .htaccess, mod_rewrite, or some other configuration mechanism. This technique allows an atomic switch to the new version (and back, if needed). Besides, you will be able to do final pre-production testing of the new version if you redirect to it conditionally, based on your IP address or some other rule.
If you don't want to re-upload the whole site for each new version and the FTP server supports FXP with itself, you can copy the old version to a new directory using mirror old_directory ftp://user@example.com/new_directory, then update the new directory using mirror -eR local_dir new_directory.
This is a zero-downtime pattern; each placeholder should be replaced:
lftp $FTP_URL -e "mirror {SOURCE} {TARGET}-new-{TIMESTAMP} --reverse --delete-first;
mv {TARGET} {TARGET}-old-{TIMESTAMP};
mv {TARGET}-new-{TIMESTAMP} {TARGET};
rm -rf {TARGET}-old-{TIMESTAMP};
exit"

gsutil AccessDeniedException: 401 Login Required

So I run the following:
gsutil -m cp -R file.png gs://bucket/file.png
And I get the following error message:
Copying file://file.png [Content-Type=application/pdf]...
Uploading file.png: 42.59 KiB/42.59 KiB
AccessDeniedException: 401 Login Required
CommandException: 1 files/objects could not be transferred.
I'm not sure what the problem is since I ran config and I can see all my buckets. Does anyone know what I need to do?
Note: I do not have gcloud, I just installed gsutil and ran the config.
Logging in to Google Cloud is needed before accessing any Cloud service. Use the command below; it will guide you through the login steps, such as entering the verification code you generate by opening the browser link shown in the console.
gcloud auth login
I was getting a similar response, and was able to solve this problem by looking at the read permissions on the .boto file. In my case, I was using a service account and the .boto file that was created by
gsutil config -e
only had read permissions set for the owner. Since it was being read by a service running as a different user, that user couldn't read the file, yielding a 401 Login Required error. I fixed it by adding read permissions for the service's group.
In the least sophisticated case, you could fix it by giving any user read permission with
chmod a+r .boto
A more detailed explanation for troubleshooting
To get more information, run the same command with a -D flag, like:
gsutil -m -D cp ....
In the debug output, look at:
Command being run: /path/to/gsutil
config_file_list: /path/to/boto/config
Create your login credentials using the executable at /path/to/gsutil (not gcloud auth or any other gsutil executable on the machine):
/path/to/gsutil config
For a service account, use:
/path/to/gsutil config -e
This should create a .boto config file in your home directory, $HOME/.boto. When you run the gsutil command, this file should be referenced in the config_file_list variable in the debug output. If not, see below for how to change it.
Running gsutil under a service account or as another user
If you are running as another user, and need to reference a newly-created config file, set the environment variable BOTO_CONFIG (don't forget to export it):
BOTO_CONFIG=/path/to/$HOME/.boto
export BOTO_CONFIG
By setting this variable, when you execute gsutil, it will reference the config file you have placed in BOTO_CONFIG. You can confirm that you are referencing the correct config file by looking at the config_file_list variable in the gsutil -D command's output.
Make sure the referenced .boto file is readable by the user who is executing the gsutil command.
Running the /path/to/gsutil with the BOTO_CONFIG variable set allowed me to execute gsutil as another user, referencing an arbitrary BOTO_CONFIG file that was set up with a service-account's credentials.
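For example, a sketch of such an invocation (the user name, paths and bucket name are placeholders):
# env sets BOTO_CONFIG for this run only; the .boto file must be readable
# by the user executing gsutil.
sudo -u serviceuser env BOTO_CONFIG=/path/to/.boto /path/to/gsutil ls gs://your-bucket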
To set up the service account:
https://console.cloud.google.com/permissions/serviceaccounts
The key file from the service account set-up process needs to be downloaded, and the path to it is requested during the gsutil config -e step.
This may be an issue with how gsutil/boto handles the OS path separators on Windows, as referenced here. This should eventually get merged into the sdk tools, but until then the following should work:
Go to
google-cloud-sdk\platform\gsutil\third_party\boto\boto\pyami\config.py
and replace the line:
for path in os.environ['BOTO_PATH'].split(':'):
with:
for path in os.environ['BOTO_PATH'].split(os.path.pathsep):
Next, go to
google-cloud-sdk\bin\bootstrapping\gsutil.py
replace the lines that use ':'
if boto_config:
  boto_path = ':'.join([boto_config, gsutil_path])
elif boto_path:
  # this is ':' for windows as well, hardcoded into the boto source.
  boto_path = ':'.join([boto_path, gsutil_path])
else:
  path_parts = ['/etc/boto.cfg',
                os.path.expanduser(os.path.join('~', '.boto')),
                gsutil_path]
  boto_path = ':'.join(path_parts)
with
if boto_config:
  boto_path = os.path.pathsep.join([boto_config, gsutil_path])
elif boto_path:
  # this is ':' for windows as well, hardcoded into the boto source.
  boto_path = os.path.pathsep.join([boto_path, gsutil_path])
else:
  path_parts = ['/etc/boto.cfg',
                os.path.expanduser(os.path.join('~', '.boto')),
                gsutil_path]
  boto_path = os.path.pathsep.join(path_parts)
Restart cmd and now the error should go away.

downloading lastfinished build from teamcity

I'm using Perl's File::Fetch to download a file from the lastFinished build in TeamCity. This works fine, except the file is versioned and I'm not getting the version number.
use File::Fetch;

sub GetTeamcityFiles {
    my $latest_version = "C:/downloads";
    my $uri = "http://<teamcity>/guestAuth/repository/download/bt11/.lastFinished/MyApp.{build.number}.zip";

    # fetch the uri to the download directory
    my $ff = File::Fetch->new(uri => "$uri");
    my $where = $ff->fetch( to => "$latest_version" );
}
This gives me a file:
C:\downloads\MyApp.{build.number}.zip.
However, the actual artifact has a build number in its name, e.g.:
c:\downloads\MyApp.12345.zip
Unfortunately there is no version file within the zip, so the file name is the only way I have of telling what I've downloaded. Is there any way to get this build number?
With build configs modification
If you have the ability to modify the build configs in TeamCity, you can easily embed the build number into the zip file.
Create a new build step - choose command line
For the script, do something like: echo %build.number% > version.txt
That will put version.txt at the root directory of your build folder in TeamCity, which you can include in your zip later when you create it.
You can later read that file in.
I'm not able to access my servers right now so I don't have the exact name of the parameter, but typing %build will pull up a list of TeamCity parameters to choose from, and I think it is %build.number% that you're after.
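To consume that file on the download side, a rough sketch (assuming a unix-like shell such as the UnxUtils setup mentioned at the end of this answer, and that you extract the zip somewhere first; the paths are placeholders):
# version.txt was written by the extra build step and packed into the zip;
# after extracting the artifact its contents are the build number.
BUILD_NUMBER=$(cat /c/downloads/extracted/version.txt)
echo "Downloaded build $BUILD_NUMBER"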
Without build configs modification
If you're not able to modify the configs, you're going to need something like egrep:
$ echo MyApp.12.3.4.zip | egrep -o '([0-9]+\.){2}[0-9]+'
> 12.3.4
$ echo MyApp.1234.zip | egrep -o '[0-9]+'
> 1234
It looks like you're running on Windows; in those cases I use UnxUtils & UnxUpdates to get access to utilities like this. They're very lightweight and don't touch the registry; just add them to your system PATH.