I'm following this documentation to push signed images to ACR from Azure Pipelines.
However, it only describes the changes needed for YAML tasks. I'm using a classic release pipeline and I'm running into some issues.
I'm trying to push the image using an Azure CLI script. Before the script task, I use the Download Secure File task to download the private key file, and then run the CLI script below:
echo '---------Create Private Delegate Key for signing--------'
mkdir -p ./docker/trust/private
echo 'Created Trust Directory'
echo 'Copying $(privateKey.secureFilePath) to ./docker/trust/private'
cp $(privateKey.secureFilePath) ./docker/trust/private
I'm getting the below error when running:
echo $(SigningPassphrase) | docker push --disable-content-trust=false $(registry)/$REPOSITORY_NAME:$BUILD_TAG
Error:
no valid signing keys for delegation roles
I added the following lines to the script to load the private key:
chmod 600 ./docker/trust/private/$(KeyFileName)
echo '-----Loading Key-----'
docker trust key load ./docker/trust/private/$(KeyFileName)
But signing the image still fails after loading the key. I also tried changing the key file name to the repository key's name.
Am I placing the file in an incorrect location? It's currently ending up in /home/vsts/.docker/trust/private.
Where should the private key file be placed so that Docker can recognize it and sign the images?
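For reference, my understanding from the Docker docs is that the passphrase is normally supplied through the DOCKER_CONTENT_TRUST_* environment variables rather than piped on stdin, and that docker trust key load copies the key into ~/.docker/trust/private itself. The end-to-end flow I'm aiming for looks roughly like this (only a sketch; the $(...) values are pipeline variables and mysigner is a placeholder for whatever signer name the delegation was created with):
# enable content trust and supply the signing passphrase non-interactively
export DOCKER_CONTENT_TRUST=1
export DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE="$(SigningPassphrase)"
# import the delegation key; this stores it under ~/.docker/trust/private
docker trust key load "$(privateKey.secureFilePath)" --name mysigner
# with content trust enabled, the push signs the tag as part of the operation
docker push $(registry)/$REPOSITORY_NAME:$BUILD_TAG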
Related
I'm having trouble using a multiline Azure Key Vault value inside an Azure Release Pipeline...
I put a multiline value (RSA private key) into Azure Key Vault using the CLI:
az keyvault secret set --vault-name "vault" --name "secret" --file "pk.pem"
This works and I can see the multiline secret in the portal.
Locally, using the CLI, I can also do:
pk=$(az keyvault secret show \
--name "ssh-private-key" \
--vault-name $vault \
--query "value")
This returns a somewhat messy value (yes, including the double quotes):
"-----BEGIN RSA PRIVATE KEY-----\nMIIG4wIBAA .... JtpyW\n-----END RSA PRIVATE KEY-----\n"
I can manage to work with this and send the value to a file like so:
pk="${pk%\"}" #remove first quote
pk="${pk#\"}" #remove last quote
echo $pk | sed 's|\\n|\n|g' | # replace with actual newlines
while IFS= read -r line; do # loop through lines
echo "$line" >> pk.pem # write to file per line
done
This works and I can log in to my server using ssh -i pk.pem user@server.
But when running it in the Azure DevOps release pipeline (also using Bash on a Linux agent), the exact same script fails... I'm also having trouble inspecting the actual value, as the log masks everything related to the secret...
Any guide on how to debug or work with actually reading multiline values instead of just storing them would be hugely appreciated!
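For reference, the variant I'd rather use, if it behaves the same way on the agent, is to let the CLI write the file itself instead of hand-parsing the JSON output. This is only a sketch with example secret and file names:
# write the secret straight to a file (no quotes or escaped \n to deal with)
az keyvault secret download \
  --vault-name "$vault" \
  --name "ssh-private-key" \
  --file pk.pem
chmod 600 pk.pem
# alternatively, fetch the raw value without the JSON quoting/escaping
pk=$(az keyvault secret show \
  --name "ssh-private-key" \
  --vault-name "$vault" \
  --query "value" \
  --output tsv)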
Here is some troubleshooting advice:
The error "Host key verification failed." doesn't only occur when the key is incorrect. Most of the time, it doesn't refer to your key at all.
So I recommend you first try the connection with a simple value to see whether it works on Azure DevOps.
Also, an SSH service connection might help with what you're doing. Go to Project Settings -> Service connections -> Create service connection -> SSH to create one.
I have an Azure CLI task that references a PowerShell script (via a build artifact) running az commands. Most of these commands work successfully, but when attempting to execute the following command:
az appconfig kv import --name $resourceName -s file --path appconfig.json --format json
I noticed that the information was not present on the Azure resource, and the log file says "File is not available".
I must be referencing the file incorrectly from the build artifact, but if anyone could provide some clarity around this that would be great.
I must be referencing the file incorrectly from the build artifact
You can try adding $(System.ArtifactsDirectory) to the JSON file path. For example: --path $(System.ArtifactsDirectory)/appconfig.json.
System.ArtifactsDirectory: the directory to which artifacts are downloaded during deployment of a release. Example: C:\agent\_work\r1\a
For details, please refer to predefined variables.
This can be a little tricky to figure out.
System.ArtifactsDirectory is the default variable that indicates the directory to which artifacts are downloaded during deployment of a release.
However, to use a default variable in your script, you must first replace the . in the default variable names with _. For example, to print the value of artifact variable System.ArtifactsDirectory in a PowerShell script, you would have to use $env:SYSTEM_ARTIFACTSDIRECTORY.
I have a similar setup and do it this way within my PowerShell script:
# Define the path to the file
$appSettingsFile="$env:SYSTEM_ARTIFACTSDIRECTORY\<rest_of_the_path>\appconfig.json"
# Pass it to the Azure CLI command
az appconfig kv import -n $appConfigName -s file --path $appSettingsFile --format json --separator . --yes
It is also helpful to view the current values of all variables to see what they contain before using them.
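For example, on a Linux agent a throwaway Bash step like this dumps everything the step can see (on a Windows agent, Get-ChildItem Env: in PowerShell gives the same picture):
# print all environment variables the task sees, sorted for readability
printenv | sort
# or only the ones mapped from System.* and Release.* pipeline variables
printenv | sort | grep -E '^(SYSTEM|RELEASE)_'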
References:
Default variables - System
Using default variables
I want to create an environment file, not an environment "variable", and get a path to it in the TravisCI pipeline.
Attached is an image of how we do the same in GitLab:
gitlab environment file image
I need to store secrets in a file and refer to it via a path in the TravisCI pipeline.
For example, this is how we can do the same in Jenkins:
"KUBECONFIG=/var/lib/jenkins/.kube/filename"
I am not willing to upload my secrets file to a private GitHub repo.
The travis encrypt-file command encrypts an entire file using symmetric (AES-256) encryption and stores the result in a file. Create a file called secret.txt and add the following entries to it:
SECRET_VALUE=ABCDE12345
CLIENT_ID=rocky123
CLIENT_SECRET=abc222222!
Run travis encrypt-file secret.txt after creating the secret.txt file. It stores the result as secret.txt.enc and also prints something like: "add the following to your build script (before_install stage in your .travis.yml, for instance): openssl aes-256-cbc -K $encrypted_74945c17fbe2_key -iv $encrypted_74945c17fbe2_iv -in secret.txt.enc -out secret.txt -d".
Now add that entry to your .travis.yml:
before_install:
- openssl aes-256-cbc -K $encrypted_74945c17fbe2_key -iv $encrypted_74945c17fbe2_iv -in secret.txt.enc -out secret.txt -d
Travis can then decrypt the values in the secret text file for us during the build.
So the process is: create the file, run travis encrypt-file secret.txt, copy the entry it produces, and add it to the before_install stage of your .travis.yml.
Make sure to add secret.txt.enc to the git repository, and make sure NOT to add secret.txt to the git repository.
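If the decrypted secret.txt keeps the KEY=VALUE format shown above, one way to consume it in a later build step is to source it. This is only a sketch and assumes a Bash build environment:
# run after the before_install step has produced the decrypted secret.txt
set -a                  # auto-export every variable defined while sourcing
source ./secret.txt
set +a
# the entries are now ordinary environment variables for this step
echo "client id is $CLIENT_ID"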
Generally, we cannot keep both the encryption key and the encrypted file in the same place (i.e. the repo), so we store the file somewhere else. Where are you storing it? How will you fetch it?
I have a problem uploading my Polymer component to GitHub Pages (gh-pages).
I'm trying this from the tutorial:
# git clone the Polymer tools repository somewhere outside of your
# element project
git clone git://github.com/Polymer/tools.git
# Create a temporary directory for publishing your element and cd into it
mkdir temp && cd temp
# Run the gp.sh script. This will allow you to push a demo-friendly
# version of your page and its dependencies to a GitHub pages branch
# of your repository (gh-pages). Below, we pass in a GitHub username
# and the repo name for our element
../tools/bin/gp.sh <username> <test-element>
# Finally, clean-up your temporary directory as you no longer require it
cd ..
rm -rf temp
But it's not working.
In the terminal I get these errors:
Is there something I'm missing?
Here is your problem:
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
For the script to run as intended, you need to add your public SSH key to your GitHub project: Settings -> Deploy keys -> Add deploy key.
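Before re-running the script, you can sanity-check that the key is picked up; assuming the matching private key is in ~/.ssh or loaded in your ssh-agent, this should greet you with a successful-authentication message:
# GitHub closes the session immediately, but the greeting confirms the key works
ssh -T git@github.com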
Alternatively, you can manually execute the steps in gp.sh that involve pulling from and pushing to GitHub. If you don't feel like splitting up the script, running its commands by hand should work; the only multi-line command in the script is this one:
echo "{
\"directory\": \"components\"
}
" > .bowerrc
Good luck.
I'm currently trying to figure out how to deploy a GitLab project automatically using CI. I managed to run the build stage successfully, but I'm unsure how to retrieve those builds and push them to releases.
As far as I know, it is possible to use rsync or webhooks (for example Git-Auto-Deploy) to get the build. However, I failed to apply either option successfully.
For publishing releases I did read https://gitlab.com/gitlab-org/gitlab-ce/blob/master/doc/api/tags.md#create-a-new-release, but I'm not sure I understand the required path schema correctly.
Is there any simple, complete example for trying out this process?
One way is indeed to use webhooks:
There are tons of possible solutions for this. I'd go with a shell script that is invoked by the hook.
How you intercept the webhook depends on your server's configuration; if you have php-fpm installed, you can use a PHP script.
When you create a webhook in your GitLab project (Settings -> Webhooks), you can specify which kinds of events you want the hook for (in our case, a new build) and a secret token so you can verify the script was called by GitLab.
The PHP script can be something like this:
<?php
// Check token
$security_file = parse_ini_file("../token.ini");
$gitlab_token = $_SERVER["HTTP_X_GITLAB_TOKEN"];
if ($gitlab_token !== $security_file["token"]) {
echo "error 403";
exit(0);
}
// Get data
$json = file_get_contents('php://input');
$data = json_decode($json, true);
// We want only success build on master
if ($data["ref"] !== "master" ||
$data["build_stage"] !== "deploy" ||
$data["build_status"] !== "success") {
exit(0);
}
// Execute the deploy script:
shell_exec("/usr/share/nginx/html/deploy.sh 2>&1");
I created a token.ini file outside the webroot, which is just one line:
token = supersecrettoken
In this way the endpoint can be called only by GitLab itself. The script then checks some parameters of the build, and if everything is OK it runs the deploy script.
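If you want to exercise the endpoint without waiting for a real build, you can call it by hand. This is only a sketch (the URL is hypothetical), using the same token header and the payload fields the script checks:
# simulate GitLab's build hook against the endpoint
curl -X POST "https://example.com/hooks/deploy.php" \
  -H "X-Gitlab-Token: supersecrettoken" \
  -H "Content-Type: application/json" \
  -d '{"ref":"master","build_stage":"deploy","build_status":"success"}'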
The deploy script is also very basic, but there are a couple of interesting things in it:
#!/bin/bash
# See 'Authentication' section here: http://docs.gitlab.com/ce/api/
SECRET_TOKEN=$PERSONAL_TOKEN
# The path where to put the static files
DEST="/usr/share/nginx/html/"
# The path to use as temporary working directory
TMP="/tmp/"
# Where to save the downloaded file
DOWNLOAD_FILE="site.zip";
cd $TMP;
wget --header="PRIVATE-TOKEN: $SECRET_TOKEN" "https://gitlab.com/api/v3/projects/774560/builds/artifacts/master/download?job=deploy_site" -O $DOWNLOAD_FILE;
ls;
unzip $DOWNLOAD_FILE;
# Whatever, do not do this in a real environment without any other check
rm -rf $DEST;
cp -r _site/ $DEST;
rm -rf _site/;
rm $DOWNLOAD_FILE;
First of all, the script has to be executable (chmod +x deploy.sh) and it has to belong to the webserver's user (usually www-data).
The script needs an access token (which you can create here) to access the data. I inserted it as an environment variable:
sudo vi /etc/environment
in the file you have to add something like:
PERSONAL_TOKEN="supersecrettoken"
and then remember to reload the file:
source /etc/environment
You can check everything is all right by running sudo -u www-data bash -c 'echo $PERSONAL_TOKEN' and verifying the token is printed in the terminal.
Now, the other interesting part of the script is where the artifact is. The latest build of a branch is reachable only through the API; they are working on exposing it in the web interface so you can always download the latest version from the web.
The URL of the API is:
https://gitlab.example.com/api/v3/projects/projectid/builds/artifacts/branchname/download?job=jobname
While you can imagine what branchname and jobname are, the projectid is a bit trickier to find.
It is included in the body of the webhook as projectid, but if you do not want to intercept the hook, you can go to your project's settings, in the Triggers section, where there are examples of API calls; you can determine the project id from there.
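Once you have the three values, you can test the artifact download by hand before wiring it into the deploy script; a sketch with placeholders:
# same v3 endpoint the deploy script uses, fetched with curl instead of wget
curl --header "PRIVATE-TOKEN: $PERSONAL_TOKEN" \
  "https://gitlab.example.com/api/v3/projects/projectid/builds/artifacts/branchname/download?job=jobname" \
  --output site.zip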